WorldWideScience

Sample records for network simulation techniques

  1. Reliability assessment of restructured power systems using reliability network equivalent and pseudo-sequential simulation techniques

    International Nuclear Information System (INIS)

    Ding, Yi; Wang, Peng; Goel, Lalit; Billinton, Roy; Karki, Rajesh

    2007-01-01

    This paper presents a technique to evaluate the reliability of a restructured power system with a bilateral market. The proposed technique is based on the combination of the reliability network equivalent and pseudo-sequential simulation approaches. The reliability network equivalent techniques have been implemented in the Monte Carlo simulation procedure to reduce the computational burden of the analysis. Pseudo-sequential simulation has been used to increase the computational efficiency of the non-sequential simulation method and to model the chronological aspects of market trading and system operation. Multi-state Markov models for generation and transmission systems are proposed and implemented in the simulation. A new load shedding scheme is proposed to minimize load curtailment during generation inadequacy and network congestion. The IEEE reliability test system (RTS) is used to illustrate the technique. (author)

  2. Network Simulation

    CERN Document Server

    Fujimoto, Richard

    2006-01-01

    "Network Simulation" presents a detailed introduction to the design, implementation, and use of network simulation tools. Discussion topics include the requirements and issues faced for simulator design and use in wired networks, wireless networks, distributed simulation environments, and fluid model abstractions. Several existing simulations are given as examples, with details regarding design decisions and why those decisions were made. Issues regarding performance and scalability are discussed in detail, describing how one can utilize distributed simulation methods to increase the

  3. Simulated Annealing Technique for Routing in a Rectangular Mesh Network

    Directory of Open Access Journals (Sweden)

    Noraziah Adzhar

    2014-01-01

    Full Text Available In the process of automatic design for printed circuit boards (PCBs), the phase following cell placement is routing. On the other hand, the routing process is a notoriously difficult problem, and even the simplest routing problem, which consists of a set of two-pin nets, is known to be NP-complete. In this research, our routing region is first tessellated into a uniform Nx×Ny array of square cells. The ultimate goal for a routing problem is to achieve complete automatic routing with minimal need for any manual intervention. Therefore, the shortest path for all connections needs to be established. While the classical Dijkstra's algorithm is guaranteed to find the shortest path for a single net, each routed net will form obstacles for later paths. This adds complexity to the routing of later nets and makes their paths longer than optimal, or sometimes impossible to complete. Today's sequential routing often applies heuristic methods to further refine the solution. Through this process, all nets will be rerouted in a different order to improve the quality of routing. Because of this, we are motivated to apply simulated annealing, one of the metaheuristic methods, to our routing model to produce better candidate routing sequences.
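
    As an editorial illustration of the idea described above, the following sketch applies simulated annealing to the ordering of two-pin nets, assuming a toy cost function in which earlier-routed nets penalize later ones. The nets, grid, cost model and all parameter values are hypothetical stand-ins for a real maze router, not the authors' implementation.

```python
import math
import random

# Toy two-pin nets on a small grid: each net is ((x1, y1), (x2, y2)).
NETS = [((0, 0), (5, 4)), ((1, 3), (6, 1)), ((2, 2), (4, 4)),
        ((0, 4), (5, 0)), ((3, 0), (3, 4))]

def route_cost(order, nets):
    """Hypothetical sequential-routing cost: earlier nets block grid cells,
    so later nets pay a detour penalty for every blocked cell inside their
    bounding box (a crude stand-in for a real maze router)."""
    blocked = set()
    total = 0
    for idx in order:
        (x1, y1), (x2, y2) = nets[idx]
        length = abs(x1 - x2) + abs(y1 - y2)          # Manhattan lower bound
        box = {(x, y) for x in range(min(x1, x2), max(x1, x2) + 1)
                      for y in range(min(y1, y2), max(y1, y2) + 1)}
        total += length + 2 * len(box & blocked)      # congestion penalty
        blocked |= box
    return total

def anneal(nets, t0=10.0, alpha=0.995, steps=5000, seed=1):
    """Simulated annealing over the net ordering: swap two nets, accept
    uphill moves with the Metropolis probability, cool geometrically."""
    rng = random.Random(seed)
    order = list(range(len(nets)))
    cur = best = route_cost(order, nets)
    best_order, t = order[:], t0
    for _ in range(steps):
        i, j = rng.sample(range(len(nets)), 2)
        order[i], order[j] = order[j], order[i]
        new = route_cost(order, nets)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new
            if new < best:
                best, best_order = new, order[:]
        else:
            order[i], order[j] = order[j], order[i]   # undo rejected swap
        t *= alpha
    return best_order, best

if __name__ == "__main__":
    order, cost = anneal(NETS)
    print("best routing order:", order, "cost:", cost)
```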

  4. CAISSON: Interconnect Network Simulator

    Science.gov (United States)

    Springer, Paul L.

    2006-01-01

    Cray response to HPCS initiative. Model future petaflop computer interconnect. Parallel discrete event simulation techniques for large scale network simulation. Built on WarpIV engine. Run on laptop and Altix 3000. Can be sized up to 1000 simulated nodes per host node. Good parallel scaling characteristics. Flexible: multiple injectors, arbitration strategies, queue iterators, network topologies.

  5. Simulating GPS radio signal to synchronize network--a new technique for redundant timing.

    Science.gov (United States)

    Shan, Qingxiao; Jun, Yang; Le Floch, Jean-Michel; Fan, Yaohui; Ivanov, Eugene N; Tobar, Michael E

    2014-07-01

    Currently, many distributed systems such as 3G mobile communications and power systems are time synchronized with a Global Positioning System (GPS) signal. If there is a GPS failure, it is difficult to realize redundant timing, and thus time-synchronized devices may fail. In this work, we develop time transfer by simulating GPS signals, which promises no extra modification to original GPS-synchronized devices. This is achieved by applying a simplified GPS simulator for synchronization purposes only. Navigation data are calculated based on a pre-assigned time at a fixed position. Pseudo-range data which describes the distance change between the space vehicle (SV) and users are calculated. Because real-time simulation requires heavy-duty computations, we use self-developed software optimized on a PC to generate data, and save the data onto memory disks while the simulator is operating. The radio signal generation is similar to the SV at an initial position, and the frequency synthesis of the simulator is locked to a pre-assigned time. A filtering group technique is used to simulate the signal transmission delay corresponding to the SV displacement. Each SV generates a digital baseband signal, where a unique identifying code is added to the signal and up-converted to generate the output radio signal at the centered frequency of 1575.42 MHz (L1 band). A prototype with a field-programmable gate array (FPGA) has been built and experiments have been conducted to prove that we can realize time transfer. The prototype has been applied to the CDMA network for a three-month long experiment. Its precision has been verified and can meet the requirements of most telecommunication systems.

  6. Packet Tracer network simulator

    CERN Document Server

    Jesin, A

    2014-01-01

    A practical, fast-paced guide that gives you all the information you need to successfully create networks and simulate them using Packet Tracer.Packet Tracer Network Simulator is aimed at students, instructors, and network administrators who wish to use this simulator to learn how to perform networking instead of investing in expensive, specialized hardware. This book assumes that you have a good amount of Cisco networking knowledge, and it will focus more on Packet Tracer rather than networking.

  7. Simulating synchronization in neuronal networks

    Science.gov (United States)

    Fink, Christian G.

    2016-06-01

    We discuss several techniques used in simulating neuronal networks by exploring how a network's connectivity structure affects its propensity for synchronous spiking. Network connectivity is generated using the Watts-Strogatz small-world algorithm, and two key measures of network structure are described. These measures quantify structural characteristics that influence collective neuronal spiking, which is simulated using the leaky integrate-and-fire model. Simulations show that adding a small number of random connections to an otherwise lattice-like connectivity structure leads to a dramatic increase in neuronal synchronization.
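
    A minimal sketch of the simulation pipeline described above, assuming NumPy is available: a Watts-Strogatz small-world graph is generated by rewiring a ring lattice, and a leaky integrate-and-fire population is driven on top of it; the variance of the population spike count serves as a crude synchrony measure. All parameter values are illustrative, not those used in the article.

```python
import numpy as np

def watts_strogatz(n, k, p, rng):
    """Ring lattice of n nodes, each linked to its k nearest neighbours on one
    side, with every edge rewired to a random target with probability p."""
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(1, k + 1):
            target = (i + j) % n
            if rng.random() < p:                      # rewire this edge
                target = rng.integers(n)
                while target == i or adj[i, target]:
                    target = rng.integers(n)
            adj[i, target] = adj[target, i] = True
    return adj

def simulate_lif(adj, steps=2000, dt=0.1, tau=10.0, v_th=1.0,
                 drive=1.2, w=0.05, seed=0):
    """Leaky integrate-and-fire population: dv/dt = (-v + drive)/tau plus
    pulse input from spiking neighbours; spike and reset at v_th."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    v = rng.random(n) * v_th                          # random initial voltages
    spike_counts = np.zeros(steps)
    for t in range(steps):
        spiked = v >= v_th
        v[spiked] = 0.0                               # reset after a spike
        v += dt * (-v + drive) / tau + w * (adj @ spiked.astype(float))
        spike_counts[t] = spiked.sum()
    return spike_counts

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    for p in (0.0, 0.05, 0.5):                        # lattice -> small world -> random
        counts = simulate_lif(watts_strogatz(100, 4, p, rng))
        # Larger variance of the population spike count indicates more synchrony.
        print(f"rewiring p={p:.2f}  spike-count variance: {counts.var():.3f}")
```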

  8. Message network simulation

    OpenAIRE

    Shih, Kuo-Tung

    1990-01-01

    Approved for public release; distribution is unlimited. This thesis presents a computer simulation of a multinode data communication network using a virtual network model to determine the effects of various system parameters on overall network performance. (Author: Lieutenant Commander, Republic of China (Taiwan) Navy)

  9. Airport Network Flow Simulator

    Science.gov (United States)

    1978-10-01

    The Airport Network Flow Simulator is a FORTRAN IV simulation of the flow of air traffic in the nation's 600 commercial airports. It calculates for any group of selected airports: (a) the landing and take-off (Type A) delays; and (b) the gate departu...

  10. Underwater Acoustic Networking Techniques

    CERN Document Server

    Otnes, Roald; Casari, Paolo; Goetz, Michael; Husøy, Thor; Nissen, Ivor; Rimstad, Knut; van Walree, Paul; Zorzi, Michele

    2012-01-01

    This literature study presents an overview of underwater acoustic networking. It provides a background and describes the state of the art of all networking facets that are relevant for underwater applications. This report serves both as an introduction to the subject and as a summary of existing protocols, providing support and inspiration for the development of network architectures.

  11. The application of neural network integrated with genetic algorithm and simulated annealing for the simulation of rare earths separation processes by the solvent extraction technique using EHEHPA agent

    International Nuclear Information System (INIS)

    Tran Ngoc Ha; Pham Thi Hong Ha

    2003-01-01

    In the present work, a neural network has been used for mathematically modeling the equilibrium data of a mixture of two rare earth elements, namely Nd and Pr, with PC88A agent. A thermo-genetic algorithm based on the ideas of the genetic algorithm and the simulated annealing algorithm has been used in the training procedure of the neural networks, giving better results in comparison with the traditional modeling approach. The obtained neural network modeling the experimental data is further used in a computer program to simulate the solvent extraction process of the two elements Nd and Pr. Based on this computer program, various optional schemes for the separation of Nd and Pr have been investigated and proposed. (author)

  12. Airflow Simulation Techniques

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    The paper describes the development in airflow simulations in rooms. The research is, as other areas of flow research, influenced by the decreasing cost of computation, which seems to indicate an increased use of airflow simulation in the coming years.

  13. Graph Theory-Based Technique for Isolating Corrupted Boundary Conditions in Continental-Scale River Network Hydrodynamic Simulation

    Science.gov (United States)

    Yu, C. W.; Hodges, B. R.; Liu, F.

    2017-12-01

    Development of continental-scale river network models creates challenges where the massive amount of boundary condition data encounters the sensitivity of a dynamic numerical model. The topographic data sets used to define the river channel characteristics may include either corrupt data or complex configurations that cause instabilities in a numerical solution of the Saint-Venant equations. For local-scale river models (e.g. HEC-RAS), modelers typically rely on past experience to make ad hoc boundary condition adjustments that ensure a stable solution - the proof of the adjustment is merely the stability of the solution. To date, there do not exist any formal methodologies or automated procedures for a priori detecting/fixing boundary conditions that cause instabilities in a dynamic model. Formal methodologies for data screening and adjustment are a critical need for simulations with a large number of river reaches that draw their boundary condition data from a wide variety of sources. At the continental scale, we simply cannot assume that we will have access to river-channel cross-section data that has been adequately analyzed and processed. Herein, we argue that problematic boundary condition data for unsteady dynamic modeling can be identified through numerical modeling with the steady-state Saint-Venant equations. The fragility of numerical stability increases with the complexity of branching in the river network system, and instabilities (even in an unsteady solution) are typically triggered by the nonlinear advection term in the Saint-Venant equations. It follows that the behavior of the simpler steady-state equations (which retain the nonlinear term) can be used to screen the boundary condition data for problematic regions. In this research, we propose a graph-theory based method to isolate the location of corrupted boundary condition data in a continental-scale river network and demonstrate its utility with a network of O(10^4) elements.
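
    The following is a hedged sketch of the screening idea only (not the authors' algorithm): the river network is represented as a directed graph (using networkx here as an assumed dependency), each reach is checked with a placeholder steady-state solve, and flagged reaches with no flagged upstream ancestor are reported as the likely location of corrupted boundary data. The steady_check callable and the toy data are hypothetical.

```python
import networkx as nx

def screen_boundary_data(reaches, steady_check):
    """Hypothetical screening pass.  'reaches' is an iterable of
    (upstream_id, downstream_id, channel_data) tuples and steady_check(data)
    returns True when a steady-state solve converges for that reach in
    isolation.  Flagged reaches with no flagged upstream ancestor are
    reported as the likely location of corrupted boundary data."""
    g = nx.DiGraph()
    flagged = set()
    for up, down, data in reaches:
        g.add_edge(up, down)
        if not steady_check(data):
            flagged.add((up, down))
    roots = []
    for up, down in flagged:
        upstream = nx.ancestors(g, up)                # every node draining to 'up'
        has_flagged_ancestor = any(u in upstream for u, d in flagged
                                   if (u, d) != (up, down))
        if not has_flagged_ancestor:
            roots.append((up, down))
    return roots

if __name__ == "__main__":
    # Toy network: the reach "data" is just a bed slope, and we pretend a
    # non-positive slope is corrupted cross-section data that breaks the solve.
    toy = [("A", "B", 0.0010), ("B", "C", -0.0020),
           ("C", "D", 0.0005), ("E", "C", 0.0008)]
    print(screen_boundary_data(toy, steady_check=lambda slope: slope > 0))
```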

  14. Simulation Techniques That Work.

    Science.gov (United States)

    Beland, Robert M.

    1983-01-01

    At the University of Florida, simulated experiences with disabled clients help bridge the gap between coursework and internships for recreation therapy students. Actors from the university's drama department act out the roles of handicapped persons, who are interviewed by therapy students. (PP)

  15. Urban Road Traffic Simulation Techniques

    Directory of Open Access Journals (Sweden)

    Ana Maria Nicoleta Mocofan

    2011-09-01

    Full Text Available For achieving a reliable traffic control system it is necessary to first establish a network parameter evaluation system and also a simulation system for the traffic lights plan. Over its 40-year history, computer-aided traffic simulation has developed from a small research field into a large-scale technology for traffic systems planning and development. In the following paper, the main road traffic modeling and simulation applications are presented, along with their utility, as well as the practical application of one of the models in a case study.

  16. GNS3 network simulation guide

    CERN Document Server

    Welsh, Chris

    2013-01-01

    GNS3 Network Simulation Guide is an easy-to-follow yet comprehensive guide which is written in a tutorial format helping you grasp all the things you need for accomplishing your certification or simulation goal. If you are a networking professional who wants to learn how to simulate networks using GNS3, this book is ideal for you. The introductory examples within the book only require minimal networking knowledge, but as the book progresses onto more advanced topics, users will require knowledge of TCP/IP and routing.

  17. Bitcoin network simulator data explotation

    OpenAIRE

    Berini Sarrias, Martí

    2015-01-01

    This project starts with a brief introduction to the concepts of Bitcoin and blockchain, followed by a description of the different known attacks on the Bitcoin network. Once this point is reached, the basic structure of the Bitcoin network simulator is presented. The main objective of this project is to help in the security assessment of the Bitcoin network. To accomplish that, we try to identify useful metrics, explain them and implement them in the corresponding simulator modules, aiming to ...

  18. Design Techniques and Reservoir Simulation

    Directory of Open Access Journals (Sweden)

    Ahad Fereidooni

    2012-11-01

    Full Text Available Enhanced oil recovery using nitrogen injection is a commonly applied method for pressure maintenance in conventional reservoirs. Numerical simulations can be used to predict reservoir performance in the course of the injection process; however, a detailed simulation might take up enormous computer processing time. In such cases, a simple statistical model may be a good approach to the preliminary prediction of the process without any application of numerical simulation. In the current work, seven rock/fluid reservoir properties are considered as screening parameters, and those parameters having the most considerable effect on the process are determined using the combination of experimental design techniques and reservoir simulations. Therefore, the statistical significance of the main effects and interactions of the screening parameters is analyzed utilizing statistical inference approaches. Finally, the influential parameters are employed to create a simple statistical model which allows the preliminary prediction of nitrogen injection in terms of a recovery factor without resorting to numerical simulations.

  19. Physical simulations using centrifuge techniques

    International Nuclear Information System (INIS)

    Sutherland, H.J.

    1981-01-01

    Centrifuge techniques offer a means of performing physical simulations of the long-term mechanical response of deep ocean sediment to the emplacement of waste canisters and to the temperature gradients generated by them. Preliminary investigations of the scaling laws for the pertinent phenomena indicate that the time scaling will be consistent among them and equal to the scaling factor squared. This result implies that the technique will permit accelerated life testing of proposed configurations; i.e., long-term studies may be done in relatively short times. Presently, existing centrifuges are being modified to permit scale model testing. This testing will start next year
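
    The quoted scaling relation (time scaling equal to the square of the scaling factor) can be illustrated with a one-line calculation; the specific numbers below are illustrative, not taken from the abstract.

```python
def prototype_time_hours(model_time_hours, scale_factor):
    """Time scaling quoted in the abstract: prototype time equals model time
    multiplied by the square of the (length/acceleration) scaling factor."""
    return model_time_hours * scale_factor ** 2

# Example: a 10-hour test on a 1:100 scale model spun at 100 g corresponds to
# 10 h * 100**2 = 100,000 h of prototype time, i.e. roughly 11.4 years.
print(prototype_time_hours(10, 100) / (24 * 365), "years")
```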

  20. Multilevel techniques for Reservoir Simulation

    DEFF Research Database (Denmark)

    Christensen, Max la Cour

    The subject of this thesis is the development, application and study of novel multilevel methods for the acceleration and improvement of reservoir simulation techniques. The motivation for addressing this topic is a need for more accurate predictions of porous media flow and the ability to carry... The main topics are: a nonlinear multigrid scheme (the Full Approximation Scheme), variational (Galerkin) upscaling, and linear solvers and preconditioners. First, a nonlinear multigrid scheme in the form of the Full Approximation Scheme (FAS) is implemented and studied for a 3D three-phase compressible rock/fluids immiscible reservoir simulator. ... This scheme is extended to include a hybrid strategy, where FAS is combined with Newton's method to construct a multilevel nonlinear preconditioner. This method demonstrates high efficiency and robustness. Second, an improved IMPES formulated reservoir simulator is implemented using a novel variational upscaling approach...

  1. Wireless network simulation - Your window on future network performance

    NARCIS (Netherlands)

    Fledderus, E.

    2005-01-01

    The paper describes three relevant perspectives on current wireless simulation practices. In order to obtain the key challenges for future network simulations, the characteristics of "beyond 3G" networks are described, including their impact on simulation.

  2. Emerging wireless networks concepts, techniques and applications

    CERN Document Server

    Makaya, Christian

    2011-01-01

    An authoritative collection of research papers and surveys, Emerging Wireless Networks: Concepts, Techniques, and Applications explores recent developments in next-generation wireless networks (NGWNs) and mobile broadband networks technologies, including 4G (LTE, WiMAX), 3G (UMTS, HSPA), WiFi, mobile ad hoc networks, mesh networks, and wireless sensor networks. Focusing on improving the performance of wireless networks and provisioning better quality of service and quality of experience for users, it reports on the standards of different emerging wireless networks, applications, and service fr

  3. Testing philosophy and simulation techniques

    International Nuclear Information System (INIS)

    Holtbecker, H.

    1977-01-01

    This paper reviews past and present testing philosophies and simulation techniques in the field of structure loading and response studies. The main objective of experimental programmes in the past was to simulate a hypothetical energy release with explosives and to deduce the potential damage to a reactor from the measured damage to the model. This approach was continuously refined by improving the instrumentation of the models, by reproducing the structures as faithfully as possible and by developing new explosive charges. This paper presents an analysis of the factors which are expected to have an influence on the validity of the results, e.g. strain rate effects and the use of water instead of sodium. More recently, the discussion of a whole series of accidents in probabilistic accident analysis and the intention to compare different reactor designs have revealed the need to develop and validate computer codes. Consequently, experimental programmes have been started in which the primary aim is not to test a specific reactor but to validate codes. This paper shows the principal aspects of this approach and discusses first results. (Auth.)

  4. NEW TECHNIQUES APPLIED IN ECONOMICS. ARTIFICIAL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    Constantin Ilie

    2009-05-01

    Full Text Available The present paper has the objective of informing the public regarding the use of new techniques for the modeling, simulation and forecasting of systems from different fields of activity. One of these techniques is the Artificial Neural Network, one of the artificial in

  5. Visual air quality simulation techniques

    Science.gov (United States)

    Molenar, John V.; Malm, William C.; Johnson, Christopher E.

    Visual air quality is primarily a human perceptual phenomenon beginning with the transfer of image-forming information through an illuminated, scattering and absorbing atmosphere. Visibility, especially the visual appearance of industrial emissions or the degradation of a scenic view, is the principal atmospheric characteristic through which humans perceive air pollution, and is more sensitive to changing pollution levels than any other air pollution effect. Every attempt to quantify economic costs and benefits of air pollution has indicated that good visibility is a highly valued and desired environmental condition. Measurement programs can at best approximate the state of the ambient atmosphere at a few points in a scenic vista viewed by an observer. To fully understand the visual effect of various changes in the concentration and distribution of optically important atmospheric pollutants requires the use of aerosol and radiative transfer models. Communication of the output of these models to scientists, decision makers and the public is best done by applying modern image-processing systems to generate synthetic images representing the modeled air quality conditions. This combination of modeling techniques has been under development for the past 15 yr. Initially, visual air quality simulations were limited by a lack of computational power to simplified models depicting Gaussian plumes or uniform haze conditions. Recent explosive growth in low cost, high powered computer technology has allowed the development of sophisticated aerosol and radiative transfer models that incorporate realistic terrain, multiple scattering, non-uniform illumination, varying spatial distribution, concentration and optical properties of atmospheric constituents, and relative humidity effects on aerosol scattering properties. This paper discusses these improved models and image-processing techniques in detail. Results addressing uniform and non-uniform layered haze conditions in both

  6. Blockmodeling techniques for complex networks

    Science.gov (United States)

    Ball, Brian Joseph

    The class of network models known as stochastic blockmodels has recently been gaining popularity. In this dissertation, we present new work that uses blockmodels to answer questions about networks. We create a blockmodel based on the idea of link communities, which naturally gives rise to overlapping vertex communities. We derive a fast and accurate algorithm to fit the model to networks. This model can be related to another blockmodel, which allows the method to efficiently find nonoverlapping communities as well. We then create a heuristic based on the link community model whose use is to find the correct number of communities in a network. The heuristic is based on intuitive corrections to likelihood ratio tests. It does a good job finding the correct number of communities in both real networks and synthetic networks generated from the link communities model. Two commonly studied types of networks are citation networks, where research papers cite other papers, and coauthorship networks, where authors are connected if they've written a paper together. We study a multi-modal network from a large dataset of Physics publications that is the combination of the two, allowing for directed links between papers as citations, and an undirected edge between a scientist and a paper if they helped to write it. This allows for new insights on the relation between social interaction and scientific production. We also have the publication dates of papers, which lets us track our measures over time. Finally, we create a stochastic model for ranking vertices in a semi-directed network. The probability of connection between two vertices depends on the difference of their ranks. When this model is fit to high school friendship networks, the ranks appear to correspond with a measure of social status. Students have reciprocated and some unreciprocated edges with other students of closely similar rank that correspond to true friendship, and claim an aspirational friendship with a much

  7. Synchronization Techniques in Parallel Discrete Event Simulation

    OpenAIRE

    Lindén, Jonatan

    2018-01-01

    Discrete event simulation is an important tool for evaluating system models in many fields of science and engineering. To improve the performance of large-scale discrete event simulations, several techniques to parallelize discrete event simulation have been developed. In parallel discrete event simulation, the work of a single discrete event simulation is distributed over multiple processing elements. A key challenge in parallel discrete event simulation is to ensure that causally dependent ...

  8. Developed hydraulic simulation model for water pipeline networks

    Directory of Open Access Journals (Sweden)

    A. Ayad

    2013-03-01

    Full Text Available A numerical method that uses linear graph theory is presented for both steady state and extended period simulation in a pipe network including its hydraulic components (pumps, valves, junctions, etc.). The developed model is based on the Extended Linear Graph Theory (ELGT) technique. This technique is modified to include new network components such as flow control valves and tanks. The technique is also expanded for extended period simulation (EPS). A newly modified method for the calculation of updated flows, improving the convergence rate, is introduced. Both benchmark and actual networks are analyzed to check the reliability of the proposed method. The results reveal the favorable performance of the proposed method.

  9. Real-time network traffic classification technique for wireless local area networks based on compressed sensing

    Science.gov (United States)

    Balouchestani, Mohammadreza

    2017-05-01

    Network traffic or data traffic in a Wireless Local Area Network (WLAN) is the amount of network packets moving across a wireless network from each wireless node to another wireless node, which provides the sampling load in a wireless network. A WLAN's network traffic is the main component for network traffic measurement, network traffic control and simulation. Traffic classification is an essential tool for improving the Quality of Service (QoS) in different wireless networks and in complex applications such as local area networks, wireless local area networks, wireless personal area networks, wireless metropolitan area networks, and wide area networks. Network traffic classification is also an essential component in products for QoS control in different wireless network systems and applications. Classifying network traffic in a WLAN makes it possible to see what kinds of traffic are present in each part of the network, to organize the various kinds of network traffic in each path into different classes, and to generate a network traffic matrix in order to identify and organize network traffic, which is an important key for improving the QoS feature. To achieve effective network traffic classification, a Real-time Network Traffic Classification (RNTC) algorithm for WLANs based on Compressed Sensing (CS) is presented in this paper. The fundamental goal of this algorithm is to solve difficult wireless network management problems. The proposed architecture allows reducing the False Detection Rate (FDR) to 25% and the Packet Delay (PD) to 15%. The proposed architecture also increases the accuracy of wireless transmission by 10%, which provides a good background for establishing high quality wireless local area networks.

  10. Modeling, validation, and simulation of massive self-organizing wireless sensor networks with cross-layer optimization and congestion mitigation techniques

    NARCIS (Netherlands)

    Boltjes, B.; Oever, J. van den; Zhang, S.

    2008-01-01

    TNO has formulated the ambition of laying a foundation for the development of flexible multi-data-source and multi-application (ad hoc) sensor networks. These networks are envisioned on a scale that is beyond that of specific and separate sensor networks. These separate networks need in the future

  11. Neural network scatter correction technique for digital radiography

    International Nuclear Information System (INIS)

    Boone, J.M.

    1990-01-01

    This paper presents a scatter correction technique based on artificial neural networks. The technique utilizes the acquisition of a conventional digital radiographic image, coupled with the acquisition of a multiple pencil beam (micro-aperture) digital image. Image subtraction results in a sparsely sampled estimate of the scatter component in the image. The neural network is trained to develop a causal relationship between image data on the low-pass filtered open field image and the sparsely sampled scatter image, and then the trained network is used to correct the entire image (pixel by pixel) in a manner which is operationally similar to but potentially more powerful than convolution. The technique is described and is illustrated using clinical primary component images combined with scatter component images that are realistically simulated using the results from previously reported Monte Carlo investigations. The results indicate that an accurate scatter correction can be realized using this technique
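
    A rough sketch of the described pipeline, assuming NumPy, SciPy and scikit-learn are available: a small regressor is trained on sparsely sampled scatter values against the low-pass filtered image, then applied pixel by pixel and subtracted. The synthetic images, the MLP in place of the paper's network, and all parameters are stand-ins, not the published method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins: a "primary" image plus a smooth scatter field.
primary = rng.random((64, 64))
scatter_true = 0.5 * gaussian_filter(rng.random((64, 64)), sigma=12)
measured = primary + scatter_true

# Low-pass filtered open-field image supplies the network input feature...
lowpass = gaussian_filter(measured, sigma=4)

# ...and the micro-aperture acquisition gives scatter samples on a sparse grid.
ys, xs = np.mgrid[0:64:8, 0:64:8]
feat_sparse = np.column_stack([lowpass[ys.ravel(), xs.ravel()],
                               ys.ravel() / 64.0, xs.ravel() / 64.0])
target_sparse = scatter_true[ys.ravel(), xs.ravel()]

# Small regressor learns low-pass intensity (+ position) -> scatter estimate.
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(feat_sparse, target_sparse)

# Apply pixel by pixel over the whole image and subtract the estimate.
yy, xx = np.mgrid[0:64, 0:64]
feat_full = np.column_stack([lowpass.ravel(), yy.ravel() / 64.0, xx.ravel() / 64.0])
corrected = measured - net.predict(feat_full).reshape(64, 64)

print("RMS error before correction:", np.sqrt(np.mean((measured - primary) ** 2)))
print("RMS error after  correction:", np.sqrt(np.mean((corrected - primary) ** 2)))
```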

  12. WDM Systems and Networks Modeling, Simulation, Design and Engineering

    CERN Document Server

    Ellinas, Georgios; Roudas, Ioannis

    2012-01-01

    WDM Systems and Networks: Modeling, Simulation, Design and Engineering provides readers with the basic skills, concepts, and design techniques used to begin design and engineering of optical communication systems and networks at various layers. The latest semi-analytical system simulation techniques are applied to optical WDM systems and networks, and a review of the various current areas of optical communications is presented. Simulation is mixed with experimental verification and engineering to present the industry as well as state-of-the-art research. This contributed volume is divided into three parts, accommodating different readers interested in various types of networks and applications. The first part of the book presents modeling approaches and simulation tools mainly for the physical layer including transmission effects, devices, subsystems, and systems), whereas the second part features more engineering/design issues for various types of optical systems including ULH, access, and in-building system...

  13. Performance Monitoring Techniques Supporting Cognitive Optical Networking

    DEFF Research Database (Denmark)

    Caballero Jambrina, Antonio; Borkowski, Robert; Zibar, Darko

    2013-01-01

    The high degree of heterogeneity of future optical networks, such as services with different quality-of-transmission requirements, modulation formats and switching techniques, will pose a challenge for the control and optimization of different parameters. Incorporation of cognitive techniques can help to solve this issue by realizing a network that can observe, act, learn and optimize its performance, taking into account end-to-end goals. In this letter we present the approach of cognition applied to heterogeneous optical networks developed in the framework of the EU project CHRON: Cognitive Heterogeneous Reconfigurable Optical Network. We focus on the approaches developed in the project for optical performance monitoring, which enable the feedback from the physical layer to the cognitive decision system by providing an accurate description of the performance of the established lightpaths.

  14. Simulation-based optimization parametric optimization techniques and reinforcement learning

    CERN Document Server

    Gosavi, Abhijit

    2003-01-01

    Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of simulation-based optimization. The book's objective is two-fold: (1) It examines the mathematical governing principles of simulation-based optimization, thereby providing the reader with the ability to model relevant real-life problems using these techniques. (2) It outlines the computational technology underlying these methods. Taken together these two aspects demonstrate that the mathematical and computational methods discussed in this book do work. Broadly speaking, the book has two parts: (1) parametric (static) optimization and (2) control (dynamic) optimization. Some of the book's special features are: *An accessible introduction to reinforcement learning and parametric-optimization techniques. *A step-by-step description of several algorithms of simulation-based optimization. *A clear and simple introduction to the methodology of neural networks. *A gentle introduction to converg...

  15. Introduction to Network Simulator NS2

    CERN Document Server

    Issariyakul, Teerawat

    2012-01-01

    "Introduction to Network Simulator NS2" is a primer providing materials for NS2 beginners, whether students, professors, or researchers for understanding the architecture of Network Simulator 2 (NS2) and for incorporating simulation modules into NS2. The authors discuss the simulation architecture and the key components of NS2 including simulation-related objects, network objects, packet-related objects, and helper objects. The NS2 modules included within are nodes, links, SimpleLink objects, packets, agents, and applications. Further, the book covers three helper modules: timers, ra

  16. Efficient Neural Network Modeling for Flight and Space Dynamics Simulation

    Directory of Open Access Journals (Sweden)

    Ayman Hamdy Kassem

    2011-01-01

    Full Text Available This paper presents an efficient technique for neural network modeling of flight and space dynamics simulation. The technique will free the neural network designer from guessing the size and structure of the required neural network model and will help to minimize the number of neurons. For linear flight/space dynamics systems, the technique can find the network weights and biases directly by solving a system of linear equations without the need for training. Nonlinear flight dynamic systems can be easily modeled by training their linearized models while keeping the same network structure. The training is fast, as it uses the linear system knowledge to speed up the training process. The technique is tested on different flight/space dynamic models and showed promising results.
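
    For the linear case described above, the idea of obtaining weights and biases directly from a system of linear equations can be sketched as follows, assuming NumPy: a single linear layer is fitted to samples of a toy discrete-time state-space model by one least-squares solve, with no training loop. The model matrices are arbitrary illustrative values, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete-time linear "flight dynamics" stand-in: x_{k+1} = A x_k + B u_k.
A = np.array([[0.98, 0.10], [-0.05, 0.95]])
B = np.array([[0.00], [0.02]])

# Collect input/output samples from the linear model.
X, U, Y = [], [], []
x = np.zeros(2)
for _ in range(500):
    u = rng.normal(size=1)
    X.append(x)
    U.append(u)
    x = A @ x + B @ u
    Y.append(x)
X, U, Y = map(np.array, (X, U, Y))

# A single linear layer y = W [x; u; 1] reproduces the dynamics exactly, so the
# weights and bias come straight from one least-squares solve -- no training loop.
inputs = np.hstack([X, U, np.ones((len(X), 1))])
W, *_ = np.linalg.lstsq(inputs, Y, rcond=None)

print("recovered [A | B]:\n", W[:3].T.round(3))   # first 3 rows: state + input weights
print("recovered bias:", W[3].round(6))           # last row: bias (should be ~0)
```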

  17. Biological transportation networks: Modeling and simulation

    KAUST Repository

    Albi, Giacomo

    2015-09-15

    We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation and angiogenesis) and ion transportation networks (e.g., neural networks) is explained in detail and basic analytical features like the gradient flow structure of the fluid transportation network model and the impact of the model parameters on the geometry and topology of network formation are analyzed. We also present a numerical finite-element based discretization scheme and discuss sample cases of network formation simulations.

  18. Program Helps Simulate Neural Networks

    Science.gov (United States)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.

  19. Environmental pollutants monitoring network using nuclear techniques

    International Nuclear Information System (INIS)

    Cohen, D.D.

    1994-01-01

    The Australian Nuclear Science and Technology Organisation (ANSTO), in collaboration with the NSW Environment Protection Authority (EPA), Pacific Power and the Universities of NSW and Macquarie, has established a large area fine aerosol sampling network covering nearly 60,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 μm particle diameter cut off and samples for 24 hours using a stretched Teflon filter for each day. Accelerator-based Ion Beam Analysis (IBA) techniques are well suited to analyse the thousands of filter papers a year that originate from such a large scale aerosol sampling network. These techniques are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator based ion beam techniques on a 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. This paper describes the four simultaneous accelerator based IBA techniques used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. Each analysis requires only a few minutes of accelerator running time to complete. 15 refs., 9 figs

  20. Simulations of biopolymer networks under shear

    NARCIS (Netherlands)

    Huisman, Elisabeth Margaretha

    2011-01-01

    In this thesis we present a new method to simulate realistic three-dimensional networks of biopolymers under shear. These biopolymer networks are important for the structural functions of cells and tissues. We use the method to analyze these networks under shear, and consider the elastic modulus,

  1. Signal Processing and Neural Network Simulator

    Science.gov (United States)

    Tebbe, Dennis L.; Billhartz, Thomas J.; Doner, John R.; Kraft, Timothy T.

    1995-04-01

    The signal processing and neural network simulator (SPANNS) is a digital signal processing simulator with the capability to invoke neural networks into signal processing chains. This is a generic tool which will greatly facilitate the design and simulation of systems with embedded neural networks. The SPANNS is based on the Signal Processing WorkSystem (SPW), a commercial-off-the-shelf signal processing simulator. SPW provides a block diagram approach to constructing signal processing simulations. Neural network paradigms implemented in the SPANNS include Backpropagation, Kohonen Feature Map, Outstar, Fully Recurrent, Adaptive Resonance Theory 1, 2, & 3, and Brain State in a Box. The SPANNS was developed by integrating SAIC's Industrial Strength Neural Networks (ISNN) Software into SPW.

  2. Congestion estimation technique in the optical network unit registration process.

    Science.gov (United States)

    Kim, Geunyong; Yoo, Hark; Lee, Dongsoo; Kim, Youngsun; Lim, Hyuk

    2016-07-01

    We present a congestion estimation technique (CET) to estimate the optical network unit (ONU) registration success ratio for the ONU registration process in passive optical networks. An optical line terminal (OLT) estimates the number of collided ONUs via the proposed scheme during the serial number state. The OLT can obtain congestion level among ONUs to be registered such that this information may be exploited to change the size of a quiet window to decrease the collision probability. We verified the efficiency of the proposed method through simulation and experimental results.
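
    A hedged sketch of one possible congestion-estimation scheme in this spirit (not necessarily the authors' estimator): ONUs reply in random slots of the quiet window, and the OLT inverts the expected number of silent slots to estimate how many ONUs are contending, widening the window when the estimate indicates congestion. The window size, ONU count and decision rule below are illustrative assumptions.

```python
import math
import random

def simulate_registration(num_onus, window_slots, rng):
    """Each unregistered ONU replies in one random slot of the quiet window;
    slots with exactly one reply succeed, slots with more replies collide."""
    slots = [0] * window_slots
    for _ in range(num_onus):
        slots[rng.randrange(window_slots)] += 1
    successes = sum(1 for s in slots if s == 1)
    collided = sum(1 for s in slots if s > 1)
    empty = window_slots - successes - collided
    return successes, collided, empty

def estimate_contenders(empty_slots, window_slots):
    """Invert E[empty slots] = W * (1 - 1/W)**n, which decreases monotonically
    with n, to estimate the number of contending ONUs from the silent slots."""
    empty_slots = max(empty_slots, 1)             # avoid log(0) when no slot is silent
    return round(math.log(empty_slots / window_slots) /
                 math.log(1.0 - 1.0 / window_slots))

if __name__ == "__main__":
    rng = random.Random(7)
    window, actual = 32, 60                       # quiet-window slots, contending ONUs
    succ, coll, empty = simulate_registration(actual, window, rng)
    est = estimate_contenders(empty, window)
    print(f"observed {succ} successes, {coll} collisions -> estimate ~{est} ONUs")
    if est > window:                              # congestion: widen the quiet window
        print("congestion detected, doubling quiet window to", 2 * window)
```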

  3. Splitting Strategy for Simulating Genetic Regulatory Networks

    Directory of Open Access Journals (Sweden)

    Xiong You

    2014-01-01

    Full Text Available The splitting approach is developed for the numerical simulation of genetic regulatory networks with a stable steady-state structure. The numerical results of the simulation of a one-gene network, a two-gene network, and a p53-mdm2 network show that the new splitting methods constructed in this paper are remarkably more effective and more suitable for long-term computation with large steps than the traditional general-purpose Runge-Kutta methods. The new methods have no restriction on the choice of stepsize due to their infinitely large stability regions.
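
    A minimal sketch of first-order (Lie) operator splitting on a toy one-gene model, in the spirit of the abstract: the nonlinear production term is advanced with an explicit step and the linear decay term with its exact (unconditionally stable) solution, then compared against a finely resolved reference solve. The model, coefficients and step sizes are illustrative and not taken from the paper.

```python
import math

# Toy one-gene model: dx/dt = a / (1 + x**2) - b * x
a, b = 4.0, 2.0
production = lambda x: a / (1.0 + x * x)          # nonlinear (Hill-type) term

def lie_splitting(x0, h, steps):
    """First-order (Lie) splitting: advance the production sub-problem with an
    explicit Euler step, then solve the linear decay sub-problem exactly, which
    keeps the decay part stable for any step size."""
    x = x0
    for _ in range(steps):
        x = x + h * production(x)                 # sub-step 1: production only
        x = x * math.exp(-b * h)                  # sub-step 2: exact decay
    return x

def reference(x0, h, steps, refine=200):
    """Finely resolved explicit Euler solve of the full right-hand side."""
    x = x0
    for _ in range(steps * refine):
        x = x + (h / refine) * (production(x) - b * x)
    return x

if __name__ == "__main__":
    for h in (0.5, 0.1, 0.02):
        steps = int(round(10.0 / h))              # integrate to t = 10
        print(f"h={h:<5} splitting={lie_splitting(0.2, h, steps):.5f} "
              f"reference={reference(0.2, h, steps):.5f}")
```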

  4. Optical supervised filtering technique based on Hopfield neural network

    Science.gov (United States)

    Bal, Abdullah

    2004-11-01

    The Hopfield neural network is commonly preferred for optimization problems. In image segmentation, conventional Hopfield neural networks (HNN) are formulated as a cost-function-minimization problem to perform gray level thresholding on the image histogram or the pixels' gray levels arranged in a one-dimensional array [R. Sammouda, N. Niki, H. Nishitani, Pattern Rec. 30 (1997) 921-927; K.S. Cheng, J.S. Lin, C.W. Mao, IEEE Trans. Med. Imag. 15 (1996) 560-567; C. Chang, P. Chung, Image and Vision comp. 19 (2001) 669-678]. In this paper, a new high speed supervised filtering technique is proposed for image feature extraction and enhancement problems by modifying the conventional HNN. The essential improvement in this technique is the use of a 2D convolution operation instead of weight-matrix multiplication. Thereby, a new neural-network-based filtering technique has been obtained that requires just a 3 × 3 filter mask matrix instead of a large weight coefficient matrix. Optical implementation of the proposed filtering technique is executed easily using the joint transform correlator. The requirement of non-negative data for optical implementation is met by a bias technique that converts the bipolar data to non-negative data. Simulation results of the proposed optical supervised filtering technique are reported for various feature extraction problems such as edge detection, corner detection, horizontal and vertical line extraction, and fingerprint enhancement.
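
    The key point, replacing a full weight-matrix multiplication by a 3 × 3 convolution mask, can be illustrated with a generic example (the mask below is a standard Laplacian-style edge detector, not the trained weights from the paper), assuming NumPy and SciPy are available; the final bias shift mirrors the non-negativity step mentioned for the optical implementation.

```python
import numpy as np
from scipy.signal import convolve2d

# Small synthetic image: a bright square on a dark background.
image = np.zeros((32, 32))
image[8:24, 8:24] = 1.0

# A 3x3 mask stands in for the trained weights: instead of a full weight
# matrix, each output pixel only sees its 3x3 neighbourhood.
edge_mask = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)  # Laplacian-style edge detector

edges = convolve2d(image, edge_mask, mode="same", boundary="symm")

# Bias step for the optical implementation: shift the bipolar result so that
# all values are non-negative before correlation/display.
edges_nonneg = edges - edges.min()

print("response range before bias:", edges.min(), edges.max())
print("response range after  bias:", edges_nonneg.min(), edges_nonneg.max())
```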

  5. Modified network simulation model with token method of bus access

    Directory of Open Access Journals (Sweden)

    L.V. Stribulevich

    2013-08-01

    Full Text Available Purpose. To study the characteristics of a local network with the token method of bus access, a modified simulation model of the network was developed. Methodology. The network characteristics are determined on the developed simulation model, which is based on the state diagram of a network station with a priority-processing mechanism, both in the steady state and during the control procedures: initiation of the logical ring, and the entrance of a station to and its exit from the logical ring. Findings. A simulation model was developed from which one can obtain the dependencies of the maximum queueing time of a request for different access classes, as well as of the reaction time and usable bandwidth, on the data rate, the number of network stations, the request generation rate, the number of frames transmitted per token holding time, and the frame length. Originality. A network simulation technique was proposed that reflects the network operation in the steady state and during the control procedures, together with the priority ranking and handling mechanism. Practical value. The developed simulation model can be used to determine network characteristics in real-time systems in railway transport.
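
    For illustration, a highly simplified token-passing model is sketched below (it is not the state-diagram model of the paper): the token circulates around a logical ring, each holder transmits a bounded number of queued frames, and the mean queueing delay is reported as a function of the arrival rate. The station count, quotas and overheads are arbitrary assumptions.

```python
import random
from collections import deque

def simulate_token_bus(num_stations, token_hold_frames, arrival_prob,
                       frame_time=1.0, sim_rounds=100000, seed=3):
    """Minimal token-passing model: the token visits stations in a logical
    ring, the holder transmits up to 'token_hold_frames' queued frames, then
    passes the token on.  Returns the mean queueing delay of sent frames."""
    rng = random.Random(seed)
    queues = [deque() for _ in range(num_stations)]
    t, holder, delays = 0.0, 0, []
    for _ in range(sim_rounds):
        for q in queues:                              # new frame arrivals
            if rng.random() < arrival_prob:
                q.append(t)
        for _ in range(token_hold_frames):            # holder sends its quota
            if not queues[holder]:
                break
            delays.append(t - queues[holder].popleft())
            t += frame_time
        t += 0.1 * frame_time                         # token-passing overhead
        holder = (holder + 1) % num_stations
    return sum(delays) / len(delays) if delays else float("nan")

for rate in (0.01, 0.05, 0.09):
    print(f"arrival prob {rate}: mean wait = {simulate_token_bus(10, 2, rate):.1f}")
```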

  6. Identification of b-jets with a low pΤ muon using ATLAS Tile Calorimeter simulation data and artificial neural networks technique

    International Nuclear Information System (INIS)

    Astvatsaturov, A.; Budagov, Yu.; Shigaev, V.; Nessi, M.; Pantea, D.

    1996-01-01

    The possibility to enhance the capability of the ATLAS Tile Calorimeter to identify low p_T muons (p_T of a few GeV/c) in jets with p_T = 20 and 40 GeV/c in the central region is studied; the separation of b-jets from g-jets is 4-10 times higher in the NND case compared to LTD. The results obtained are based on 2000 jets simulated with the use of ATLAS simulation programs. 8 refs., 13 figs., 2 tabs

  7. Network Modeling and Simulation A Practical Perspective

    CERN Document Server

    Guizani, Mohsen; Khan, Bilal

    2010-01-01

    Network Modeling and Simulation is a practical guide to using modeling and simulation to solve real-life problems. The authors give a comprehensive exposition of the core concepts in modeling and simulation, and then systematically address the many practical considerations faced by developers in modeling complex large-scale systems. The authors provide examples from computer and telecommunication networks and use these to illustrate the process of mapping generic simulation concepts to domain-specific problems in different industries and disciplines. Key features: Provides the tools and strate

  8. Interfacing Network Simulations and Empirical Data

    Science.gov (United States)

    2009-05-01

    contraceptive innovations in the Cameroon. He found that real-world adoption rates did not follow simulation models when the network relationships were...Analysis of the Coevolution of Adolescents ’ Friendship Networks, Taste in Music, and Alcohol Consumption. Methodology, 2: 48-56. Tichy, N.M., Tushman

  9. Improving a Computer Networks Course Using the Partov Simulation Engine

    Science.gov (United States)

    Momeni, B.; Kharrazi, M.

    2012-01-01

    Computer networks courses are hard to teach as there are many details in the protocols and techniques involved that are difficult to grasp. Employing programming assignments as part of the course helps students to obtain a better understanding and gain further insight into the theoretical lectures. In this paper, the Partov simulation engine and…

  10. Stochastic Simulation of Biomolecular Reaction Networks Using the Biomolecular Network Simulator Software

    National Research Council Canada - National Science Library

    Frazier, John; Chusak, Yaroslav; Foy, Brent

    2008-01-01

    .... The software uses either exact or approximate stochastic simulation algorithms for generating Monte Carlo trajectories that describe the time evolution of the behavior of biomolecular reaction networks...

  11. A STUDY ON NETWORK SECURITY TECHNIQUES

    OpenAIRE

    Dr.T.Hemalatha; Dr.G.Rashita Banu; Dr.Murtaza Ali

    2016-01-01

    The internet plays a vital role in our day-to-day life. Data security in web applications has become very crucial. The usage of the internet has grown more and more in recent years. Through the internet, information can be shared through many social networks like Facebook, Twitter, LinkedIn, blogs, etc. There is a chance of the data being hacked while it is shared from one party to another. To prevent the data from being hacked there are many techniques such as Digital Signature, Cryptography, Digital watermarking, Data Sanit...

  12. Network simulations of optical illusions

    Science.gov (United States)

    Shinbrot, Troy; Lazo, Miguel Vivar; Siu, Theo

    We examine a dynamical network model of visual processing that reproduces several aspects of a well-known optical illusion, including subtle dependencies on curvature and scale. The model uses a genetic algorithm to construct the percept of an image, and we show that this percept evolves dynamically so as to produce the illusions reported. We find that the perceived illusions are hardwired into the model architecture and we propose that this approach may serve as an archetype to distinguish behaviors that are due to nature (i.e. a fixed network architecture) from those subject to nurture (that can be plastically altered through learning).

  13. Hierarchical Network Design Using Simulated Annealing

    DEFF Research Database (Denmark)

    Thomadsen, Tommy; Clausen, Jens

    2002-01-01

    networks are described and a mathematical model is proposed for a two-level version of the hierarchical network problem. The problem is to determine which edges should connect nodes, and how demand is routed in the network. The problem is solved heuristically using simulated annealing, which as a sub-algorithm uses a construction algorithm to determine edges and route the demand. Performance for different versions of the algorithm is reported in terms of runtime and quality of the solutions. The algorithm is able to find solutions of reasonable quality in approximately 1 hour for networks with 100 nodes.

  14. Power Minimization techniques for Networked Data Centers

    International Nuclear Information System (INIS)

    Low, Steven; Tang, Kevin

    2011-01-01

    Our objective is to develop a mathematical model to optimize energy consumption at multiple levels in networked data centers, and develop abstract algorithms to optimize not only individual servers, but also coordinate the energy consumption of clusters of servers within a data center and across geographically distributed data centers to minimize the overall energy cost and consumption of brown energy of an enterprise. In this project, we have formulated a variety of optimization models, some stochastic others deterministic, and have obtained a variety of qualitative results on the structural properties, robustness, and scalability of the optimal policies. We have also systematically derived from these models decentralized algorithms to optimize energy efficiency, analyzed their optimality and stability properties. Finally, we have conducted preliminary numerical simulations to illustrate the behavior of these algorithms. We draw the following conclusion. First, there is a substantial opportunity to minimize both the amount and the cost of electricity consumption in a network of datacenters, by exploiting the fact that traffic load, electricity cost, and availability of renewable generation fluctuate over time and across geographical locations. Judiciously matching these stochastic processes can optimize the tradeoff between brown energy consumption, electricity cost, and response time. Second, given the stochastic nature of these three processes, real-time dynamic feedback should form the core of any optimization strategy. The key is to develop decentralized algorithms that can be implemented at different parts of the network as simple, local algorithms that coordinate through asynchronous message passing.

  15. Network Simulation of Technical Architecture

    National Research Council Canada - National Science Library

    Cave, William

    1998-01-01

    ..., and development of the Army Battle Command System (ABCS). PSI delivered a hierarchical iconic modeling facility that can be used to structure and restructure both models and scenarios, interactively, while simulations are running...

  16. Fast simulation techniques for switching converters

    Science.gov (United States)

    King, Roger J.

    1987-01-01

    Techniques for simulating a switching converter are examined. The state equations for the equivalent circuits, which represent the switching converter, are presented and explained. The uses of the Newton-Raphson iteration, low ripple approximation, half-cycle symmetry, and discrete time equations to compute the interval durations are described. An example is presented in which these methods are illustrated by applying them to a parallel-loaded resonant inverter with three equivalent circuits for its continuous mode of operation.
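
    One of the listed ingredients, using Newton-Raphson iteration to compute an interval duration, can be sketched for a first-order equivalent circuit whose state decays exponentially toward its asymptote; the example values (inductor current, time constant, threshold) are illustrative assumptions, not taken from the paper.

```python
import math

def interval_duration(x0, x_inf, tau, target, t_guess, tol=1e-10, max_iter=50):
    """Newton-Raphson solve for the switching instant t* at which the state of
    a first-order equivalent circuit, x(t) = x_inf + (x0 - x_inf)*exp(-t/tau),
    reaches 'target' (e.g. an inductor current falling to zero)."""
    t = t_guess
    for _ in range(max_iter):
        e = math.exp(-t / tau)
        f = x_inf + (x0 - x_inf) * e - target         # residual
        df = -(x0 - x_inf) * e / tau                  # analytic derivative
        step = f / df
        t -= step
        if abs(step) < tol:
            return t
    raise RuntimeError("Newton-Raphson did not converge")

# Example: inductor current starts at 2 A, decays toward -1 A with tau = 50 us;
# the interval ends when the current crosses 0 A (diode turn-off).
t_off = interval_duration(x0=2.0, x_inf=-1.0, tau=50e-6, target=0.0, t_guess=10e-6)
print(f"interval length: {t_off * 1e6:.2f} us "
      f"(closed form: {50e-6 * math.log(3.0) * 1e6:.2f} us)")
```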

  17. Reprocessing process simulation network; PRONET

    International Nuclear Information System (INIS)

    Mitsui, T.; Takada, H.; Kamishima, N.; Tsukamoto, T.; Harada, N.; Fujita, N.; Gonda, K.

    1991-01-01

    The effectiveness of simulation technology and its wide application to nuclear fuel reprocessing plants has been recognized recently. The principal aim of applying simulation is to predict the process behavior accurately based on the quantitative relations among substances in physical and chemical phenomena. Mitsubishi Heavy Industries Ltd. has engaged positively in the development and the application study of this technology. All the software products of its recent activities were summarized in the integrated form named 'PRONET'. The PRONET is classified into two independent software groups from the viewpoint of computer system. One is off-line Process Simulation Group, and the other is Dynamic Real-time Simulator Group. The former is called 'PRONET System', and the latter is called 'PRONET Simulator'. These have several subsystems with the prefix 'MR' meaning Mitsubishi Reprocessing Plant. Each MR subsystem is explained in this report. The technical background, the objective of the PRONET, the system and the function of the PRONET, and the future application to an on-line real-time simulator and the development of MR EXPERT are described. (K.I.)

  18. Power Aware Simulation Framework for Wireless Sensor Networks and Nodes

    Directory of Open Access Journals (Sweden)

    Daniel Weber

    2008-07-01

    Full Text Available The constrained resources of sensor nodes limit analytical techniques, and cost-time factors limit test beds to study wireless sensor networks (WSNs). Consequently, simulation becomes an essential tool to evaluate such systems. We present the power aware wireless sensors (PAWiS) simulation framework that supports design and simulation of wireless sensor networks and nodes. The framework emphasizes power consumption capturing and hence the identification of inefficiencies in various hardware and software modules of the systems. These modules include all layers of the communication system, the targeted class of application itself, the power supply and energy management, the central processing unit (CPU), and the sensor-actuator interface. The modular design makes it possible to simulate heterogeneous systems. PAWiS is an OMNeT++ based discrete event simulator written in C++. It captures the node internals (modules) as well as the node surroundings (network, environment) and provides specific features critical to WSNs like capturing power consumption at various levels of granularity, support for mobility, and environmental dynamics as well as the simulation of timing effects. A module library with standardized interfaces and a power analysis tool have been developed to support the design and analysis of simulation models. The performance of the PAWiS simulator is comparable with other simulation environments.

  19. Visualization needs and techniques for astrophysical simulations

    International Nuclear Information System (INIS)

    Kapferer, W; Riser, T

    2008-01-01

    Numerical simulations have evolved continuously into an important field in astrophysics, on an equal footing with theory and observation. Due to the enormous developments in computer science, in both hardware and software architecture, state-of-the-art simulations produce huge amounts of raw data of increasing complexity. In this paper, some problems in the field of visualization in numerical astrophysics are presented, together with possible solutions. Commonly used visualization packages, along with a newly developed approach to real-time visualization that incorporates shader programming to uncover the computational power of modern graphics cards, are presented. With these techniques at hand, real-time visualizations help scientists to understand the coherences in the results of their numerical simulations. Furthermore, a fundamental problem in data analysis, i.e. coverage of metadata on how a visualization was created, is highlighted.

  20. Meeting the memory challenges of brain-scale network simulation

    Directory of Open Access Journals (Sweden)

    Susanne eKunkel

    2012-01-01

    Full Text Available The development of high-performance simulation software is crucial for studying the brain connectome. Using connectome data to generate neurocomputational models requires software capable of coping with models on a variety of scales: from the microscale, investigating plasticity and dynamics of circuits in local networks, to the macroscale, investigating the interactions between distinct brain regions. Prior to any serious dynamical investigation, the first task of network simulations is to check the consistency of data integrated in the connectome and constrain ranges for yet unknown parameters. Thanks to distributed computing techniques, it is possible today to routinely simulate local cortical networks of around 10^5 neurons with up to 10^9 synapses on clusters and multi-processor shared-memory machines. However, brain-scale networks are one or two orders of magnitude larger than such local networks, in terms of numbers of neurons and synapses as well as in terms of computational load. Such networks have been studied in individual studies, but the underlying simulation technologies have neither been described in sufficient detail to be reproducible nor made publicly available. Here, we discover that as the network model sizes approach the regime of meso- and macroscale simulations, memory consumption on individual compute nodes becomes a critical bottleneck. This is especially relevant on modern supercomputers such as the Bluegene/P architecture where the available working memory per CPU core is rather limited. We develop a simple linear model to analyze the memory consumption of the constituent components of a neuronal simulator as a function of network size and the number of cores used. This approach has multiple benefits. The model enables identification of key contributing components to memory saturation and prediction of the effects of potential improvements to code before any implementation takes place.
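
    The abstract's linear memory model lends itself to a back-of-the-envelope sketch. The Python snippet below estimates per-core memory as a fixed overhead plus terms proportional to the locally stored neurons and synapses; the coefficients are placeholders chosen for illustration, not values from the paper.

```python
# Illustrative sketch of a linear per-core memory model in the spirit of the
# abstract; the coefficients below are placeholders, not values from the paper.

def memory_per_core_gb(n_neurons, n_synapses, n_cores,
                       base_gb=0.05,           # assumed fixed simulator overhead per core
                       bytes_per_neuron=1500,  # assumed bookkeeping per local neuron
                       bytes_per_synapse=48):  # assumed storage per local synapse
    """Estimate the memory footprint on one core when neurons and synapses
    are distributed evenly over n_cores."""
    local_neurons = n_neurons / n_cores
    local_synapses = n_synapses / n_cores
    variable = (local_neurons * bytes_per_neuron +
                local_synapses * bytes_per_synapse) / 1e9
    return base_gb + variable

# A local cortical network (1e5 neurons, 1e9 synapses) vs. a brain-scale one.
for n, s in [(1e5, 1e9), (1e7, 1e11)]:
    for cores in (1024, 65536):
        print(f"N={n:.0e} S={s:.0e} cores={cores}: "
              f"{memory_per_core_gb(n, s, cores):.3f} GB/core")
```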

  1. Implementation of quantum key distribution network simulation module in the network simulator NS-3

    Science.gov (United States)

    Mehic, Miralem; Maurhart, Oliver; Rass, Stefan; Voznak, Miroslav

    2017-10-01

    As research in quantum key distribution (QKD) technology grows larger and becomes more complex, the need for highly accurate and scalable simulation technologies becomes important to assess the practical feasibility and foresee difficulties in the practical implementation of theoretical achievements. Because a QKD link requires both an optical and an Internet connection between the network nodes, deploying a complete testbed containing multiple network hosts and links to validate and verify a network algorithm or protocol would be very costly. Network simulators in these circumstances save vast amounts of money and time in accomplishing such a task. The simulation environment offers the creation of complex network topologies, a high degree of control and repeatable experiments, which in turn allows researchers to conduct experiments and confirm their results. In this paper, we describe the design of the QKD network simulation module, which was developed in the network simulator NS-3. The module supports simulation of a QKD network in an overlay mode or in a single TCP/IP mode. Therefore, it can be used to simulate other network technologies regardless of QKD.

  2. Coarse-grain bandwidth estimation techniques for large-scale network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, E.

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.

  3. Coarse-Grain Bandwidth Estimation Techniques for Large-Scale Space Network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, Esther

    2013-01-01

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.
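
    A coarse-grain sizing of this kind can be illustrated with a few lines of Python: each data type demands at least its volume divided by its latency budget in sustained rate, and the link is sized to the sum plus a margin. The traffic figures and margin below are assumptions for illustration, not SCaN numbers.

```python
# Coarse-grain sketch (assumed traffic, not the SCaN methodology): size a
# ground link by summing, for each data type, volume divided by its latency
# budget, then applying a margin factor.

traffic = [
    # (name, volume per pass in megabits, latency requirement in seconds)
    ("telemetry",      200.0,    5.0),
    ("science_bulk", 80000.0, 3600.0),
    ("voice_video",    600.0,    1.0),
]

def required_bandwidth_mbps(flows, margin=1.5):
    """Each flow must be delivered within its latency budget, so it demands at
    least volume/latency of sustained rate; flows are assumed concurrent."""
    demand = sum(volume / latency for _, volume, latency in flows)
    return margin * demand

print(f"sized WAN bandwidth: {required_bandwidth_mbps(traffic):.1f} Mbps")
```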

  4. Stochastic sensitivity analysis and Langevin simulation for neural network learning

    International Nuclear Information System (INIS)

    Koda, Masato

    1997-01-01

    A comprehensive theoretical framework is proposed for the learning of a class of gradient-type neural networks with an additive Gaussian white noise process. The study is based on stochastic sensitivity analysis techniques, and formal expressions are obtained for stochastic learning laws in terms of functional derivative sensitivity coefficients. The present method, based on Langevin simulation techniques, uses only the internal states of the network and ubiquitous noise to compute the learning information inherent in the stochastic correlation between noise signals and the performance functional. In particular, the method does not require the solution of adjoint equations of the back-propagation type. Thus, the present algorithm has the potential for efficiently learning network weights with significantly fewer computations. Application to an unfolded multi-layered network is described, and the results are compared with those obtained by using a back-propagation method
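
    The idea of extracting a learning signal from the correlation between injected noise and the performance functional, without adjoint (back-propagation) equations, can be sketched as follows. This toy Python example uses a simple weight-perturbation estimator on a linear unit; it illustrates the principle only and is not the paper's exact stochastic learning law.

```python
# Toy illustration (not the paper's learning law): estimate the gradient of the
# performance functional from the correlation between injected Gaussian noise
# and the resulting change in loss, then descend along that estimate. No
# adjoint (back-propagation) equations are used.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # toy regression data
y = X @ np.array([1.5, -2.0, 0.5])             # target weights to be recovered

W = np.zeros(3)                                # a single linear unit for clarity
sigma, lr, probes = 0.05, 0.05, 20

def loss(w):
    return np.mean((X @ w - y) ** 2)

for step in range(300):
    grad_est = np.zeros_like(W)
    for _ in range(probes):                    # average several noise probes
        xi = rng.normal(scale=sigma, size=W.shape)
        dJ = loss(W + xi) - loss(W)            # performance change under noise
        grad_est += (dJ / sigma**2) * xi       # noise-loss correlation estimate
    W -= lr * grad_est / probes
print("learned weights:", np.round(W, 2))      # should approach [1.5, -2.0, 0.5]
```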

  5. The Airport Network Flow Simulator.

    Science.gov (United States)

    1976-05-01

    The impact of investment at an individual airport is felt throughout the National Airport System by the reduction of delays at other airports in the system. A GPSS model was constructed to simulate the propagation of delays through a nine-airport sy...

  6. Reliability analysis using network simulation

    International Nuclear Information System (INIS)

    1984-01-01

    A computer code that uses a dynamic, Monte Carlo modeling approach is Q-GERT (Graphical Evaluation and Review Technique--with Queueing), and the present study has demonstrated the feasibility of using Q-GERT for modeling time-dependent, unconditionally and conditionally linked phenomena that are characterized by arbitrarily selected probability distributions

  7. Synthesis of recurrent neural networks for dynamical system simulation.

    Science.gov (United States)

    Trischler, Adam P; D'Eleuterio, Gabriele M T

    2016-08-01

    We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation, then recast it as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time. Copyright © 2016 Elsevier Ltd. All rights reserved.
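
    A stripped-down version of the idea, fitting a one-hidden-layer network to samples of a vector field and then iterating it as a recurrent simulator, is sketched below. To stay short it fits only a random-feature readout by least squares rather than training with backpropagation as the paper does, so it should be read as an illustration of the concept, not the authors' algorithm.

```python
# Simplified sketch of the idea (not the paper's exact algorithm): fit a
# one-hidden-layer network to samples of a vector field dx/dt = f(x), then
# iterate the trained network in time so it acts as a recurrent simulator.
# Here the hidden layer is random and only the readout is fitted (least
# squares) to keep the example short.
import numpy as np

rng = np.random.default_rng(1)

def f(x):                                   # true system: damped oscillator
    return np.array([x[1], -x[0] - 0.1 * x[1]])

# Sample the vector field on random states.
states = rng.uniform(-2, 2, size=(2000, 2))
targets = np.array([f(s) for s in states])

# Random hidden layer + least-squares readout (extreme-learning-machine style).
H = 200
Win = rng.normal(size=(2, H))
b = rng.normal(size=H)
Phi = np.tanh(states @ Win + b)
Wout, *_ = np.linalg.lstsq(Phi, targets, rcond=None)

def net(x):                                 # learned approximation of f
    return np.tanh(x @ Win + b) @ Wout

# Recast as a recurrent simulator: integrate dx/dt = net(x) with Euler steps.
dt, x_true, x_net = 0.01, np.array([1.0, 0.0]), np.array([1.0, 0.0])
for _ in range(1000):
    x_true = x_true + dt * f(x_true)
    x_net = x_net + dt * net(x_net)
print("true :", np.round(x_true, 3))
print("net  :", np.round(x_net, 3))
```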

  8. Tree Simulation Techniques for Integrated Safety Assessment

    International Nuclear Information System (INIS)

    Melendez Asensio, E.; Izquierdo Rocha, J.M.; Sanchez Perez, M.; Hortal Reymundo, J.; Perez Mulas, A.

    1999-01-01

    techniques are: (a) A unifying theory that should (i) establish the relationship among the different approaches and, in particular, be able to recover the standard safety assessment approach as a particular case, (ii) identify implicit assumptions in present practice, and (iii) establish a sound scientific reference for an ideal treatment in order to judge the relative importance of implicit and explicit assumptions. In addition, the theoretical developments help to identify the types of applications for which the new developments will be a necessary requirement. (b) The capability for simulation of trees. By this we mean the techniques required to simulate all branches efficiently. Historically, algorithms able to do this were already implemented in earlier pioneering work for a discrete number of branches, while stochastic branching requires Monte Carlo techniques. (c) The capability to incorporate new types of branching, particularly operator actions. This paper briefly reviews these aspects and justifies in that frame our particular development, denoted here as the Integrated Safety Assessment methodology. In this method, the dynamics of the event is followed by transient simulation in tree form, building a Setpoint or Deterministic Dynamic Event Tree (DDET). When a setpoint that should trigger the actuation of a protection is crossed, the tree is opened into branches corresponding to the different functioning states of the protection device, and each branch is followed by the engineering simulator. One of these states is the nominal state, which, in the PSAs, is associated with the success criterion of the system

  9. Reliability Analysis Techniques for Communication Networks in Nuclear Power Plant

    International Nuclear Information System (INIS)

    Lim, T. J.; Jang, S. C.; Kang, H. G.; Kim, M. C.; Eom, H. S.; Lee, H. J.

    2006-09-01

    The objective of this project is to investigate and study existing reliability analysis techniques for communication networks in order to develop reliability analysis models for nuclear power plant's safety-critical networks. It is necessary to make a comprehensive survey of current methodologies for communication network reliability. The major outputs of this study are the design characteristics of safety-critical communication networks, efficient algorithms for quantifying the reliability of communication networks, and preliminary models for assessing the reliability of safety-critical communication networks

  10. Ekofisk chalk: core measurements, stochastic reconstruction, network modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Talukdar, Saifullah

    2002-07-01

    This dissertation deals with (1) experimental measurements on petrophysical, reservoir engineering and morphological properties of Ekofisk chalk, (2) numerical simulation of core flood experiments to analyze and improve relative permeability data, (3) stochastic reconstruction of chalk samples from limited morphological information, (4) extraction of pore space parameters from the reconstructed samples, development of a network model using the pore space information, and computation of petrophysical and reservoir engineering properties from the network model, and (5) development of 2D and 3D idealized fractured reservoir models and verification of the applicability of several widely used conventional upscaling techniques in fractured reservoir simulation. Experiments have been conducted on eight Ekofisk chalk samples, and porosity, absolute permeability, formation factor, oil-water relative permeability, capillary pressure and resistivity index are measured under laboratory conditions. Mercury porosimetry data and backscatter scanning electron microscope images have also been acquired for the samples. A numerical simulation technique involving history matching of the production profiles is employed to improve the relative permeability curves and to analyze hysteresis of the Ekofisk chalk samples. The technique was found to be a powerful tool to address the uncertainties in experimental measurements. Porosity and correlation statistics obtained from backscatter scanning electron microscope images are used to reconstruct microstructures of chalk and particulate media. The reconstruction technique involves a simulated annealing algorithm, which can be constrained by an arbitrary number of morphological parameters. This flexibility of the algorithm is exploited to successfully reconstruct particulate media and chalk samples using more than one correlation function. A technique based on conditional simulated annealing has been introduced for exact reproduction of vuggy
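
    The simulated annealing reconstruction step can be illustrated with a small stand-alone example: starting from a random binary image at fixed porosity, void/solid pixel swaps are accepted or rejected so that a directional two-point correlation function approaches a target. The grid size, cooling schedule, and reference statistics below are illustrative assumptions, not the dissertation's settings.

```python
# Schematic stochastic reconstruction by simulated annealing (illustrative
# only, not the dissertation's constraints): swap pixels of a binary image at
# fixed porosity to match a target two-point correlation function.
import numpy as np

rng = np.random.default_rng(5)
L, porosity, max_lag = 40, 0.3, 8

def two_point(img, lags=max_lag):
    """Directional two-point probability S2(r) along rows (void phase)."""
    return np.array([(img & np.roll(img, r, axis=1)).mean() for r in range(1, lags + 1)])

# "Measured" statistics from a synthetic reference sample with stripes of voids.
reference = (np.arange(L)[None, :] % 8 < 3) & (rng.random((L, L)) < 0.8)
target = two_point(reference)

# Start from a random image with the same porosity and anneal pixel swaps.
img = np.zeros(L * L, dtype=bool)
img[: int(porosity * L * L)] = True
rng.shuffle(img)
img = img.reshape(L, L)

energy = np.sum((two_point(img) - target) ** 2)
T = 1e-3
for step in range(15000):
    a = tuple(rng.integers(0, L, 2))          # two random pixels; skip if the
    b = tuple(rng.integers(0, L, 2))          # phases are identical
    if img[a] == img[b]:
        continue
    img[a], img[b] = img[b], img[a]           # trial swap keeps porosity fixed
    new_energy = np.sum((two_point(img) - target) ** 2)
    if new_energy < energy or rng.random() < np.exp((energy - new_energy) / T):
        energy = new_energy                   # accept
    else:
        img[a], img[b] = img[b], img[a]       # reject: undo the swap
    T *= 0.9997                               # cooling schedule
print(f"final correlation mismatch: {energy:.2e}")
```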

  11. A neural network image reconstruction technique for electrical impedance tomography

    International Nuclear Information System (INIS)

    Adler, A.; Guardo, R.

    1994-01-01

    Reconstruction of Images in Electrical Impedance Tomography requires the solution of a nonlinear inverse problem on noisy data. This problem is typically ill-conditioned and requires either simplifying assumptions or regularization based on a priori knowledge. This paper presents a reconstruction algorithm using neural network techniques which calculates a linear approximation of the inverse problem directly from finite element simulations of the forward problem. This inverse is adapted to the geometry of the medium and the signal-to-noise ratio (SNR) used during network training. Results show good conductivity reconstruction where measurement SNR is similar to the training conditions. The advantages of this method are its conceptual simplicity and ease of implementation, and the ability to control the compromise between the noise performance and resolution of the image reconstruction

  12. An RSS based location estimation technique for cognitive relay networks

    KAUST Repository

    Qaraqe, Khalid A.

    2010-11-01

    In this paper, a received signal strength (RSS) based location estimation method is proposed for a cooperative wireless relay network where the relay is a cognitive radio. We propose a method for the considered cognitive relay network to determine the location of the source using the direct and the relayed signal at the destination. We derive the Cramer-Rao lower bound (CRLB) expressions separately for x and y coordinates of the location estimate. We analyze the effects of cognitive behaviour of the relay on the performance of the proposed method. We also discuss and quantify the reliability of the location estimate using the proposed technique if the source is not stationary. The overall performance of the proposed method is presented through simulations. ©2010 IEEE.
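
    For readers unfamiliar with RSS-based positioning, the sketch below estimates a source location from noisy RSS readings at known receiver positions using a log-distance path-loss model and a plain grid search. It illustrates the general principle only, not the paper's estimator or CRLB analysis; the path-loss parameters and geometry are assumed.

```python
# Illustrative RSS localization sketch (assumed parameters, not the paper's
# method): a log-distance path-loss model plus a grid search over candidates.
import numpy as np

rng = np.random.default_rng(2)
P0, n_pl = -40.0, 3.0                       # assumed RSS at 1 m and path-loss exponent

def rss(src, rx):
    d = np.linalg.norm(src - rx, axis=-1)
    return P0 - 10.0 * n_pl * np.log10(np.maximum(d, 0.1))

receivers = np.array([[0.0, 0.0], [50.0, 0.0], [25.0, 40.0]])  # destination + relays
source = np.array([18.0, 22.0])
measured = rss(source, receivers) + rng.normal(scale=1.0, size=len(receivers))

# Grid search: pick the candidate whose predicted RSS best matches the data.
xs = ys = np.linspace(0, 50, 251)
best, best_err = None, np.inf
for x in xs:
    for y in ys:
        err = np.sum((rss(np.array([x, y]), receivers) - measured) ** 2)
        if err < best_err:
            best, best_err = (x, y), err
print("true source:", source, " estimate:", np.round(best, 1))
```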

  13. Stochastic simulation of karst conduit networks

    Science.gov (United States)

    Pardo-Igúzquiza, Eulogio; Dowd, Peter A.; Xu, Chaoshui; Durán-Valsero, Juan José

    2012-01-01

    Karst aquifers have very high spatial heterogeneity. Essentially, they comprise a system of pipes (i.e., the network of conduits) superimposed on rock porosity and on a network of stratigraphic surfaces and fractures. This heterogeneity strongly influences the hydraulic behavior of the karst and it must be reproduced in any realistic numerical model of the karst system that is used as input to flow and transport modeling. However, the directly observed karst conduits are only a small part of the complete karst conduit system and knowledge of the complete conduit geometry and topology remains spatially limited and uncertain. Thus, there is a special interest in the stochastic simulation of networks of conduits that can be combined with fracture and rock porosity models to provide a realistic numerical model of the karst system. Furthermore, the simulated model may be of interest per se and other uses could be envisaged. The purpose of this paper is to present an efficient method for conditional and non-conditional stochastic simulation of karst conduit networks. The method comprises two stages: generation of conduit geometry and generation of topology. The approach adopted is a combination of a resampling method for generating conduit geometries from templates and a modified diffusion-limited aggregation method for generating the network topology. The authors show that the 3D karst conduit networks generated by the proposed method are statistically similar to observed karst conduit networks or to a hypothesized network model. The statistical similarity is in the sense of reproducing the tortuosity index of conduits, the fractal dimension of the network, the direction rose of directions, the Z-histogram and Ripley's K-function of the bifurcation points (which differs from a random allocation of those bifurcation points). The proposed method (1) is very flexible, (2) incorporates any experimental data (conditioning information) and (3) can easily be modified when
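
    The diffusion-limited aggregation (DLA) building block mentioned above can be demonstrated in a few dozen lines: random walkers are released around a growing cluster and stick when they touch it, producing a dendritic, conduit-like structure. This is the classic unconditioned DLA only; the paper's modified, data-conditioned variant and the template-based conduit geometries are not reproduced.

```python
# Classic (unconditioned) diffusion-limited aggregation on a 2D grid: random
# walkers released around the cluster stick when they touch it, producing a
# dendritic, conduit-like topology. Grid size and particle count are arbitrary.
import numpy as np

rng = np.random.default_rng(3)
N = 121
grid = np.zeros((N, N), dtype=bool)
c = N // 2
grid[c, c] = True                              # seed of the network
r_max = 0.0                                    # current cluster radius

moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
particles = 0
while particles < 350:
    # Launch a walker on a circle just outside the current cluster.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    i = int(round(c + (r_max + 4) * np.cos(theta)))
    j = int(round(c + (r_max + 4) * np.sin(theta)))
    while True:
        di, dj = moves[rng.integers(4)]
        i, j = i + di, j + dj
        d = np.hypot(i - c, j - c)
        if d > r_max + 12 or not (1 <= i < N - 1 and 1 <= j < N - 1):
            break                              # wandered off: discard this walker
        if grid[i + 1, j] or grid[i - 1, j] or grid[i, j + 1] or grid[i, j - 1]:
            grid[i, j] = True                  # the walker sticks to the cluster
            particles += 1
            r_max = max(r_max, d)
            break

print("occupied cells:", int(grid.sum()), "cluster radius:", round(float(r_max), 1))
```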

  14. Techniques Used in String Matching for Network Security

    OpenAIRE

    Jamuna Bhandari

    2014-01-01

    String matching, also known as pattern matching, is one of the primary concepts in network security. In this area the effectiveness and efficiency of string matching algorithms are important for applications in network security such as network intrusion detection, virus detection, signature matching and web content filtering systems. This paper presents a brief review of some of the string matching techniques used for network security.
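
    A representative technique from this family is the Aho-Corasick automaton, which scans a payload once against many signatures simultaneously. The sketch below builds the automaton and searches a toy payload; the signature strings are made up for illustration.

```python
# Multi-pattern matching with an Aho-Corasick automaton, as used by intrusion
# detection systems to scan payloads against many signatures in one pass.
# The signature set and payload below are fabricated examples.
from collections import deque

def build_automaton(patterns):
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:                      # build the trie of patterns
        state = 0
        for ch in pat:
            if ch not in goto[state]:
                goto.append({})
                fail.append(0)
                out.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        out[state].add(pat)
    queue = deque(goto[0].values())           # BFS to compute failure links
    while queue:
        state = queue.popleft()
        for ch, nxt in goto[state].items():
            queue.append(nxt)
            f = fail[state]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[nxt] = goto[f].get(ch, 0)
            out[nxt] |= out[fail[nxt]]
    return goto, fail, out

def search(text, automaton):
    goto, fail, out = automaton
    state, hits = 0, []
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for pat in out[state]:
            hits.append((i - len(pat) + 1, pat))
    return hits

signatures = ["cmd.exe", "/etc/passwd", "SELECT *"]   # toy signature set
payload = "GET /index.php?f=../../etc/passwd HTTP/1.1"
print(search(payload, build_automaton(signatures)))
```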

  15. Speeding Up Network Simulations Using Discrete Time

    OpenAIRE

    Lucas, Aaron; Armbruster, Benjamin

    2013-01-01

    We develop a way of simulating disease spread in networks faster at the cost of some accuracy. Instead of a discrete event simulation (DES) we use a discrete time simulation. This aggregates events into time periods. We prove a bound on the accuracy attained. We also discuss the choice of step size and do an analytical comparison of the computational costs. Our error bound concept comes from the theory of numerical methods for SDEs and the basic proof structure comes from the theory of numeri...
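
    The discrete-time idea can be illustrated on a toy epidemic: rather than scheduling each transmission as its own event, the simulation advances in steps of length dt and applies per-step infection and recovery probabilities of 1 - exp(-rate*dt). The network, rates, and step size below are arbitrary illustrative choices, not those analysed in the paper.

```python
# Toy discrete-time SIR simulation on a random contact network: transmissions
# within a step of length dt are aggregated into a single Bernoulli trial per
# infectious contact. All parameters and the network are illustrative.
import math
import random

random.seed(4)
n, beta, gamma, dt = 200, 0.4, 0.1, 0.5

# Random contact network (Erdos-Renyi style) as an adjacency list.
neighbors = {i: set() for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < 0.03:
            neighbors[i].add(j)
            neighbors[j].add(i)

state = {i: "S" for i in range(n)}
state[0] = "I"                                 # index case

p_inf = 1.0 - math.exp(-beta * dt)             # per-contact infection probability per step
p_rec = 1.0 - math.exp(-gamma * dt)            # recovery probability per step
t = 0.0
while any(s == "I" for s in state.values()):
    new_state = dict(state)
    for i, s in state.items():
        if s != "I":
            continue
        for j in neighbors[i]:
            if state[j] == "S" and random.random() < p_inf:
                new_state[j] = "I"
        if random.random() < p_rec:
            new_state[i] = "R"
    state, t = new_state, t + dt

recovered = sum(s == "R" for s in state.values())
print(f"epidemic over at t = {t:.1f}, final size = {recovered}")
```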

  16. Simulation of Stimuli-Responsive Polymer Networks

    Directory of Open Access Journals (Sweden)

    Thomas Gruhn

    2013-11-01

    Full Text Available The structure and material properties of polymer networks can depend sensitively on changes in the environment. There is a great deal of progress in the development of stimuli-responsive hydrogels for applications like sensors, self-repairing materials or actuators. Biocompatible, smart hydrogels can be used for applications, such as controlled drug delivery and release, or for artificial muscles. Numerical studies have been performed on different length scales and levels of details. Macroscopic theories that describe the network systems with the help of continuous fields are suited to study effects like the stimuli-induced deformation of hydrogels on large scales. In this article, we discuss various macroscopic approaches and describe, in more detail, our phase field model, which allows the calculation of the hydrogel dynamics with the help of a free energy that considers physical and chemical impacts. On a mesoscopic level, polymer systems can be modeled with the help of the self-consistent field theory, which includes the interactions, connectivity, and the entropy of the polymer chains, and does not depend on constitutive equations. We present our recent extension of the method that allows the study of the formation of nano domains in reversibly crosslinked block copolymer networks. Molecular simulations of polymer networks allow the investigation of the behavior of specific systems on a microscopic scale. As an example for microscopic modeling of stimuli sensitive polymer networks, we present our Monte Carlo simulations of a filament network system with crosslinkers.

  17. LANES - LOCAL AREA NETWORK EXTENSIBLE SIMULATOR

    Science.gov (United States)

    Gibson, J.

    1994-01-01

    The Local Area Network Extensible Simulator (LANES) provides a method for simulating the performance of high speed local area network (LAN) technology. LANES was developed as a design and analysis tool for networking on board the Space Station. The load, network, link and physical layers of a layered network architecture are all modeled. LANES models two different lower-layer protocols, the Fiber Distributed Data Interface (FDDI) and the Star*Bus. The load and network layers are included in the model as a means of introducing upper-layer processing delays associated with message transmission; they do not model any particular protocols. FDDI is an American National Standard and an International Organization for Standardization (ISO) draft standard for a 100 megabit-per-second fiber-optic token ring. Specifications for the LANES model of FDDI are taken from the Draft Proposed American National Standard FDDI Token Ring Media Access Control (MAC), document number X3T9.5/83-16 Rev. 10, February 28, 1986. This is a mature document describing the FDDI media-access-control protocol. Star*Bus, also known as the Fiber Optic Demonstration System, is a protocol for a 100 megabit-per-second fiber-optic star-topology LAN. This protocol, along with a hardware prototype, was developed by Sperry Corporation under contract to NASA Goddard Space Flight Center as a candidate LAN protocol for the Space Station. LANES can be used to analyze the performance of a networking system based on either FDDI or Star*Bus under a variety of loading conditions. Delays due to upper-layer processing can easily be nullified, allowing analysis of FDDI or Star*Bus as stand-alone protocols. LANES is a parameter-driven simulation; it provides considerable flexibility in specifying both protocol and run-time parameters. Code has been optimized for fast execution, and detailed tracing facilities have been included. LANES was written in FORTRAN 77 for implementation on a DEC VAX under VMS 4.6. It consists of two

  18. Reliability analysis using network simulation

    International Nuclear Information System (INIS)

    Engi, D.

    1985-01-01

    The models that can be used to provide estimates of the reliability of nuclear power systems operate at many different levels of sophistication. The least-sophisticated models treat failure processes that entail only time-independent phenomena (such as demand failure). More advanced models treat processes that also include time-dependent phenomena such as run failure and possibly repair. However, many of these dynamic models are deficient in some respects because they either disregard the time-dependent phenomena that cannot be expressed in closed-form analytic terms or because they treat these phenomena in quasi-static terms. The next level of modeling requires a dynamic approach that incorporates not only procedures for treating all significant time-dependent phenomena but also procedures for treating these phenomena when they are conditionally linked or characterized by arbitrarily selected probability distributions. The level of sophistication that is required is provided by a dynamic, Monte Carlo modeling approach. A computer code that uses a dynamic, Monte Carlo modeling approach is Q-GERT (Graphical Evaluation and Review Technique - with Queueing), and the present study has demonstrated the feasibility of using Q-GERT for modeling time-dependent, unconditionally and conditionally linked phenomena that are characterized by arbitrarily selected probability distributions

  19. Hybrid simulation models of production networks

    CERN Document Server

    Kouikoglou, Vassilis S

    2001-01-01

    This book is concerned with a most important area of industrial production, that of analysis and optimization of production lines and networks using discrete-event models and simulation. The book introduces a novel approach that combines analytic models and discrete-event simulation. Unlike conventional piece-by-piece simulation, this method observes a reduced number of events between which the evolution of the system is tracked analytically. Using this hybrid approach, several models are developed for the analysis of production lines and networks. The hybrid approach combines speed and accuracy for exceptional analysis of most practical situations. A number of optimization problems, involving buffer design, workforce planning, and production control, are solved through the use of hybrid models.

  20. Brian: a simulator for spiking neural networks in Python

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2008-11-01

    Full Text Available Brian is a new simulator for spiking neural networks, written in Python (http://brian.di.ens.fr. It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience.
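
    A minimal example in the spirit of this description is shown below, written with the current Brian 2 syntax (the record describes the original 2008 Brian, whose API differs slightly): a small network of leaky integrate-and-fire neurons defined directly by a differential equation string. All model parameters are illustrative.

```python
# Minimal Brian 2 example (the record describes the original Brian; the modern
# API is used here): a network of leaky integrate-and-fire neurons defined by
# an equation string, with sparse random excitatory coupling.
from brian2 import NeuronGroup, Synapses, SpikeMonitor, run, ms

eqs = "dv/dt = (1.1 - v) / (10*ms) : 1"       # model written as an equation string

group = NeuronGroup(100, eqs, threshold="v > 1", reset="v = 0", method="exact")
group.v = "rand()"                            # random initial conditions

syn = Synapses(group, group, on_pre="v += 0.02")
syn.connect(p=0.1)                            # sparse random connectivity

spikes = SpikeMonitor(group)
run(100 * ms)
print(f"{spikes.num_spikes} spikes in 100 ms")
```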

  1. Brian: a simulator for spiking neural networks in python.

    Science.gov (United States)

    Goodman, Dan; Brette, Romain

    2008-01-01

    "Brian" is a new simulator for spiking neural networks, written in Python (http://brian. di.ens.fr). It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience.

  2. Synchronization of uncertain time-varying network based on sliding mode control technique

    Science.gov (United States)

    Lü, Ling; Li, Chengren; Bai, Suyuan; Li, Gang; Rong, Tingting; Gao, Yan; Yan, Zhe

    2017-09-01

    We study the synchronization of an uncertain time-varying network based on the sliding mode control technique. The sliding mode control technique is first modified so that it can be applied to network synchronization. Further, by choosing an appropriate sliding surface, the identification law for the uncertain parameter, the adaptive law for the time-varying coupling matrix elements and the control input of the network are designed, ensuring that the uncertain time-varying network synchronizes effectively with the synchronization target. Finally, we perform some numerical simulations to demonstrate the effectiveness of the proposed results.

  3. Visualization techniques in plasma numerical simulations

    International Nuclear Information System (INIS)

    Kulhanek, P.; Smetana, M.

    2004-01-01

    Numerical simulations of plasma processes usually yield a huge amount of raw numerical data. Information about electric and magnetic fields and particle positions and velocities can typically be obtained. There are two major ways of elaborating these data. The first is plasma diagnostics: we can calculate average values, variances, correlations of variables, etc. These results may be directly comparable with experiments and serve as the typical quantitative output of plasma simulations. The second possibility is plasma visualization. The results are qualitative only, but serve as a vivid display of the phenomena in the plasma under study. Experience with visualizing electric and magnetic fields via the Line Integral Convolution (LIC) method is described in the first part of the paper. The LIC method serves for the visualization of vector fields in a two-dimensional section of the three-dimensional plasma, where the field values are known only at the points of a three-dimensional grid. The second part of the paper is devoted to visualization techniques for charged particle motion. Colour tint can be used to represent particle temperature, and the motion can be visualized by a trace fading away with the distance from the particle. In this manner impressive animations of the particle motion can be achieved. (author)

  4. Neural Network Emulation of Reionization Simulations

    Science.gov (United States)

    Schmit, Claude J.; Pritchard, Jonathan R.

    2018-05-01

    Next generation radio experiments such as LOFAR, HERA and SKA are expected to probe the Epoch of Reionization and claim a first direct detection of the cosmic 21cm signal within the next decade. One of the major challenges for these experiments will be dealing with enormous incoming data volumes. Machine learning is key to increasing our data analysis efficiency. We consider the use of an artificial neural network to emulate 21cmFAST simulations and use it in a Bayesian parameter inference study. We then compare the network predictions to a direct evaluation of the EoR simulations and analyse the dependence of the results on the training set size. We find that the use of a training set of size 100 samples can recover the error contours of a full scale MCMC analysis which evaluates the model at each step.

  5. The Network Protocol Analysis Technique in Snort

    Science.gov (United States)

    Wu, Qing-Xiu

    Network protocol analysis is a necessary technical means of capturing packets with a network sniffer for further analysis and understanding. Network sniffing intercepts packets in the binary format of the original message content. In order to obtain the information they contain, the captured packets must be restored according to the TCP/IP protocol stack specifications, recovering the protocol format and content at each protocol layer, the actual data transferred, as well as the application tier.
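
    The layer-by-layer restoration described above can be illustrated by decoding an IPv4 header and the TCP header it carries from raw bytes according to the protocol specifications. The sketch below is generic Python, not Snort code, and the sample packet is fabricated.

```python
# Illustrative layer-by-layer packet restoration (generic Python, not Snort):
# decode an IPv4 header and then the TCP header it carries from raw bytes,
# following the protocol stack specification. The sample packet is fabricated.
import struct

def parse_ipv4_tcp(packet: bytes):
    # IPv4 header: version/IHL, TOS, total length, ..., TTL, protocol, checksum, src, dst
    ver_ihl, _tos, _total_len, _ident, _flags, ttl, proto, _csum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", packet[:20])
    ihl = (ver_ihl & 0x0F) * 4                   # header length in bytes
    ip = {
        "version": ver_ihl >> 4,
        "ttl": ttl,
        "protocol": proto,
        "src": ".".join(map(str, src)),
        "dst": ".".join(map(str, dst)),
    }
    if proto != 6:                               # 6 = TCP
        return ip, None
    sport, dport, seq, ack, off_flags = struct.unpack("!HHIIH", packet[ihl:ihl + 14])
    tcp = {"sport": sport, "dport": dport, "seq": seq,
           "data_offset": (off_flags >> 12) * 4, "flags": off_flags & 0x01FF}
    return ip, tcp

# A hand-built 20-byte IP header followed by a 20-byte TCP header (no payload).
ip_hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     bytes([192, 168, 0, 2]), bytes([10, 0, 0, 1]))
tcp_hdr = struct.pack("!HHIIHHHH", 12345, 80, 1000, 0, (5 << 12) | 0x02, 64240, 0, 0)
print(parse_ipv4_tcp(ip_hdr + tcp_hdr))
```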

  6. Application perspectives of simulation techniques CFD in nuclear power plants

    International Nuclear Information System (INIS)

    Galindo G, I. F.

    2013-10-01

    The simulation of scenarios in nuclear power plants is usually carried out with system codes that are based on lumped-parameter networks. However, situations exist in some components where the flow is predominantly 3-D, such as natural circulation, mixing and stratification phenomena. Computational fluid dynamics (CFD) simulation techniques have the potential to simulate these flows numerically. The use of CFD simulations embraces many branches of engineering and continues to grow; however, its application to problems related to safety in nuclear power plants is less developed, although it is accelerating quickly and CFD is expected to play a more prominent role in future analyses. A main obstacle to achieving general acceptance of CFD is that the simulations require very complete validation studies, which are sometimes not available. In this article a general overview of the state of application of CFD methods in nuclear power plants is presented, together with the problems associated with their routine application and acceptance, including the viewpoint of the regulatory authorities. Application examples in which CFD offers real benefits are reviewed, and two illustrative case studies of the application of CFD techniques are also presented. The first is the case of a water tank with a heat source in its interior, similar to the spent fuel pool of a nuclear power plant; the second is the case of boron dilution in a water volume entering a nuclear reactor. We can conclude that CFD technology represents a very important opportunity to improve the understanding of phenomena with a strong 3-D component and to contribute to the reduction of uncertainty. (Author)

  7. Enabling parallel simulation of large-scale HPC network systems

    International Nuclear Information System (INIS)

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; Carns, Philip

    2016-01-01

    Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations

  8. Cognitive Heterogeneous Reconfigurable Optical Networks (CHRON): Enabling Technologies and Techniques

    DEFF Research Database (Denmark)

    Tafur Monroy, Idelfonso; Zibar, Darko; Guerrero Gonzalez, Neil

    2011-01-01

    We present the approach of cognition applied to heterogeneous optical networks developed in the framework of the EU project CHRON: Cognitive Heterogeneous Reconfigurable Optical Network. We introduce and discuss in particular the technologies and techniques that will enable a cognitive optical network to observe, act, learn and optimize its performance, taking into account its high degree of heterogeneity with respect to quality of service, transmission and switching techniques.

  9. Mesoscopic Simulations of Crosslinked Polymer Networks

    Science.gov (United States)

    Megariotis, Grigorios; Vogiatzis, Georgios G.; Schneider, Ludwig; Müller, Marcus; Theodorou, Doros N.

    2016-08-01

    A new methodology and the corresponding C++ code for mesoscopic simulations of elastomers are presented. The test system, crosslinked cis-1,4-polyisoprene, is simulated with a Brownian Dynamics/kinetic Monte Carlo algorithm as a dense liquid of soft, coarse-grained beads, each representing 5-10 Kuhn segments. From the thermodynamic point of view, the system is described by a Helmholtz free energy containing contributions from entropic springs between successive beads along a chain, slip-springs representing entanglements between beads on different chains, and non-bonded interactions. The methodology is employed for the calculation of the stress relaxation function from simulations of several microseconds at equilibrium, as well as for the prediction of stress-strain curves of crosslinked polymer networks under deformation.

  10. Techniques for Intelligence Analysis of Networks

    National Research Council Canada - National Science Library

    Cares, Jeffrey R

    2005-01-01

    ...) there are significant intelligence analysis manifestations of these properties; and (4) a more satisfying theory of Networked Competition than currently exists for NCW/NCO is emerging from this research...

  11. Criminal Network Investigation: Processes, Tools, and Techniques

    DEFF Research Database (Denmark)

    Petersen, Rasmus Rosenqvist

    important challenge for criminal network investigation, despite the massive attention it receives from research and media. Challenges such as the investigation process, the context of the investigation, human factors such as thinking and creativity, and political decisions and laws are all challenges... that could mean the success or failure of criminal network investigations. Information, process, and human factors are challenges we find to be addressable by software system support. Based on those... Criminal network investigations such as police investigations, intelligence analysis, and investigative journalism involve a range of complex knowledge management processes and tasks. Criminal network investigators collect, process, and analyze information related to a specific target to create

  12. Fracture network modeling and GoldSim simulation support

    International Nuclear Information System (INIS)

    Sugita, Kenichirou; Dershowitz, W.

    2005-01-01

    During Heisei-16, Golder Associates provided support for JNC Tokai through discrete fracture network data analysis and simulation of the Mizunami Underground Research Laboratory (MIU), participation in Task 6 of the AEspoe Task Force on Modeling of Groundwater Flow and Transport, and development of methodologies for analysis of repository site characterization strategies and safety assessment. MIU support during H-16 involved updating the H-15 FracMan discrete fracture network (DFN) models for the MIU shaft region, and developing improved simulation procedures. Updates to the conceptual model included incorporation of 'Step2' (2004) versions of the deterministic structures, and revision of background fractures to be consistent with conductive structure data from the DH-2 borehole. Golder developed improved simulation procedures for these models through the use of hybrid discrete fracture network (DFN), equivalent porous medium (EPM), and nested DFN/EPM approaches. For each of these models, procedures were documented for the entire modeling process including model implementation, MMP simulation, and shaft grouting simulation. Golder supported JNC participation in Tasks 6AB, 6D and 6E of the AEspoe Task Force on Modeling of Groundwater Flow and Transport during H-16. For Task 6AB, Golder developed a new technique to evaluate the role of grout in performance assessment time-scale transport. For Task 6D, Golder submitted a report of H-15 simulations to SKB. For Task 6E, Golder carried out safety assessment time-scale simulations at the block scale, using the Laplace Transform Galerkin method. During H-16, Golder supported JNC's Total System Performance Assessment (TSPA) strategy by developing technologies for the analysis of the use of site characterization data in safety assessment. This approach will aid in the understanding of the use of site characterization to progressively reduce site characterization uncertainty. (author)

  13. Fast, Accurate Memory Architecture Simulation Technique Using Memory Access Characteristics

    OpenAIRE

    小野, 貴継; 井上, 弘士; 村上, 和彰

    2007-01-01

    This paper proposes a fast and accurate memory architecture simulation technique. To design memory architecture, the first steps commonly involve using trace-driven simulation. However, expanding the design space makes the evaluation time increase. A fast simulation is achieved by a trace size reduction, but it reduces the simulation accuracy. Our approach can reduce the simulation time while maintaining the accuracy of the simulation results. In order to evaluate validity of proposed techniq...

  14. Accurate lithography simulation model based on convolutional neural networks

    Science.gov (United States)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, a compact resist model is commonly used; the model is established for faster calculation. To obtain an accurate compact resist model, it is necessary to specify a complicated non-linear model function. However, it is difficult to decide on an appropriate function manually because there are many options. This paper proposes a new compact resist model using CNNs (Convolutional Neural Networks), one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show that the CNN model can reduce CD prediction errors by 70% compared with the conventional model.

  15. Analysis of Time Delay Simulation in Networked Control System

    OpenAIRE

    Nyan Phyo Aung; Zaw Min Naing; Hla Myo Tun

    2016-01-01

    The paper presents a PD controller for Networked Control Systems (NCS) with delay. The major challenge in a networked control system (NCS) is the delay of data transmission throughout the communication network. A comparative performance analysis is carried out for network media with different delays. In this paper, simulation is carried out on an AC servo motor control system using a CAN bus as the communication network medium. The TrueTime toolbox of MATLAB is used for simulation to analy...

  16. Learning in innovation networks: Some simulation experiments

    Science.gov (United States)

    Gilbert, Nigel; Ahrweiler, Petra; Pyka, Andreas

    2007-05-01

    According to the organizational learning literature, the greatest competitive advantage a firm has is its ability to learn. In this paper, a framework for modeling learning competence in firms is presented to improve the understanding of managing innovation. Firms with different knowledge stocks attempt to improve their economic performance by engaging in radical or incremental innovation activities and through partnerships and networking with other firms. In trying to vary and/or to stabilize their knowledge stocks by organizational learning, they attempt to adapt to environmental requirements while the market strongly selects on the results. The simulation experiments show the impact of different learning activities, underlining the importance of innovation and learning.

  17. Mobile-ip Aeronautical Network Simulation Study

    Science.gov (United States)

    Ivancic, William D.; Tran, Diepchi T.

    2001-01-01

    NASA is interested in applying mobile Internet protocol (mobile-ip) technologies to its space and aeronautics programs. In particular, mobile-ip will play a major role in the Advanced Aeronautic Transportation Technology (AATT), the Weather Information Communication (WINCOMM), and the Small Aircraft Transportation System (SATS) aeronautics programs. This report presents the results of a simulation study of mobile-ip for an aeronautical network. The study was performed to determine the performance of the transmission control protocol (TCP) in a mobile-ip environment and to gain an understanding of how long delays, handoffs, and noisy channels affect mobile-ip performance.

  18. Primitive chain network simulations of probe rheology.

    Science.gov (United States)

    Masubuchi, Yuichi; Amamoto, Yoshifumi; Pandey, Ankita; Liu, Cheng-Yang

    2017-09-27

    Probe rheology experiments, in which the dynamics of a small amount of probe chains dissolved in immobile matrix chains is discussed, have been performed for the development of molecular theories for entangled polymer dynamics. Although probe chain dynamics in probe rheology is considered hypothetically as single chain dynamics in fixed tube-shaped confinement, it has not been fully elucidated. For instance, the end-to-end relaxation of probe chains is slower than that for monodisperse melts, unlike the conventional molecular theories. In this study, the viscoelastic and dielectric relaxations of probe chains were calculated by primitive chain network simulations. The simulations semi-quantitatively reproduced the dielectric relaxation, which reflects the effect of constraint release on the end-to-end relaxation. Fair agreement was also obtained for the viscoelastic relaxation time. However, the viscoelastic relaxation intensity was underestimated, possibly due to some flaws in the model for the inter-chain cross-correlations between probe and matrix chains.

  19. Chain networking revealed by molecular dynamics simulation

    Science.gov (United States)

    Zheng, Yexin; Tsige, Mesfin; Wang, Shi-Qing

    Based on the Kremer-Grest model for entangled polymer melts, we demonstrate how the response of a polymer glass depends critically on the chain length. After quenching two melts of very different chain lengths (350 beads per chain and 30 beads per chain) into deeply glassy states, we subject them to uniaxial extension. Our MD simulations show that the glass of long chains undergoes stable necking after yielding, whereas the system of short chains is unable to neck and breaks up after strain localization. During ductile extension of the polymer glass made of long chains, significant chain tension builds up in the load-bearing strands (LBSs). Further analysis is expected to reveal evidence of activation of the primary structure during post-yield extension. These results lend support to the recent molecular model [1] and demonstrate the role of chain networking in simulations. This work is supported, in part, by an NSF Grant (DMR-EAGER-1444859)

  20. Modeling And Simulation Of Multimedia Communication Networks

    Science.gov (United States)

    Vallee, Richard; Orozco-Barbosa, Luis; Georganas, Nicolas D.

    1989-05-01

    In this paper, we present a simulation study of a browsing system involving radiological image servers. The proposed IEEE 802.6 DQDB MAN standard is designated as the computer network to transfer radiological images from file servers to medical workstations, and to simultaneously support real time voice communications. Storage and transmission of original raster scanned images and images compressed according to pyramid data structures are considered. Different types of browsing as well as various image sizes and bit rates in the DQDB MAN are also compared. The elapsed time, measured from the time an image request is issued until the image is displayed on the monitor, is the parameter considered to evaluate the system performance. Simulation results show that image browsing can be supported by the DQDB MAN.

  1. Geophysical worldwide networks: basic concepts and techniques

    International Nuclear Information System (INIS)

    Ruzie, G.; Baubron, G.

    1997-01-01

    The detection of nuclear explosions around the globe requires the setting up of networks of sensors on a worldwide basis. Such equipment should be able to transmit data on-line, in real time or pseudo real time, to a processing centre or centres. The high level of reliability demanded for the data (generally better than 99%) also has an impact on the accuracy and precision of the sensors and the communications technology, as well as on the systems used for on-line checking. In the light of these requirements, DAM has developed a data gathering network based on the principle of VSAT duplex links, which ensures the on-line transmission of data and operational parameters towards the Processing Centre via a hub. In the other direction, the Centre can act on a number of parameters in order to correct them if necessary, or notify the local maintenance team. To optimize the reliability of the main components of this system, the detection stations as well as their associated beacons have low power consumption and can be supplied by solar panels, thus facilitating the installation of the networks. The seismic network on the French national territory is composed of 40 stations built on the principles outlined above. In order to gather data from stations established outside France, DAM is planning to use an analogue system to transmit data in on-line as well as off-line mode. (authors)

  2. Neural network stochastic simulation applied for quantifying uncertainties

    Directory of Open Access Journals (Sweden)

    N Foudil-Bey

    2016-09-01

    Full Text Available Geostatistical simulation methods are generally used to generate several realizations of physical properties in the sub-surface; these methods are based on variogram analysis and are limited to measuring the correlation between variables at two locations only. In this paper, we propose a simulation of properties based on supervised neural network training on the existing drilling data set. The major advantage is that this method does not require a preliminary geostatistical study and takes several points into account. As a result, geological information and diverse geophysical data can be combined easily. To do this, we used a neural network with a feed-forward multi-layer perceptron architecture, and the back-propagation algorithm with the conjugate gradient technique to minimize the error of the network output. The learning process can create links between different variables; this relationship can be used for the interpolation of the properties on the one hand, or to generate several possible distributions of the physical properties on the other hand, by changing each time the random values of the input neurons, which are kept constant during the learning period. This method was tested on real data to simulate multiple realizations of the density and the magnetic susceptibility in three dimensions at the mining camp of Val d'Or, Québec (Canada).

  3. Event-based simulation of networks with pulse delayed coupling

    Science.gov (United States)

    Klinshov, Vladimir; Nekorkin, Vladimir

    2017-10-01

    Pulse-mediated interactions are common in networks of different nature. Here we develop a general framework for simulation of networks with pulse delayed coupling. We introduce the discrete map governing the dynamics of such networks and describe the computation algorithm for its numerical simulation.
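
    The flavour of such event-based simulation can be conveyed with a toy model of pulse-coupled phase oscillators with delayed, excitatory pulses: between events each phase grows linearly, so only firing and pulse-arrival events need to be processed from a priority queue. The sketch does not reproduce the paper's discrete map; all parameters are illustrative.

```python
# Toy event-based simulation of pulse-coupled phase oscillators with delayed
# excitatory pulses (not the paper's discrete map). Between events every phase
# grows linearly, so only firing and delayed pulse-arrival events are processed.
import heapq

N, omega, eps, delay, t_end = 5, 1.0, 0.05, 0.3, 20.0
phase = [0.1 * i for i in range(N)]            # initial phases in [0, 1)
last_t = [0.0] * N
version = [0] * N                              # invalidates outdated firing events
events = []                                    # heap of (time, kind, oscillator, version)

def advance(i, t):
    phase[i] += omega * (t - last_t[i])        # free evolution is linear in time
    last_t[i] = t

def schedule_fire(i, t_now):
    heapq.heappush(events, (t_now + (1.0 - phase[i]) / omega, "fire", i, version[i]))

for i in range(N):
    schedule_fire(i, 0.0)

firings = []
while events and events[0][0] <= t_end:
    t, kind, i, ver = heapq.heappop(events)
    if kind == "fire" and ver != version[i]:
        continue                               # a pulse re-scheduled this firing; skip
    advance(i, t)
    if kind == "fire":
        phase[i] = 0.0                         # reset at threshold and emit pulses
        firings.append((round(t, 3), i))
        for j in range(N):
            if j != i:
                heapq.heappush(events, (t + delay, "pulse", j, 0))
    else:                                      # pulse arrival: excitatory phase jump,
        phase[i] = min(phase[i] + eps, 0.999999)   # clipped just below threshold
    version[i] += 1                            # previous firing event is now stale
    schedule_fire(i, t)

print("first firings (time, oscillator):", firings[:10])
```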

  4. Optimal assembly line balancing using simulation techniques

    African Journals Online (AJOL)

    user

    Department of Mechanical Engineering ... perspective on how the business operates, and ... Process simulation allows management ... improvement and change since it would be a costly ... The work content performed on an assembly line.

  5. Distributed cluster management techniques for unattended ground sensor networks

    Science.gov (United States)

    Essawy, Magdi A.; Stelzig, Chad A.; Bevington, James E.; Minor, Sharon

    2005-05-01

    Smart Sensor Networks are becoming important target detection and tracking tools. The challenging problems in such networks include the sensor fusion, data management and communication schemes. This work discusses techniques used to distribute sensor management and multi-target tracking responsibilities across an ad hoc, self-healing cluster of sensor nodes. Although miniaturized computing resources possess the ability to host complex tracking and data fusion algorithms, there still exist inherent bandwidth constraints on the RF channel. Therefore, special attention is placed on the reduction of node-to-node communications within the cluster by minimizing unsolicited messaging, and distributing the sensor fusion and tracking tasks onto local portions of the network. Several challenging problems are addressed in this work including track initialization and conflict resolution, track ownership handling, and communication control optimization. Emphasis is also placed on increasing the overall robustness of the sensor cluster through independent decision capabilities on all sensor nodes. Track initiation is performed using collaborative sensing within a neighborhood of sensor nodes, allowing each node to independently determine if initial track ownership should be assumed. This autonomous track initiation prevents the formation of duplicate tracks while eliminating the need for a central "management" node to assign tracking responsibilities. Track update is performed as an ownership node requests sensor reports from neighboring nodes based on track error covariance and the neighboring nodes geo-positional location. Track ownership is periodically recomputed using propagated track states to determine which sensing node provides the desired coverage characteristics. High fidelity multi-target simulation results are presented, indicating the distribution of sensor management and tracking capabilities to not only reduce communication bandwidth consumption, but to also

  6. Outlier Detection Techniques For Wireless Sensor Networks: A Survey

    NARCIS (Netherlands)

    Zhang, Y.; Meratnia, Nirvana; Havinga, Paul J.M.

    2008-01-01

    In the field of wireless sensor networks, measurements that significantly deviate from the normal pattern of sensed data are considered as outliers. The potential sources of outliers include noise and errors, events, and malicious attacks on the network. Traditional outlier detection techniques are

  7. Recognition of decays of charged tracks with neural network techniques

    International Nuclear Information System (INIS)

    Stimpfl-Abele, G.

    1991-01-01

    We developed neural-network learning techniques for the recognition of decays of charged tracks using a feed-forward network with error back-propagation. Two completely different methods are described in detail and their efficiencies for several NN architectures are compared with conventional methods. Excellent results are obtained. (orig.)

  8. Real time simulation techniques in Taiwan - Maanshan compact simulator

    International Nuclear Information System (INIS)

    Liang, K.-S.; Chuang, Y.-M.; Ko, H.-T.

    2004-01-01

    Recognizing the demand and potential market for simulators in various industries, a special project for real-time simulation technology transfer was initiated in Taiwan in 1991. In this technology transfer program, the most advanced real-time dynamic modules for nuclear power simulation were introduced. Those modules can be divided into two categories: modeling-related modules, which capture the dynamic response of each system, and computer-related modules, which provide the special real-time computing environment and the man-machine interface. The modeling-related modules consist of the thermodynamic module, the three-dimensional core neutronics module and the advanced balance-of-plant module. As planned in the project, the technology transfer team was to build a compact simulator for the Maanshan power plant before the end of the project to demonstrate the success of the technology transfer program. The compact simulator was designed to supplement the training given on the regular full scope simulator already installed at the Maanshan plant. The compact simulator focuses on providing know-why training through enhanced graphic displays. The potential users were identified as senior operators, instructors and nuclear engineers. In total, about 13 important systems are covered in the scope of the compact simulator, and multi-graphic displays on three color monitors mounted on the 10-foot compact panel help the user visualize detailed phenomena under the scenarios of interest. (author)

  9. Survey of Green Radio Communications Networks: Techniques and Recent Advances

    Directory of Open Access Journals (Sweden)

    Mohammed H. Alsharif

    2013-01-01

    Full Text Available Energy efficiency in cellular networks has received significant attention from both academia and industry because of the importance of reducing the operational expenditures and maintaining the profitability of cellular networks, in addition to making these networks “greener.” Because the base station is the primary energy consumer in the network, efforts have been made to study base station energy consumption and to find ways to improve energy efficiency. In this paper, we present a brief review of the techniques that have been used recently to improve energy efficiency, such as energy-efficient power amplifier techniques, time-domain techniques, cell switching, management of the physical layer through multiple-input multiple-output (MIMO management, heterogeneous network architectures based on Micro-Pico-Femtocells, cell zooming, and relay techniques. In addition, this paper discusses the advantages and disadvantages of each technique to contribute to a better understanding of each of the techniques and thereby offer clear insights to researchers about how to choose the best ways to reduce energy consumption in future green radio networks.

  10. The design of a network emulation and simulation laboratory

    CSIR Research Space (South Africa)

    Von Solms, S

    2015-07-01

    Full Text Available The development of the Network Emulation and Simulation Laboratory is motivated by the drive to contribute to the enhancement of the security and resilience of South Africa's critical information infrastructure. The goal of the Network Emulation...

  11. A general software reliability process simulation technique

    Science.gov (United States)

    Tausworthe, Robert C.

    1991-01-01

    The structure and rationale of the generalized software reliability process, together with the design and implementation of a computer program that simulates this process are described. Given assumed parameters of a particular project, the users of this program are able to generate simulated status timelines of work products, numbers of injected anomalies, and the progress of testing, fault isolation, repair, validation, and retest. Such timelines are useful in comparison with actual timeline data, for validating the project input parameters, and for providing data for researchers in reliability prediction modeling.

  12. Programmable multi-node quantum network design and simulation

    Science.gov (United States)

    Dasari, Venkat R.; Sadlier, Ronald J.; Prout, Ryan; Williams, Brian P.; Humble, Travis S.

    2016-05-01

    Software-defined networking offers a device-agnostic programmable framework to encode new network functions. Externally centralized control plane intelligence allows programmers to write network applications and to build functional network designs. OpenFlow is a key protocol widely adopted to build programmable networks because of its programmability, flexibility and ability to interconnect heterogeneous network devices. We simulate the functional topology of a multi-node quantum network that uses programmable network principles to manage quantum metadata for protocols such as teleportation, superdense coding, and quantum key distribution. We first show how the OpenFlow protocol can manage the quantum metadata needed to control the quantum channel. We then use numerical simulation to demonstrate robust programmability of a quantum switch via the OpenFlow network controller while executing an application of superdense coding. We describe the software framework implemented to carry out these simulations and we discuss near-term efforts to realize these applications.

  13. A Framework to Implement IoT Network Performance Modelling Techniques for Network Solution Selection

    Directory of Open Access Journals (Sweden)

    Declan T. Delaney

    2016-12-01

    Full Text Available No single network solution for Internet of Things (IoT) networks can provide the required level of Quality of Service (QoS) for all applications in all environments. This leads to an increasing number of solutions created to fit particular scenarios. Given the increasing number and complexity of solutions available, it becomes difficult for an application developer to choose the solution which is best suited for an application. This article introduces a framework which autonomously chooses the best solution for the application given the current deployed environment. The framework utilises a performance model to predict the expected performance of a particular solution in a given environment. The framework can then choose an apt solution for the application from a set of available solutions. This article presents the framework with a set of models built using data collected from simulation. The modelling technique can determine with up to 85% accuracy the solution which performs the best for a particular performance metric given a set of solutions. The article highlights the fractured and disjointed practice currently in place for examining and comparing communication solutions and aims to open a discussion on harmonising testing procedures so that different solutions can be directly compared and offers a framework to achieve this within IoT networks.
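
    A minimal sketch of the selection step described above, with invented solution names and stand-in linear performance models; in the actual framework the models are built from simulation data.

        # Hypothetical per-solution performance models: predict a QoS metric
        # (e.g. packet delivery ratio) from features of the deployed environment.
        def model_a(env):   # assumed: high peak performance, degrades with scale
            return 0.95 - 0.004 * env["nodes"] - 0.010 * env["interference"]

        def model_b(env):   # assumed: more robust but lower peak performance
            return 0.90 - 0.001 * env["nodes"] - 0.002 * env["interference"]

        MODELS = {"solution_A": model_a, "solution_B": model_b}

        def select_solution(env, metric_models):
            """Return the solution whose model predicts the best metric value."""
            predictions = {name: m(env) for name, m in metric_models.items()}
            return max(predictions, key=predictions.get), predictions

        env = {"nodes": 40, "interference": 5}   # current deployed environment
        print(select_solution(env, MODELS))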

  14. A Framework to Implement IoT Network Performance Modelling Techniques for Network Solution Selection.

    Science.gov (United States)

    Delaney, Declan T; O'Hare, Gregory M P

    2016-12-01

    No single network solution for Internet of Things (IoT) networks can provide the required level of Quality of Service (QoS) for all applications in all environments. This leads to an increasing number of solutions created to fit particular scenarios. Given the increasing number and complexity of solutions available, it becomes difficult for an application developer to choose the solution which is best suited for an application. This article introduces a framework which autonomously chooses the best solution for the application given the current deployed environment. The framework utilises a performance model to predict the expected performance of a particular solution in a given environment. The framework can then choose an apt solution for the application from a set of available solutions. This article presents the framework with a set of models built using data collected from simulation. The modelling technique can determine with up to 85% accuracy the solution which performs the best for a particular performance metric given a set of solutions. The article highlights the fractured and disjointed practice currently in place for examining and comparing communication solutions and aims to open a discussion on harmonising testing procedures so that different solutions can be directly compared and offers a framework to achieve this within IoT networks.

  15. Integrated workflows for spiking neuronal network simulations

    Directory of Open Access Journals (Sweden)

    Ján eAntolík

    2013-12-01

    Full Text Available The increasing availability of computational resources is enabling more detailed, realistic modelling in computational neuroscience, resulting in a shift towards more heterogeneous models of neuronal circuits, and employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeller's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modellers to either handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualisation into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organised configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualisation stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort. Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modelling studies by relieving the user from manual handling of the flow of metadata between the individual workflow components.

  16. Research in Network Management Techniques for Tactical Data Communications Network.

    Science.gov (United States)

    1982-09-01

  17. Comparison of radiographic technique by computer simulation

    International Nuclear Information System (INIS)

    Brochi, M.A.C.; Ghilardi Neto, T.

    1989-01-01

    A computational algorithm to compare radiographic techniques (kVp, mAs and filters) is developed, based on fixing the parameters that define the image, such as optical density and contrast. Prior to the experiment, the results were applied to a thorax radiograph. (author) [pt]

  18. Simulation of aluminium STIR casting technique

    International Nuclear Information System (INIS)

    Hafizal Yazid; Mohd Harun; Hanani Yazid; Abd Aziz Mohamed; Muhammad Rawi Muhammad Zain; Zaiton Selamat; Mohd Shariff Sattar; Muhamad Jalil; Ismail Mustapha; Razali Kasim

    2006-01-01

    In this paper, the objective is to determine the optimum impeller speed, correlated with holding time, to achieve a homogeneous reinforcement distribution for a particular set of experimental conditions. Attempts are made to simulate the flow behaviour of the liquid aluminium using the FLUENT software. Stepwise impeller speeds ranging from 50 to 300 rpm, with two impeller blade angles of 45 and 90 degrees with respect to the rotational plane, were used.

  19. Characterization of Background Traffic in Hybrid Network Simulation

    National Research Council Canada - National Science Library

    Lauwens, Ben; Scheers, Bart; Van de Capelle, Antoine

    2006-01-01

    .... Two approaches are common: discrete event simulation and fluid approximation. A discrete event simulation generates a huge amount of events for a full-blown battlefield communication network resulting in a very long runtime...

  20. BioNessie - a grid enabled biochemical networks simulation environment

    OpenAIRE

    Liu, X.; Jiang, J.; Ajayi, O.; Gu, X.; Gilbert, D.; Sinnott, R.O.

    2008-01-01

    The simulation of biochemical networks provides insight and understanding about the underlying biochemical processes and pathways used by cells and organisms. BioNessie is a biochemical network simulator which has been developed at the University of Glasgow. This paper describes the simulator and focuses in particular on how it has been extended to benefit from a wide variety of high performance compute resources across the UK through Grid technologies to support larger scale simulations.

  1. The Application of Helicopter Rotor Defect Detection Using Wavelet Analysis and Neural Network Technique

    Directory of Open Access Journals (Sweden)

    Jin-Li Sun

    2014-06-01

    Full Text Available When inspecting a helicopter rotor beam with ultrasonic testing, it is difficult to remove noise and to perform quantitative testing. This paper uses wavelet analysis to remove the noise from the ultrasonic detection signal and highlight the signal features of defects, and then draws the curve of defect size versus signal amplitude. Based on this relationship between defect size and signal amplitude, a BP neural network was built, and the corresponding estimated size of each simulated defect was obtained by repeated training. It was confirmed that the combined wavelet analysis and neural network technique meets the requirements of practical testing.
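
    The denoising step can be sketched with the PyWavelets package as below; the wavelet family, decomposition level and threshold rule are illustrative assumptions, not the settings used in the paper.

        import numpy as np
        import pywt

        rng = np.random.default_rng(1)
        t = np.linspace(0, 1, 1024)
        # Synthetic ultrasonic A-scan: a defect echo buried in noise (assumed signal).
        clean = np.exp(-((t - 0.6) / 0.01) ** 2) * np.sin(2 * np.pi * 200 * t)
        noisy = clean + 0.3 * rng.normal(size=t.size)

        # Multi-level discrete wavelet decomposition.
        coeffs = pywt.wavedec(noisy, "db4", level=5)
        # Universal threshold estimated from the finest-scale coefficients.
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(noisy.size))
        # Soft-threshold the detail coefficients, keep the approximation untouched.
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        denoised = pywt.waverec(coeffs, "db4")

        print("defect echo amplitude after denoising:", np.abs(denoised).max())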

  2. Development of nuclear power plant diagnosis technique using neural networks

    International Nuclear Information System (INIS)

    Horiguchi, Masahiro; Fukawa, Naohiro; Nishimura, Kazuo

    1991-01-01

    A nuclear power plant diagnosis technique has been developed, called transient phenomena analysis, which employs neural networks. The neural networks identify malfunctioning equipment by recognizing the pattern of the main plant parameters, making it possible to locate the cause of an abnormality when a plant is in a transient state. When some piece of equipment shows abnormal behavior, many plant parameters either directly or indirectly related to that equipment change simultaneously. When an abrupt change in a plant parameter is detected, the changes in the 49 main plant parameters are classified into three types and a characteristic change pattern consisting of 49 data points is defined. The neural networks then judge the cause of the abnormality from this pattern. This neural-network-based technique can recognize 100 patterns that are characterized by the causes of plant abnormality. (author)
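
    A toy sketch of the pattern-recognition idea: each abnormality cause is stored as a 49-element ternary change pattern (-1 falling, 0 steady, +1 rising) and an observed pattern is matched against the library. For brevity a nearest-pattern lookup stands in for the trained neural networks, and all numbers are invented.

        import numpy as np

        rng = np.random.default_rng(2)
        N_PARAMS, N_CAUSES = 49, 100

        # Hypothetical library: one ternary signature (-1, 0, +1) per abnormality cause.
        signatures = rng.integers(-1, 2, size=(N_CAUSES, N_PARAMS))

        def classify(observed, library):
            """Return the index of the stored cause whose signature agrees best."""
            scores = (library == observed).sum(axis=1)   # agreement count per cause
            return int(np.argmax(scores))

        # Observed pattern: the signature of cause 17 with a few noisy entries.
        observed = signatures[17].copy()
        observed[:3] = 0
        print("diagnosed cause index:", classify(observed, signatures))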

  3. The design and implementation of a network simulation platform

    CSIR Research Space (South Africa)

    Von Solms, S

    2013-11-01

    Full Text Available these events and their effects can enable researchers to identify these threats and find ways to counter them. In this paper we present the design of a network simulation platform which can enable researchers to study dynamic behaviour of networks, network...

  4. Accelerator and feedback control simulation using neural networks

    International Nuclear Information System (INIS)

    Nguyen, D.; Lee, M.; Sass, R.; Shoaee, H.

    1991-05-01

    Unlike present constant-model feedback systems, neural networks can adapt as the dynamics of the process change with time. Using a process model, the ''Accelerator'' network is first trained to simulate the dynamics of the beam for a given beam line. This ''Accelerator'' network is then used to train a second ''Controller'' network which performs the control function. In simulation, the networks are used to adjust corrector magnets to control the launch angle and position of the beam, keeping it on the desired trajectory when the incoming beam is perturbed. 4 refs., 3 figs

  5. Simulation and Evaluation of Ethernet Passive Optical Network

    Directory of Open Access Journals (Sweden)

    Salah A. Jaro Alabady

    2013-05-01

    Full Text Available This paper studies the simulation and evaluation of an Ethernet Passive Optical Network (EPON) system, IEEE 802.3ah based, using the OPTISM 3.6 simulation program. The simulation program is used to build a typical Ethernet passive optical network and to evaluate the network performance when the (1580, 1625) nm wavelengths are used instead of the (1310, 1490) nm wavelengths employed by the Optical Line Terminal (OLT) and Optical Network Units (ONUs) in the EPON system architecture, at different bit rates and different fiber lengths. The results showed enhanced network performance: the number of nodes (subscribers) connected to the network and the transmission distance can be increased, while the received power and the Bit Error Rate (BER) are reduced.

  6. Acceleration techniques for dependability simulation. M.S. Thesis

    Science.gov (United States)

    Barnette, James David

    1995-01-01

    As computer systems increase in complexity, the need to project system performance from the earliest design and development stages increases. We have to employ simulation for detailed dependability studies of large systems. However, as the complexity of the simulation model increases, the time required to obtain statistically significant results also increases. This paper discusses an approach that is application independent and can be readily applied to any process-based simulation model. Topics include background on classical discrete event simulation and techniques for random variate generation and statistics gathering to support simulation.
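
    One of the supporting techniques mentioned, random variate generation, can be illustrated by inverse-transform sampling of exponential inter-event times, a standard building block of process-based discrete event simulation (a generic sketch, not the thesis code).

        import math
        import random

        def exponential_variate(rate, rng=random):
            """Inverse-transform sampling: F^-1(u) = -ln(1 - u) / rate."""
            u = rng.random()
            return -math.log(1.0 - u) / rate

        random.seed(0)
        samples = [exponential_variate(rate=2.0) for _ in range(100000)]
        print("sample mean (expected 0.5):", sum(samples) / len(samples))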

  7. Network simulation of nonstationary ionic transport through liquid junctions

    International Nuclear Information System (INIS)

    Castilla, J.; Horno, J.

    1993-01-01

    Nonstationary ionic transport across liquid junctions has been studied using Network Thermodynamics. A network model for the time-dependent Nernst-Planck-Poisson system of equations is proposed. With this network model and the electrical circuit simulation program PSPICE, the concentrations, charge density and electrical potentials at short times have been simulated for the binary system NaCl/NaCl. (Author) 13 refs

  8. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator

    Directory of Open Access Journals (Sweden)

    Jan Hahne

    2017-05-01

    Full Text Available Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.

  9. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator.

    Science.gov (United States)

    Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus

    2017-01-01

    Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.

  10. Advanced network programming principles and techniques : network application programming with Java

    CERN Document Server

    Ciubotaru, Bogdan

    2013-01-01

    Answering the need for an accessible overview of the field, this text/reference presents a manageable introduction to both the theoretical and practical aspects of computer networks and network programming. Clearly structured and easy to follow, the book describes cutting-edge developments in network architectures, communication protocols, and programming techniques and models, supported by code examples for hands-on practice with creating network-based applications. Features: presents detailed coverage of network architectures; gently introduces the reader to the basic ideas underpinning comp

  11. A technique for choosing an option for SDH network upgrade

    Directory of Open Access Journals (Sweden)

    V. A. Bulanov

    2014-01-01

    Full Text Available Rapidly developing data transmission technologies make modernization of network equipment inevitable. There are various options for upgrading SDH networks, for example increasing the capacity of overloaded network sites, increasing the capacity of the entire network by replacing equipment or building a parallel network, or changing the network structure by organizing a multilevel network hierarchy. The options differ in many parameters, from the cost of the solution to the labour intensiveness of its realization, and there is no standard approach for choosing an option for network development. The article offers a technique for choosing an SDH network upgrade option based on the method of expert evaluations, using as a tool a software complex that quickly provides the quantitative characteristics of a proposed network option. The technique is as follows: 1. Form a perspective matrix of the inclination of services towards the SDH network. 2. Develop several possible options for network modernization. 3. Form the list of criteria and define indicators characterizing them in two groups: the costs of implementing the option and the arising losses, and the positive effect of introducing the option. 4. Assign weight coefficients to the criteria. 5. Have each expert assess the indicator values within each criterion for each option, and normalize the obtained values relative to the maximum value of the indicator among all options. 6. Calculate the integrated indicators for each option by criteria group. 7. Construct the Pareto set by plotting, for each option, a point whose coordinates are the two criteria-group values, and choose an option. In implementing step 2, the derivation of indicators by the software complex plays a key role. This complex should produce a structure of the network equipment, types of multiplexer sections

  12. Biological transportation networks: Modeling and simulation

    KAUST Repository

    Albi, Giacomo; Artina, Marco; Foransier, Massimo; Markowich, Peter A.

    2015-01-01

    We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation

  13. Toward Designing a Quantum Key Distribution Network Simulation Model

    OpenAIRE

    Miralem Mehic; Peppino Fazio; Miroslav Voznak; Erik Chromy

    2016-01-01

    As research in quantum key distribution network technologies grows larger and more complex, the need for highly accurate and scalable simulation technologies becomes important to assess the practical feasibility and foresee difficulties in the practical implementation of theoretical achievements. In this paper, we described the design of simplified simulation environment of the quantum key distribution network with multiple links and nodes. In such simulation environment, we analyzed several ...

  14. Design on intelligent gateway technique in home network

    Science.gov (United States)

    Hu, Zhonggong; Feng, Xiancheng

    2008-12-01

    Home (family) networks, characterized by digitization, multimedia, mobility, broadband and real-time interaction, are receiving more and more attention from the market because they can provide diverse and personalized integrated services in information, communication, entertainment, education, health care and so on. Home network product development has therefore become a focus of the related industries. In this paper, the concept of the home network and its overall reference model are introduced first. Then the core techniques and the communication standards related to the home network are presented. Key analysis is made of the functions of the home gateway, the function modules of its software, the key technologies of the client-side software architecture, and the development trend of home network audio-visual entertainment services. The present situation of home gateway products, future development trends, and application solutions for digital home services are introduced. Finally, the way in which home network products drive the digital home network industry is described: it stimulates the development of software industries such as communications, electrical appliances, computing and gaming, as well as the development of the real-estate industry.

  15. THE COMPUTATIONAL INTELLIGENCE TECHNIQUES FOR PREDICTIONS - ARTIFICIAL NEURAL NETWORKS

    OpenAIRE

    Mary Violeta Bar

    2014-01-01

    Computational intelligence techniques are used in problems which cannot be solved by traditional techniques, when there is insufficient data to develop a model of the problem or when the data contain errors. Computational intelligence, as Bezdek called it (Bezdek, 1992), aims at modeling biological intelligence. Artificial Neural Networks (ANNs) have been applied to an increasing number of real-world problems of considerable complexity. Their most important advantage is solving problems that are too c...

  16. An optimization planning technique for Suez Canal Network in Egypt

    Energy Technology Data Exchange (ETDEWEB)

    Abou El-Ela, A.A.; El-Zeftawy, A.A.; Allam, S.M.; Atta, Gasir M. [Electrical Engineering Dept., Faculty of Eng., Shebin El-Kom (Egypt)

    2010-02-15

    This paper introduces a proposed optimization technique (POT) for predicting peak load demand and planning transmission line systems. Many traditional methods have been presented for long-term load forecasting of electrical power systems, but their results are approximate. Therefore, the artificial neural network (ANN) technique for long-term peak load forecasting is modified and discussed as a modern technique in long-term load forecasting. The modified technique is applied to the Egyptian electrical network, using its historical data, to predict the electrical peak load demand up to the year 2017. This technique is compared with extrapolation of trend curves as a traditional method. The POT is also applied to obtain the optimal planning of transmission lines for the 220 kV Suez Canal Network (SCN) using the ANN technique. Minimization of the transmission network costs is considered as the objective function, while the transmission line (TL) planning constraints are satisfied. The Zafarana site on the Red Sea coast is considered an optimal site for installing large wind farm (WF) units in Egypt. The POT is therefore applied to plan both the peak load and the electrical transmission of the SCN with and without WF units, to assess the impact of WF units on the Egyptian transmission system, considering the reliability constraints which were treated as a separate model in previous techniques. The application to the SCN shows the capability and efficiency of the proposed techniques in predicting the peak load demand and obtaining the optimal planning of the SCN transmission lines up to the year 2017. (author)

  17. On Applicability of Network Coding Technique for 6LoWPAN-based Sensor Networks.

    Science.gov (United States)

    Amanowicz, Marek; Krygier, Jaroslaw

    2018-05-26

    In this paper, the applicability of the network coding technique in 6LoWPAN-based multihop sensor networks is examined. 6LoWPAN is one of the standards proposed for the Internet of Things architecture, so significant growth of traffic in such networks can be expected, which can lead to overload and a decrease in the sensor network lifetime. The authors propose an inter-session network coding mechanism that can be implemented in resource-limited sensor motes. The solution reduces the overall traffic in the network and, in consequence, decreases energy consumption. The procedures used take into account the deep header compression of native 6LoWPAN packets and the hop-by-hop changes of the header structure. The applied simplifications reduce the signaling traffic that typically occurs in network coding deployments, keeping the solution useful for wireless sensor networks with limited resources. The authors validate the proposed procedures in terms of end-to-end packet delay, packet loss ratio, traffic in the air, total energy consumption, and network lifetime. The solution has been tested in a real wireless sensor network. The results confirm the efficiency of the proposed technique, mostly in delay-tolerant sensor networks.
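
    The inter-session idea can be sketched as a relay XOR-combining packets from two crossing flows, so that one coded transmission replaces two native ones; the packet contents and bookkeeping below are simplified assumptions rather than the 6LoWPAN-specific procedures of the paper.

        def xor_bytes(a: bytes, b: bytes) -> bytes:
            """XOR two payloads, zero-padding the shorter one."""
            n = max(len(a), len(b))
            a, b = a.ljust(n, b"\x00"), b.ljust(n, b"\x00")
            return bytes(x ^ y for x, y in zip(a, b))

        # Relay R holds packet pA (A -> B) and packet pB (B -> A); it broadcasts
        # a single coded packet instead of forwarding the two packets separately.
        pA = b"temperature=21.5"
        pB = b"ack-seq=0042"
        coded = xor_bytes(pA, pB)

        # B already knows pB (it sent it), so it recovers pA; A recovers pB likewise.
        recovered_at_B = xor_bytes(coded, pB)
        print(recovered_at_B.rstrip(b"\x00"))   # b'temperature=21.5'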

  18. An Autonomous Self-Aware and Adaptive Fault Tolerant Routing Technique for Wireless Sensor Networks.

    Science.gov (United States)

    Abba, Sani; Lee, Jeong-A

    2015-08-18

    We propose an autonomous self-aware and adaptive fault-tolerant routing technique (ASAART) for wireless sensor networks. We address the limitations of self-healing routing (SHR) and self-selective routing (SSR) techniques for routing sensor data. We also examine the integration of autonomic self-aware and adaptive fault detection and resiliency techniques for route formation and route repair to provide resilience to errors and failures. We achieved this by using a combined continuous and slotted prioritized transmission back-off delay to obtain local and global network state information, as well as multiple random functions for attaining faster routing convergence and reliable route repair despite transient and permanent node failure rates and efficient adaptation to instantaneous network topology changes. The results of simulations based on a comparison of the ASAART with the SHR and SSR protocols for five different simulated scenarios in the presence of transient and permanent node failure rates exhibit a greater resiliency to errors and failure and better routing performance in terms of the number of successfully delivered network packets, end-to-end delay, delivered MAC layer packets, packet error rate, as well as efficient energy conservation in a highly congested, faulty, and scalable sensor network.

  19. An Autonomous Self-Aware and Adaptive Fault Tolerant Routing Technique for Wireless Sensor Networks

    Science.gov (United States)

    Abba, Sani; Lee, Jeong-A

    2015-01-01

    We propose an autonomous self-aware and adaptive fault-tolerant routing technique (ASAART) for wireless sensor networks. We address the limitations of self-healing routing (SHR) and self-selective routing (SSR) techniques for routing sensor data. We also examine the integration of autonomic self-aware and adaptive fault detection and resiliency techniques for route formation and route repair to provide resilience to errors and failures. We achieved this by using a combined continuous and slotted prioritized transmission back-off delay to obtain local and global network state information, as well as multiple random functions for attaining faster routing convergence and reliable route repair despite transient and permanent node failure rates and efficient adaptation to instantaneous network topology changes. The results of simulations based on a comparison of the ASAART with the SHR and SSR protocols for five different simulated scenarios in the presence of transient and permanent node failure rates exhibit a greater resiliency to errors and failure and better routing performance in terms of the number of successfully delivered network packets, end-to-end delay, delivered MAC layer packets, packet error rate, as well as efficient energy conservation in a highly congested, faulty, and scalable sensor network. PMID:26295236

  20. Recent developments in numerical simulation techniques of thermal recovery processes

    Energy Technology Data Exchange (ETDEWEB)

    Tamim, M. [Bangladesh University of Engineering and Technology, Bangladesh (Bangladesh); Abou-Kassem, J.H. [Chemical and Petroleum Engineering Department, UAE University, Al-Ain 17555 (United Arab Emirates); Farouq Ali, S.M. [University of Alberta, Alberta (Canada)

    2000-05-01

    Numerical simulation of thermal processes (steam flooding, steam stimulation, SAGD, in-situ combustion, electrical heating, etc.) is an integral part of a thermal project design. The general tendency in the last 10 years has been to use commercial simulators. During the last decade, only a few new models have been reported in the literature. More work has been done to modify and refine solutions to existing problems to improve the efficiency of simulators. The paper discusses some of the recent developments in simulation techniques of thermal processes such as grid refinement, grid orientation, effect of temperature on relative permeability, mathematical models, and solution methods. The various aspects of simulation discussed here promote better understanding of the problems encountered in the simulation of thermal processes and will be of value to both simulator users and developers.

  1. Exploring machine-learning-based control plane intrusion detection techniques in software defined optical networks

    Science.gov (United States)

    Zhang, Huibin; Wang, Yuqiao; Chen, Haoran; Zhao, Yongli; Zhang, Jie

    2017-12-01

    In software defined optical networks (SDON), the centralized control plane may encounter numerous intrusion threats which compromise the security level of provisioned services. In this paper, the issue of control plane security is studied and two machine-learning-based control plane intrusion detection techniques are proposed for SDON, with properly selected features such as bandwidth and route length. We validate the feasibility and efficiency of the proposed techniques by simulations. Results show that an accuracy of 83% for intrusion detection can be achieved with the proposed machine-learning-based control plane intrusion detection techniques.
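
    The flavour of such a detector can be sketched with scikit-learn on synthetic connection-request features; the feature ranges, labels and the random-forest choice are purely illustrative assumptions, not the classifiers or data of the paper.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(6)
        n = 2000

        # Synthetic control-plane requests: [bandwidth (Gb/s), route length (hops), holding time (s)].
        normal = np.column_stack([rng.uniform(1, 40, n), rng.integers(1, 6, n), rng.exponential(300, n)])
        attack = np.column_stack([rng.uniform(60, 100, n), rng.integers(5, 12, n), rng.exponential(20, n)])
        X = np.vstack([normal, attack])
        y = np.concatenate([np.zeros(n), np.ones(n)])   # 1 = intrusion

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        print("detection accuracy on held-out requests:", clf.score(X_te, y_te))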

  2. Comparing Generative Adversarial Network Techniques for Image Creation and Modification

    NARCIS (Netherlands)

    Pieters, Mathijs; Wiering, Marco

    2018-01-01

    Generative adversarial networks (GANs) have demonstrated to be successful at generating realistic real-world images. In this paper we compare various GAN techniques, both supervised and unsupervised. The effects on training stability of different objective functions are compared. We add an encoder

  3. MIMO Techniques for Jamming Threat Suppression in Vehicular Networks

    Directory of Open Access Journals (Sweden)

    Dimitrios Kosmanos

    2016-01-01

    Full Text Available Vehicular ad hoc networks have emerged as a promising field of research and development, since they will be able to accommodate a variety of applications, ranging from infotainment to traffic management and road safety. A specific security-related concern that vehicular ad hoc networks face is how to keep communication alive in the presence of radio frequency jamming, especially during emergency situations. Multiple Input Multiple Output techniques are proven to be able to improve some crucial parameters of vehicular communications such as communication range and throughput. In this article, we investigate how Multiple Input Multiple Output techniques can be used in vehicular ad hoc networks as active defense mechanisms in order to avoid jamming threats. For this reason, a variation of spatial multiplexing is proposed, namely, vSP4, which achieves not only high throughput but also a stable diversity gain upon the interference of a malicious jammer.

  4. BioNSi: A Discrete Biological Network Simulator Tool.

    Science.gov (United States)

    Rubinstein, Amir; Bracha, Noga; Rudner, Liat; Zucker, Noga; Sloin, Hadas E; Chor, Benny

    2016-08-05

    Modeling and simulation of biological networks is an effective and widely used research methodology. The Biological Network Simulator (BioNSi) is a tool for modeling biological networks and simulating their discrete-time dynamics, implemented as a Cytoscape App. BioNSi includes a visual representation of the network that enables researchers to construct, set the parameters, and observe network behavior under various conditions. To construct a network instance in BioNSi, only partial, qualitative biological data suffices. The tool is aimed for use by experimental biologists and requires no prior computational or mathematical expertise. BioNSi is freely available at http://bionsi.wix.com/bionsi , where a complete user guide and a step-by-step manual can also be found.

  5. High Dimensional Modulation and MIMO Techniques for Access Networks

    DEFF Research Database (Denmark)

    Binti Othman, Maisara

    Exploration of advanced modulation formats and multiplexing techniques for next generation optical access networks are of interest as promising solutions for delivering multiple services to end-users. This thesis addresses this from two different angles: high dimensionality carrierless...... the capacity per wavelength of the femto-cell network. Bit rate up to 1.59 Gbps with fiber-wireless transmission over 1 m air distance is demonstrated. The results presented in this thesis demonstrate the feasibility of high dimensionality CAP in increasing the number of dimensions and their potentially......) optical access network. 2 X 2 MIMO RoF employing orthogonal frequency division multiplexing (OFDM) with 5.6 GHz RoF signaling over all-vertical cavity surface emitting lasers (VCSEL) WDM passive optical networks (PONs). We have employed polarization division multiplexing (PDM) to further increase...

  6. Whitelists Based Multiple Filtering Techniques in SCADA Sensor Networks

    Directory of Open Access Journals (Sweden)

    DongHo Kang

    2014-01-01

    Full Text Available The Internet of Things (IoT) consists of several tiny devices connected together to form a collaborative computing environment. Recently, IoT technologies have begun to merge with supervisory control and data acquisition (SCADA) sensor networks to more efficiently gather and analyze real-time data from sensors in industrial environments. However, SCADA sensor networks are becoming more and more vulnerable to cyber-attacks due to increased connectivity. To safely adopt IoT technologies in SCADA environments, it is important to improve the security of SCADA sensor networks. In this paper we propose a multiple filtering technique based on whitelists to detect illegitimate packets. Our proposed system detects the traffic of network and application protocol attacks with a set of whitelists collected from normal traffic.
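
    A minimal sketch of one whitelist filter: packets are accepted only if their header tuple appears in a whitelist learned from normal traffic. The fields and example values are invented; the paper chains several such filters.

        # Whitelist of allowed (src, dst, protocol, function_code) tuples,
        # e.g. collected from normal SCADA traffic during a learning phase.
        WHITELIST = {
            ("10.0.0.5", "10.0.0.1", "modbus", 3),   # read holding registers
            ("10.0.0.5", "10.0.0.1", "modbus", 16),  # write multiple registers
        }

        def filter_packet(pkt: dict) -> bool:
            """Return True if the packet matches the whitelist, False otherwise."""
            key = (pkt["src"], pkt["dst"], pkt["proto"], pkt["func"])
            return key in WHITELIST

        legit = {"src": "10.0.0.5", "dst": "10.0.0.1", "proto": "modbus", "func": 3}
        attack = {"src": "10.0.0.9", "dst": "10.0.0.1", "proto": "modbus", "func": 5}
        print(filter_packet(legit), filter_packet(attack))   # True False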

  7. Mobility management techniques for the next-generation wireless networks

    Science.gov (United States)

    Sun, Junzhao; Howie, Douglas P.; Sauvola, Jaakko J.

    2001-10-01

    The tremendous demands from social market are pushing the booming development of mobile communications faster than ever before, leading to plenty of new advanced techniques emerging. With the converging of mobile and wireless communications with Internet services, the boundary between mobile personal telecommunications and wireless computer networks is disappearing. Wireless networks of the next generation need the support of all the advances on new architectures, standards, and protocols. Mobility management is an important issue in the area of mobile communications, which can be best solved at the network layer. One of the key features of the next generation wireless networks is all-IP infrastructure. This paper discusses the mobility management schemes for the next generation mobile networks through extending IP's functions with mobility support. A global hierarchical framework model for the mobility management of wireless networks is presented, in which the mobility management is divided into two complementary tasks: macro mobility and micro mobility. As the macro mobility solution, a basic principle of Mobile IP is introduced, together with the optimal schemes and the advances in IPv6. The disadvantages of the Mobile IP on solving the micro mobility problem are analyzed, on the basis of which three main proposals are discussed as the micro mobility solutions for mobile communications, including Hierarchical Mobile IP (HMIP), Cellular IP, and Handoff-Aware Wireless Access Internet Infrastructure (HAWAII). A unified model is also described in which the different micro mobility solutions can coexist simultaneously in mobile networks.

  8. Graphical user interface for wireless sensor networks simulator

    Science.gov (United States)

    Paczesny, Tomasz; Paczesny, Daniel; Weremczuk, Jerzy

    2008-01-01

    Wireless Sensor Networks (WSN) are currently a very popular area of development. They are suited to many applications, from military through environment monitoring, healthcare, home automation and others. Such networks, when working in a dynamic, ad-hoc model, need effective protocols which must differ from common computer network algorithms. Research on these protocols would be difficult without a simulation tool, because real applications often use many nodes, and tests on such big networks take much effort and cost. The paper presents a Graphical User Interface (GUI) for a simulator dedicated to WSN studies, especially the evaluation of routing and data link protocols.

  9. A Flexible System for Simulating Aeronautical Telecommunication Network

    Science.gov (United States)

    Maly, Kurt; Overstreet, C. M.; Andey, R.

    1998-01-01

    At Old Dominion University, we have built an Aeronautical Telecommunication Network (ATN) simulator, with NASA as the funding provider. It provides a means to evaluate the impact of modified router scheduling algorithms on network efficiency, to perform capacity studies on various network topologies, and to monitor and study various aspects of the ATN through a graphical user interface (GUI). In this paper we briefly describe the proposed ATN model and our abstraction of this model. We then describe our simulator architecture, highlighting some of the design specifications, scheduling algorithms and user interface. Finally, we provide the results of performance studies on this simulator.

  10. Under-Frequency Load Shedding Technique Considering Event-Based for an Islanded Distribution Network

    Directory of Open Access Journals (Sweden)

    Hasmaini Mohamad

    2016-06-01

    Full Text Available One of the biggest challenges of islanding operation is to sustain frequency stability. A large power imbalance following islanding causes under-frequency, hence an appropriate control is required to shed a certain amount of load. The main objective of this research is to develop an adaptive under-frequency load shedding (UFLS) technique for an islanded system. The technique is designed to be event-based, covering both the moment the system is islanded and the tripping of any DG unit during islanding operation. A disturbance magnitude is calculated to determine the amount of load to be shed. The technique is modeled using the PSCAD simulation tool. A simulation study on a distribution network with mini-hydro generation is carried out to evaluate the UFLS model under different load conditions: peak and base load. Results show that the load shedding technique successfully sheds the required amount of load and stabilizes the system frequency.
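
    A simplified sketch of the event-based calculation: the disturbance magnitude is the generation/load mismatch at the islanding (or DG-trip) instant, and feeders are shed in priority order until that amount is covered. Feeder names, priorities and power values are invented; the actual scheme is implemented and tuned in PSCAD.

        def loads_to_shed(disturbance_mw, feeders):
            """Shed the lowest-priority feeders first until the deficit is covered.

            feeders: list of (name, load_mw, priority); higher priority = shed last.
            """
            shed, covered = [], 0.0
            for name, load_mw, _prio in sorted(feeders, key=lambda f: f[2]):
                if covered >= disturbance_mw:
                    break
                shed.append(name)
                covered += load_mw
            return shed, covered

        # Islanding event: 3.2 MW of grid import is lost (assumed figures).
        generation_mw, demand_mw = 5.0, 8.2
        disturbance = demand_mw - generation_mw

        feeders = [("residential_1", 1.0, 1), ("residential_2", 1.5, 1),
                   ("commercial", 2.0, 2), ("hospital", 1.8, 3)]
        print(loads_to_shed(disturbance, feeders))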

  11. Parallel discrete-event simulation of FCFS stochastic queueing networks

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Physical systems are inherently parallel. Intuition suggests that simulations of these systems may be amenable to parallel execution. The parallel execution of a discrete-event simulation requires careful synchronization of processes in order to ensure the execution's correctness; this synchronization can degrade performance. Largely negative results were recently reported in a study which used a well-known synchronization method on queueing network simulations. Discussed here is a synchronization method (appointments), which has proven itself to be effective on simulations of FCFS queueing networks. The key concept behind appointments is the provision of lookahead. Lookahead is a prediction of a processor's future behavior, based on an analysis of the processor's simulation state. It is shown how lookahead can be computed for FCFS queueing network simulations; performance data are given that demonstrate the method's effectiveness under moderate to heavy loads, and performance tradeoffs between the quality of lookahead and the cost of computing it are discussed.

  12. Simulating Quantitative Cellular Responses Using Asynchronous Threshold Boolean Network Ensembles

    Directory of Open Access Journals (Sweden)

    Shah Imran

    2011-07-01

    Full Text Available Abstract Background With increasing knowledge about the potential mechanisms underlying cellular functions, it is becoming feasible to predict the response of biological systems to genetic and environmental perturbations. Due to the lack of homogeneity in living tissues it is difficult to estimate the physiological effect of chemicals, including potential toxicity. Here we investigate a biologically motivated model for estimating tissue level responses by aggregating the behavior of a cell population. We assume that the molecular state of individual cells is independently governed by discrete non-deterministic signaling mechanisms. This results in noisy but highly reproducible aggregate level responses that are consistent with experimental data. Results We developed an asynchronous threshold Boolean network simulation algorithm to model signal transduction in a single cell, and then used an ensemble of these models to estimate the aggregate response across a cell population. Using published data, we derived a putative crosstalk network involving growth factors and cytokines - i.e., Epidermal Growth Factor, Insulin, Insulin-like Growth Factor Type 1, and Tumor Necrosis Factor α - to describe early signaling events in cell proliferation signal transduction. Reproducibility of the modeling technique across ensembles of Boolean networks representing cell populations is investigated. Furthermore, we compare our simulation results to experimental observations of hepatocytes reported in the literature. Conclusion A systematic analysis of the results following differential stimulation of this model by growth factors and cytokines suggests that: (a) using Boolean network ensembles with asynchronous updating provides biologically plausible noisy individual cellular responses with reproducible mean behavior for large cell populations, and (b) with sufficient data our model can estimate the response to different concentrations of extracellular ligands. Our
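
    The update rule can be sketched as follows: a node switches on when the weighted input from its regulators reaches a threshold, nodes are updated one at a time in random order (asynchronously), and the aggregate response is the fraction of responding cells across an ensemble. The wiring, weights and thresholds below are toy values, not the published growth-factor/cytokine network.

        import random

        # Toy 4-node signalling network: WEIGHTS[i][j] is the influence of node j on node i.
        WEIGHTS = [[0, 0, 0, 0],      # node 0: external ligand (held fixed)
                   [2, 0, 0, -2],     # node 1: activated by 0, inhibited by 3
                   [0, 1, 0, 0],      # node 2: activated by 1 (read-out node)
                   [0, 0, 1, 0]]      # node 3: activated by 2 (negative feedback)
        THRESHOLDS = [1, 1, 1, 1]

        def run_cell(ligand, steps=50, rng=random):
            state = [ligand, 0, 0, 0]
            for _ in range(steps):
                i = rng.randrange(1, 4)                      # asynchronous: one random node
                s = sum(w * x for w, x in zip(WEIGHTS[i], state))
                state[i] = 1 if s >= THRESHOLDS[i] else 0    # threshold Boolean rule
            return state[2]                                  # read out the 'response' node

        random.seed(3)
        population = [run_cell(ligand=1) for _ in range(1000)]    # ensemble of cells
        print("fraction of responding cells:", sum(population) / len(population))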

  13. Applications of Graph Spectral Techniques to Water Distribution Network Management

    Directory of Open Access Journals (Sweden)

    Armando di Nardo

    2018-01-01

    Full Text Available Cities depend on multiple heterogeneous, interconnected infrastructures to provide safe water to consumers. Given this complexity, efficient numerical techniques are needed to support optimal control and management of a water distribution network (WDN). This paper introduces a holistic analysis framework to support water utilities in the decision-making process for efficient supply management. The proposal is based on graph spectral techniques that take advantage of the properties of the eigenvalues and eigenvectors of matrices associated with graphs, such as the adjacency matrix and the Laplacian. The interest of this application is to work on a graph that specifically represents a WDN: a complex network whose nodes correspond to water sources and consumption points and whose links correspond to pipes and valves. The aim is to face new challenges in urban water supply, ranging from computing approximations for network performance assessment to device positioning for efficient and automatic WDN division into district metered areas. A novel tool-set of graph spectral techniques is consequently created, adapted to improve the main water management tasks and to simplify the identification of water losses through the definition of an optimal network partitioning. Two WDNs are used to analyze the proposed methodology. First, the well-known C-Town network is investigated to benchmark the proposed graph spectral framework, allowing the obtained results to be compared with those of previously proposed approaches in the literature. The second case study corresponds to an operational network and shows the usefulness and optimality of the proposal for effectively managing a WDN.
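
    One of the spectral tools referred to, splitting a network with the Fiedler vector of the graph Laplacian, can be sketched with NumPy as below; the tiny adjacency matrix is a made-up stand-in for a WDN graph.

        import numpy as np

        # Adjacency matrix of a small undirected graph (two loosely coupled clusters).
        A = np.array([[0, 1, 1, 0, 0, 0],
                      [1, 0, 1, 0, 0, 0],
                      [1, 1, 0, 1, 0, 0],
                      [0, 0, 1, 0, 1, 1],
                      [0, 0, 0, 1, 0, 1],
                      [0, 0, 0, 1, 1, 0]], dtype=float)

        D = np.diag(A.sum(axis=1))          # degree matrix
        L = D - A                           # graph Laplacian

        eigvals, eigvecs = np.linalg.eigh(L)
        fiedler = eigvecs[:, 1]             # eigenvector of the 2nd-smallest eigenvalue

        # The sign of the Fiedler vector splits the nodes into two district metered areas.
        district = (fiedler >= 0).astype(int)
        print("algebraic connectivity:", round(eigvals[1], 3))
        print("node -> district:", dict(enumerate(district)))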

  14. A Hybrid Neural Network-Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics

    Science.gov (United States)

    Kobayashi, Takahisa; Simon, Donald L.

    2001-01-01

    In this paper, a model-based diagnostic method, which utilizes Neural Networks and Genetic Algorithms, is investigated. Neural networks are applied to estimate the engine internal health, and Genetic Algorithms are applied for sensor bias detection and estimation. This hybrid approach takes advantage of the nonlinear estimation capability provided by neural networks while improving the robustness to measurement uncertainty through the application of Genetic Algorithms. The hybrid diagnostic technique also has the ability to rank multiple potential solutions for a given set of anomalous sensor measurements in order to reduce false alarms and missed detections. The performance of the hybrid diagnostic technique is evaluated through some case studies derived from a turbofan engine simulation. The results show this approach is promising for reliable diagnostics of aircraft engines.

  15. Microstructural characterization of materials by neural network technique

    Energy Technology Data Exchange (ETDEWEB)

    Barat, P. [Variable Energy Cyclotron Centre, 1/AF Bidhan Nagar, Kolkata 700064 (India); Chatterjee, A., E-mail: arnomitra@veccal.ernet.i [Variable Energy Cyclotron Centre, 1/AF Bidhan Nagar, Kolkata 700064 (India); Mukherjee, P.; Gayathri, N. [Variable Energy Cyclotron Centre, 1/AF Bidhan Nagar, Kolkata 700064 (India); Jayakumar, T.; Raj, Baldev [Indira Gandhi Centre for Atomic Research, Kalpakkam 603102 (India)

    2010-11-15

    Ultrasonic signals received by the pulse-echo technique from plane-parallel Zircaloy 2 samples of fixed thickness and of three different microstructures were subjected to signal analysis, as conventional parameters like velocity and attenuation could not reliably discriminate them. The signals obtained from these samples were first sampled and digitized. A modified Karhunen-Loeve Transform was used to reduce their dimensionality. A multilayered feed-forward Artificial Neural Network was trained, in this reduced domain, using a few signals from each of the three microstructures. The rest of the signals from the three samples with different microstructures were classified satisfactorily using this network.

  16. Cooperative Technique Based on Sensor Selection in Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    ISLAM, M. R.

    2009-02-01

    Full Text Available An energy-efficient cooperative technique is proposed for IEEE 1451 based Wireless Sensor Networks. A selected number of Wireless Transducer Interface Modules (WTIMs) are used to form a Multiple Input Single Output (MISO) structure wirelessly connected with a Network Capable Application Processor (NCAP). The energy efficiency and delay of the proposed architecture are derived for different combinations of cluster size and number of selected WTIMs. Optimized constellation parameters are used for evaluating the derived parameters. The results show that the selected MISO structure outperforms the unselected MISO structure and is more energy efficient than the SISO structure beyond a certain distance.

  17. An analog simulation technique for distributed flow systems

    DEFF Research Database (Denmark)

    Jørgensen, Sten Bay; Kümmel, Mogens

    1973-01-01

    Simulation of distributed flow systems in chemical engineering has been applied more and more during the last decade as computer techniques have developed [1]. The applications have served the purpose of identification of process dynamics and parameter estimation as well as improving process and process control design. Although the conventional analog computer has been expanded with hybrid techniques and digital simulation languages have appeared, none of these has demonstrated superiority in simulating distributed flow systems in general [1]. Conventional analog techniques are expensive ... earlier [3]. This is an important extension since flow systems are frequently controlled through manipulation of the flow rate. Previously the technique has been applied with constant flows [4, 5]. Results demonstrating the new hardware are presented from simulation of a transportation lag and a double ...

  18. A unified framework for spiking and gap-junction interactions in distributed neuronal network simulations

    Directory of Open Access Journals (Sweden)

    Jan eHahne

    2015-09-01

    Full Text Available Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy...
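
    The waveform-relaxation idea can be sketched for two linearly coupled neurons: within one communication interval each neuron is integrated against the other's waveform from the previous sweep, and the sweeps are repeated until the waveforms stop changing. The parameters and the forward-Euler integrator are illustrative choices, not the NEST implementation.

        import numpy as np

        dt, T = 0.01, 1.0                      # step size and communication interval
        n = int(T / dt)
        tau, g = 0.1, 0.8                      # membrane time constant, coupling strength

        def integrate(v0, neighbour_waveform):
            """Forward-Euler integration of dv/dt = (-v + g * v_neighbour) / tau."""
            v = np.empty(n + 1); v[0] = v0
            for k in range(n):
                v[k + 1] = v[k] + dt * (-v[k] + g * neighbour_waveform[k]) / tau
            return v

        v1 = np.zeros(n + 1); v2 = np.zeros(n + 1)   # initial guess of the waveforms
        v1[0], v2[0] = 1.0, -0.5                     # initial conditions

        for it in range(20):                         # Jacobi waveform-relaxation sweeps
            v1_new = integrate(1.0, v2)
            v2_new = integrate(-0.5, v1)
            change = max(np.abs(v1_new - v1).max(), np.abs(v2_new - v2).max())
            v1, v2 = v1_new, v2_new
            if change < 1e-9:
                break

        print(f"converged after {it + 1} sweeps, v1(T) = {v1[-1]:.4f}")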

  19. Toward Designing a Quantum Key Distribution Network Simulation Model

    Directory of Open Access Journals (Sweden)

    Miralem Mehic

    2016-01-01

    Full Text Available As research in quantum key distribution network technologies grows larger and more complex, the need for highly accurate and scalable simulation technologies becomes important to assess the practical feasibility and foresee difficulties in the practical implementation of theoretical achievements. In this paper, we described the design of simplified simulation environment of the quantum key distribution network with multiple links and nodes. In such simulation environment, we analyzed several routing protocols in terms of the number of sent routing packets, goodput and Packet Delivery Ratio of data traffic flow using NS-3 simulator.

  20. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim that eliminates the shortcomings of the existing network modeling capabilities. The approach takes a different path to implementing network contention and bandwidth capacity modeling, using a less synchronous yet sufficiently accurate model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overhead.

  1. Radial basis function (RBF) neural network control for mechanical systems design, analysis and Matlab simulation

    CERN Document Server

    Liu, Jinkun

    2013-01-01

    Radial Basis Function (RBF) Neural Network Control for Mechanical Systems is motivated by the need for systematic design approaches to stable adaptive control system design using neural network approximation-based techniques. The main objectives of the book are to introduce the concrete design methods and MATLAB simulation of stable adaptive RBF neural control strategies. In this book, a broad range of implementable neural network control design methods for mechanical systems are presented, such as robot manipulators, inverted pendulums, single link flexible joint robots, motors, etc. Advanced neural network controller design methods and their stability analysis are explored. The book provides readers with the fundamentals of neural network control system design.   This book is intended for the researchers in the fields of neural adaptive control, mechanical systems, Matlab simulation, engineering design, robotics and automation. Jinkun Liu is a professor at Beijing University of Aeronautics and Astronauti...

  2. Ranking important nodes in complex networks by simulated annealing

    International Nuclear Information System (INIS)

    Sun Yu; Yao Pei-Yang; Shen Jian; Zhong Yun; Wan Lu-Jun

    2017-01-01

    In this paper, based on simulated annealing a new method to rank important nodes in complex networks is presented. First, the concept of an importance sequence (IS) to describe the relative importance of nodes in complex networks is defined. Then, a measure used to evaluate the reasonability of an IS is designed. By comparing an IS and the measure of its reasonability to a state of complex networks and the energy of the state, respectively, the method finds the ground state of complex networks by simulated annealing. In other words, the method can construct a most reasonable IS. The results of experiments on real and artificial networks show that this ranking method not only is effective but also can be applied to different kinds of complex networks. (paper)
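
    A minimal Python sketch of the idea (illustrative, not the authors' code): simulated annealing searches over node orderings, accepting worse orderings with a temperature-dependent probability. The "reasonability" score here is a placeholder that simply rewards placing high-degree nodes early; the measure defined in the paper would be substituted for it.

      import math
      import random
      import networkx as nx

      def reasonability(order, G):
          # Placeholder score standing in for the paper's measure: reward orderings
          # that put high-degree nodes near the front of the importance sequence.
          deg = dict(G.degree())
          n = len(order)
          return sum(deg[v] * (n - i) for i, v in enumerate(order))

      def anneal_importance_sequence(G, T0=1.0, cooling=0.995, steps=5000, seed=0):
          rng = random.Random(seed)
          order = list(G.nodes())
          rng.shuffle(order)
          score = reasonability(order, G)
          best, best_score, T = order[:], score, T0
          for _ in range(steps):
              i, j = rng.sample(range(len(order)), 2)
              order[i], order[j] = order[j], order[i]        # propose a swap
              new_score = reasonability(order, G)
              delta = new_score - score
              if delta >= 0 or rng.random() < math.exp(delta / T):
                  score = new_score                          # accept the move
                  if score > best_score:
                      best, best_score = order[:], score
              else:
                  order[i], order[j] = order[j], order[i]    # revert the swap
              T *= cooling
          return best

      G = nx.karate_club_graph()
      print(anneal_importance_sequence(G)[:5])               # five most "important" nodes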

  3. Advanced Hydroinformatic Techniques for the Simulation and Analysis of Water Supply and Distribution Systems

    OpenAIRE

    Herrera, Manuel; Meniconi, Silvia; Alvisi, Stefano; Izquierdo, Joaquin

    2018-01-01

    This document is intended to be a presentation of the Special Issue “Advanced Hydroinformatic Techniques for the Simulation and Analysis of Water Supply and Distribution Systems”. The final aim of this Special Issue is to propose a suitable framework supporting insightful hydraulic mechanisms to aid the decision-making processes of water utility managers and practitioners. Its 18 peer-reviewed articles present as varied topics as: water distribution system design, optimization of network perf...

  4. MATLAB Simulation of Gradient-Based Neural Network for Online Matrix Inversion

    Science.gov (United States)

    Zhang, Yunong; Chen, Ke; Ma, Weimu; Li, Xiao-Dong

    This paper investigates the simulation of a gradient-based recurrent neural network for online solution of the matrix-inverse problem. Several important techniques are employed as follows to simulate such a neural system. 1) Kronecker product of matrices is introduced to transform a matrix-differential-equation (MDE) to a vector-differential-equation (VDE); i.e., finally, a standard ordinary-differential-equation (ODE) is obtained. 2) MATLAB routine "ode45" is introduced to solve the transformed initial-value ODE problem. 3) In addition to various implementation errors, different kinds of activation functions are simulated to show the characteristics of such a neural network. Simulation results substantiate the theoretical analysis and efficacy of the gradient-based neural network for online constant matrix inversion.
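
    To make the described dynamics concrete, the following Python sketch integrates the gradient-based neural network dX/dt = -gamma * A^T (A X - I) with SciPy's solve_ivp as a stand-in for MATLAB's ode45; the gain gamma, the time span and the linear activation function are assumptions, and the snippet is an illustration rather than the authors' code.

      import numpy as np
      from scipy.integrate import solve_ivp

      def invert_via_gnn(A, gamma=10.0, t_end=2.0):
          # Gradient-based neural network for matrix inversion with linear activation,
          # vectorised so a standard ODE solver applies (the role played by the
          # Kronecker-product MDE-to-VDE transformation in the paper).
          n = A.shape[0]
          I = np.eye(n)

          def rhs(_t, x):
              X = x.reshape(n, n)
              return (-gamma * A.T @ (A @ X - I)).ravel()

          sol = solve_ivp(rhs, (0.0, t_end), np.zeros(n * n), method="RK45")
          return sol.y[:, -1].reshape(n, n)

      A = np.array([[4.0, 1.0], [2.0, 3.0]])
      X = invert_via_gnn(A)
      print(np.max(np.abs(X @ A - np.eye(2))))   # residual of the computed inverse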

  5. EVALUATING AUSTRALIAN FOOTBALL LEAGUE PLAYER CONTRIBUTIONS USING INTERACTIVE NETWORK SIMULATION

    Directory of Open Access Journals (Sweden)

    Jonathan Sargent

    2013-03-01

    Full Text Available This paper focuses on the contribution of Australian Football League (AFL players to their team's on-field network by simulating player interactions within a chosen team list and estimating the net effect on final score margin. A Visual Basic computer program was written, firstly, to isolate the effective interactions between players from a particular team in all 2011 season matches and, secondly, to generate a symmetric interaction matrix for each match. Negative binomial distributions were fitted to each player pairing in the Geelong Football Club for the 2011 season, enabling an interactive match simulation model given the 22 chosen players. Dynamic player ratings were calculated from the simulated network using eigenvector centrality, a method that recognises and rewards interactions with more prominent players in the team network. The centrality ratings were recorded after every network simulation and then applied in final score margin predictions so that each player's match contribution-and, hence, an optimal team-could be estimated. The paper ultimately demonstrates that the presence of highly rated players, such as Geelong's Jimmy Bartel, provides the most utility within a simulated team network. It is anticipated that these findings will facilitate optimal AFL team selection and player substitutions, which are key areas of interest to coaches. Network simulations are also attractive for use within betting markets, specifically to provide information on the likelihood of a chosen AFL team list "covering the line".
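
    A toy Python sketch of the rating step only (not the Visual Basic program described above): pairwise interaction counts are drawn from negative binomial distributions and the resulting weighted network is scored with eigenvector centrality via networkx. The (n, p) parameters are placeholders rather than values fitted to the 2011 Geelong data.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(1)
      players = 22
      counts = np.zeros((players, players))
      for i in range(players):
          for j in range(i + 1, players):
              c = rng.negative_binomial(3, 0.5)        # simulated interaction count
              counts[i, j] = counts[j, i] = c          # symmetric interaction matrix

      G = nx.from_numpy_array(counts)
      ratings = nx.eigenvector_centrality_numpy(G, weight="weight")
      top = sorted(ratings, key=ratings.get, reverse=True)[:5]
      print(top)                                       # indices of the highest-rated players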

  6. Dynamical graph theory networks techniques for the analysis of sparse connectivity networks in dementia

    Science.gov (United States)

    Tahmassebi, Amirhessam; Pinker-Domenig, Katja; Wengert, Georg; Lobbes, Marc; Stadlbauer, Andreas; Romero, Francisco J.; Morales, Diego P.; Castillo, Encarnacion; Garcia, Antonio; Botella, Guillermo; Meyer-Bäse, Anke

    2017-05-01

    Graph network models in dementia have become an important computational technique in neuroscience to study fundamental organizational principles of brain structure and function of neurodegenerative diseases such as dementia. The graph connectivity is reflected in the connectome, the complete set of structural and functional connections of the graph network, which is mostly based on simple Pearson correlation links. In contrast to simple Pearson correlation networks, the partial correlations (PC) only identify direct correlations while indirect associations are eliminated. In addition to this, the state-of-the-art techniques in brain research are based on static graph theory, which is unable to capture the dynamic behavior of the brain connectivity, as it alters with disease evolution. We propose a new research avenue in neuroimaging connectomics based on combining dynamic graph network theory and modeling strategies at different time scales. We present the theoretical framework for area aggregation and time-scale modeling in brain networks as they pertain to disease evolution in dementia. This novel paradigm is extremely powerful, since we can derive both static parameters pertaining to node and area parameters, as well as dynamic parameters, such as the system's eigenvalues. By implementing and analyzing dynamically both disease-driven PC-networks and regular concentration networks, we reveal differences in the structure of these networks that play an important role in the temporal evolution of this disease. The described research is key to advancing biomedical research on novel disease prediction trajectories and dementia therapies.
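
    As a simplified illustration of the PC-network construction mentioned above (a sketch, not the study's pipeline), partial correlations can be read off the inverse covariance matrix of regional time series and thresholded into an adjacency matrix; the threshold value and the synthetic data are assumptions.

      import numpy as np

      def partial_correlation_network(X, threshold=0.2):
          """X: samples x regions matrix of activity time series."""
          precision = np.linalg.pinv(np.cov(X, rowvar=False))
          d = np.sqrt(np.diag(precision))
          pc = -precision / np.outer(d, d)          # partial correlations
          np.fill_diagonal(pc, 1.0)
          adjacency = (np.abs(pc) > threshold).astype(int)
          np.fill_diagonal(adjacency, 0)
          return pc, adjacency

      X = np.random.default_rng(0).normal(size=(200, 10))   # synthetic regional signals
      pc, adj = partial_correlation_network(X)
      print(adj.sum() // 2, "edges")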

  7. Optimal deep neural networks for sparse recovery via Laplace techniques

    OpenAIRE

    Limmer, Steffen; Stanczak, Slawomir

    2017-01-01

    This paper introduces Laplace techniques for designing a neural network, with the goal of estimating simplex-constraint sparse vectors from compressed measurements. To this end, we recast the problem of MMSE estimation (w.r.t. a pre-defined uniform input distribution) as the problem of computing the centroid of some polytope that results from the intersection of the simplex and an affine subspace determined by the measurements. Owing to the specific structure, it is shown that the centroid ca...

  8. Network modeling and analysis technique for the evaluation of nuclear safeguards systems effectiveness

    International Nuclear Information System (INIS)

    Grant, F.H. III; Miner, R.J.; Engi, D.

    1978-01-01

    Nuclear safeguards systems are concerned with the physical protection and control of nuclear materials. The Safeguards Network Analysis Procedure (SNAP) provides a convenient and standard analysis methodology for the evaluation of safeguards system effectiveness. This is achieved through a standard set of symbols which characterize the various elements of safeguards systems and an analysis program to execute simulation models built using the SNAP symbology. The reports provided by the SNAP simulation program enable analysts to evaluate existing sites as well as alternative design possibilities. This paper describes the SNAP modeling technique and provides an example illustrating its use

  9. Network modeling and analysis technique for the evaluation of nuclear safeguards systems effectiveness

    International Nuclear Information System (INIS)

    Grant, F.H. III; Miner, R.J.; Engi, D.

    1979-02-01

    Nuclear safeguards systems are concerned with the physical protection and control of nuclear materials. The Safeguards Network Analysis Procedure (SNAP) provides a convenient and standard analysis methodology for the evaluation of safeguards system effectiveness. This is achieved through a standard set of symbols which characterize the various elements of safeguards systems and an analysis program to execute simulation models built using the SNAP symbology. The reports provided by the SNAP simulation program enable analysts to evaluate existing sites as well as alternative design possibilities. This paper describes the SNAP modeling technique and provides an example illustrating its use

  10. Network module detection: Affinity search technique with the multi-node topological overlap measure.

    Science.gov (United States)

    Li, Ai; Horvath, Steve

    2009-07-20

    Many clustering procedures only allow the user to input a pairwise dissimilarity or distance measure between objects. We propose a clustering method that can input a multi-point dissimilarity measure d(i1, i2, ..., iP) where the number of points P can be larger than 2. The work is motivated by gene network analysis where clusters correspond to modules of highly interconnected nodes. Here, we define modules as clusters of network nodes with high multi-node topological overlap. The topological overlap measure is a robust measure of interconnectedness which is based on shared network neighbors. In previous work, we have shown that the multi-node topological overlap measure yields biologically meaningful results when used as input of network neighborhood analysis. We adapt network neighborhood analysis for the use of module detection. We propose the Module Affinity Search Technique (MAST), which is a generalized version of the Cluster Affinity Search Technique (CAST). MAST can accommodate a multi-node dissimilarity measure. Clusters grow around user-defined or automatically chosen seeds (e.g. hub nodes). We propose both local and global cluster growth stopping rules. We use several simulations and a gene co-expression network application to argue that the MAST approach leads to biologically meaningful results. We compare MAST with hierarchical clustering and partitioning around medoid clustering. Our flexible module detection method is implemented in the MTOM software which can be downloaded from the following webpage: http://www.genetics.ucla.edu/labs/horvath/MTOM/

  11. Simulation of wind turbine wakes using the actuator line technique

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær; Mikkelsen, Robert Flemming; Henningson, Dan S.

    2015-01-01

    The actuator line technique was introduced as a numerical tool to be employed in combination with large eddy simulations to enable the study of wakes and wake interaction in wind farms. The technique is today largely used for studying basic features of wakes as well as for making performance...... predictions of wind farms. In this paper, we give a short introduction to the wake problem and the actuator line methodology and present a study in which the technique is employed to determine the near-wake properties of wind turbines. The presented results include a comparison of experimental results...

  12. Mesoscopic simulations of crosslinked polymer networks

    NARCIS (Netherlands)

    Megariotis, G.; Vogiatzis, G.G.; Schneider, L.; Müller, M.; Theodorou, D.N.

    2016-01-01

    A new methodology and the corresponding C++ code for mesoscopic simulations of elastomers are presented. The test system, crosslinked cis-1,4-polyisoprene, is simulated with a Brownian Dynamics/kinetic Monte Carlo algorithm as a dense liquid of soft, coarse-grained beads, each representing 5-10 Kuhn

  13. Simulating the formation of keratin filament networks by a piecewise-deterministic Markov process.

    Science.gov (United States)

    Beil, Michael; Lück, Sebastian; Fleischer, Frank; Portet, Stéphanie; Arendt, Wolfgang; Schmidt, Volker

    2009-02-21

    Keratin intermediate filament networks are part of the cytoskeleton in epithelial cells. They were found to regulate viscoelastic properties and motility of cancer cells. Due to unique biochemical properties of keratin polymers, the knowledge of the mechanisms controlling keratin network formation is incomplete. A combination of deterministic and stochastic modeling techniques can be a valuable source of information since they can describe known mechanisms of network evolution while reflecting the uncertainty with respect to a variety of molecular events. We applied the concept of piecewise-deterministic Markov processes to the modeling of keratin network formation with high spatiotemporal resolution. The deterministic component describes the diffusion-driven evolution of a pool of soluble keratin filament precursors fueling various network formation processes. Instants of network formation events are determined by a stochastic point process on the time axis. A probability distribution controlled by model parameters exercises control over the frequency of different mechanisms of network formation to be triggered. Locations of the network formation events are assigned dependent on the spatial distribution of the soluble pool of filament precursors. Based on this modeling approach, simulation studies revealed that the architecture of keratin networks mostly depends on the balance between filament elongation and branching processes. The spatial distribution of network mesh size, which strongly influences the mechanical characteristics of filament networks, is modulated by lateral annealing processes. This mechanism which is a specific feature of intermediate filament networks appears to be a major and fast regulator of cell mechanics.

  14. Optimization of blanking process using neural network simulation

    International Nuclear Information System (INIS)

    Hambli, R.

    2005-01-01

    The present work describes a methodology using the finite element method and neural network simulation in order to predict the optimum punch-die clearance during sheet metal blanking processes. A damage model is used in order to describe crack initiation and propagation into the sheet. The proposed approach combines predictive finite element and neural network modeling of the leading blanking parameters. Numerical results obtained by finite element computation, including damage and fracture modeling, were utilized to train the developed simulation environment based on back-propagation neural network modeling. A comparative study between the numerical results and the experimental ones shows good agreement. (author)

  15. Simulating individual-based models of epidemics in hierarchical networks

    NARCIS (Netherlands)

    Quax, R.; Bader, D.A.; Sloot, P.M.A.

    2009-01-01

    Current mathematical modeling methods for the spreading of infectious diseases are too simplified and do not scale well. We present the Simulator of Epidemic Evolution in Complex Networks (SEECN), an efficient simulator of detailed individual-based models by parameterizing separate dynamics

  16. Simulation studies of a wide area health care network.

    Science.gov (United States)

    McDaniel, J. G.

    1994-01-01

    There is an increasing number of efforts to install wide area health care networks. Some of these networks are being built to support several applications over a wide user base consisting primarily of medical practices, hospitals, pharmacies, medical laboratories, payors, and suppliers. Although on-line, multi-media telecommunication is desirable for some purposes such as cardiac monitoring, store-and-forward messaging is adequate for many common, high-volume applications. Laboratory test results and payment claims, for example, can be distributed using electronic messaging networks. Several network prototypes have been constructed to determine the technical problems and to assess the effectiveness of electronic messaging in wide area health care networks. Our project, Health Link, developed prototype software that was able to use the public switched telephone network to exchange messages automatically, reliably and securely. The network could be configured to accommodate the many different traffic patterns and cost constraints of its users. Discrete event simulations were performed on several network models. Canonical star and mesh networks, that were composed of nodes operating at steady state under equal loads, were modeled. Both topologies were found to support the throughput of a generic wide area health care network. The mean message delivery time of the mesh network was found to be less than that of the star network. Further simulations were conducted for a realistic large-scale health care network consisting of 1,553 doctors, 26 hospitals, four medical labs, one provincial lab and one insurer. Two network topologies were investigated: one using predominantly peer-to-peer communication, the other using client-server communication.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:7949966

  17. A computer code to simulate X-ray imaging techniques

    International Nuclear Information System (INIS)

    Duvauchelle, Philippe; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel

    2000-01-01

    A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests

  18. A computer code to simulate X-ray imaging techniques

    Energy Technology Data Exchange (ETDEWEB)

    Duvauchelle, Philippe E-mail: philippe.duvauchelle@insa-lyon.fr; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel

    2000-09-01

    A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests.

  19. A gene network simulator to assess reverse engineering algorithms.

    Science.gov (United States)

    Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2009-03-01

    In the context of reverse engineering of biological networks, simulators are helpful to test and compare the accuracy of different reverse-engineering approaches in a variety of experimental conditions. A novel gene-network simulator is presented that resembles some of the main features of transcriptional regulatory networks related to topology, interaction among regulators of transcription, and expression dynamics. The simulator generates network topology according to the current knowledge of biological network organization, including scale-free distribution of the connectivity and clustering coefficient independent of the number of nodes in the network. It uses fuzzy logic to represent interactions among the regulators of each gene, integrated with differential equations to generate continuous data, comparable to real data for variety and dynamic complexity. Finally, the simulator accounts for saturation in the response to regulation and transcription activation thresholds and shows robustness to perturbations. It therefore provides a reliable and versatile test bed for reverse engineering algorithms applied to microarray data. Since the simulator describes regulatory interactions and expression dynamics as two distinct, although interconnected aspects of regulation, it can also be used to test reverse engineering approaches that use both microarray and protein-protein interaction data in the process of learning. A first software release is available at http://www.dei.unipd.it/~dicamill/software/netsim as an R programming language package.
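
    As a small illustration of the topology step only (the fuzzy-logic regulation and differential-equation dynamics of the simulator are not reproduced here), a power-law graph generator in networkx can produce scale-free connectivity with tunable clustering; the generator and its parameters are placeholders, not the simulator's own algorithm.

      import networkx as nx

      # Scale-free topologies of increasing size, with clustering added via triad
      # formation (Holme-Kim model); parameters are illustrative only.
      for n_genes in (200, 400, 800):
          G = nx.powerlaw_cluster_graph(n=n_genes, m=2, p=0.3, seed=42)
          degrees = [d for _, d in G.degree()]
          print(n_genes, max(degrees), round(nx.average_clustering(G), 3))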

  20. NET European Network on Neutron Techniques Standardization for Structural Integrity

    International Nuclear Information System (INIS)

    Youtsos, A.

    2004-01-01

    Improved performance and safety of European energy production systems is essential for providing safe, clean and inexpensive electricity to the citizens of the enlarged EU. The state of the art in assessing internal stresses, micro-structure and defects in welded nuclear components -as well as their evolution due to complex thermo-mechanical loads and irradiation exposure -needs to be improved before relevant structural integrity assessment code requirements can safely become less conservative. This is valid for both experimental characterization techniques and predictive numerical algorithms. In the course of the last two decades neutron methods have proven to be excellent means for providing valuable information required in structural integrity assessment of advanced engineering applications. However, the European industry is hampered from broadly using neutron research due to lack of harmonised and standardized testing methods. 35 European major industrial and research/academic organizations have joined forces, under JRC coordination, to launch the NET European Network on Neutron Techniques Standardization for Structural Integrity in May 2002. The NET collaborative research initiative aims at further development and harmonisation of neutron scattering methods, in support of structural integrity assessment. This is pursued through a number of testing round robin campaigns on neutron diffraction and small angle neutron scattering - SANS and supported by data provided by other more conventional destructive and non-destructive methods, such as X-ray diffraction and deep and surface hole drilling. NET also strives to develop more reliable and harmonized simulation procedures for the prediction of residual stress and damage in steel welded power plant components. This is pursued through a number of computational round robin campaigns based on advanced FEM techniques, and on reliable data obtained by such novel and harmonized experimental methods. The final goal of

  1. Simulation of Radiation Heat Transfer in a VAR Furnace Using an Electrical Resistance Network

    Science.gov (United States)

    Ballantyne, A. Stewart

    The use of electrical resistance networks to simulate heat transfer is a well-known analytical technique that greatly simplifies the solution of radiation heat transfer problems. In a VAR furnace, radiative heat transfer occurs among the ingot, electrode, and crucible wall, and with the arc when the latter is present during melting. To explore the relative heat exchange between these elements, a resistive network model was developed to simulate the heat exchange between the electrode, ingot, and crucible with and without the presence of an arc. This model was then combined with an ingot model to simulate the VAR process and permit a comparison between calculated and observed results during steady-state melting. Results from simulations of a variety of alloys of different sizes have demonstrated the validity of the model. Subsequent simulations demonstrate the application of the model to the optimization of both steady-state and hot-top melt practices, and raise questions concerning heat flux assumptions at the ingot top surface.

  2. Computational Aspects of Sensor Network Protocols (Distributed Sensor Network Simulator

    Directory of Open Access Journals (Sweden)

    Vasanth Iyer

    2009-08-01

    Full Text Available In this work, we model sensor networks as an unsupervised learning and clustering process. We classify nodes according to their static distribution to form known class densities (CCPD). These densities are chosen from specific cross-layer features which maximize the lifetime of power-aware routing algorithms. To circumvent the computational complexities of a power-aware communication stack, we introduce path-loss models at the nodes only for high-density deployments. We study the cluster heads, formulate the data-handling capacity for an expected deployment, and use localized probability models to fuse the data with its side information before transmission. Each cluster head therefore has a unique Pmax, but not all cluster heads have the same measured value. In a lossless mode, if there are no faults in the sensor network, we can show that the highest probability given by Pmax is ambiguous if its frequency is ≤ n/2; otherwise it can be determined by a local function. We further show that event detection at the cluster heads can be modelled with a pattern 2m, where m, the number of bits, can be a correlated pattern of 2 bits; for a tight lower bound we use 3-bit Huffman codes, which have entropy < 1. These local algorithms are further studied to optimize power consumption and fault detection and to improve the distributed routing algorithm used at the higher layers. From these bounds it is observed that, in a large network, the power dissipation is invariant to network size. The performance of the routing algorithms is based solely on the success of finding healthy nodes in a large distribution. It is also observed that if the network size is kept constant and the nodes are deployed more densely, then the local path-loss model affects the performance of the routing algorithms. We also obtain the maximum intensity of transmitting nodes for a given category of routing algorithms under an outage constraint, i.e., the lifetime of the sensor network.

  3. Evaluation of artificial neural network techniques for flow forecasting in the River Yangtze, China

    Directory of Open Access Journals (Sweden)

    C. W. Dawson

    2002-01-01

    Full Text Available While engineers have been quantifying rainfall-runoff processes since the mid-19th century, it is only in the last decade that artificial neural network models have been applied to the same task. This paper evaluates two neural networks in this context: the popular multilayer perceptron (MLP) and the radial basis function network (RBF). Using six-hourly rainfall-runoff data for the River Yangtze at Yichang (upstream of the Three Gorges Dam) for the period 1991 to 1993, it is shown that both neural network types can simulate river flows beyond the range of the training set. In addition, an evaluation of alternative RBF transfer functions demonstrates that the popular Gaussian function, often used in RBF networks, is not necessarily the ‘best’ function to use for river flow forecasting. Comparisons are also made between these neural networks and conventional statistical techniques: stepwise multiple linear regression, autoregressive moving average models and a zero-order forecasting approach. Keywords: Artificial neural network, multilayer perceptron, radial basis function, flood forecasting
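
    To make the RBF idea concrete, the sketch below (illustrative Python, not the study's models or data) fits a Gaussian radial basis function network to a one-dimensional series by solving for the output-layer weights with linear least squares; the centre placement, basis width and synthetic data are all assumptions.

      import numpy as np

      def rbf_design(x, centres, width):
          # Gaussian basis functions evaluated at each input value
          return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2.0 * width ** 2))

      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 10.0, 200)
      y = np.sin(x) + 0.1 * rng.normal(size=x.size)      # synthetic "flow" series

      centres = np.linspace(0.0, 10.0, 15)               # fixed, evenly spaced centres
      width = 0.8
      Phi = rbf_design(x, centres, width)
      weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # train the output layer

      x_new = np.linspace(0.0, 12.0, 50)                 # includes values beyond the training range
      y_hat = rbf_design(x_new, centres, width) @ weights
      print(float(y_hat[0]), float(y_hat[-1]))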

  4. Simulating Social Networks of Online Communities: Simulation as a Method for Sociability Design

    Science.gov (United States)

    Ang, Chee Siang; Zaphiris, Panayiotis

    We propose the use of social simulations to study and support the design of online communities. In this paper, we developed an Agent-Based Model (ABM) to simulate and study the formation of social networks in a Massively Multiplayer Online Role Playing Game (MMORPG) guild community. We first analyzed the activities and the social network (who-interacts-with-whom) of an existing guild community to identify its interaction patterns and characteristics. Then, based on the empirical results, we derived and formalized the interaction rules, which were implemented in our simulation. Using the simulation, we reproduced the observed social network of the guild community as a means of validation. The simulation was then used to examine how various parameters of the community (e.g. the level of activity, the number of neighbors of each agent, etc) could potentially influence the characteristic of the social networks.

  5. Distributed dynamic simulations of networked control and building performance applications.

    Science.gov (United States)

    Yahiaoui, Azzedine

    2018-02-01

    The use of computer-based automation and control systems for smart sustainable buildings, often so-called Automated Buildings (ABs), has become an effective way to automatically control, optimize, and supervise a wide range of building performance applications over a network while achieving the minimum energy consumption possible, and in doing so generally refers to Building Automation and Control Systems (BACS) architecture. Instead of costly and time-consuming experiments, this paper focuses on using distributed dynamic simulations to analyze the real-time performance of network-based building control systems in ABs and improve the functions of the BACS technology. The paper also presents the development and design of a distributed dynamic simulation environment with the capability of representing the BACS architecture in simulation by run-time coupling two or more different software tools over a network. The application and capability of this new dynamic simulation environment are demonstrated by an experimental design in this paper.

  6. A SIMULATION OF THE PENICILLIN G PRODUCTION BIOPROCESS APPLYING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    A.J.G. da Cruz

    1997-12-01

    Full Text Available The production of penicillin G by Penicillium chrysogenum IFO 8644 was simulated employing a feedforward neural network with three layers. The neural network training procedure used an algorithm combining two procedures: random search and backpropagation. The results of this approach were very promising, and it was observed that the neural network was able to accurately describe the nonlinear behavior of the process. Moreover, the results showed that this technique can be successfully applied in process control algorithms, due to the long processing time involved and its flexibility in the incorporation of new data.

  7. Modeling and Simulation Network Data Standards

    Science.gov (United States)

    2011-09-30

    approaches. 2.3. JNAT. JNAT is a Web application that provides connectivity and network analysis capability. JNAT uses propagation models and low-fidelity...COMBATXXI Movement Logger Data Output Dictionary (table excerpt): Field #; Geocentric Coordinates (GCC) Heading; Geodetic Coordinates (GDC) Heading; Universal Transverse Mercator (UTM) Heading.

  8. Adaptive Importance Sampling Simulation of Queueing Networks

    NARCIS (Netherlands)

    de Boer, Pieter-Tjerk; Nicola, V.F.; Rubinstein, N.; Rubinstein, Reuven Y.

    2000-01-01

    In this paper, a method is presented for the efficient estimation of rare-event (overflow) probabilities in Jackson queueing networks using importance sampling. The method differs in two ways from methods discussed in most earlier literature: the change of measure is state-dependent, i.e., it is a

  9. Space Geodetic Technique Co-location in Space: Simulation Results for the GRASP Mission

    Science.gov (United States)

    Kuzmicz-Cieslak, M.; Pavlis, E. C.

    2011-12-01

    The Global Geodetic Observing System-GGOS, places very stringent requirements in the accuracy and stability of future realizations of the International Terrestrial Reference Frame (ITRF): an origin definition at 1 mm or better at epoch and a temporal stability on the order of 0.1 mm/y, with similar numbers for the scale (0.1 ppb) and orientation components. These goals were derived from the requirements of Earth science problems that are currently the international community's highest priority. None of the geodetic positioning techniques can achieve this goal alone. This is due in part to the non-observability of certain attributes from a single technique. Another limitation is imposed from the extent and uniformity of the tracking network and the schedule of observational availability and number of suitable targets. The final limitation derives from the difficulty to "tie" the reference points of each technique at the same site, to an accuracy that will support the GGOS goals. The future GGOS network will address decisively the ground segment and to certain extent the space segment requirements. The JPL-proposed multi-technique mission GRASP (Geodetic Reference Antenna in Space) attempts to resolve the accurate tie between techniques, using their co-location in space, onboard a well-designed spacecraft equipped with GNSS receivers, a SLR retroreflector array, a VLBI beacon and a DORIS system. Using the anticipated system performance for all four techniques at the time the GGOS network is completed (ca 2020), we generated a number of simulated data sets for the development of a TRF. Our simulation studies examine the degree to which GRASP can improve the inter-technique "tie" issue compared to the classical approach, and the likely modus operandi for such a mission. The success of the examined scenarios is judged by the quality of the origin and scale definition of the resulting TRF.

  10. Performance evaluation of an importance sampling technique in a Jackson network

    Science.gov (United States)

    Mahdipour, Ebrahim; Rahmani, Amir Masoud; Setayeshi, Saeed

    2014-03-01

    Importance sampling is a technique that is commonly used to speed up Monte Carlo simulation of rare events. However, little is known regarding the design of efficient importance sampling algorithms in the context of queueing networks. The standard approach, which simulates the system using an a priori fixed change of measure suggested by large deviation analysis, has been shown to fail in even the simplest network settings. Estimating probabilities associated with rare events has been a topic of great importance in queueing theory, and in applied probability at large. In this article, we analyse the performance of an importance sampling estimator for a rare event probability in a Jackson network. This article applies strict deadlines to a two-node Jackson network with feedback whose arrival and service rates are modulated by an exogenous finite-state Markov process. We estimate the probability of network blocking for various sets of parameters, and also the probability of customers missing their deadlines for different loads and deadlines. We finally show that the probability of total population overflow may be affected by various deadline values, service rates and arrival rates.
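
    The flavour of such an estimator can be shown on a much simpler system than the modulated two-node network studied in the article. The Python sketch below estimates an overflow probability for a single birth-death (M/M/1-type) chain using the classic change of measure that swaps arrival and service rates; the rates, the overflow level and the number of runs are illustrative.

      import random

      def overflow_probability_is(lam=0.3, mu=0.7, level=25, runs=20000, seed=0):
          # Estimate P(queue reaches `level` before emptying | one customer present)
          # by importance sampling with arrival and service rates interchanged.
          rng = random.Random(seed)
          p = lam / (lam + mu)           # original up-step probability
          q = 1.0 - p
          p_is, q_is = q, p              # swapped measure makes overflow likely
          total = 0.0
          for _ in range(runs):
              x, lr = 1, 1.0
              while 0 < x < level:
                  if rng.random() < p_is:
                      x += 1
                      lr *= p / p_is     # accumulate the likelihood ratio
                  else:
                      x -= 1
                      lr *= q / q_is
              if x == level:
                  total += lr
          return total / runs

      est = overflow_probability_is()
      ratio = 0.7 / 0.3                                   # q/p for the analytic check
      exact = (1 - ratio) / (1 - ratio ** 25)             # gambler's ruin from state 1
      print(est, exact)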

  11. Simulating activation propagation in social networks using the graph theory

    Directory of Open Access Journals (Sweden)

    František Dařena

    2010-01-01

    Full Text Available Social-network formation and analysis is nowadays an object of intensive research. The objective of the paper is to suggest representing social networks as graphs, applying graph theory to problems connected with studying network-like structures, and to study the spreading activation algorithm as a means of analyzing these structures. The paper presents the process of modeling multidimensional networks by means of directed graphs with several characteristics. The paper also demonstrates the use of the spreading activation algorithm as a suitable method for analyzing multidimensional networks, with the main focus on recommender systems. The experiments showed that the choice of the algorithm's parameters is crucial, that some kind of constraint should be included, and that the algorithm is able to provide a stable environment for simulations with networks.
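
    A compact Python sketch of spreading activation over a directed, weighted graph (illustrative only; the decay factor, firing threshold, iteration count and toy graph are assumptions, not parameters from the paper).

      import networkx as nx

      def spread_activation(G, seeds, decay=0.8, threshold=0.05, iterations=5):
          # Pulse activation outwards from seed nodes along weighted edges.
          activation = {n: 0.0 for n in G.nodes()}
          activation.update(seeds)
          for _ in range(iterations):
              new = dict(activation)
              for u in G.nodes():
                  if activation[u] < threshold:
                      continue                       # node does not fire
                  out = G.out_degree(u, weight="weight")
                  if out == 0:
                      continue
                  for _, v, w in G.out_edges(u, data="weight", default=1.0):
                      new[v] += decay * activation[u] * w / out
              activation = new
          return activation

      G = nx.DiGraph()
      G.add_weighted_edges_from([("ann", "bob", 2.0), ("bob", "carl", 1.0),
                                 ("ann", "dora", 1.0), ("dora", "carl", 3.0)])
      print(spread_activation(G, seeds={"ann": 1.0}))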

  12. System Identification, Prediction, Simulation and Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    The intention of this paper is to make a systematic examination of the possibilities of applying neural networks in those technical areas which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed study of the networks themselves. With this end in view the following restrictions have been made: 1) Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. 2) Amongst numerous training algorithms, only the Recursive Prediction Error Method using a Gauss-Newton search direction is applied. 3) Amongst numerous model types, often met in control applications, only the Non-linear ARMAX (NARMAX) model, representing input/output description, is examined. A simulated example confirms that a neural network has the potential to perform excellent System...

  13. Artificial neural network simulation of battery performance

    Energy Technology Data Exchange (ETDEWEB)

    O'Gorman, C.C.; Ingersoll, D.; Jungst, R.G.; Paez, T.L.

    1998-12-31

    Although they appear deceptively simple, batteries embody a complex set of interacting physical and chemical processes. While the discrete engineering characteristics of a battery, such as the physical dimensions of the individual components, are relatively straightforward to define explicitly, their myriad chemical and physical processes, including interactions, are much more difficult to accurately represent. Within this category are the diffusive and solubility characteristics of individual species, reaction kinetics and mechanisms of primary chemical species as well as intermediates, and growth and morphology characteristics of reaction products as influenced by environmental and operational use profiles. For this reason, development of analytical models that can consistently predict the performance of a battery has only been partially successful, even though significant resources have been applied to this problem. As an alternative approach, the authors have begun development of a non-phenomenological model for battery systems based on artificial neural networks. Both recurrent and non-recurrent forms of these networks have been successfully used to develop accurate representations of battery behavior. The connectionist normalized linear spline (CNLS) network has been implemented with a self-organizing layer to model a battery system with the generalized radial basis function net. Concurrently, efforts are under way to use the feedforward back-propagation network to map the "state" of a battery system. Because of the complexity of battery systems, accurate representation of the input and output parameters has proven to be very important. This paper describes these initial feasibility studies as well as the current models and makes comparisons between predicted and actual performance.

  14. Structural investigation and simulation of acoustic properties of some tellurite glasses using artificial intelligence technique

    International Nuclear Information System (INIS)

    Gaafar, M.S.; Abdeen, Mostafa A.M.; Marzouk, S.Y.

    2011-01-01

    Research highlights: → Simulation of the acoustic properties of some tellurite glasses using one of the artificial intelligence techniques (artificial neural network). → The glass network is strengthened by enhancing the linkage of Te-O chains. The tellurite network will also come to homogenization, because of uniform distribution of Nb5+ ions among the Te-O chains, though some of the tellurium-oxide polyhedra still link each other in edge sharing. → Excellent agreements between the measured values and the predicted values were obtained for over 50 different tellurite glass compositions. → The model we designed gives a better agreement as compared with the Makishima and Mackenzie model. - Abstract: The developments in the field of industry raise the need for simulating the acoustic properties of glass materials before melting the raw material oxides. In this paper, we are trying to simulate the acoustic properties of some tellurite glasses using one of the artificial intelligence techniques (artificial neural network). The artificial neural network (ANN) technique is introduced in the current study to simulate and predict important parameters such as density, longitudinal and shear ultrasonic velocities and elastic moduli (longitudinal and shear moduli). The ANN results were found to be in good agreement with the experimentally measured parameters. Then the presented ANN model is used to predict the acoustic properties of some new tellurite glasses. For this purpose, four glass systems xNb2O5-(1-x)TeO2, 0.1PbO-xNb2O5-(0.9-x)TeO2, 0.2PbO-xNb2O5-(0.8-x)TeO2 and 0.05Bi2O3-xNb2O5-(0.95-x)TeO2 were prepared using the melt quenching technique. The results of ultrasonic velocities and elastic moduli showed that the addition of Nb2O5 as a network modifier provides oxygen ions to change [TeO4] tbps into [TeO3] tps.

  15. Structural investigation and simulation of acoustic properties of some tellurite glasses using artificial intelligence technique

    Energy Technology Data Exchange (ETDEWEB)

    Gaafar, M.S., E-mail: mohamed_s_gaafar@hotmail.com [Ultrasonic Department, National Institute for Standards, Giza (Egypt); Physics Department, Faculty of Science, Majmaah University, Zulfi (Saudi Arabia); Abdeen, Mostafa A.M., E-mail: mostafa_a_m_abdeen@hotmail.com [Dept. of Eng. Math. and Physics, Faculty of Eng., Cairo University, Giza (Egypt); Marzouk, S.Y., E-mail: samir_marzouk2001@yahoo.com [Arab Academy of Science and Technology, Al-Horria, Heliopolis, Cairo (Egypt)

    2011-02-24

    Research highlights: > Simulation of the acoustic properties of some tellurite glasses using one of the artificial intelligence techniques (artificial neural network). > The glass network is strengthened by enhancing the linkage of Te-O chains. The tellurite network will also come to homogenization, because of uniform distribution of Nb5+ ions among the Te-O chains, though some of the tellurium-oxide polyhedra still link each other in edge sharing. > Excellent agreements between the measured values and the predicted values were obtained for over 50 different tellurite glass compositions. > The model we designed gives a better agreement as compared with the Makishima and Mackenzie model. - Abstract: The developments in the field of industry raise the need for simulating the acoustic properties of glass materials before melting the raw material oxides. In this paper, we are trying to simulate the acoustic properties of some tellurite glasses using one of the artificial intelligence techniques (artificial neural network). The artificial neural network (ANN) technique is introduced in the current study to simulate and predict important parameters such as density, longitudinal and shear ultrasonic velocities and elastic moduli (longitudinal and shear moduli). The ANN results were found to be in good agreement with the experimentally measured parameters. Then the presented ANN model is used to predict the acoustic properties of some new tellurite glasses. For this purpose, four glass systems xNb2O5-(1-x)TeO2, 0.1PbO-xNb2O5-(0.9-x)TeO2, 0.2PbO-xNb2O5-(0.8-x)TeO2 and 0.05Bi2O3-xNb2O5-(0.95-x)TeO2 were prepared using the melt quenching technique. The results of ultrasonic velocities and elastic moduli showed that the addition of Nb2O5 as a network modifier provides oxygen ions to change [TeO4] tbps into [TeO3] tps.

  16. Improved Image Encryption for Real-Time Application over Wireless Communication Networks using Hybrid Cryptography Technique

    Directory of Open Access Journals (Sweden)

    Kazeem B. Adedeji

    2016-12-01

    Full Text Available Advances in communication networks have enabled organizations to send confidential data such as digital images over wireless networks. However, the broadcast nature of wireless communication channels has made them vulnerable to attacks from eavesdroppers. We have developed a hybrid cryptography technique, and we present its application to digital images as a means of improving the security of digital images for transmission over wireless communication networks. The hybrid technique uses a combination of symmetric (Data Encryption Standard) and asymmetric (Rivest-Shamir-Adleman) cryptographic algorithms to secure data to be transmitted between different nodes of a wireless network. Three different image samples of type jpeg, png and jpg were tested using this technique. The results obtained showed that the hybrid system encrypts the images with minimal simulation time and high throughput. More importantly, there is no relation or information between the original images and their encrypted form, according to Shannon’s definition of perfect security, thereby making the system much more secure.
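
    A minimal sketch of the hybrid pattern using the PyCryptodome library (an illustration of the general DES-plus-RSA scheme under assumed key sizes and random stand-in image bytes, not the authors' implementation):

      from Crypto.PublicKey import RSA
      from Crypto.Cipher import DES, PKCS1_OAEP
      from Crypto.Random import get_random_bytes
      from Crypto.Util.Padding import pad, unpad

      # Receiver's RSA key pair (the asymmetric part of the hybrid scheme).
      rsa_key = RSA.generate(2048)
      rsa_encrypt = PKCS1_OAEP.new(rsa_key.publickey())

      # Sender: encrypt the image bytes with a fresh DES session key (symmetric part).
      image_bytes = get_random_bytes(4096)             # stand-in for the raw image file
      session_key = get_random_bytes(8)                # DES uses 64-bit keys
      des = DES.new(session_key, DES.MODE_CBC)
      ciphertext = des.encrypt(pad(image_bytes, DES.block_size))
      wrapped_key = rsa_encrypt.encrypt(session_key)   # session key travels under RSA

      # Receiver: unwrap the session key with the RSA private key, then recover the image.
      # (In a real transfer the IV would be sent alongside the ciphertext.)
      recovered_key = PKCS1_OAEP.new(rsa_key).decrypt(wrapped_key)
      plain = unpad(DES.new(recovered_key, DES.MODE_CBC, des.iv).decrypt(ciphertext),
                    DES.block_size)
      assert plain == image_bytes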

  17. Next-Generation Environment-Aware Cellular Networks: Modern Green Techniques and Implementation Challenges

    KAUST Repository

    Ghazzai, Hakim

    2016-09-16

    Over the last decade, mobile communications have witnessed a noteworthy increase in data traffic demand that is causing enormous energy consumption in cellular networks. Reducing their fossil fuel consumption, in addition to the huge energy bills paid by mobile operators, is considered among the most important challenges for next-generation cellular networks. Although most of the proposed studies have focused on individual physical-layer power optimizations, there is a growing necessity to meet the green objective of fifth-generation cellular networks while respecting the user's quality of service. This paper investigates four important techniques that could be exploited separately or together to enable wireless operators to achieve significant economic benefits and environmental savings: 1) the base station sleeping strategy; 2) optimized energy procurement from the smart grid; 3) base station energy sharing; and 4) green networking collaboration between competing mobile operators. The presented simulation results measure the gain that could be obtained using these techniques compared with traditional scenarios. Finally, this paper discusses the issues and challenges related to the implementation of these techniques in real environments. © 2016 IEEE.

  18. Performance of artificial neural networks and genetical evolved artificial neural networks unfolding techniques

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Martinez B, M. R.; Vega C, H. R.; Gallego D, E.; Lorente F, A.; Mendez V, R.; Los Arcos M, J. M.; Guerrero A, J. E.

    2011-01-01

    With the Bonner sphere spectrometer, the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, regularization, parametrization, least squares, and maximum entropy are some of the techniques utilized for unfolding. In the last decade, methods based on artificial intelligence technology have been used. Approaches based on genetic algorithms and artificial neural networks (ANN) have been developed in order to overcome the drawbacks of previous techniques. Nevertheless, despite the advantages of ANNs, they still have some drawbacks, mainly in the design process of the network, e.g. the optimum selection of the architectural and learning ANN parameters. In recent years, hybrid technologies combining ANNs and genetic algorithms have been utilized. In this work, several ANN topologies were trained and tested using ANNs and Genetically Evolved Artificial Neural Networks with the aim of unfolding neutron spectra from the count rates of a Bonner sphere spectrometer. Here, a comparative study of both procedures has been carried out. (Author)

  19. A neural network technique for remeshing of bone microstructure.

    Science.gov (United States)

    Fischer, Anath; Holdstein, Yaron

    2012-01-01

    Today, there is major interest within the biomedical community in developing accurate noninvasive means for the evaluation of bone microstructure and bone quality. Recent improvements in 3D imaging technology, among them the development of micro-CT and micro-MRI scanners, allow in vivo 3D high-resolution scanning and reconstruction of large specimens or even whole bone models. Thus, the tendency today is to evaluate bone features using 3D assessment techniques rather than traditional 2D methods. For this purpose, high-quality meshing methods are required. However, the 3D meshes produced from current commercial systems usually are of low quality with respect to analysis and rapid prototyping. 3D model reconstruction of bone is difficult due to the complexity of bone microstructure. The small bone features lead to a great deal of neighborhood ambiguity near each vertex. The relatively new neural network method for mesh reconstruction has the potential to create or remesh 3D models accurately and quickly. A neural network (NN), which resembles an artificial intelligence (AI) algorithm, is a set of interconnected neurons, where each neuron is capable of making an autonomous arithmetic calculation. Moreover, each neuron is affected by its surrounding neurons through the structure of the network. This paper proposes an extension of the growing neural gas (GNG) neural network technique for remeshing a triangular manifold mesh that represents bone microstructure. This method has the advantage of reconstructing the surface of a genus-n freeform object without a priori knowledge regarding the original object, its topology, or its shape.

  20. Energy neutral protocol based on hierarchical routing techniques for energy harvesting wireless sensor network

    Science.gov (United States)

    Muhammad, Umar B.; Ezugwu, Absalom E.; Ofem, Paulinus O.; Rajamäki, Jyri; Aderemi, Adewumi O.

    2017-06-01

    Recently, researchers in the field of wireless sensor networks have resorted to energy harvesting techniques that allow energy to be harvested from the ambient environment to power sensor nodes. Using such energy harvesting techniques together with proper routing protocols, an energy-neutral state can be achieved so that sensor nodes can run perpetually. In this paper, we propose an Energy Neutral LEACH routing protocol, which is an extension of the traditional LEACH protocol. The goal of the proposed protocol is to use a gateway node in each cluster so as to reduce the data transmission ranges of cluster head nodes. Simulation results show that the proposed routing protocol achieves higher throughput and ensures the energy-neutral status of the entire network.

  1. Application of the PRBS/FFT technique to digital simulations

    International Nuclear Information System (INIS)

    Hinds, H.W.

    1977-01-01

    This paper describes a method for obtaining a small-signal frequency response from a digital dynamic simulation. It employs a modified form of the PRBS/FFT technique, whereby a system is perturbed by a pseudo-random binary sequence and its response is analyzed using a fast Fourier transform-based program. Two applications of the technique are described: one involves a set of two coupled, second-order, ordinary differential equations; the other is a set of non-linear partial differential equations describing the thermohydraulic behaviour of water boiling in a fuel channel. (author)
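
    A small Python illustration of the same idea applied to a digital simulation (not the original application): a maximum-length PRBS perturbs a discrete first-order lag and the frequency response is estimated from the ratio of output to input spectra. The lag constant and sequence length are arbitrary.

      import numpy as np
      from scipy.signal import max_len_seq, lfilter

      prbs = 2.0 * max_len_seq(10)[0] - 1.0              # +/-1 PRBS, period 1023
      u = np.tile(prbs, 2)                               # two periods to pass the transient
      a = 0.9                                            # first-order lag: y[k] = a*y[k-1] + (1-a)*u[k]
      y = lfilter([1 - a], [1, -a], u)

      u_ss, y_ss = u[prbs.size:], y[prbs.size:]          # analyse the steady-state period
      U, Y = np.fft.rfft(u_ss), np.fft.rfft(y_ss)
      freq = np.fft.rfftfreq(prbs.size)                  # cycles per sample
      H_est = Y[1:] / U[1:]                              # skip the DC bin

      z = np.exp(2j * np.pi * freq[1:])
      H_true = (1 - a) / (1 - a / z)                     # analytic response of the lag
      print(np.max(np.abs(H_est - H_true)))              # estimate vs. analytic response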

  2. Dynamic Interactions for Network Visualization and Simulation

    Science.gov (United States)

    2009-03-01

    projects.htm, site accessed January 5, 2009. 12. John S. Weir, Major, USAF, Mediated User-Simulator Interactive Command with Visualization (MUSIC-V), Master's...Computing Sciences in Colleges, December 2005). 14. Enrique Campos-Nanez, "nscript user manual," Department of Systems Engineering, University of

  3. Prototyping and Simulation of Robot Group Intelligence using Kohonen Networks.

    Science.gov (United States)

    Wang, Zhijun; Mirdamadi, Reza; Wang, Qing

    2016-01-01

    Intelligent agents such as robots can form ad hoc networks and replace human beings in many dangerous scenarios such as a complicated disaster relief site. This project prototypes and builds a computer simulator to simulate robot kinetics, unsupervised learning using Kohonen networks, as well as group intelligence when an ad hoc network is formed. Each robot is modeled using an object with a simple set of attributes and methods that define its internal states and possible actions it may take under certain circumstances. As a result, simple, reliable, and affordable robots can be deployed to form the network. The simulator simulates a group of robots as an unsupervised learning unit and tests the learning results under scenarios with different complexities. The simulation results show that a group of robots could demonstrate highly collaborative behavior on a complex terrain. This study could potentially provide a software simulation platform for testing individual and group capability of robots before the design process and manufacturing of robots. Therefore, results of the project have the potential to reduce the cost and improve the efficiency of robot design and building.
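
    A minimal Kohonen self-organizing map update loop in Python, illustrating only the unsupervised-learning ingredient; the grid size, learning-rate schedule and two-dimensional stand-in samples are assumptions unrelated to the project's simulator.

      import numpy as np

      rng = np.random.default_rng(0)
      grid = 8                                            # 8x8 map of units
      weights = rng.random((grid, grid, 2))               # unit weights in a 2-D input space
      rows, cols = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")

      data = rng.random((2000, 2))                        # stand-in sensor/terrain samples
      for step, x in enumerate(data):
          lr = 0.5 * np.exp(-step / 1000.0)               # decaying learning rate
          sigma = 3.0 * np.exp(-step / 1000.0) + 0.5      # decaying neighbourhood radius
          dist = np.linalg.norm(weights - x, axis=2)
          bi, bj = np.unravel_index(np.argmin(dist), dist.shape)   # best-matching unit
          h = np.exp(-((rows - bi) ** 2 + (cols - bj) ** 2) / (2.0 * sigma ** 2))
          weights += lr * h[..., None] * (x - weights)    # pull units toward the sample

      print(weights.reshape(-1, 2)[:3])                   # a few learned prototype vectors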

  4. Display techniques for dynamic network data in transportation GIS

    Energy Technology Data Exchange (ETDEWEB)

    Ganter, J.H.; Cashwell, J.W.

    1994-05-01

    Interest in the characteristics of urban street networks is increasing at the same time new monitoring technologies are delivering detailed traffic data. These emerging streams of data may lead to the dilemma that airborne remote sensing has faced: how to select and access the data, and what meaning is hidden in them? Computer-assisted visualization techniques are needed to portray these dynamic data. Of equal importance are controls that let the user filter, symbolize, and replay the data to reveal patterns and trends over varying time spans. We discuss a prototype software system that addresses these requirements.

  5. Promoting Simulation Globally: Networking with Nursing Colleagues Across Five Continents.

    Science.gov (United States)

    Alfes, Celeste M; Madigan, Elizabeth A

    Simulation education is gaining momentum internationally and may provide the opportunity to enhance clinical education while disseminating evidence-based practice standards for clinical simulation and learning. There is a need to develop a cohesive leadership group that fosters support, networking, and sharing of simulation resources globally. The Frances Payne Bolton School of Nursing at Case Western Reserve University has had the unique opportunity to establish academic exchange programs with schools of nursing across five continents. Although the joint and mutual simulation activities have been extensive, each international collaboration has also provided insight into the innovations developed by global partners.

  6. HSimulator: Hybrid Stochastic/Deterministic Simulation of Biochemical Reaction Networks

    Directory of Open Access Journals (Sweden)

    Luca Marchetti

    2017-01-01

    HSimulator is a multithread simulator for mass-action biochemical reaction systems placed in a well-mixed environment. HSimulator provides optimized implementation of a set of widespread state-of-the-art stochastic, deterministic, and hybrid simulation strategies including the first publicly available implementation of the Hybrid Rejection-based Stochastic Simulation Algorithm (HRSSA). HRSSA, the fastest hybrid algorithm to date, allows for an efficient simulation of the models while ensuring the exact simulation of a subset of the reaction network modeling slow reactions. Benchmarks show that HSimulator is often considerably faster than the other considered simulators. The software, running on Java v6.0 or higher, offers a simulation GUI for modeling and visually exploring biological processes and a Javadoc-documented Java library to support the development of custom applications. HSimulator is released under the COSBI Shared Source license agreement (COSBI-SSLA).
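
    HRSSA itself is not reproduced here; the sketch below shows only the exact Gillespie stochastic simulation algorithm that such hybrid strategies apply to the slow-reaction subset, written as a generic Python sketch (HSimulator is a Java tool) with an assumed birth-death example network.

```python
import random, math

# Minimal Gillespie SSA; the example network and rate constants are assumptions.
def ssa(x, reactions, t_end):
    """x: dict of species counts; reactions: list of (propensity_fn, change_dict)."""
    t = 0.0
    while t < t_end:
        props = [rate(x) for rate, _ in reactions]
        a0 = sum(props)
        if a0 == 0:
            break
        t += -math.log(1.0 - random.random()) / a0      # exponential waiting time
        r = random.random() * a0                        # pick a reaction by propensity
        for (rate, change), a in zip(reactions, props):
            r -= a
            if r <= 0:
                for sp, d in change.items():
                    x[sp] += d
                break
    return x

# Simple birth-death network: 0 -> A (k1 = 10), A -> 0 (k2 * A, k2 = 0.1)
reactions = [(lambda x: 10.0, {'A': +1}),
             (lambda x: 0.1 * x['A'], {'A': -1})]
print(ssa({'A': 0}, reactions, t_end=100.0))
```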

  7. Network bursts in cortical neuronal cultures: 'noise'- versus 'pacemaker'-driven neural network simulations

    NARCIS (Netherlands)

    Gritsun, T.; Stegenga, J.; le Feber, Jakob; Rutten, Wim

    2009-01-01

    In this paper we address the issue of spontaneous bursting activity in cortical neuronal cultures and explain what might cause this collective behavior using computer simulations of two different neural network models. While the common approach to activate a passive network is done by introducing

  8. Advancing botnet modeling techniques for military and security simulations

    Science.gov (United States)

    Banks, Sheila B.; Stytz, Martin R.

    2011-06-01

    Simulation environments serve many purposes, but they are only as good as their content. One of the most challenging and pressing areas that call for improved content is the simulation of bot armies (botnets) and their effects upon networks and computer systems. Botnets are a new type of malware, a type that is more powerful and potentially dangerous than any other type of malware. A botnet's power derives from several capabilities including the following: 1) the botnet's capability to be controlled and directed throughout all phases of its activity, 2) a command and control structure that grows increasingly sophisticated, and 3) the ability of a bot's software to be updated at any time by the owner of the bot (a person commonly called a bot master or bot herder). Not only is a bot army powerful and agile in its technical capabilities, but it can also be extremely large; it can comprise tens of thousands, if not millions, of compromised computers, or it can be as small as a few thousand targeted systems. In all botnets, their members can surreptitiously communicate with each other and their command and control centers. In sum, these capabilities allow a bot army to execute attacks that are technically sophisticated, difficult to trace, tactically agile, massive, and coordinated. To improve our understanding of their operation and potential, we believe that it is necessary to develop computer security simulations that accurately portray bot army activities, with the goal of including bot army simulations within military simulation environments. In this paper, we investigate issues that arise when simulating bot armies and propose a combination of the biologically inspired MSEIR infection spread model coupled with the jump-diffusion infection spread model to portray botnet propagation.
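
    As a hedged illustration of the biologically inspired spread models mentioned above, the sketch below integrates a plain SEIR compartmental model of botnet propagation; the full MSEIR/jump-diffusion combination proposed in the paper is not reproduced, and all rate parameters are assumptions.

```python
# Minimal SEIR-style compartmental sketch of botnet propagation; parameters are illustrative.
def seir_step(s, e, i, r, beta, sigma, gamma, dt):
    n = s + e + i + r
    new_exposed = beta * s * i / n * dt      # susceptible hosts contacted by active bots
    new_infectious = sigma * e * dt          # latent (installed but inactive) bots activated
    new_removed = gamma * i * dt             # bots cleaned or taken offline
    return (s - new_exposed,
            e + new_exposed - new_infectious,
            i + new_infectious - new_removed,
            r + new_removed)

s, e, i, r = 1e6, 0.0, 100.0, 0.0
for _ in range(int(200 / 0.1)):
    s, e, i, r = seir_step(s, e, i, r, beta=0.4, sigma=0.2, gamma=0.05, dt=0.1)
print(round(i), round(r))   # infectious and removed hosts after 200 time units
```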

  9. SELANSI: a toolbox for simulation of stochastic gene regulatory networks.

    Science.gov (United States)

    Pájaro, Manuel; Otero-Muras, Irene; Vázquez, Carlos; Alonso, Antonio A

    2018-03-01

    Gene regulation is inherently stochastic. In many applications concerning Systems and Synthetic Biology, such as the reverse engineering and the de novo design of genetic circuits, stochastic effects (though potentially crucial) are often neglected due to the high computational cost of stochastic simulations. With advances in these fields there is an increasing need for tools providing accurate approximations of the stochastic dynamics of gene regulatory networks (GRNs) with reduced computational effort. This work presents SELANSI (SEmi-LAgrangian SImulation of GRNs), a software toolbox for the simulation of stochastic multidimensional gene regulatory networks. SELANSI exploits intrinsic structural properties of gene regulatory networks to accurately approximate the corresponding Chemical Master Equation with a partial integral differential equation that is solved by a semi-Lagrangian method with high efficiency. Networks under consideration might involve multiple genes with self and cross regulations, in which genes can be regulated by different transcription factors. Moreover, the validity of the method is not restricted to a particular type of kinetics. The tool offers total flexibility regarding network topology, kinetics and parameterization, as well as simulation options. SELANSI runs under the MATLAB environment, and is available under GPLv3 license at https://sites.google.com/view/selansi. antonio@iim.csic.es. © The Author(s) 2017. Published by Oxford University Press.

  10. Simulated, Emulated, and Physical Investigative Analysis (SEPIA) of networked systems.

    Energy Technology Data Exchange (ETDEWEB)

    Burton, David P.; Van Leeuwen, Brian P.; McDonald, Michael James; Onunkwo, Uzoma A.; Tarman, Thomas David; Urias, Vincent E.

    2009-09-01

    This report describes recent progress made in developing and utilizing hybrid Simulated, Emulated, and Physical Investigative Analysis (SEPIA) environments. Many organizations require advanced tools to analyze their information system's security, reliability, and resilience against cyber attack. Today's security analyses utilize real systems such as computers, network routers and other network equipment, computer emulations (e.g., virtual machines), and simulation models separately to analyze the interplay between threats and safeguards. In contrast, this work developed new methods to combine these three approaches to provide integrated hybrid SEPIA environments. Our SEPIA environments enable an analyst to rapidly configure hybrid environments to pass network traffic and perform, from the outside, like real networks. This provides higher fidelity representations of key network nodes while still leveraging the scalability and cost advantages of simulation tools. The result is to rapidly produce large yet relatively low-cost multi-fidelity SEPIA networks of computers and routers that let analysts quickly investigate threats and test protection approaches.

  11. Simulation Of Wireless Networked Control System Using TRUETIME And MATLAB

    Directory of Open Access Journals (Sweden)

    Nyan Phyo Aung

    2015-08-01

    Wireless networked control systems (WNCS) have attracted increasing research interest in the past decade. A wireless networked control system is composed of a group of distributed sensors and actuators that communicate through wireless links to achieve distributed sensing and execution tasks. This is particularly relevant for the areas of communication, control and computing, where the successful design of WNCS brings new challenges to researchers. The primary motivation of this survey paper is to examine the design issues and to provide directions for successful simulation and implementation of WNCS. The paper also reviews some simulation tools for such systems.

  12. Simulation of nonlinear random vibrations using artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Paez, T.L.; Tucker, S.; O`Gorman, C.

    1997-02-01

    The simulation of mechanical system random vibrations is important in structural dynamics, but it is particularly difficult when the system under consideration is nonlinear. Artificial neural networks provide a useful tool for the modeling of nonlinear systems; however, such modeling may be inefficient or insufficiently accurate when the system under consideration is complex. This paper shows that there are several transformations that can be used to uncouple and simplify the components of motion of a complex nonlinear system, thereby making its modeling and random vibration simulation, via component modeling with artificial neural networks, a much simpler problem. A numerical example is presented.

  13. Social Network Mixing Patterns In Mergers & Acquisitions - A Simulation Experiment

    Directory of Open Access Journals (Sweden)

    Robert Fabac

    2011-01-01

    In the contemporary world of global business and continuously growing competition, organizations tend to use mergers and acquisitions to enforce their position on the market. The future organization's design is a critical success factor in such undertakings. The field of social network analysis can enhance our understanding of these processes, as it lets us reason about the development of networks regardless of their origin. The analysis of mixing patterns is particularly useful, as it provides an insight into how nodes in a network connect with each other. We hypothesize that organizational networks with compatible mixing patterns will be integrated more successfully. After conducting a simulation experiment, we suggest an integration model based on the analysis of network assortativity. The model can be a guideline for organizational integration, such as occurs in mergers and acquisitions.
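
    A small sketch of the kind of mixing-pattern analysis described, using networkx (not the authors' experiment): two synthetic organizational networks with different degree assortativity are merged with a handful of random integration links, and the assortativity coefficient is compared before and after.

```python
import random
import networkx as nx

# Illustrative only: synthetic "organizations" with different mixing patterns.
random.seed(0)
org_a = nx.barabasi_albert_graph(100, 2, seed=1)        # hub-dominated hierarchy
org_b = nx.watts_strogatz_graph(100, 4, 0.1, seed=2)    # more egalitarian structure

merged = nx.disjoint_union(org_a, org_b)
# Integration links between the two organizations, chosen at random.
for _ in range(20):
    merged.add_edge(random.randrange(100), 100 + random.randrange(100))

for name, g in [("org A", org_a), ("org B", org_b), ("merged", merged)]:
    print(name, round(nx.degree_assortativity_coefficient(g), 3))
```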

  14. Application of simulation techniques in the probabilistic fracture mechanics

    International Nuclear Information System (INIS)

    De Ruyter van Steveninck, J.L.

    1995-03-01

    The Monte Carlo simulation is applied to a fracture mechanics model in order to assess the applicability of this simulation technique in probabilistic fracture mechanics. By means of the fracture mechanics model, the brittle fracture of a steel container or pipe with defects can be predicted. By means of the Monte Carlo simulation, the uncertainty regarding failures can also be determined. Based on the variations in the fracture toughness and the defect dimensions, the distribution of the chance of failure is determined. Attention is also paid to the impact of dependency between uncertain variables. Furthermore, the influence of the applied distributions of the uncertain variables and of non-destructive inspection on the chance of failure is analyzed. The Monte Carlo simulation results agree quite well with the results of other methods from probabilistic fracture mechanics. If an analytic expression can be found for the chance of failure, it is possible to determine the variation of the chance of failure, in addition to an estimate of the chance of failure. It also appears that the dependency between the uncertain variables has a large impact on the chance of failure. It is also concluded from the simulation that the chance of failure strongly depends on the crack depth, and therefore on the distribution of the crack depth. 15 figs., 7 tabs., 12 refs
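
    A minimal Monte Carlo sketch of the underlying idea: sample uncertain fracture toughness and crack depth, evaluate a stress intensity criterion, and estimate the chance of failure as the fraction of failing samples. The distributions, stress level, and geometry factor below are illustrative assumptions, not values from the report.

```python
import numpy as np

# Failure when the stress intensity factor K_I = Y * sigma * sqrt(pi * a)
# exceeds the fracture toughness K_IC. All values are illustrative assumptions.
rng = np.random.default_rng(42)
n = 200_000
a = rng.lognormal(mean=np.log(0.02), sigma=0.4, size=n)   # crack depth [m]
k_ic = rng.normal(loc=120.0, scale=15.0, size=n)          # toughness [MPa*sqrt(m)]
stress, Y = 250.0, 1.12                                   # stress [MPa], geometry factor

k_i = Y * stress * np.sqrt(np.pi * a)
p_fail = np.mean(k_i > k_ic)
print(f"estimated failure probability: {p_fail:.2e}")
```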

  15. Simulated annealing for tensor network states

    International Nuclear Information System (INIS)

    Iblisdir, S

    2014-01-01

    Markov chains for probability distributions related to matrix product states and one-dimensional Hamiltonians are introduced. With appropriate ‘inverse temperature’ schedules, these chains can be combined into a simulated annealing scheme for ground states of such Hamiltonians. Numerical experiments suggest that a linear, i.e., fast, schedule is possible in non-trivial cases. A natural extension of these chains to two-dimensional settings is next presented and tested. The obtained results compare well with Euclidean evolution. The proposed Markov chains are easy to implement and are inherently sign problem free (even for fermionic degrees of freedom). (paper)
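
    The matrix-product-state Markov chains of the paper are not reproduced here; the sketch below only illustrates the generic simulated-annealing ingredient, using a linear inverse-temperature schedule to search for the ground state of a small one-dimensional Ising chain. Sizes, couplings, and the schedule are illustrative assumptions.

```python
import math, random

# Generic simulated annealing with a linear "inverse temperature" schedule,
# applied to H = -sum_i s_i s_{i+1} on an open chain (ground state energy -(n-1)).
random.seed(0)
n, steps = 64, 20000
spins = [random.choice((-1, 1)) for _ in range(n)]

def flip_energy_change(i):
    """Energy change caused by flipping spin i."""
    left = spins[i - 1] if i > 0 else 0
    right = spins[i + 1] if i < n - 1 else 0
    return 2 * spins[i] * (left + right)

for k in range(steps):
    beta = 0.01 + (5.0 - 0.01) * k / steps        # linear inverse-temperature schedule
    i = random.randrange(n)
    dE = flip_energy_change(i)
    if dE <= 0 or random.random() < math.exp(-beta * dE):
        spins[i] = -spins[i]                      # Metropolis acceptance

energy = -sum(spins[i] * spins[i + 1] for i in range(n - 1))
print("final energy:", energy, "(ground state:", -(n - 1), ")")
```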

  16. Transforming network simulation data to semantic data for network attack planning

    CSIR Research Space (South Africa)

    Chan, Ke Fai Peter

    2017-03-01

    study was performed, using the Common Open Research Emulator (CORE), to generate the necessary network simulation data. The simulation data was analysed, and then transformed into linked data. The result of the transformation is a data file that adheres...

  17. Computer simulation of randomly cross-linked polymer networks

    International Nuclear Information System (INIS)

    Williams, Timothy Philip

    2002-01-01

    In this work, Monte Carlo and Stochastic Dynamics computer simulations of mesoscale model randomly cross-linked networks were undertaken. Task parallel implementations of the lattice Monte Carlo Bond Fluctuation model and the Kremer-Grest Stochastic Dynamics bead-spring continuum model were designed and used for this purpose. Lattice and continuum precursor melt systems were prepared and then cross-linked to varying degrees. The resultant networks were used to study structural changes during deformation and relaxation dynamics. The effects of a random network topology featuring a polydisperse distribution of strand lengths and an abundance of pendant chain ends were qualitatively compared to recent published work. A preliminary investigation into the effects of temperature on the structural and dynamical properties was also undertaken. Structural changes during isotropic swelling and uniaxial deformation revealed a pronounced non-affine deformation dependent on the degree of cross-linking. Fractal heterogeneities were observed in the swollen model networks and were analysed by considering constituent substructures of varying size. The network connectivity determined the length scales at which the majority of the substructure unfolding process occurred. Simulated stress-strain curves and diffraction patterns for uniaxially deformed swollen networks were found to be consistent with experimental findings. Analysis of the relaxation dynamics of various network components revealed a dramatic slowdown due to the network connectivity. The cross-link junction spatial fluctuations for networks close to the sol-gel threshold were observed to be at least comparable with the phantom network prediction. The dangling chain ends were found to display the largest characteristic relaxation time. (author)

  18. Nuclear fuel cycle cost analysis using a probabilistic simulation technique

    International Nuclear Information System (INIS)

    Won, Il Ko; Jong, Won Choi; Chul, Hyung Kang; Jae, Sol Lee; Kun, Jai Lee

    1998-01-01

    A simple approach was described to incorporate the Monte Carlo simulation technique into a fuel cycle cost estimate. As a case study, the once-through and recycle fuel cycle options were tested with some alternatives (i.e., the change of distribution type for input parameters), and the simulation results were compared with the values calculated by a deterministic method. A three-estimate approach was used for converting cost inputs into the statistical parameters of assumed probabilistic distributions. It was indicated that Monte Carlo simulation by a Latin Hypercube Sampling technique and subsequent sensitivity analyses were useful for examining the uncertainty propagation of fuel cycle costs, and could provide information to decision makers more efficiently than a deterministic method. It was shown from the change of distribution types of the input parameters that the values calculated by the deterministic method were set around the 40th-50th percentile of the output distribution function calculated by probabilistic simulation. Assuming lognormal distributions of the inputs, however, the values calculated by the deterministic method were set around the 85th percentile of the output distribution function calculated by probabilistic simulation. It was also indicated from the results of the sensitivity analysis that the front-end components were generally more sensitive than the back-end components, of which the uranium purchase cost was the most important factor of all. It also showed that the discount rate contributes substantially to the fuel cycle cost, ranking third or fifth among all components. The results of this study could be useful in applications to other options, such as the DUPIC (Direct Use of PWR spent fuel In CANDU reactors) cycle with high cost uncertainty
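
    A compact sketch of the sampling idea, assuming three-estimate (low / most likely / high) triangular inputs and a toy linear cost model that are not the study's actual data: Latin Hypercube Sampling stratifies each input, and the resulting cost distribution can then be compared with a deterministic point estimate.

```python
import numpy as np

# Latin Hypercube Sampling of three cost inputs; all figures are assumptions.
rng = np.random.default_rng(7)

def latin_hypercube(n, dims):
    """One stratified uniform sample per interval, independently permuted per dimension."""
    u = (rng.random((n, dims)) + np.arange(n)[:, None]) / n
    cols = [u[rng.permutation(n), d] for d in range(dims)]
    return np.stack(cols, axis=1)

def triangular(u, low, mode, high):
    """Inverse-CDF transform of uniform samples to a triangular distribution."""
    c = (mode - low) / (high - low)
    return np.where(u < c,
                    low + np.sqrt(u * (high - low) * (mode - low)),
                    high - np.sqrt((1 - u) * (high - low) * (high - mode)))

n = 10_000
u = latin_hypercube(n, 3)
uranium = triangular(u[:, 0], 40.0, 60.0, 120.0)       # $/kgU
enrichment = triangular(u[:, 1], 80.0, 100.0, 140.0)   # $/SWU
disposal = triangular(u[:, 2], 300.0, 400.0, 600.0)    # $/kgHM

cost = 8.0 * uranium + 6.0 * enrichment + 1.0 * disposal   # toy levelized cost, $/kgHM
print("median:", np.percentile(cost, 50).round(1),
      "5th-95th:", np.percentile(cost, [5, 95]).round(1))
```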

  19. Aggregated Representation of Distribution Networks for Large-Scale Transmission Network Simulations

    DEFF Research Database (Denmark)

    Göksu, Ömer; Altin, Müfit; Sørensen, Poul Ejnar

    2014-01-01

    As a common practice in large-scale transmission network analysis, the distribution networks have been represented as aggregated loads. However, with the increasing share of distributed generation, especially wind and solar power, in the distribution networks, it became necessary to include... the distributed generation within those analyses. In this paper a practical methodology to obtain the aggregated behaviour of the distributed generation is proposed. The methodology, which is based on the use of the IEC standard wind turbine models, is applied on a benchmark distribution network via simulations....

  20. Distributed Synchronization Technique for OFDMA-Based Wireless Mesh Networks Using a Bio-Inspired Algorithm

    Directory of Open Access Journals (Sweden)

    Mi Jeong Kim

    2015-07-01

    In this paper, a distributed synchronization technique based on a bio-inspired algorithm is proposed for an orthogonal frequency division multiple access (OFDMA)-based wireless mesh network (WMN) with a time difference of arrival. The proposed time- and frequency-synchronization technique uses only the signals received from the neighbor nodes, by considering the effect of the propagation delay between the nodes. It achieves a fast synchronization with a relatively low computational complexity because it is operated in a distributed manner, not requiring any feedback channel for the compensation of the propagation delays. In addition, a self-organization scheme that can be effectively used to construct 1-hop neighbor nodes is proposed for an OFDMA-based WMN with a large number of nodes. The performance of the proposed technique is evaluated with regard to the convergence property and synchronization success probability using a computer simulation.

  1. Distributed Synchronization Technique for OFDMA-Based Wireless Mesh Networks Using a Bio-Inspired Algorithm.

    Science.gov (United States)

    Kim, Mi Jeong; Maeng, Sung Joon; Cho, Yong Soo

    2015-07-28

    In this paper, a distributed synchronization technique based on a bio-inspired algorithm is proposed for an orthogonal frequency division multiple access (OFDMA)-based wireless mesh network (WMN) with a time difference of arrival. The proposed time- and frequency-synchronization technique uses only the signals received from the neighbor nodes, by considering the effect of the propagation delay between the nodes. It achieves a fast synchronization with a relatively low computational complexity because it is operated in a distributed manner, not requiring any feedback channel for the compensation of the propagation delays. In addition, a self-organization scheme that can be effectively used to construct 1-hop neighbor nodes is proposed for an OFDMA-based WMN with a large number of nodes. The performance of the proposed technique is evaluated with regard to the convergence property and synchronization success probability using a computer simulation.
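
    The algorithm of the paper is not reproduced here; as a generic stand-in, the sketch below runs Kuramoto-style coupled phase oscillators on a random mesh, where each node adjusts its phase using only signals from its neighbors until the network reaches phase coherence. Topology, coupling gain, and clock-frequency spread are illustrative assumptions.

```python
import numpy as np

# Bio-inspired distributed synchronization sketch: coupled phase oscillators.
rng = np.random.default_rng(3)
n, k_gain, dt, steps = 30, 1.5, 0.01, 5000
omega = 2 * np.pi * (10.0 + 0.1 * rng.standard_normal(n))   # per-node clock frequencies
theta = 2 * np.pi * rng.random(n)                           # initial phase offsets

# Random mesh connectivity (symmetric adjacency matrix).
adj = (rng.random((n, n)) < 0.2).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T

for _ in range(steps):
    # Each node only uses the phase difference to its neighbors.
    coupling = (adj * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + k_gain * coupling)

order = abs(np.exp(1j * theta).mean())   # Kuramoto order parameter, 1.0 = fully in phase
print(f"phase coherence: {order:.3f}")
```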

  2. 3D Digital Simulation of Minnan Temple Architecture Caisson's Craft Techniques

    Science.gov (United States)

    Lin, Y. C.; Wu, T. C.; Hsu, M. F.

    2013-07-01

    The caisson is one of the important representations of Minnan (southern Fujian) temple architecture craft techniques and decorative aesthetics. The special component design and group building method present the architectural thinking and personal characteristics of the great carpenters of Minnan temple architecture. In the late Qing Dynasty, the appearance and style of the caissons of famous temples in Taiwan clearly presented the building techniques of the great carpenters. However, as the years went by, the caisson design and craft techniques were not fully inherited, which has been a great loss of cultural assets. Accordingly, taking the caisson of Fulong temple, a work by a well-known great carpenter in Tainan, as an example, this study obtained the thinking principles of the original design and the design method at the initial period of construction through interview records and the step of redrawing the "Tng-Ko" (traditional design, stakeout and construction tool). We obtained the 3D point cloud model of the caisson of Fulong temple using 3D laser scanning technology, and established the 3D digital model of each component of the caisson. Based on the caisson component procedure obtained from interview records, this study conducted the digital simulation of the caisson components to completely record and present the caisson design, construction and completion procedure. This model of preserving the craft techniques of the Minnan temple caisson by using digital technology makes a specific contribution to the heritage of the craft techniques while providing an important reference for the digital preservation of human cultural assets.

  3. Technique for in situ leach simulation of uranium ores

    International Nuclear Information System (INIS)

    Grant, D.C.; Seidel, D.C.; Nichols, I.L.

    1985-01-01

    In situ uranium mining offers the advantages of minimal environmental disturbance, low capital and operating costs, and reduced mining development time. It is becoming an increasingly attractive mining method for the recovery of uranium from secondary ore deposits. In order to better understand the process, a laboratory technique was developed and used to study and simulate both the chemical and physical phenomena occurring in ore bodies during in situ leaching. The laboratory simulation technique has been used to determine effects of leaching variables on permeability, uranium recovery, and post-leach aquifer restoration. This report describes the simulation system and testing procedure in sufficient detail to allow the construction of the system, and to perform the desired leaching tests. With construction of such a system, in situ leaching of a given ore using various leach conditions can be evaluated relatively rapidly in the laboratory. Not only could optimum leach conditions be selected for existing ore bodies, but also exploitation of new ore bodies could be accelerated. 8 references, 8 figures, 2 tables

  4. Numerical simulation for gas-liquid two-phase flow in pipe networks

    International Nuclear Information System (INIS)

    Li Xiaoyan; Kuang Bo; Zhou Guoliang; Xu Jijun

    1998-01-01

    The complex characteristics of pipe networks cannot be represented directly by single-phase flow models or by gas-liquid two-phase flow pressure drop and void fraction models. Fluid network theory and computer numerical simulation technology were applied to two-phase flow pipe networks to carry out simulation and computation. The simulation results show that the flow resistance distribution in a two-phase pipe network is non-linear

  5. Dynamic simulation of a steam generator by neural networks

    International Nuclear Information System (INIS)

    Masini, R.; Padovani, E.; Ricotti, M.E.; Zio, E.

    1999-01-01

    Numerical simulation by computers of the dynamic evolution of complex systems and components is a fundamental phase of any modern engineering design activity. This is of particular importance for risk-based design projects which require that the system behavior be analyzed under several and often extreme conditions. The traditional methods of simulation typically entail long, iterative processes which lead to large simulation times, often exceeding the transients' real time. Artificial neural networks (ANNs) may be exploited in this context, their advantages residing mainly in the speed of computation, in the capability of generalizing from few examples, in the robustness to noisy and partially incomplete data and in the capability of performing empirical input-output mapping without complete knowledge of the underlying physics. In this paper we present a novel approach to dynamic simulation by ANNs based on a superposition scheme in which a set of networks are individually trained, each one to respond to a different input forcing function. The dynamic simulation of a steam generator is considered as an example to show the potentialities of this tool and to point out the difficulties and crucial issues which typically arise when attempting to establish an efficient neural network simulator. The structure of the network system is such that, at each time step, a portion of the past evolution of the transient is fed back, which allows a good reproduction of non-linear dynamic behaviors as well. A nice characteristic of the approach is that the modularization of the training substantially reduces its burden and gives this neural simulation tool a nice feature of transportability. (orig.)
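
    A minimal numpy sketch of the feedback idea described above: a small one-hidden-layer network is trained for one-step-ahead prediction y[k] = f(u[k-1], y[k-1]) on data from a surrogate first-order plant, and is then run recursively, feeding a portion of its own past output back, to simulate a transient. The plant, network size, and training setup are assumptions, not the steam generator model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Surrogate "plant": first-order lag driven by a forcing function u.
dt, tau, n = 0.05, 1.0, 2000
u = np.sin(0.3 * np.arange(n) * dt) + 0.1 * rng.standard_normal(n)
y = np.zeros(n)
for k in range(1, n):
    y[k] = y[k - 1] + dt * (u[k - 1] - y[k - 1]) / tau

# Training data: inputs (u[k-1], y[k-1]) -> target y[k].
X = np.column_stack([u[:-1], y[:-1]])
T = y[1:]

# One-hidden-layer network trained by plain full-batch gradient descent.
w1 = 0.5 * rng.standard_normal((2, 8)); b1 = np.zeros(8)
w2 = 0.5 * rng.standard_normal(8); b2 = 0.0
lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ w1 + b1)
    pred = h @ w2 + b2
    err = pred - T
    gw2 = h.T @ err / len(T); gb2 = err.mean()
    gh = np.outer(err, w2) * (1 - h ** 2)
    gw1 = X.T @ gh / len(T); gb1 = gh.mean(axis=0)
    w1 -= lr * gw1; b1 -= lr * gb1; w2 -= lr * gw2; b2 -= lr * gb2

# Recursive simulation: feed the network's own previous output back at each step.
y_sim = np.zeros(n)
for k in range(1, n):
    h = np.tanh(np.array([u[k - 1], y_sim[k - 1]]) @ w1 + b1)
    y_sim[k] = h @ w2 + b2
print("RMS simulation error:", np.sqrt(np.mean((y_sim - y) ** 2)).round(4))
```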

  6. Method of construction of rational corporate network using the simulation model

    Directory of Open Access Journals (Sweden)

    V.N. Pakhomovа

    2013-06-01

    Purpose. To search for new options for the transition from Ethernet technology. Methodology. Physical structuring of the Fast Ethernet network based on hubs, and logical structuring of the Fast Ethernet network using switches; organization of VLANs based on port grouping and in accordance with the IEEE 802.1Q standard. Findings. Options for improving the Ethernet network are proposed, based on the Fast Ethernet and VLAN technologies, using simulation models in the NetCracker and Cisco Packet Tracer packages, respectively. Originality. A technique for designing a local area network using VLAN technology is proposed. Practical value. Each of the options for improving the "Dniprozaliznychproekt" network has its advantages. The transition from Ethernet to Fast Ethernet technology is simple and economical; it requires only one switch, whereas the VLAN organization requires at least two. VLAN technology, however, has the following advantages: reduced network load, isolation of broadcast traffic, changes to the logical network structure without changing its physical structure, and improved network security. The transition from Ethernet to VLAN technology allows the physical topology to be separated from the logical one, and the IEEE 802.1Q frame format simplifies the implementation of virtual networks in enterprises.

  7. Fracture Network Modeling and GoldSim Simulation Support

    OpenAIRE

    杉田 健一郎; Dershowiz, W.

    2003-01-01

    During Heisei-14, Golder Associates provided support for JNC Tokai through data analysis and simulation of the MIU Underground Rock Laboratory, participation in Task 6 of the Aspo Task Force on Modelling of Groundwater Flow and Transport, and analysis of repository safety assessment technologies including cell networks for evaluation of the disturbed rock zone (DRZ) and total systems performance assessment (TSPA).

  8. Controller tuning of district heating networks using experiment design techniques

    International Nuclear Information System (INIS)

    Dobos, Laszlo; Abonyi, Janos

    2011-01-01

    There are various governmental policies aimed at reducing the dependence on fossil fuels for space heating and at reducing the associated emission of greenhouse gases. DHNs (district heating networks) could provide an efficient method for house and space heating by utilizing residual industrial waste heat. In such systems, heat is produced and/or thermally upgraded in a central plant and then distributed to the end users through a pipeline network. The control strategies of these networks are rather difficult to design due to the non-linearity of the system and the strong interconnection between the controlled variables. That is why an NMPC (non-linear model predictive controller) could be applied to fulfill the heat demand of the consumers. The main objective of this paper is to propose a tuning method for the applied NMPC to fulfill the control goal as soon as possible. The performance of the controller is characterized by an economic cost function based on pre-defined operation ranges. A methodology from the field of experiment design is applied to tune the model predictive controller to reach the best performance. The efficiency of the proposed methodology is proven through a case study of a simulated NMPC-controlled DHN. -- Highlights: → To improve the energetic and economic efficiency of a DHN, an appropriate control system is necessary. → The time consumption of transitions can be shortened with a proper control system. → An NMPC is proposed as the control system. → The NMPC is tuned by the simplex methodology, using an economics-oriented cost function. → The proposed NMPC needs a detailed model of the DHN based on a physical description.
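
    The NMPC and the district heating network model are not reproduced here; to sketch the tuning loop itself, the example below tunes a stand-in PI controller on a first-order thermal lag with the simplex (Nelder-Mead) method against an economics-oriented cost function. All plant constants, bounds, and weights are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def simulate(kp, ki, dt=0.1, steps=2000, setpoint=70.0):
    """Closed-loop simulation; cost penalizes tracking error and heat input (the 'price' term)."""
    y, integ, cost = 20.0, 0.0, 0.0
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        u = np.clip(kp * err + ki * integ, 0.0, 100.0)   # bounded heat input
        y += dt * (0.05 * u - 0.02 * (y - 20.0))         # simple thermal lag
        cost += dt * (err ** 2 + 0.01 * u ** 2)
    return cost

# Simplex (Nelder-Mead) search over the controller parameters.
res = minimize(lambda p: simulate(*p), x0=[1.0, 0.1], method="Nelder-Mead")
print("tuned (kp, ki):", res.x.round(3), "cost:", round(res.fun, 1))
```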

  9. Distributed Sensor Network Software Development Testing through Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Brennan, Sean M. [Univ. of New Mexico, Albuquerque, NM (United States)

    2003-12-01

    The distributed sensor network (DSN) presents a novel and highly complex computing platform with difficulties and opportunities that are just beginning to be explored. The potential of sensor networks extends from monitoring for threat reduction, to conducting instant and remote inventories, to ecological surveys. Developing and testing for robust and scalable applications is currently practiced almost exclusively in hardware. The Distributed Sensors Simulator (DSS) is an infrastructure that allows the user to debug and test software for DSNs independent of hardware constraints. The flexibility of DSS allows developers and researchers to investigate topological, phenomenological, networking, robustness and scaling issues, to explore arbitrary algorithms for distributed sensors, and to defeat those algorithms through simulated failure. The user specifies the topology, the environment, the application, and any number of arbitrary failures; DSS provides the virtual environmental embedding.

  10. PWR system simulation and parameter estimation with neural networks

    International Nuclear Information System (INIS)

    Akkurt, Hatice; Colak, Uener

    2002-01-01

    A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting individual components to each other. The validity of the models for individual components and the overall system has been verified. The system response to given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in training and prediction stages with neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that the neural network estimations are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146 and 0.353% for reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. The noise imposed on the input parameters of the neural network deteriorates the power estimation capability whereas the reactivity estimation capability is not significantly affected.

  11. PWR system simulation and parameter estimation with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Akkurt, Hatice; Colak, Uener E-mail: uc@nuke.hacettepe.edu.tr

    2002-11-01

    A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting individual components to each other. The validity of the models for individual components and the overall system has been verified. The system response to given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in training and prediction stages with neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that the neural network estimations are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146 and 0.353% for reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. The noise imposed on the input parameters of the neural network deteriorates the power estimation capability whereas the reactivity estimation capability is not significantly affected.

  12. A New Multiscale Technique for Time-Accurate Geophysics Simulations

    Science.gov (United States)

    Omelchenko, Y. A.; Karimabadi, H.

    2006-12-01

    Large-scale geophysics systems are frequently described by multiscale reactive flow models (e.g., wildfire and climate models, multiphase flows in porous rocks, etc.). Accurate and robust simulations of such systems by traditional time-stepping techniques face a formidable computational challenge. Explicit time integration suffers from global (CFL and accuracy) timestep restrictions due to inhomogeneous convective and diffusion processes, as well as closely coupled physical and chemical reactions. Application of adaptive mesh refinement (AMR) to such systems may not be always sufficient since its success critically depends on a careful choice of domain refinement strategy. On the other hand, implicit and timestep-splitting integrations may result in a considerable loss of accuracy when fast transients in the solution become important. To address this issue, we developed an alternative explicit approach to time-accurate integration of such systems: Discrete-Event Simulation (DES). DES enables asynchronous computation by automatically adjusting the CPU resources in accordance with local timescales. This is done by encapsulating flux-conservative updates of numerical variables in the form of events, whose execution and synchronization is explicitly controlled by imposing accuracy and causality constraints. As a result, at each time step DES self-adaptively updates only a fraction of the global system state, which eliminates unnecessary computation of inactive elements. DES can be naturally combined with various mesh generation techniques. The event-driven paradigm results in robust and fast simulation codes, which can be efficiently parallelized via a new preemptive event processing (PEP) technique. We discuss applications of this novel technology to time-dependent diffusion-advection-reaction and CFD models representative of various geophysics applications.
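
    A minimal sketch of the discrete-event idea: each cell schedules its own next update according to its local timescale, so fast regions are updated often and slow regions rarely, instead of forcing a single global timestep. The two decaying cells below are illustrative assumptions, not the paper's diffusion-advection-reaction model.

```python
import heapq

# Two "cells" with very different local timescales; values are illustrative.
cells = {"fast": {"value": 1.0, "rate": 0.5, "dt": 0.01},
         "slow": {"value": 1.0, "rate": 0.5, "dt": 1.0}}

events = [(c["dt"], name) for name, c in cells.items()]   # (next update time, cell id)
heapq.heapify(events)
updates = {name: 0 for name in cells}

t_end = 10.0
while events and events[0][0] <= t_end:
    t, name = heapq.heappop(events)                       # earliest pending event
    cell = cells[name]
    cell["value"] *= (1.0 - cell["rate"] * cell["dt"])    # local decay over the local dt
    updates[name] += 1
    heapq.heappush(events, (t + cell["dt"], name))        # reschedule this cell only

print(updates)   # the fast cell is updated ~100x more often than the slow one
print({k: round(c["value"], 4) for k, c in cells.items()})
```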

  13. Validation techniques of agent based modelling for geospatial simulations

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

    One of the most interesting aspects of modelling and simulation study is to describe real world phenomena that have specific properties, especially those that are on large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases it is impossible. Therefore, miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. They have been used in different areas; for instance, geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI’s ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in using the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than traditional simulation. But a key challenge of ABMS is the difficulty of their validation and verification. Because of frequent emergence patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems to be necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  14. [Preparation of simulated craniocerebral models via three-dimensional printing technique].

    Science.gov (United States)

    Lan, Q; Chen, A L; Zhang, T; Zhu, Q; Xu, T

    2016-08-09

    A three-dimensional (3D) printing technique was used to prepare simulated craniocerebral models, which were applied to preoperative planning and surgical simulation. The image data was collected from the PACS system. Image data of skull bone, brain tissue and tumors, cerebral arteries and aneurysms, and functional regions and relative neural tracts of the brain were extracted from thin slice scan (slice thickness 0.5 mm) of computed tomography (CT), magnetic resonance imaging (MRI, slice thickness 1 mm), computed tomography angiography (CTA), and functional magnetic resonance imaging (fMRI) data, respectively. MIMICS software was applied to reconstruct colored virtual models by identifying and differentiating tissues according to their gray scales. Then the colored virtual models were submitted to a 3D printer which produced life-sized craniocerebral models for surgical planning and surgical simulation. 3D printing craniocerebral models allowed neurosurgeons to perform complex procedures in specific clinical cases through detailed surgical planning. It offered great convenience for evaluating the size of the spatial fissure of the sellar region before surgery, which helped to optimize surgical approach planning. These 3D models also provided detailed information about the location of aneurysms and their parent arteries, which helped surgeons to choose appropriate aneurysm clips, as well as perform surgical simulation. The models further gave clear indications of the depth and extent of tumors and their relationship to eloquent cortical areas and adjacent neural tracts, which helped to avoid surgical damage to important neural structures. As a novel and promising technique, the application of 3D printing craniocerebral models could improve surgical planning by converting virtual visualization into real life-sized models. It also contributes to functional anatomy study.

  15. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation study is to describe real world phenomena that have specific properties, especially those that are on large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases it is impossible. Therefore, miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. They have been used in different areas; for instance, geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in using the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than traditional simulation. But a key challenge of ABMS is the difficulty of their validation and verification. Because of frequent emergence patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems to be necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  16. Radiotracer technique for leakage detection under simulated conditions

    International Nuclear Information System (INIS)

    Yelgaonkar, V.N.; Sharma, V.K.; Tapase, A.S.

    2001-01-01

    Radiotracer techniques are often used to locate leaks in underground pipelines. An attempt was made to standardize radiotracer pulse migration in terms of minimum detectable limit. For this purpose a 6 inch diameter 1200 long steel pipe was used. Two leak rates viz. 10 litres per minute and 1 litre per minute with an accuracy of ± 10% were simulated. The experiments on this pipeline showed that this method could be used to locate a leak of the order of 1 litre per minute in a 6 inch diameter isolated underground pipeline. (author)

  17. Simulation of Attacks for Security in Wireless Sensor Network.

    Science.gov (United States)

    Diaz, Alvaro; Sanchez, Pablo

    2016-11-18

    The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node's software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work.

  18. Simulation of Attacks for Security in Wireless Sensor Network

    Science.gov (United States)

    Diaz, Alvaro; Sanchez, Pablo

    2016-01-01

    The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node’s software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work. PMID:27869710
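
    The virtual platform itself is not reproduced; as a loose illustration of attacker modeling with power-consumption estimation, the sketch below emulates three generic attacker behaviors (jamming-induced retransmissions, sinkhole-style extra forwarding, sleep deprivation) as modifiers of a node's duty cycle and tallies their energy impact. The attacker types named here and all energy figures are assumptions, not the three attacker types defined in the paper.

```python
import random

E_LISTEN, E_TX, E_CPU = 0.5, 1.0, 0.2        # mJ per active slot (assumed)

def run_node(slots, attacker=None, seed=0):
    """Tally a node's energy over 'slots' time slots under an assumed attacker model."""
    rng = random.Random(seed)
    energy = 0.0
    for _ in range(slots):
        tx = rng.random() < 0.1              # normal traffic: transmit in 10% of slots
        listen = rng.random() < 0.5          # normal duty cycling: listen half the time
        if attacker == "jamming":
            tx = tx or rng.random() < 0.3    # retransmissions forced by a jammed channel
        elif attacker == "sinkhole":
            tx = tx or rng.random() < 0.2    # extra forwarding drawn towards the attacker
        elif attacker == "sleep_deprivation":
            listen = True                    # node is kept awake by bogus traffic
        energy += E_CPU + (E_TX if tx else 0.0) + (E_LISTEN if listen else 0.0)
    return energy

for a in [None, "jamming", "sinkhole", "sleep_deprivation"]:
    print(a or "no attack", round(run_node(10_000, a), 1), "mJ")
```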

  19. Strategies in edge plasma simulation using adaptive dynamic nodalization techniques

    International Nuclear Information System (INIS)

    Kainz, A.; Weimann, G.; Kamelander, G.

    2003-01-01

    A wide span of steady-state and transient edge plasma simulation problems requires accurate discretization techniques and can then be treated with Finite Element (FE) and Finite Volume (FV) methods. The software used here to meet these meshing requirements is a 2D finite element grid generator. It allows producing adaptive unstructured grids that take into consideration the flux surface characteristics. To comply with the common mesh handling features of FE/FV packages, some options have been added to the basic generation tool. These enhancements include quadrilateral meshes without non-regular transition elements, obtained by substituting them with transition constructions consisting of regular quadrilateral elements. Furthermore, triangular grids can be created with one edge parallel to the magnetic field and modified by the basic adaptation/realignment techniques. Enhanced code operation properties and processing capabilities are expected. (author)

  20. Parallel pic plasma simulation through particle decomposition techniques

    International Nuclear Information System (INIS)

    Briguglio, S.; Vlad, G.; Di Martino, B.; Naples, Univ. 'Federico II'

    1998-02-01

    Particle-in-cell (PIC) codes are among the major candidates to yield a satisfactory description of the detail of kinetic effects, such as the resonant wave-particle interaction, relevant in determining the transport mechanism in magnetically confined plasmas. A significant improvement of the simulation performance of such codes can be expected from parallelization, e.g., by distributing the particle population among several parallel processors. Parallelization of a hybrid magnetohydrodynamic-gyrokinetic code has been accomplished within the High Performance Fortran (HPF) framework, and tested on the IBM SP2 parallel system, using a 'particle decomposition' technique. The adopted technique requires a moderate effort in porting the code in parallel form and results in intrinsic load balancing and modest inter-processor communication. The performance tests obtained confirm the hypothesis of high effectiveness of the strategy, if targeted towards moderately parallel architectures. Optimal use of resources is also discussed with reference to a specific physics problem
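
    A small sketch of the particle-decomposition idea using Python multiprocessing (the hybrid MHD-gyrokinetic code and its HPF implementation are not reproduced): the particle population is split among workers, each deposits its particles' charge on a private copy of the grid, and the partial grids are summed in a single reduction step.

```python
import numpy as np
from multiprocessing import Pool

NGRID = 64

def deposit(positions):
    """Nearest-grid-point charge deposition for one worker's particle block."""
    grid = np.zeros(NGRID)
    idx = (positions * NGRID).astype(int) % NGRID
    np.add.at(grid, idx, 1.0)
    return grid

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    particles = rng.random(400_000)                 # 1-D positions in [0, 1)
    blocks = np.array_split(particles, 4)           # particle decomposition across workers
    with Pool(4) as pool:
        partial_grids = pool.map(deposit, blocks)
    density = np.sum(partial_grids, axis=0)         # reduction over the workers' grids
    print(density.sum(), density[:4])
```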

  1. Exploiting Social Media Sensor Networks through Novel Data Fusion Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Kouri, Tina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    Unprecedented amounts of data are continuously being generated by sensors (“hard” data) and by humans (“soft” data), and this data needs to be exploited to its full potential. The first step in exploiting this data is determine how the hard and soft data are related to each other. In this project we fuse hard and soft data, using the attributes of each (e.g., time and space), to gain more information about interesting events. Next, we attempt to use social networking textual data to predict the present (i.e., predict that an interesting event is occurring and details about the event) using data mining, machine learning, natural language processing, and text analysis techniques.

  2. Battery Performance Modelling and Simulation: a Neural Network Based Approach

    Science.gov (United States)

    Ottavianelli, Giuseppe; Donati, Alessandro

    2002-01-01

    This project has developed against the background of ongoing research within the Control Technology Unit (TOS-OSC) of the Special Projects Division at the European Space Operations Centre (ESOC) of the European Space Agency. The purpose of this research is to develop and validate an Artificial Neural Network tool (ANN) able to model, simulate and predict the Cluster II battery system's performance degradation. (The Cluster II mission consists of four spacecraft flying in tetrahedral formation, aimed at observing and studying the interaction between the Sun and the Earth by passing in and out of our planet's magnetic field). This prototype tool, named BAPER and developed with a commercial neural network toolbox, could be used to support short- and medium-term mission planning in order to improve and maximise the batteries' lifetime, determining the best future charge/discharge cycles for the batteries given their present states, in view of a Cluster II mission extension. This study focuses on the five Silver-Cadmium batteries onboard Tango, the fourth Cluster II satellite, but time restraints have so far allowed an assessment to be performed only on the first battery. In their most basic form, ANNs are hyper-dimensional curve fits for non-linear data. With their remarkable ability to derive meaning from complicated or imprecise history data, ANNs can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. ANNs learn by example, and this is why they can be described as inductive, or data-based, models for the simulation of input/target mappings. A trained ANN can be thought of as an "expert" in the category of information it has been given to analyse, and this expert can then be used, as in this project, to provide projections given new situations of interest and answer "what if" questions. The most appropriate algorithm, in terms of training speed and memory storage requirements, is clearly the Levenberg

  3. Calibration Technique of the Irradiated Thermocouple using Artificial Neural Network

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Jin Tae; Joung, Chang Young; Ahn, Sung Ho; Yang, Tae Ho; Heo, Sung Ho; Jang, Seo Yoon [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    To correct the signals, the degradation rate of sensors needs to be analyzed, and the sensors should be re-calibrated periodically. In particular, because thermocouples instrumented in the nuclear fuel rod are degraded owing to the high neutron fluence generated from the nuclear fuel, the periodic re-calibration process is necessary. However, despite the re-calibration of the thermocouple, the measurement error will increase until the next re-calibration. In this study, based on the periodically calibrated temperature-voltage data, an interpolation technique using an artificial neural network is introduced to minimize the calibration error of the C-type thermocouple under the irradiation test. The test result shows that the calculated voltages derived from the interpolation function have good agreement with the experimental sampling data, and they also accurately interpolate the voltages at arbitrary temperature and neutron fluence. That is, once the reference data is obtained by experiments, it is possible to accurately calibrate the voltage signal at a certain neutron fluence and temperature using an artificial neural network.

  4. ESIM_DSN Web-Enabled Distributed Simulation Network

    Science.gov (United States)

    Bedrossian, Nazareth; Novotny, John

    2002-01-01

    In this paper, the eSim(sup DSN) approach to achieving distributed simulation capability using the Internet is presented. With this approach a complete simulation can be assembled from component subsystems that run on different computers. The subsystems interact with each other via the Internet. The distributed simulation uses a hub-and-spoke type network topology. It provides the ability to dynamically link simulation subsystem models to different computers as well as the ability to assign a particular model to each computer. A proof-of-concept demonstrator is also presented. The eSim(sup DSN) demonstrator can be accessed at http://www.jsc.draper.com/esim which hosts various examples of Web-enabled simulations.

  5. CT simulation technique for craniospinal irradiation in supine position

    International Nuclear Information System (INIS)

    Lee, Suk; Kim, Yong Bae; Chu, Sung Sil; Suh, Chang Ok; Kwon, Soo Il

    2002-01-01

    In order to perform craniospinal irradiation (CSI) in the supine position on patients who are unable to lie in the prone position, a new simulation technique using a CT simulator was developed and its availability was evaluated. A CT simulator and a 3-D conformal treatment planning system were used to develop CSI in the supine position. The head and neck were immobilized with a thermoplastic mask in the supine position and the entire body was immobilized with a Vac-Loc. A volumetric image was then obtained using the CT simulator. In order to improve the reproducibility of the patients' setup, datum lines and points were marked on the head and the body. Virtual fluoroscopy was performed with the removal of visual obstacles such as the treatment table or the immobilization devices. After the virtual simulation, the treatment isocenters of each field were marked on the body and the immobilization devices at the conventional simulation room. Each treatment field was confirmed by comparing the fluoroscopy images with the digitally reconstructed radiography (DRR)/digitally composite radiography (DCR) images from the virtual simulation. The port verification films from the first treatment were also compared with the DRR/DCR images for a geometrical verification. CSI in the supine position was successfully performed in 9 patients. It required less than 20 minutes to construct the immobilization device and to obtain the whole body volumetric images. This made it possible to not only reduce the patients' inconvenience, but also to eliminate the position change variables during the long conventional simulation process. In addition, by obtaining the CT volumetric image, critical organs, such as the eyeballs and spinal cord, were better defined, and the accuracy of the port designs and shielding was improved. The difference between the DRRs and the portal films were less than 3 mm in the vertebral contour. CSI in the supine position is feasible in patients who cannot lie on

  6. CT simulation technique for craniospinal irradiation in supine position

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Suk; Kim, Yong Bae; Chu, Sung Sil; Suh, Chang Ok [Yonsei Cancer Center, College of Medicine, Yonsei University, Seoul (Korea, Republic of); Kwon, Soo Il [Kyonggi University, Seoul (Korea, Republic of)

    2002-06-15

    In order to perform craniospinal irradiation (CSI) in the supine position on patients who are unable to lie in the prone position, a new simulation technique using a CT simulator was developed and its availability was evaluated. A CT simulator and a 3-D conformal treatment planning system were used to develop CSI in the supine position. The head and neck were immobilized with a thermoplastic mask in the supine position and the entire body was immobilized with a Vac-Loc. A volumetric image was then obtained using the CT simulator. In order to improve the reproducibility of the patients' setup, datum lines and points were marked on the head and the body. Virtual fluoroscopy was performed with the removal of visual obstacles such as the treatment table or the immobilization devices. After the virtual simulation, the treatment isocenters of each field were marked on the body and the immobilization devices in the conventional simulation room. Each treatment field was confirmed by comparing the fluoroscopy images with the digitally reconstructed radiography (DRR)/digitally composite radiography (DCR) images from the virtual simulation. The port verification films from the first treatment were also compared with the DRR/DCR images for a geometrical verification. CSI in the supine position was successfully performed in 9 patients. It required less than 20 minutes to construct the immobilization device and to obtain the whole body volumetric images. This made it possible to not only reduce the patients' inconvenience, but also to eliminate the position change variables during the long conventional simulation process. In addition, by obtaining the CT volumetric image, critical organs, such as the eyeballs and spinal cord, were better defined, and the accuracy of the port designs and shielding was improved. The difference between the DRRs and the portal films was less than 3 mm in the vertebral contour. CSI in the supine position is feasible in patients who cannot lie in the prone position.

  7. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    Energy Technology Data Exchange (ETDEWEB)

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    Nonconventional wells, which include horizontal, deviated, multilateral and ''smart'' wells, offer great potential for the efficient management of oil and gas reservoirs. These wells are able to contact larger regions of the reservoir than conventional wells and can also be used to target isolated hydrocarbon accumulations. The use of nonconventional wells instrumented with downhole inflow control devices allows for even greater flexibility in production. Because nonconventional wells can be very expensive to drill, complete and instrument, it is important to be able to optimize their deployment, which requires the accurate prediction of their performance. However, predictions of nonconventional well performance are often inaccurate. This is likely due to inadequacies in some of the reservoir engineering and reservoir simulation tools used to model and optimize nonconventional well performance. A number of new issues arise in the modeling and optimization of nonconventional wells. For example, the optimal use of downhole inflow control devices has not been addressed for practical problems. In addition, the impact of geological and engineering uncertainty (e.g., valve reliability) has not been previously considered. In order to model and optimize nonconventional wells in different settings, it is essential that the tools be implemented into a general reservoir simulator. This simulator must be sufficiently general and robust and must in addition be linked to a sophisticated well model. Our research under this five year project addressed all of the key areas indicated above. The overall project was divided into three main categories: (1) advanced reservoir simulation techniques for modeling nonconventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and for coupling the well to the simulator (which includes the accurate calculation of well index and the modeling of multiphase flow

  8. Multiscale Quantum Mechanics/Molecular Mechanics Simulations with Neural Networks.

    Science.gov (United States)

    Shen, Lin; Wu, Jingheng; Yang, Weitao

    2016-10-11

    Molecular dynamics simulation with multiscale quantum mechanics/molecular mechanics (QM/MM) methods is a very powerful tool for understanding the mechanism of chemical and biological processes in solution or enzymes. However, its computational cost can be too high for many biochemical systems because of the large number of ab initio QM calculations. Semiempirical QM/MM simulations have much higher efficiency. Its accuracy can be improved with a correction to reach the ab initio QM/MM level. The computational cost on the ab initio calculation for the correction determines the efficiency. In this paper we developed a neural network method for QM/MM calculation as an extension of the neural-network representation reported by Behler and Parrinello. With this approach, the potential energy of any configuration along the reaction path for a given QM/MM system can be predicted at the ab initio QM/MM level based on the semiempirical QM/MM simulations. We further applied this method to three reactions in water to calculate the free energy changes. The free-energy profile obtained from the semiempirical QM/MM simulation is corrected to the ab initio QM/MM level with the potential energies predicted with the constructed neural network. The results are in excellent accordance with the reference data that are obtained from the ab initio QM/MM molecular dynamics simulation or corrected with direct ab initio QM/MM potential energies. Compared with the correction using direct ab initio QM/MM potential energies, our method shows a speed-up of 1 or 2 orders of magnitude. It demonstrates that the neural network method combined with the semiempirical QM/MM calculation can be an efficient and reliable strategy for chemical reaction simulations.
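
    The following sketch shows the general delta-learning workflow the abstract describes: a neural network maps a configuration descriptor to the difference between ab initio and semiempirical energies, and the learned correction is then added to cheap new energies. The descriptors, energies and network size are synthetic placeholders, not the Behler-Parrinello symmetry functions or QM/MM data used in the paper.

      # Sketch only: learn the (ab initio - semiempirical) energy correction
      # from configuration descriptors, then apply it to new cheap energies.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(1)
      n_train, n_feat = 500, 12                     # descriptor length is arbitrary
      w = rng.normal(size=n_feat)                   # toy "semiempirical" model

      descriptors = rng.normal(size=(n_train, n_feat))
      e_semi = descriptors @ w                                     # toy energies
      e_ab_initio = e_semi + 0.3 * np.tanh(descriptors[:, 0])      # toy correction

      net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=5000, random_state=1)
      net.fit(descriptors, e_ab_initio - e_semi)

      # Correct a new trajectory at (approximately) ab initio quality.
      new_descriptors = rng.normal(size=(100, n_feat))
      corrected = new_descriptors @ w + net.predict(new_descriptors)
      print(corrected[:3])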

  9. A New Simulation Technique for Study of Collisionless Shocks: Self-Adaptive Simulations

    International Nuclear Information System (INIS)

    Karimabadi, H.; Omelchenko, Y.; Driscoll, J.; Krauss-Varban, D.; Fujimoto, R.; Perumalla, K.

    2005-01-01

    The traditional technique for simulating physical systems modeled by partial differential equations is by means of time-stepping methodology where the state of the system is updated at regular discrete time intervals. This method has inherent inefficiencies. In contrast to this methodology, we have developed a new asynchronous type of simulation based on a discrete-event-driven (as opposed to time-driven) approach, where the simulation state is updated on a 'need-to-be-done-only' basis. Here we report on this new technique, show an example of particle acceleration in a fast magnetosonic shockwave, and briefly discuss additional issues that we are addressing concerning algorithm development and parallel execution
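
    A generic discrete-event loop conveys the "update only when needed" idea mentioned above; the event labels, times and rescheduling rule are invented, and the sketch is unrelated to the authors' shock simulation code.

      # Generic discrete-event loop: instead of advancing every element at fixed
      # time steps, events are popped from a priority queue and only the affected
      # state is updated.
      import heapq

      def run(events, t_end):
          """events: list of (time, label) tuples seeding the queue."""
          queue = list(events)
          heapq.heapify(queue)
          t = 0.0
          while queue and t <= t_end:
              t, label = heapq.heappop(queue)
              if t > t_end:
                  break
              print(f"t={t:8.3f}  handling {label}")
              # A handler may schedule follow-up events for itself only,
              # leaving quiescent parts of the system untouched.
              if label == "particle-update":
                  heapq.heappush(queue, (t + 0.5, "particle-update"))

      run([(0.0, "particle-update"), (1.7, "field-update")], t_end=3.0)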

  10. First-order design of geodetic networks using the simulated annealing method

    Science.gov (United States)

    Berné, J. L.; Baselga, S.

    2004-09-01

    The general problem of the optimal design for a geodetic network subject to any extrinsic factors, namely the first-order design problem, can be dealt with as a numeric optimization problem. The classic theory of this problem and the optimization methods are revised. Then the innovative use of the simulated annealing method, which has been successfully applied in other fields, is presented for this classical geodetic problem. This method, belonging to iterative heuristic techniques in operational research, uses a thermodynamical analogy to crystalline networks to offer a solution that converges probabilistically to the global optimum. Basic formulation and some examples are studied.
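
    A generic simulated-annealing loop of the kind applied to the first-order design problem is sketched below; the objective function is a stand-in for a real design criterion (for example, a function of the network's covariance matrix), and the perturbation size, cooling schedule and starting design are arbitrary.

      # Simulated annealing: perturb candidate coordinates and accept worse
      # designs with a temperature-dependent probability to escape local minima.
      import math
      import random

      def objective(coords):
          # Placeholder criterion: keep stations near the origin but spread out.
          spread = sum((x - y) ** 2 for x in coords for y in coords)
          return sum(c ** 2 for c in coords) - 0.01 * spread

      def anneal(coords, t0=1.0, cooling=0.995, steps=5000):
          best = current = list(coords)
          f_cur = f_best = objective(current)
          t = t0
          for _ in range(steps):
              cand = [c + random.gauss(0, 0.1) for c in current]
              f_cand = objective(cand)
              if f_cand < f_cur or random.random() < math.exp((f_cur - f_cand) / t):
                  current, f_cur = cand, f_cand
                  if f_cur < f_best:
                      best, f_best = current, f_cur
              t *= cooling
          return best, f_best

      print(anneal([random.uniform(-1, 1) for _ in range(6)]))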

  11. SIMULATION OF WIRELESS SENSOR NETWORK WITH HYBRID TOPOLOGY

    Directory of Open Access Journals (Sweden)

    J. Jaslin Deva Gifty

    2016-03-01

    Full Text Available The design of low rate Wireless Personal Area Network (WPAN by IEEE 802.15.4 standard has been developed to support lower data rates and low power consuming application. Zigbee Wireless Sensor Network (WSN works on the network and application layer in IEEE 802.15.4. Zigbee network can be configured in star, tree or mesh topology. The performance varies from topology to topology. The performance parameters such as network lifetime, energy consumption, throughput, delay in data delivery and sensor field coverage area varies depending on the network topology. In this paper, designing of hybrid topology by using two possible combinations such as star-tree and star-mesh is simulated to verify the communication reliability. This approach is to combine all the benefits of two network model. The parameters such as jitter, delay and throughput are measured for these scenarios. Further, MAC parameters impact such as beacon order (BO and super frame order (SO for low power consumption and high channel utilization, has been analysed for star, tree and mesh topology in beacon disable mode and beacon enable mode by varying CBR traffic loads.

  12. Hybrid neural network bushing model for vehicle dynamics simulation

    International Nuclear Information System (INIS)

    Sohn, Jeong Hyun; Lee, Seung Kyu; Yoo, Wan Suk

    2008-01-01

    Although the linear model was widely used for the bushing model in vehicle suspension systems, it could not express the nonlinear characteristics of bushing in terms of the amplitude and the frequency. An artificial neural network model was suggested to consider the hysteretic responses of bushings. This model, however, often diverges due to the uncertainties of the neural network under the unexpected excitation inputs. In this paper, a hybrid neural network bushing model combining linear and neural network is suggested. A linear model was employed to represent linear stiffness and damping effects, and the artificial neural network algorithm was adopted to take into account the hysteretic responses. A rubber test was performed to capture bushing characteristics, where sine excitation with different frequencies and amplitudes is applied. Random test results were used to update the weighting factors of the neural network model. It is proven that the proposed model has more robust characteristics than a simple neural network model under step excitation input. A full car simulation was carried out to verify the proposed bushing models. It was shown that the hybrid model results are almost identical to the linear model under several maneuvers

  13. Using simulation-optimization techniques to improve multiphase aquifer remediation

    Energy Technology Data Exchange (ETDEWEB)

    Finsterle, S.; Pruess, K. [Lawrence Berkeley Laboratory, Berkeley, CA (United States)

    1995-03-01

    The T2VOC computer model for simulating the transport of organic chemical contaminants in non-isothermal multiphase systems has been coupled to the ITOUGH2 code which solves parameter optimization problems. This allows one to use linear programming and simulated annealing techniques to solve groundwater management problems, i.e. the optimization of operations for multiphase aquifer remediation. A cost function has to be defined, containing the actual and hypothetical expenses of a cleanup operation which depend - directly or indirectly - on the state variables calculated by T2VOC. Subsequently, the code iteratively determines a remediation strategy (e.g. pumping schedule) which minimizes, for instance, pumping and energy costs, the time for cleanup, and residual contamination. We discuss an illustrative sample problem to discuss potential applications of the code. The study shows that the techniques developed for estimating model parameters can be successfully applied to the solution of remediation management problems. The resulting optimum pumping scheme depends, however, on the formulation of the remediation goals and the relative weighting between individual terms of the cost function.

  14. Simulation error propagation for a dynamic rod worth measurement technique

    International Nuclear Information System (INIS)

    Kastanya, D.F.; Turinsky, P.J.

    1996-01-01

    KRSKO nuclear station, subsequently adapted by Westinghouse, introduced the dynamic rod worth measurement (DRWM) technique for measuring pressurized water reactor rod worths. This technique has the potential for reduced test time and primary loop waste water versus alternatives. The measurement is performed starting from a slightly supercritical state with all rods out (ARO), driving a bank in at the maximum stepping rate, and recording the ex-core detectors responses and bank position as a function of time. The static bank worth is obtained by (1) using the ex-core detectors' responses to obtain the core average flux (2) using the core average flux in the inverse point-kinetics equations to obtain the dynamic bank worth (3) converting the dynamic bank worth to the static bank worth. In this data interpretation process, various calculated quantities obtained from a core simulator are utilized. This paper presents an analysis of the sensitivity to the impact of core simulator errors on the deduced static bank worth
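
    The inverse point-kinetics step of the procedure (item 2 above) can be sketched numerically as follows; the six delayed-neutron groups use representative constants, while the prompt generation time, sampling rate and flux trace are invented for illustration and are not DRWM data.

      # Inverse point kinetics: recover reactivity from a core-average flux trace.
      import numpy as np

      beta_i = np.array([0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273])
      lam_i = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])   # 1/s
      beta, Lambda = beta_i.sum(), 2.0e-5                             # s (assumed)

      def inverse_point_kinetics(t, n):
          """Return reactivity rho(t) in dollars from a sampled flux trace n(t)."""
          c = beta_i / (Lambda * lam_i) * n[0]          # equilibrium precursors
          rho = np.zeros_like(n)
          for k in range(1, len(t)):
              dt = t[k] - t[k - 1]
              dndt = (n[k] - n[k - 1]) / dt
              # Explicit update of the precursor concentrations.
              c = c + dt * (beta_i / Lambda * n[k] - lam_i * c)
              rho[k] = Lambda * dndt / n[k] + beta - Lambda * np.sum(lam_i * c) / n[k]
          return rho / beta

      t = np.linspace(0.0, 60.0, 6001)
      n = np.exp(-0.02 * t) + 0.05          # toy decaying flux after bank insertion
      print(inverse_point_kinetics(t, n)[-1], "dollars")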

  15. Numerical techniques for large cosmological N-body simulations

    International Nuclear Information System (INIS)

    Efstathiou, G.; Davis, M.; Frenk, C.S.; White, S.D.M.

    1985-01-01

    We describe and compare techniques for carrying out large N-body simulations of the gravitational evolution of clustering in the fundamental cube of an infinite periodic universe. In particular, we consider both particle mesh (PM) codes and P 3 M codes in which a higher resolution force is obtained by direct summation of contributions from neighboring particles. We discuss the mesh-induced anisotropies in the forces calculated by these schemes, and the extent to which they can model the desired 1/r 2 particle-particle interaction. We also consider how transformation of the time variable can improve the efficiency with which the equations of motion are integrated. We present tests of the accuracy with which the resulting schemes conserve energy and are able to follow individual particle trajectories. We have implemented an algorithm which allows initial conditions to be set up to model any desired spectrum of linear growing mode density fluctuations. A number of tests demonstrate the power of this algorithm and delineate the conditions under which it is effective. We carry out several test simulations using a variety of techniques in order to show how the results are affected by dynamic range limitations in the force calculations, by boundary effects, by residual artificialities in the initial conditions, and by the number of particles employed. For most purposes cosmological simulations are limited by the resolution of their force calculation rather than by the number of particles they can employ. For this reason, while PM codes are quite adequate to study the evolution of structure on large scale, P 3 M methods are to be preferred, in spite of their greater cost and complexity, whenever the evolution of small-scale structure is important
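
    The two PM building blocks discussed above, mesh mass assignment and a mesh potential solve, can be illustrated in two dimensions as below; real cosmological codes work in 3D comoving coordinates, add the short-range particle-particle correction for P3M, and use far more particles. The grid size and particle count here are arbitrary.

      # 2D illustration of PM machinery: cloud-in-cell (CIC) mass assignment to a
      # periodic grid, then a Poisson solve in Fourier space.
      import numpy as np

      def cic_density(pos, n_grid):
          """pos: (N, 2) positions in [0, n_grid). Returns periodic density grid."""
          rho = np.zeros((n_grid, n_grid))
          i = np.floor(pos).astype(int)
          f = pos - i                                    # fractional offsets
          for dx in (0, 1):
              for dy in (0, 1):
                  w = (f[:, 0] if dx else 1 - f[:, 0]) * (f[:, 1] if dy else 1 - f[:, 1])
                  np.add.at(rho, ((i[:, 0] + dx) % n_grid, (i[:, 1] + dy) % n_grid), w)
          return rho

      def potential(rho):
          """Solve nabla^2 phi = rho on the periodic grid via FFT."""
          n = rho.shape[0]
          k = 2 * np.pi * np.fft.fftfreq(n)
          k2 = k[:, None] ** 2 + k[None, :] ** 2
          k2[0, 0] = 1.0                                 # avoid dividing the mean mode by zero
          phi_k = -np.fft.fft2(rho - rho.mean()) / k2
          phi_k[0, 0] = 0.0
          return np.real(np.fft.ifft2(phi_k))

      rng = np.random.default_rng(2)
      particles = rng.uniform(0, 64, size=(10000, 2))
      phi = potential(cic_density(particles, 64))
      print(phi.shape, phi.std())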

  16. Simulation Tools and Techniques for Analyzing the Impacts of Photovoltaic System Integration

    Science.gov (United States)

    Hariri, Ali

    utility simulation software. On the other hand, EMT simulation tools provide high accuracy and visibility over a wide bandwidth of frequencies at the expense of larger processing and memory requirements, limited network size, and long simulation time. Therefore, there is a gap in simulation tools and techniques that can efficiently and effectively identify potential PV impact. New planning simulation tools are needed in order to accommodate for the simulation requirements of new integrated technologies in the electric grid. The dissertation at hand starts by identifying some of the potential impacts that are caused by high PV penetration. A phasor-based quasi-static time series (QSTS) analysis tool is developed in order to study the slow dynamics that are caused by the variations in the PV generation that lead to voltage fluctuations. Moreover, some EMT simulations are performed in order to study the impacts of PV systems on the electric network harmonic levels. These studies provide insights into the type and duration of certain impacts, as well as the conditions that may lead to adverse phenomena. In addition these studies present an idea about the type of simulation tools that are sufficient for each type of study. After identifying some of the potential impacts, certain planning tools and techniques are proposed. The potential PV impacts may cause certain utilities to refrain from integrating PV systems into their networks. However, each electric network has a certain limit beyond which the impacts become substantial and may adversely interfere with the system operation and the equipment along the feeder; this limit is referred to as the hosting limit (or hosting capacity). Therefore, it is important for utilities to identify the PV hosting limit on a specific electric network in order to safely and confidently integrate the maximum possible PV systems. In the following dissertation, two approaches have been proposed for identifying the hosing limit: 1. Analytical

  17. SIMULATION OF NEGATIVE PRESSURE WAVE PROPAGATION IN WATER PIPE NETWORK

    Directory of Open Access Journals (Sweden)

    Tang Van Lam

    2017-11-01

    Full Text Available Subject: factors such as pipe wall roughness, mechanical properties of pipe materials and physical properties of water affect the pressure surge in the water supply pipes. These factors make it difficult to analyze the transient problem of pressure evolution using simple programming language, especially in the studies that consider only the magnitude of the positive pressure surge with the negative pressure phase being neglected. Research objectives: determine the magnitude of the negative pressure in the pipes on the experimental model. The propagation distance of the negative pressure wave will be simulated by the valve closure scenarios with the help of the HAMMER software and it is compared with an experimental model to verify the quality of the results. Materials and methods: the academic version of the Bentley HAMMER software is used to simulate the pressure surge wave propagation due to closure of the valve in the water supply pipe network. The method of characteristics is used to solve the governing equations of the transient process of pressure change in the pipeline. This method is implemented in the HAMMER software to calculate the pressure surge value in the pipes. Results: the method has been applied to the water pipe network of the experimental model; the results show the affected area of the negative pressure wave from valve closure and thereby we assess the largest negative pressure that may appear in water supply pipes. Conclusions: the experiment simulates the water pipe network with a consumption node for various valve closure scenarios to determine the possibility of appearance of the maximum negative pressure value in the pipes. Determination of these values in a real-life network is relatively costly and time-consuming but nevertheless necessary for identification of the risk of pipe failure, and therefore, this paper proposes using the simulation model by the HAMMER software. Initial calibration of the model combined with the software simulation results and
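
    Because the record names the method of characteristics, a single-pipe sketch is given below: a reservoir feeds a pipe whose downstream valve closes linearly, and the head at the valve swings above and then below the steady value, which is the negative pressure phase of interest. The pipe geometry, friction factor, wave speed and closure time are invented and do not correspond to the paper's experimental network or to the HAMMER software.

      # Compact method-of-characteristics (MOC) water-hammer sketch for one pipe.
      import numpy as np

      L, D, a, f, H_res = 500.0, 0.1, 1000.0, 0.02, 30.0   # m, m, m/s, -, m
      g, N = 9.81, 20
      A = np.pi * D**2 / 4
      dx, dt = L / N, (L / N) / a
      B = a / (g * A)
      R = f * dx / (2 * g * D * A**2)
      t_close = 1.0                                        # s

      Q = np.full(N + 1, 0.01)                             # steady flow, m^3/s
      H = H_res - R * Q[0] * abs(Q[0]) * np.arange(N + 1)  # steady head line
      Q0, H_valve = Q[-1], []

      for step in range(1, 801):
          t = step * dt
          Qn, Hn = Q.copy(), H.copy()
          cp = H[:-1] + B * Q[:-1] - R * Q[:-1] * np.abs(Q[:-1])   # C+ from the left
          cm = H[1:] - B * Q[1:] + R * Q[1:] * np.abs(Q[1:])       # C- from the right
          Hn[1:-1] = 0.5 * (cp[:-1] + cm[1:])
          Qn[1:-1] = (cp[:-1] - cm[1:]) / (2 * B)
          Hn[0] = H_res                                            # reservoir boundary
          Qn[0] = (H_res - cm[0]) / B
          Qn[-1] = Q0 * max(0.0, 1.0 - t / t_close)                # linear valve closure
          Hn[-1] = cp[-1] - B * Qn[-1]
          Q, H = Qn, Hn
          H_valve.append(H[-1])

      print("max head at valve: %.1f m, min head: %.1f m" % (max(H_valve), min(H_valve)))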

  18. OPNET simulation Signaling System No.7 (SS7) network interfaces

    OpenAIRE

    Ow, Kong Chung.

    2000-01-01

    This thesis presents an OPNET model and simulation of the Signaling System No.7 (SS7) network, which is dubbed the world's largest data communications network. The main focus of the study is to model one of its levels, the Message Transfer Part Level 3, in accordance with the ITU.T recommendation Q.704. An overview of SS7 that includes the evolution and basics of SS7 architecture is provided to familarize the reader with the topic. This includes the protocol stack, signaling points, signaling...

  19. Simulation of wind turbine wakes using the actuator line technique.

    Science.gov (United States)

    Sørensen, Jens N; Mikkelsen, Robert F; Henningson, Dan S; Ivanell, Stefan; Sarmast, Sasan; Andersen, Søren J

    2015-02-28

    The actuator line technique was introduced as a numerical tool to be employed in combination with large eddy simulations to enable the study of wakes and wake interaction in wind farms. The technique is today largely used for studying basic features of wakes as well as for making performance predictions of wind farms. In this paper, we give a short introduction to the wake problem and the actuator line methodology and present a study in which the technique is employed to determine the near-wake properties of wind turbines. The presented results include a comparison of experimental results of the wake characteristics of the flow around a three-bladed model wind turbine, the development of a simple analytical formula for determining the near-wake length behind a wind turbine and a detailed investigation of wake structures based on proper orthogonal decomposition analysis of numerically generated snapshots of the wake. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  20. Simulating market dynamics: interactions between consumer psychology and social networks.

    Science.gov (United States)

    Janssen, Marco A; Jager, Wander

    2003-01-01

    Markets can show different types of dynamics, from quiet markets dominated by one or a few products, to markets with continual penetration of new and reintroduced products. In a previous article we explored the dynamics of markets from a psychological perspective using a multi-agent simulation model. The main results indicated that the behavioral rules dominating the artificial consumer's decision making determine the resulting market dynamics, such as fashions, lock-in, and unstable renewal. Results also show the importance of psychological variables like social networks, preferences, and the need for identity to explain the dynamics of markets. In this article we extend this work in two directions. First, we will focus on a more systematic investigation of the effects of different network structures. The previous article was based on Watts and Strogatz's approach, which describes the small-world and clustering characteristics in networks. More recent research demonstrated that many large networks display a scale-free power-law distribution for node connectivity. In terms of market dynamics this may imply that a small proportion of consumers may have an exceptional influence on the consumptive behavior of others (hubs, or early adapters). We show that market dynamics is a self-organized property depending on the interaction between the agents' decision-making process (heuristics), the product characteristics (degree of satisfaction of unit of consumption, visibility), and the structure of interactions between agents (size of network and hubs in a social network).
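
    The network ingredient discussed above can be sketched with a scale-free (Barabasi-Albert) graph on which consumers adopt a product once enough of their neighbours have it, so highly connected hubs can trigger cascades; this simple threshold/imitation rule is a stand-in for the authors' full consumer-psychology model, and the graph size, seeding and threshold are arbitrary.

      # Threshold adoption on a scale-free network, seeded at the largest hubs.
      import random
      import networkx as nx

      random.seed(3)
      g = nx.barabasi_albert_graph(n=1000, m=2, seed=3)

      adopted = {n: False for n in g}
      for hub in sorted(g.degree, key=lambda d: d[1], reverse=True)[:5]:
          adopted[hub[0]] = True                     # seed the biggest hubs

      threshold = 0.25                               # fraction of neighbours needed
      for _ in range(20):                            # synchronous update rounds
          new = dict(adopted)
          for node in g:
              nbrs = list(g.neighbors(node))
              if nbrs and sum(adopted[v] for v in nbrs) / len(nbrs) >= threshold:
                  new[node] = True
          adopted = new

      print("adoption share:", sum(adopted.values()) / len(adopted))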

  1. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    Science.gov (United States)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.

  2. Location estimation in wireless sensor networks using spring-relaxation technique.

    Science.gov (United States)

    Zhang, Qing; Foh, Chuan Heng; Seet, Boon-Chong; Fong, A C M

    2010-01-01

    Accurate and low-cost autonomous self-localization is a critical requirement of various applications of a large-scale distributed wireless sensor network (WSN). Due to its massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) is not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, light weight distributed algorithms based on the spring-relaxation technique for location computation, and the cooperative approach to achieve certain location estimation accuracy with a low number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with cooperative approach, and perform simulation experiments to illustrate its accuracy in localization.
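
    A minimal spring-relaxation iteration in the spirit of the abstract is sketched below: each unknown node is pulled along "springs" whose rest lengths are its noisy range estimates to neighbours, while anchor nodes stay fixed. The node layout, noise level, radio range and step size are assumptions, and the noisy distances stand in for RSS-derived ranges.

      # Spring-relaxation localization on synthetic range data.
      import numpy as np

      rng = np.random.default_rng(4)
      n_nodes, n_anchors = 30, 5
      true_pos = rng.uniform(0, 100, size=(n_nodes, 2))
      dist = np.linalg.norm(true_pos[:, None] - true_pos[None, :], axis=2)
      measured = dist * (1 + rng.normal(0, 0.05, dist.shape))      # ~RSS ranging error
      neighbours = dist < 40.0                                     # radio range

      est = rng.uniform(0, 100, size=(n_nodes, 2))
      est[:n_anchors] = true_pos[:n_anchors]                       # anchors known

      step = 0.1
      for _ in range(500):
          force = np.zeros_like(est)
          for i in range(n_anchors, n_nodes):
              for j in range(n_nodes):
                  if i == j or not neighbours[i, j]:
                      continue
                  vec = est[j] - est[i]
                  d = np.linalg.norm(vec) + 1e-9
                  force[i] += (d - measured[i, j]) * vec / d       # spring pull/push
          est[n_anchors:] += step * force[n_anchors:]

      err = np.linalg.norm(est[n_anchors:] - true_pos[n_anchors:], axis=1).mean()
      print("mean localization error:", round(err, 2), "m")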

  3. Location Estimation in Wireless Sensor Networks Using Spring-Relaxation Technique

    Directory of Open Access Journals (Sweden)

    Qing Zhang

    2010-05-01

    Full Text Available Accurate and low-cost autonomous self-localization is a critical requirement of various applications of a large-scale distributed wireless sensor network (WSN. Due to its massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS is not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, light weight distributed algorithms based on the spring-relaxation technique for location computation, and the cooperative approach to achieve certain location estimation accuracy with a low number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with cooperative approach, and perform simulation experiments to illustrate its accuracy in localization.

  4. Quality-of-Service Routing Using Path and Power Aware Techniques in Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    R. Asokan

    2008-01-01

    Full Text Available Mobile ad hoc network (MANET is a collection of wireless mobile hosts dynamically forming a temporary network without the aid of any existing established infrastructure. Quality of service (QoS is a set of service requirements that needs to be met by the network while transporting a packet stream from a source to its destination. QoS support MANETs is a challenging task due to the dynamic topology and limited resources. The main objective of this paper is to enhance the QoS routing for MANET using temporally ordered routing algorithm (TORA with self-healing and optimized routing techniques (SHORT. SHORT improves routing optimality by monitoring routing paths continuously and redirecting the path whenever a shortcut path is available. In this paper, the performance comparison of TORA and TORA with SHORT has been analyzed using network simulator for various parameters. TORA with SHORT enhances performance of TORA in terms of throughput, packet loss, end-to-end delay, and energy.

  5. Personalized lung motion simulation for external radiotherapy using an artificial neural network

    International Nuclear Information System (INIS)

    Laurent, R.

    2011-01-01

    The development of new techniques in the field of external radiotherapy opens new ways of gaining accuracy in dose distribution, in particular through knowledge of individual lung motion. The numerical simulation NEMOSIS (Neural Network Motion Simulation System) described here is based on artificial neural networks (ANN) and, in addition to determining motion in a personalized way, reduces the initial doses necessary to determine it. In the first part, we present current treatment options and lung motion as well as existing simulation or estimation methods. The second part describes the artificial neural network used and the steps for defining its parameters. An accurate evaluation of our approach was carried out on original patient data. The results obtained are compared with an existing motion estimation method. The extremely short computing time, in the range of milliseconds for the generation of one respiratory phase, would allow its use in clinical routine. Modifications to NEMOSIS to meet the requirements for its use in external radiotherapy are described, and a study of the motion of tumor outlines is carried out. This work lays the basis for lung motion simulation with ANNs and validates our approach. Its real-time implementation, coupled with its prediction accuracy, makes NEMOSIS a promising tool for the simulation of motion synchronized with breathing. (author)

  6. A simulated annealing approach for redesigning a warehouse network problem

    Science.gov (United States)

    Khairuddin, Rozieana; Marlizawati Zainuddin, Zaitul; Jiun, Gan Jia

    2017-09-01

    Now a day, several companies consider downsizing their distribution networks in ways that involve consolidation or phase-out of some of their current warehousing facilities due to the increasing competition, mounting cost pressure and taking advantage on the economies of scale. Consequently, the changes on economic situation after a certain period of time require an adjustment on the network model in order to get the optimal cost under the current economic conditions. This paper aimed to develop a mixed-integer linear programming model for a two-echelon warehouse network redesign problem with capacitated plant and uncapacitated warehouses. The main contribution of this study is considering capacity constraint for existing warehouses. A Simulated Annealing algorithm is proposed to tackle with the proposed model. The numerical solution showed the model and method of solution proposed was practical.

  7. Digitalization and networking of analog simulators and portal images

    Energy Technology Data Exchange (ETDEWEB)

    Pesznyak, C.; Zarand, P.; Mayer, A. [Uzsoki Hospital, Budapest (Hungary). Inst. of Oncoradiology

    2007-03-15

    Background: Many departments have analog simulators and irradiation facilities (especially cobalt units) without electronic portal imaging. Import of the images into the R and V (Record and Verify) system is required. Material and Methods: Simulator images are grabbed while portal films scanned by using a laser scanner and both converted into DICOM RT (Digital Imaging and Communications in Medicine Radiotherapy) images. Results: Image intensifier output of a simulator and portal films are converted to DICOM RT images and used in clinical practice. The simulator software was developed in cooperation at the authors' hospital. Conclusion: The digitalization of analog simulators is a valuable updating in clinical use replacing screen-film technique. Film scanning and digitalization permit the electronic archiving of films. Conversion into DICOM RT images is a precondition of importing to the R and V system. (orig.)

  8. Digitalization and networking of analog simulators and portal images.

    Science.gov (United States)

    Pesznyák, Csilla; Zaránd, Pál; Mayer, Arpád

    2007-03-01

    Many departments have analog simulators and irradiation facilities (especially cobalt units) without electronic portal imaging. Import of the images into the R&V (Record & Verify) system is required. Simulator images are grabbed while portal films scanned by using a laser scanner and both converted into DICOM RT (Digital Imaging and Communications in Medicine Radiotherapy) images. Image intensifier output of a simulator and portal films are converted to DICOM RT images and used in clinical practice. The simulator software was developed in cooperation at the authors' hospital. The digitalization of analog simulators is a valuable updating in clinical use replacing screen-film technique. Film scanning and digitalization permit the electronic archiving of films. Conversion into DICOM RT images is a precondition of importing to the R&V system.

  9. Image reconstruction using Monte Carlo simulation and artificial neural networks

    International Nuclear Information System (INIS)

    Emert, F.; Missimner, J.; Blass, W.; Rodriguez, A.

    1997-01-01

    PET data sets are subject to two types of distortions during acquisition: the imperfect response of the scanner and attenuation and scattering in the active distribution. In addition, the reconstruction of voxel images from the line projections composing a data set can introduce artifacts. Monte Carlo simulation provides a means for modeling the distortions and artificial neural networks a method for correcting for them as well as minimizing artifacts. (author) figs., tab., refs

  10. The application of neural networks with artificial intelligence technique in the modeling of industrial processes

    International Nuclear Information System (INIS)

    Saini, K. K.; Saini, Sanju

    2008-01-01

    Neural networks are a relatively new artificial intelligence technique that emulates the behavior of biological neural systems in digital software or hardware. These networks can 'learn', automatically, complex relationships among data. This feature makes the technique very useful in modeling processes for which mathematical modeling is difficult or impossible. The work described here outlines some examples of the application of neural networks with artificial intelligence technique in the modeling of industrial processes.

  11. A Method for Dynamically Selecting the Best Frequency Hopping Technique in Industrial Wireless Sensor Network Applications.

    Science.gov (United States)

    Fernández de Gorostiza, Erlantz; Berzosa, Jorge; Mabe, Jon; Cortiñas, Roberto

    2018-02-23

    Industrial wireless applications often share the communication channel with other wireless technologies and communication protocols. This coexistence produces interferences and transmission errors which require appropriate mechanisms to manage retransmissions. Nevertheless, these mechanisms increase the network latency and overhead due to the retransmissions. Thus, the loss of data packets and the measures to handle them produce an undesirable drop in the QoS and hinder the overall robustness and energy efficiency of the network. Interference avoidance mechanisms, such as frequency hopping techniques, reduce the need for retransmissions due to interferences but they are often tailored to specific scenarios and are not easily adapted to other use cases. On the other hand, the total absence of interference avoidance mechanisms introduces a security risk because the communication channel may be intentionally attacked and interfered with to hinder or totally block it. In this paper we propose a method for supporting the design of communication solutions under dynamic channel interference conditions and we implement dynamic management policies for frequency hopping technique and channel selection at runtime. The method considers several standard frequency hopping techniques and quality metrics, and the quality and status of the available frequency channels to propose the best combined solution to minimize the side effects of interferences. A simulation tool has been developed and used in this work to validate the method.

  12. A Comparative Study of Anomaly Detection Techniques for Smart City Wireless Sensor Networks.

    Science.gov (United States)

    Garcia-Font, Victor; Garrigues, Carles; Rifà-Pous, Helena

    2016-06-13

    In many countries around the world, smart cities are becoming a reality. These cities contribute to improving citizens' quality of life by providing services that are normally based on data extracted from wireless sensor networks (WSN) and other elements of the Internet of Things. Additionally, public administration uses these smart city data to increase its efficiency, to reduce costs and to provide additional services. However, the information received at smart city data centers is not always accurate, because WSNs are sometimes prone to error and are exposed to physical and computer attacks. In this article, we use real data from the smart city of Barcelona to simulate WSNs and implement typical attacks. Then, we compare frequently used anomaly detection techniques to disclose these attacks. We evaluate the algorithms under different requirements on the available network status information. As a result of this study, we conclude that one-class Support Vector Machines is the most appropriate technique. We achieve a true positive rate at least 56% higher than the rates achieved with the other compared techniques in a scenario with a maximum false positive rate of 5% and a 26% higher in a scenario with a false positive rate of 15%.
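
    A sketch of the technique the study found most appropriate, a one-class SVM trained on normal readings only, is given below; the synthetic "sensor" features, the injected attack values and the SVM hyperparameters are placeholders for the real Barcelona data and tuning.

      # One-class SVM anomaly detection on synthetic sensor readings.
      import numpy as np
      from sklearn.svm import OneClassSVM
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(5)
      normal = np.column_stack([rng.normal(20, 2, 1000),        # e.g. temperature
                                rng.normal(55, 5, 1000)])       # e.g. humidity
      attack = np.column_stack([rng.normal(35, 1, 50),
                                rng.normal(20, 2, 50)])         # tampered readings

      scaler = StandardScaler().fit(normal)
      detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
      detector.fit(scaler.transform(normal))

      pred = detector.predict(scaler.transform(np.vstack([normal[:100], attack])))
      print("flagged as anomalous:", int((pred == -1).sum()), "of", len(pred))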

  13. A Comparative Study of Anomaly Detection Techniques for Smart City Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Victor Garcia-Font

    2016-06-01

    Full Text Available In many countries around the world, smart cities are becoming a reality. These cities contribute to improving citizens’ quality of life by providing services that are normally based on data extracted from wireless sensor networks (WSN and other elements of the Internet of Things. Additionally, public administration uses these smart city data to increase its efficiency, to reduce costs and to provide additional services. However, the information received at smart city data centers is not always accurate, because WSNs are sometimes prone to error and are exposed to physical and computer attacks. In this article, we use real data from the smart city of Barcelona to simulate WSNs and implement typical attacks. Then, we compare frequently used anomaly detection techniques to disclose these attacks. We evaluate the algorithms under different requirements on the available network status information. As a result of this study, we conclude that one-class Support Vector Machines is the most appropriate technique. We achieve a true positive rate at least 56% higher than the rates achieved with the other compared techniques in a scenario with a maximum false positive rate of 5% and a 26% higher in a scenario with a false positive rate of 15%.

  14. Parallel Reservoir Simulations with Sparse Grid Techniques and Applications to Wormhole Propagation

    KAUST Repository

    Wu, Yuanqing

    2015-01-01

    the traditional simulation technique relying on the Darcy framework, we propose a new framework called Darcy-Brinkman-Forchheimer framework to simulate wormhole propagation. Furthermore, to process the large quantity of cells in the simulation grid and shorten

  15. An RSS based location estimation technique for cognitive relay networks

    KAUST Repository

    Qaraqe, Khalid A.; Hussain, Syed Imtiaz; Ç elebi, Hasari Burak; Abdallah, Mohamed M.; Alouini, Mohamed-Slim

    2010-01-01

    In this paper, a received signal strength (RSS) based location estimation method is proposed for a cooperative wireless relay network where the relay is a cognitive radio. We propose a method for the considered cognitive relay network to determine

  16. A fuzzy network module extraction technique for gene expression data

    Indian Academy of Sciences (India)

    2014-05-01

    The gene expression network is constructed from a distance matrix, and the extracted fuzzy modules are evaluated against functional annotations such as developmental process and cellular component assembly, as well as KEGG pathways (e.g., hsa05213: Endometrial cancer).

  17. Analyzing, Modeling, and Simulation for Human Dynamics in Social Network

    Directory of Open Access Journals (Sweden)

    Yunpeng Xiao

    2012-01-01

    Full Text Available This paper studies the human behavior in the top-one social network system in China (Sina Microblog system. By analyzing real-life data at a large scale, we find that the message releasing interval (intermessage time obeys power law distribution both at individual level and at group level. Statistical analysis also reveals that human behavior in social network is mainly driven by four basic elements: social pressure, social identity, social participation, and social relation between individuals. Empirical results present the four elements' impact on the human behavior and the relation between these elements. To further understand the mechanism of such dynamic phenomena, a hybrid human dynamic model which combines “interest” of individual and “interaction” among people is introduced, incorporating the four elements simultaneously. To provide a solid evaluation, we simulate both two-agent and multiagent interactions with real-life social network topology. We achieve the consistent results between empirical studies and the simulations. The model can provide a good understanding of human dynamics in social network.

  18. Evaluation of convergence behavior of metamodeling techniques for bridging scales in multi-scale multimaterial simulation

    International Nuclear Information System (INIS)

    Sen, Oishik; Davis, Sean; Jacobs, Gustaaf; Udaykumar, H.S.

    2015-01-01

    The effectiveness of several metamodeling techniques, viz. the Polynomial Stochastic Collocation method, Adaptive Stochastic Collocation method, a Radial Basis Function Neural Network, a Kriging Method and a Dynamic Kriging Method is evaluated. This is done with the express purpose of using metamodels to bridge scales between micro- and macro-scale models in a multi-scale multimaterial simulation. The rate of convergence of the error when used to reconstruct hypersurfaces of known functions is studied. For sufficiently large number of training points, Stochastic Collocation methods generally converge faster than the other metamodeling techniques, while the DKG method converges faster when the number of input points is less than 100 in a two-dimensional parameter space. Because the input points correspond to computationally expensive micro/meso-scale computations, the DKG is favored for bridging scales in a multi-scale solver
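
    One of the compared metamodel families, a radial basis function surrogate, can be exercised in the same spirit with a small convergence experiment; the analytic test function below stands in for the expensive micro-scale computations, and only the RBF metamodel (via SciPy) is shown, not the stochastic collocation or kriging variants.

      # Reconstruct a known function with an RBF surrogate and track the error
      # as the number of training points grows.
      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def expensive_model(x):            # stand-in for a micro-scale computation
          return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1]) + 0.5 * x[:, 0]

      rng = np.random.default_rng(6)
      x_test = rng.uniform(-1, 1, size=(2000, 2))
      y_test = expensive_model(x_test)

      for n_train in (10, 25, 50, 100, 200):
          x_train = rng.uniform(-1, 1, size=(n_train, 2))
          surrogate = RBFInterpolator(x_train, expensive_model(x_train))
          rmse = np.sqrt(np.mean((surrogate(x_test) - y_test) ** 2))
          print(f"{n_train:4d} training points -> RMSE {rmse:.4f}")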

  19. [Simulation of lung motions using an artificial neural network].

    Science.gov (United States)

    Laurent, R; Henriet, J; Salomon, M; Sauget, M; Nguyen, F; Gschwind, R; Makovicka, L

    2011-04-01

    A way to improve the accuracy of lung radiotherapy for a patient is to get a better understanding of its lung motion. Indeed, thanks to this knowledge it becomes possible to follow the displacements of the clinical target volume (CTV) induced by the lung breathing. This paper presents a feasibility study of an original method to simulate the positions of points in patient's lung at all breathing phases. This method, based on an artificial neural network, allowed learning the lung motion on real cases and then to simulate it for new patients for which only the beginning and the end breathing data are known. The neural network learning set is made up of more than 600 points. These points, shared out on three patients and gathered on a specific lung area, were plotted by a MD. The first results are promising: an average accuracy of 1mm is obtained for a spatial resolution of 1 × 1 × 2.5mm(3). We have demonstrated that it is possible to simulate lung motion with accuracy using an artificial neural network. As future work we plan to improve the accuracy of our method with the addition of new patient data and a coverage of the whole lungs. Copyright © 2010 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.

  20. Simulation of lung motions using an artificial neural network

    International Nuclear Information System (INIS)

    Laurent, R.; Henriet, J.; Sauget, M.; Gschwind, R.; Makovicka, L.; Salomon, M.; Nguyen, F.

    2011-01-01

    Purpose. A way to improve the accuracy of lung radiotherapy for a patient is to get a better understanding of its lung motion. Indeed, thanks to this knowledge it becomes possible to follow the displacements of the clinical target volume (CTV) induced by the lung breathing. This paper presents a feasibility study of an original method to simulate the positions of points in patient's lung at all breathing phases. Patients and methods. This method, based on an artificial neural network, allowed learning the lung motion on real cases and then to simulate it for new patients for which only the beginning and the end breathing data are known. The neural network learning set is made up of more than 600 points. These points, shared out on three patients and gathered on a specific lung area, were plotted by a MD. Results. - The first results are promising: an average accuracy of 1 mm is obtained for a spatial resolution of 1 x 1 x 2.5 mm 3 . Conclusion. We have demonstrated that it is possible to simulate lung motion with accuracy using an artificial neural network. As future work we plan to improve the accuracy of our method with the addition of new patient data and a coverage of the whole lungs. (authors)

  1. Analysis and simulation of wireless signal propagation applying geostatistical interpolation techniques

    Science.gov (United States)

    Kolyaie, S.; Yaghooti, M.; Majidi, G.

    2011-12-01

    This paper is part of ongoing research examining the capability of geostatistical analysis for mobile network coverage prediction, simulation and tuning. Mobile network coverage predictions are used to find network coverage gaps and areas with poor serviceability. They are essential data for engineering and management in order to make better decisions regarding rollout, planning and optimisation of mobile networks. The objective of this research is to evaluate different interpolation techniques for coverage prediction. In the method presented here, raw data collected from drive testing a sample of roads in the study area are analysed and various continuous surfaces are created using different interpolation methods. Two general interpolation methods are used in this paper with different variables: first, Inverse Distance Weighting (IDW) with various powers and numbers of neighbours, and second, ordinary kriging with Gaussian, spherical, circular and exponential semivariogram models with different numbers of neighbours. For the result comparison, we have used check points coming from the same drive test data. Prediction values for the check points are extracted from each surface and the differences from the actual values are computed. The output of this research helps in finding an optimised and accurate model for coverage prediction.
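
    The simpler of the two interpolation families compared above, inverse distance weighting, reduces to a few lines; the drive-test coordinates and RSSI values below are made up, and the power and neighbour count are the tunable parameters the study varies.

      # Inverse distance weighting (IDW) of scattered signal-strength samples.
      import numpy as np

      def idw(sample_xy, sample_val, query_xy, power=2.0, k_neighbours=8):
          d = np.linalg.norm(sample_xy - query_xy, axis=1)
          if np.any(d < 1e-9):                       # query coincides with a sample
              return sample_val[np.argmin(d)]
          idx = np.argsort(d)[:k_neighbours]
          w = 1.0 / d[idx] ** power
          return np.sum(w * sample_val[idx]) / np.sum(w)

      rng = np.random.default_rng(7)
      samples = rng.uniform(0, 1000, size=(500, 2))              # drive-test points [m]
      rssi = -60 - 0.03 * np.linalg.norm(samples - 500, axis=1)  # toy path-loss field [dBm]

      print(idw(samples, rssi, np.array([200.0, 750.0]), power=2, k_neighbours=10))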

  2. The summarize of the technique about proactive network security protection

    International Nuclear Information System (INIS)

    Liu Baoxu; Li Xueying; Cao Aijuan; Yu Chuansong; Xu Rongsheng

    2003-01-01

    The proactive protection measures and the traditional passive security protection tools are complementarities each other. It also can supply the conventional network security protection system and enhance its capability of the security protection. Based upon sorts of existing network security technologies, this article analyses and summarizes the technologies, functions and the development directions of some key proactive network security protection tools. (authors)

  3. Reinforcement learning techniques for controlling resources in power networks

    Science.gov (United States)

    Kowli, Anupama Sunil

    As power grids transition towards increased reliance on renewable generation, energy storage and demand response resources, an effective control architecture is required to harness the full functionalities of these resources. There is a critical need for control techniques that recognize the unique characteristics of the different resources and exploit the flexibility afforded by them to provide ancillary services to the grid. The work presented in this dissertation addresses these needs. Specifically, new algorithms are proposed, which allow control synthesis in settings wherein the precise distribution of the uncertainty and its temporal statistics are not known. These algorithms are based on recent developments in Markov decision theory, approximate dynamic programming and reinforcement learning. They impose minimal assumptions on the system model and allow the control to be "learned" based on the actual dynamics of the system. Furthermore, they can accommodate complex constraints such as capacity and ramping limits on generation resources, state-of-charge constraints on storage resources, comfort-related limitations on demand response resources and power flow limits on transmission lines. Numerical studies demonstrating applications of these algorithms to practical control problems in power systems are discussed. Results demonstrate how the proposed control algorithms can be used to improve the performance and reduce the computational complexity of the economic dispatch mechanism in a power network. We argue that the proposed algorithms are eminently suitable to develop operational decision-making tools for large power grids with many resources and many sources of uncertainty.
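
    As a toy illustration of the "learning from the actual system dynamics" idea, the sketch below applies tabular Q-learning to a storage unit facing a randomly switching price; the states, prices, rewards and constraints are invented and far simpler than the approximate dynamic programming formulations in the dissertation.

      # Tabular Q-learning for a toy charge/discharge decision problem.
      import random

      random.seed(8)
      levels, actions = 5, (-1, 0, +1)      # state-of-charge levels; discharge/idle/charge
      q = {(s, p, a): 0.0 for s in range(levels) for p in (0, 1) for a in actions}
      alpha, gamma, eps = 0.1, 0.95, 0.1

      soc, price = 2, 0
      for step in range(50000):
          if random.random() < eps:
              a = random.choice(actions)
          else:
              a = max(actions, key=lambda x: q[(soc, price, x)])
          new_soc = min(levels - 1, max(0, soc + a))
          # Earn the current price when discharging, pay it when charging;
          # penalize infeasible actions at the capacity limits.
          reward = -a * (1.0 if price else 0.2) if new_soc != soc or a == 0 else -0.5
          new_price = 1 - price if random.random() < 0.1 else price
          best_next = max(q[(new_soc, new_price, x)] for x in actions)
          q[(soc, price, a)] += alpha * (reward + gamma * best_next - q[(soc, price, a)])
          soc, price = new_soc, new_price

      policy = {(s, p): max(actions, key=lambda x: q[(s, p, x)])
                for s in range(levels) for p in (0, 1)}
      print(policy)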

  4. A Monte Carlo simulation technique to determine the optimal portfolio

    Directory of Open Access Journals (Sweden)

    Hassan Ghodrati

    2014-03-01

    Full Text Available During the past few years, there have been several studies on portfolio management. One of the primary concerns on any stock market is to detect the risk associated with various assets. One of the recognized methods for measuring, forecasting, and managing this risk is Value at Risk (VaR), which has drawn much attention from financial institutions in recent years. VaR is a method for recognizing and evaluating risk that uses standard statistical techniques, and it has increasingly been used in other fields as well. The present study measured the value at risk of 26 companies from the chemical industry on the Tehran Stock Exchange over the period 2009-2011 using the Monte Carlo simulation technique at the 95% confidence level. The variable used in the present study was the daily return resulting from daily stock price changes. Moreover, the optimal investment weight for each selected stock was determined using a hybrid Markowitz and Winker model. The results showed that the maximum loss would not exceed 1,259,432 Rials at the 95% confidence level for the next day.
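
    A minimal Monte Carlo VaR computation following the procedure described is sketched below; the synthetic return series stands in for the chemical-industry stocks, and the portfolio value and sample sizes are arbitrary.

      # Monte Carlo value at risk: simulate daily returns from estimated
      # parameters and read the 95% VaR off the simulated loss distribution.
      import numpy as np

      rng = np.random.default_rng(9)
      historical = rng.normal(0.0005, 0.02, 750)        # ~3 years of daily returns
      mu, sigma = historical.mean(), historical.std(ddof=1)

      portfolio_value = 1_000_000                       # monetary units
      simulated = rng.normal(mu, sigma, 100_000)        # one-day Monte Carlo returns
      var_95 = -np.percentile(simulated, 5) * portfolio_value

      print(f"1-day 95% VaR: {var_95:,.0f}")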

  5. Flow MRI simulation in complex 3D geometries: Application to the cerebral venous network.

    Science.gov (United States)

    Fortin, Alexandre; Salmon, Stéphanie; Baruthio, Joseph; Delbany, Maya; Durand, Emmanuel

    2018-02-05

    Develop and evaluate a complete tool to include 3D fluid flows in MRI simulation, leveraging from existing software. Simulation of MR spin flow motion is of high interest in the study of flow artifacts and angiography. However, at present, only a few simulators include this option and most are restricted to static tissue imaging. An extension of JEMRIS, one of the most advanced high performance open-source simulation platforms to date, was developed. The implementation of a Lagrangian description of the flow allows simulating any MR experiment, including both static tissues and complex flow data from computational fluid dynamics. Simulations of simple flow models are compared with real experiments on a physical flow phantom. A realistic simulation of 3D flow MRI on the cerebral venous network is also carried out. Simulations and real experiments are in good agreement. The generality of the framework is illustrated in 2D and 3D with some common flow artifacts (misregistration and inflow enhancement) and with the three main angiographic techniques: phase contrast velocimetry (PC), time-of-flight, and contrast-enhanced imaging MRA. The framework provides a versatile and reusable tool for the simulation of any MRI experiment including physiological fluids and arbitrarily complex flow motion. © 2018 International Society for Magnetic Resonance in Medicine.

  6. An Expert System And Simulation Approach For Sensor Management & Control In A Distributed Surveillance Network

    Science.gov (United States)

    Leon, Barbara D.; Heller, Paul R.

    1987-05-01

    A surveillance network is a group of multiplatform sensors cooperating to improve network performance. Network control is distributed as a measure to decrease vulnerability to enemy threat. The network may contain diverse sensor types such as radar, ESM (Electronic Support Measures), IRST (Infrared search and track) and E-0 (Electro-Optical). Each platform may contain a single sensor or suite of sensors. In a surveillance network it is desirable to control sensors to make the overall system more effective. This problem has come to be known as sensor management and control (SM&C). Two major facets of network performance are surveillance and survivability. In a netted environment, surveillance can be enhanced if information from all sensors is combined and sensor operating conditions are controlled to provide a synergistic effect. In contrast, when survivability is the main concern for the network, the best operating status for all sensors would be passive or off. Of course, improving survivability tends to degrade surveillance. Hence, the objective of SM&C is to optimize surveillance and survivability of the network. Too voluminous data of various formats and the quick response time are two characteristics of this problem which make it an ideal application for Artificial Intelligence. A solution to the SM&C problem, presented as a computer simulation, will be presented in this paper. The simulation is a hybrid production written in LISP and FORTRAN. It combines the latest conventional computer programming methods with Artificial Intelligence techniques to produce a flexible state-of-the-art tool to evaluate network performance. The event-driven simulation contains environment models coupled with an expert system. These environment models include sensor (track-while-scan and agile beam) and target models, local tracking, and system tracking. These models are used to generate the environment for the sensor management and control expert system. The expert system

  7. Comparative Analysis of Disruption Tolerant Network Routing Simulations in the One and NS-3

    Science.gov (United States)

    2017-12-01

    The added levels of simulation increase the processing required by a simulation. ns-3's simulation of other layers of the network stack permits... (Naval Postgraduate School thesis, 03-23-2016 to 12-15-2017.)

  8. An artificial neural network for detection of simulated dental caries

    Energy Technology Data Exchange (ETDEWEB)

    Kositbowornchai, S. [Khon Kaen Univ. (Thailand). Dept. of Oral Diagnosis; Siriteptawee, S.; Plermkamon, S.; Bureerat, S. [Khon Kaen Univ. (Thailand). Dept. of Mechanical Engineering; Chetchotsak, D. [Khon Kaen Univ. (Thailand). Dept. of Industrial Engineering

    2006-08-15

    Objects: A neural network was developed to diagnose artificial dental caries using images from a charged-coupled device (CCD)camera and intra-oral digital radiography. The diagnostic performance of this neural network was evaluated against a gold standard. Materials and methods: The neural network design was the Learning Vector Quantization (LVQ) used to classify a tooth surface as sound or as having dental caries. The depth of the dental caries was indicated on a graphic user interface (GUI) screen developed by Matlab programming. Forty-nine images of both sound and simulated dental caries, derived from a CCD camera and by digital radiography, were used to 'train' an artificial neural network. After the 'training' process, a separate test-set comprising 322 unseen images was evaluated. Tooth sections and microscopic examinations were used to confirm the actual dental caries status.The performance of neural network was evaluated using diagnostic test. Results: The sensitivity (95%CI)/specificity (95%CI) of dental caries detection by the CCD camera and digital radiography were 0.77(0.68-0.85)/0.85(0.75-0.92) and 0.81(0.72-0.88)/0.93(0.84-0.97), respectively. The accuracy of caries depth-detection by the CCD camera and digital radiography was 58 and 40%, respectively. Conclusions: The model neural network used in this study could be a prototype for caries detection but should be improved for classifying caries depth. Our study suggests an artificial neural network can be trained to make the correct interpretations of dental caries. (orig.)
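
    A minimal LVQ1 training loop, the classifier family named in the abstract, is sketched below; the two-dimensional "image features" are synthetic, whereas a real system would first extract features from the CCD or radiographic images.

      # LVQ1: class prototypes are pulled toward correctly classified samples
      # and pushed away from misclassified ones.
      import numpy as np

      rng = np.random.default_rng(10)
      sound = rng.normal([0.2, 0.8], 0.1, size=(100, 2))     # toy feature vectors
      caries = rng.normal([0.7, 0.3], 0.1, size=(100, 2))
      X = np.vstack([sound, caries])
      y = np.array([0] * 100 + [1] * 100)

      prototypes = np.array([[0.5, 0.9], [0.9, 0.1]], dtype=float)   # one per class
      proto_labels = np.array([0, 1])

      lr = 0.05
      for epoch in range(50):
          for i in rng.permutation(len(X)):
              j = np.argmin(np.linalg.norm(prototypes - X[i], axis=1))  # winner
              sign = 1.0 if proto_labels[j] == y[i] else -1.0
              prototypes[j] += sign * lr * (X[i] - prototypes[j])
          lr *= 0.95

      pred = proto_labels[np.argmin(np.linalg.norm(X[:, None] - prototypes[None], axis=2), axis=1)]
      print("training accuracy:", (pred == y).mean())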

  9. An artificial neural network for detection of simulated dental caries

    International Nuclear Information System (INIS)

    Kositbowornchai, S.; Siriteptawee, S.; Plermkamon, S.; Bureerat, S.; Chetchotsak, D.

    2006-01-01

    Objectives: A neural network was developed to diagnose artificial dental caries using images from a charge-coupled device (CCD) camera and intra-oral digital radiography. The diagnostic performance of this neural network was evaluated against a gold standard. Materials and methods: The neural network design was the Learning Vector Quantization (LVQ) used to classify a tooth surface as sound or as having dental caries. The depth of the dental caries was indicated on a graphic user interface (GUI) screen developed by Matlab programming. Forty-nine images of both sound and simulated dental caries, derived from a CCD camera and by digital radiography, were used to 'train' an artificial neural network. After the 'training' process, a separate test-set comprising 322 unseen images was evaluated. Tooth sections and microscopic examinations were used to confirm the actual dental caries status. The performance of the neural network was evaluated using diagnostic tests. Results: The sensitivity (95%CI)/specificity (95%CI) of dental caries detection by the CCD camera and digital radiography were 0.77(0.68-0.85)/0.85(0.75-0.92) and 0.81(0.72-0.88)/0.93(0.84-0.97), respectively. The accuracy of caries depth-detection by the CCD camera and digital radiography was 58% and 40%, respectively. Conclusions: The model neural network used in this study could be a prototype for caries detection but should be improved for classifying caries depth. Our study suggests an artificial neural network can be trained to make the correct interpretations of dental caries. (orig.)
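
    The LVQ classifier described in the two records above can be illustrated with a minimal sketch of the LVQ1 update rule (the winning prototype is pulled toward same-class samples and pushed away from other-class samples). This is not the authors' Matlab implementation; the feature vectors, prototype counts and learning-rate schedule below are illustrative assumptions, written here in Python/NumPy.

      import numpy as np

      def lvq1_train(X, y, n_prototypes_per_class=2, lr=0.1, epochs=30, seed=0):
          """Minimal LVQ1: move the winning prototype toward (same class)
          or away from (different class) each training sample."""
          rng = np.random.default_rng(seed)
          classes = np.unique(y)
          protos, proto_labels = [], []
          for c in classes:  # initialise prototypes from random samples of each class
              idx = rng.choice(np.flatnonzero(y == c), n_prototypes_per_class, replace=False)
              protos.append(X[idx].astype(float))
              proto_labels += [c] * n_prototypes_per_class
          W, Wy = np.vstack(protos), np.array(proto_labels)
          for epoch in range(epochs):
              alpha = lr * (1.0 - epoch / epochs)          # decaying learning rate
              for i in rng.permutation(len(X)):
                  d = np.linalg.norm(W - X[i], axis=1)
                  k = np.argmin(d)                          # winning prototype
                  sign = 1.0 if Wy[k] == y[i] else -1.0
                  W[k] += sign * alpha * (X[i] - W[k])
          return W, Wy

      def lvq1_predict(W, Wy, X):
          return Wy[np.argmin(np.linalg.norm(W[None, :, :] - X[:, None, :], axis=2), axis=1)]

      # toy usage: synthetic 2-D "image features" for sound (0) vs carious (1) surfaces
      X = np.vstack([np.random.randn(40, 2), np.random.randn(40, 2) + 3.0])
      y = np.array([0] * 40 + [1] * 40)
      W, Wy = lvq1_train(X, y)
      print("training accuracy:", (lvq1_predict(W, Wy, X) == y).mean())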

  10. Biochemical Network Stochastic Simulator (BioNetS: software for stochastic modeling of biochemical networks

    Directory of Open Access Journals (Sweden)

    Elston Timothy C

    2004-03-01

    Full Text Available Abstract Background Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. Results We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS also can be run as a stand-alone package. All the required files are accessible from http://x.amath.unc.edu/BioNetS. Conclusions We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
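
    The discrete part of such a simulator rests on the Gillespie algorithm mentioned above. A minimal, self-contained sketch of the Gillespie direct method for a toy birth-death network is given below; it illustrates the general algorithm only, and the rate constants and stoichiometry are arbitrary examples, not taken from BioNetS.

      import numpy as np

      def gillespie(x0, stoich, propensity, t_end, seed=1):
          """Gillespie direct method: exact stochastic simulation of a reaction
          network given its stoichiometry matrix and propensity function."""
          rng = np.random.default_rng(seed)
          t, x = 0.0, np.array(x0, dtype=float)
          times, states = [t], [x.copy()]
          while t < t_end:
              a = propensity(x)
              a0 = a.sum()
              if a0 <= 0:
                  break
              t += rng.exponential(1.0 / a0)        # time to next reaction
              j = rng.choice(len(a), p=a / a0)      # which reaction fires
              x += stoich[j]
              times.append(t)
              states.append(x.copy())
          return np.array(times), np.array(states)

      # toy birth-death process: 0 -> X (rate k1), X -> 0 (rate k2 * X)
      k1, k2 = 10.0, 0.1
      stoich = np.array([[+1.0], [-1.0]])
      propensity = lambda x: np.array([k1, k2 * x[0]])
      t, x = gillespie([0], stoich, propensity, t_end=100.0)
      print("final copy number:", x[-1])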

  11. Neural Networks in R Using the Stuttgart Neural Network Simulator: RSNNS

    Directory of Open Access Journals (Sweden)

    Christopher Bergmeir

    2012-01-01

    Full Text Available Neural networks are important standard machine learning procedures for classification and regression. We describe the R package RSNNS that provides a convenient interface to the popular Stuttgart Neural Network Simulator (SNNS). The main features are (a) encapsulation of the relevant SNNS parts in a C++ class, for sequential and parallel usage of different networks, (b) accessibility of all of the SNNS algorithmic functionality from R using a low-level interface, and (c) a high-level interface for convenient, R-style usage of many standard neural network procedures. The package also includes functions for visualization and analysis of the models and the training procedures, as well as functions for data input/output from/to the original SNNS file formats.

  12. Real-Time-Simulation of IEEE-5-Bus Network on OPAL-RT-OP4510 Simulator

    Science.gov (United States)

    Atul Bhandakkar, Anjali; Mathew, Lini, Dr.

    2018-03-01

    Real-time simulator tools offer high computing power and improved performance, and are widely used for the design and improvement of electrical systems. With the advancement of software tools like MATLAB/SIMULINK, with its Real-Time Workshop (RTW) and Real-Time Windows Target (RTWT), real-time simulators are used extensively in many engineering fields, including industry, education, and research institutions. OPAL-RT-OP4510 is a Real-Time Simulator which is used in both industry and academia. In this paper, the real-time simulation of the IEEE-5-Bus network is carried out by means of OPAL-RT-OP4510 with a CRO and other hardware. The performance of the network is observed with the introduction of faults at various locations. The waveforms of voltage, current, active and reactive power are observed in the MATLAB simulation environment and on the CRO. Also, Load Flow Analysis (LFA) of the IEEE-5-Bus network is computed using the MATLAB/Simulink powergui load flow tool.

  13. Statistical learning techniques applied to epidemiology: a simulated case-control comparison study with logistic regression

    Directory of Open Access Journals (Sweden)

    Land Walker H

    2011-01-01

    Full Text Available Abstract Background When investigating covariate interactions and group associations with standard regression analyses, the relationship between the response variable and exposure may be difficult to characterize. When the relationship is nonlinear, linear modeling techniques do not capture the nonlinear information content. Statistical learning (SL) techniques with kernels are capable of addressing nonlinear problems without making parametric assumptions. However, these techniques do not produce findings relevant for epidemiologic interpretations. A simulated case-control study was used to contrast the information embedding characteristics and separation boundaries produced by a specific SL technique with logistic regression (LR) modeling representing a parametric approach. The SL technique comprised a kernel mapping in combination with a perceptron neural network. Because the LR model has an important epidemiologic interpretation, the SL method was modified to produce the analogous interpretation and generate odds ratios for comparison. Results The SL approach is capable of generating odds ratios for main effects and risk factor interactions that better capture nonlinear relationships between exposure variables and outcome in comparison with LR. Conclusions The integration of SL methods in epidemiology may improve both the understanding and interpretation of complex exposure/disease relationships.
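
    The logistic-regression side of the comparison, in which exponentiated coefficients are read as odds ratios for main effects and interactions, can be sketched as below. This is only an illustration of the LR baseline (not the paper's kernel/perceptron method); the simulated exposures, coefficients and sample size are hypothetical.

      import numpy as np
      import statsmodels.api as sm

      # simulated case-control data: two binary exposures and their interaction
      rng = np.random.default_rng(0)
      n = 2000
      x1 = rng.binomial(1, 0.4, n)            # exposure 1 (hypothetical)
      x2 = rng.binomial(1, 0.3, n)            # exposure 2 (hypothetical)
      logit = -1.0 + 0.7 * x1 + 0.4 * x2 + 0.6 * x1 * x2
      y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

      # fit logistic regression with an interaction term
      X = sm.add_constant(np.column_stack([x1, x2, x1 * x2]))
      fit = sm.Logit(y, X).fit(disp=False)

      # exponentiated coefficients are the odds ratios for main effects and interaction
      for name, beta in zip(["intercept", "x1", "x2", "x1:x2"], fit.params):
          print(f"{name}: OR = {np.exp(beta):.2f}")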

  14. A dynamic approach merging network theory and credit risk techniques to assess systemic risk in financial networks.

    Science.gov (United States)

    Petrone, Daniele; Latora, Vito

    2018-04-03

    The interconnectedness of financial institutions affects instability and credit crises. To quantify systemic risk we introduce here the PD model, a dynamic model that combines credit risk techniques with a contagion mechanism on the network of exposures among banks. A potential loss distribution is obtained through a multi-period Monte Carlo simulation that considers the probability of default (PD) of the banks and their tendency to default in the same time interval. A contagion process increases the PD of banks exposed to distressed counterparties. The systemic risk is measured by statistics of the loss distribution, while the contribution of each node is quantified by the new measures PDRank and PDImpact. We illustrate how the model works on the network of the European Global Systemically Important Banks. For a certain range of the banks' capital and of their asset volatility, our results reveal the emergence of a strong contagion regime where lower default correlation between banks corresponds to higher losses. This is the opposite of the diversification benefits postulated by standard credit risk models used by banks and regulators, who could therefore underestimate the capital needed to overcome a period of crisis, thereby contributing to financial system instability.
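
    A heavily simplified sketch of the core mechanism (multi-period default sampling plus a contagion step that raises the PD of banks exposed to defaulted counterparties) is given below. The three-bank network, the PDs, the exposure matrix and the contagion rule are all hypothetical and far simpler than the published PD model; the sketch only illustrates the structure of such a Monte Carlo.

      import numpy as np

      def simulate_losses(pd0, exposures, contagion=0.5, periods=4, n_sims=10_000, seed=42):
          """Toy contagion Monte Carlo: each period banks default with their current
          PD; survivors exposed to defaulted counterparties get their PD scaled up."""
          rng = np.random.default_rng(seed)
          n = len(pd0)
          losses = np.zeros(n_sims)
          for s in range(n_sims):
              pd = np.array(pd0, dtype=float)
              alive = np.ones(n, dtype=bool)
              for _ in range(periods):
                  defaults = alive & (rng.random(n) < pd)
                  if not defaults.any():
                      continue
                  losses[s] += exposures[:, defaults].sum()   # loss on exposures to defaulters
                  alive &= ~defaults
                  # contagion: raise PD of surviving banks exposed to defaulted banks
                  hit = exposures[:, defaults].sum(axis=1) > 0
                  pd = np.where(alive & hit, np.minimum(1.0, pd * (1 + contagion)), pd)
          return losses

      # 3-bank toy network: exposures[i, j] = amount bank i lent to bank j (hypothetical)
      exposures = np.array([[0., 10., 5.],
                            [8., 0., 12.],
                            [4., 6., 0.]])
      losses = simulate_losses(pd0=[0.02, 0.03, 0.05], exposures=exposures)
      print("expected loss:", losses.mean(), "99% quantile:", np.quantile(losses, 0.99))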

  15. Improved Space Surveillance Network (SSN) Scheduling using Artificial Intelligence Techniques

    Science.gov (United States)

    Stottler, D.

    There are close to 20,000 cataloged manmade objects in space, the large majority of which are not active, functioning satellites. These are tracked by phased array and mechanical radars and ground- and space-based optical telescopes, collectively known as the Space Surveillance Network (SSN). A better SSN schedule of observations could, using exactly the same legacy sensor resources, improve space catalog accuracy through more complementary tracking, provide better responsiveness to real-time changes, better track small debris in low earth orbit (LEO) through efficient use of applicable sensors, efficiently track deep space (DS) frequent revisit objects, handle increased numbers of objects and new types of sensors, and take advantage of future improved communication and control to globally optimize the SSN schedule. We have developed a scheduling algorithm that takes as input the space catalog and the associated covariance matrices and produces a globally optimized schedule for each sensor site as to what objects to observe and when. This algorithm is able to schedule more observations with the same sensor resources and have those observations be more complementary, in terms of the precision with which each orbit metric is known, to produce a satellite observation schedule that, when executed, minimizes the covariances across the entire space object catalog. If used operationally, the results would be significantly increased accuracy of the space catalog with fewer lost objects with the same set of sensor resources. This approach can also inherently trade off fewer high-priority tasks against more lower-priority tasks, when there is benefit in doing so. To date, the project has completed a prototyping and feasibility study, using open-source data on the SSN's sensors, that showed a significant reduction in orbit metric covariances. The algorithm techniques and results will be discussed along with future directions for the research.

  16. Supply chain simulation tools and techniques: a survey

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2005-01-01

    The main contribution of this paper is twofold: it surveys different types of simulation for supply chain management; it discusses several methodological issues. These different types of simulation are spreadsheet simulation, system dynamics, discrete-event simulation and business games. Which

  17. eLearning techniques supporting problem based learning in clinical simulation.

    Science.gov (United States)

    Docherty, Charles; Hoy, Derek; Topp, Helena; Trinder, Kathryn

    2005-08-01

    This paper details the results of the first phase of a project using eLearning to support students' learning within a simulated environment. The locus was a purpose-built clinical simulation laboratory (CSL) where the School's philosophy of problem-based learning (PBL) was challenged through lecturers using traditional teaching methods. The solution was a student-centred, problem-based approach to the acquisition of clinical skills that used high-quality learning objects embedded within web pages, substituting for lecturers providing instruction and demonstration. This encouraged student nurses to explore, analyse and make decisions within the safety of a clinical simulation. Learning was facilitated through network communications and reflection on video performances of self and others. Evaluations were positive, students demonstrating increased satisfaction with PBL, improved performance in exams, and increased self-efficacy in the performance of nursing activities. These results indicate that eLearning techniques can help students acquire clinical skills in the safety of a simulated environment within the context of a problem-based learning curriculum.

  18. Prediction of Monthly Summer Monsoon Rainfall Using Global Climate Models Through Artificial Neural Network Technique

    Science.gov (United States)

    Nair, Archana; Singh, Gurjeet; Mohanty, U. C.

    2018-01-01

    The monthly prediction of summer monsoon rainfall is very challenging because of its complex and chaotic nature. In this study, a non-linear technique known as Artificial Neural Network (ANN) has been employed on the outputs of Global Climate Models (GCMs) to bring out the vagaries inherent in monthly rainfall prediction. The GCMs that are considered in the study are from the International Research Institute (IRI) (2-tier CCM3v6) and the National Centre for Environmental Prediction (Coupled-CFSv2). The ANN technique is applied to different ensemble members of the individual GCMs to obtain monthly scale predictions over India as a whole and over its spatial grid points. In the present study, double cross-validation and a simple randomization technique were used to avoid over-fitting during the training process of the ANN model. The performance of the ANN-predicted rainfall from GCMs is judged by analysing the absolute error, box plots, percentiles and the difference in linear error in probability space. Results suggest that there is significant improvement in the prediction skill of these GCMs after applying the ANN technique. The performance analysis reveals that the ANN model is able to capture the year-to-year variations in monsoon months with fairly good accuracy in extreme years as well. The ANN model is also able to simulate the correct signs of rainfall anomalies over different spatial points of the Indian domain.

  19. COEL: A Cloud-based Reaction Network Simulator

    Directory of Open Access Journals (Sweden)

    Peter eBanda

    2016-04-01

    Full Text Available Chemical Reaction Networks (CRNs) are a formalism to describe the macroscopic behavior of chemical systems. We introduce COEL, a web- and cloud-based CRN simulation framework that does not require a local installation, runs simulations on a large computational grid, provides reliable database storage, and offers a visually pleasing and intuitive user interface. We present an overview of the underlying software, the technologies, and the main architectural approaches employed. Some of COEL's key features include ODE-based simulations of CRNs and multicompartment reaction networks with rich interaction options, a built-in plotting engine, automatic DNA-strand displacement transformation and visualization, SBML/Octave/Matlab export, and a built-in genetic-algorithm-based optimization toolbox for rate constants. COEL is an open-source project hosted on GitHub (http://dx.doi.org/10.5281/zenodo.46544), which allows interested research groups to deploy it on their own server. Regular users can simply use the web instance at no cost at http://coel-sim.org. The framework is ideally suited for collaborative use in both research and education.

  20. Performance Comparison of Reputation Assessment Techniques Based on Self-Organizing Maps in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Sabrina Sicari

    2017-01-01

    Full Text Available Many solutions based on machine learning techniques have been proposed in the literature aimed at detecting and promptly counteracting various kinds of malicious attacks (data violation, clone, sybil, neglect, greed, and DoS attacks), which frequently affect Wireless Sensor Networks (WSNs). Besides recognizing the corrupted or violated information, the attackers should also be identified, in order to activate the proper countermeasures for preserving the network’s resources and to mitigate their malicious effects. To this end, techniques adopting Self-Organizing Maps (SOM) for intrusion detection in WSNs have been shown to represent a valuable and effective solution to the problem. In this paper, the mechanism, namely Good Network (GoNe), which is based on SOM and is able to assess the reliability of the sensor nodes, is compared with another relevant and similar work existing in the literature. Extensive performance simulations, in terms of nodes’ classification, attacks’ identification, data accuracy, energy consumption, and signalling overhead, have been carried out in order to demonstrate the better feasibility and efficiency of the proposed solution in the WSN field.

  1. Cross-Layer Techniques for Adaptive Video Streaming over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Yufeng Shan

    2005-02-01

    Full Text Available Real-time streaming media over wireless networks is a challenging proposition due to the characteristics of video data and wireless channels. In this paper, we propose a set of cross-layer techniques for adaptive real-time video streaming over wireless networks. The adaptation is done with respect to both channel and data. The proposed novel packetization scheme constructs the application layer packet in such a way that it is decomposed exactly into an integer number of equal-sized radio link protocol (RLP) packets. FEC codes are applied within an application packet at the RLP packet level rather than across different application packets and thus reduce delay at the receiver. A priority-based ARQ, together with a scheduling algorithm, is applied at the application layer to retransmit only the corrupted RLP packets within an application layer packet. Our approach combines the flexibility and programmability of application layer adaptations, with low delay and bandwidth efficiency of link layer techniques. Socket-level simulations are presented to verify the effectiveness of our approach.
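
    The packetization rule (size each application-layer packet so it splits into a whole number of equal-sized RLP packets) amounts to simple arithmetic, sketched below. The RLP payload size and application header size are placeholder values, not the ones used in the paper.

      # Minimal sketch of the cross-layer packetization idea: pick an application
      # packet payload so the packet maps onto an integer number of RLP packets.
      RLP_PAYLOAD = 93        # usable bytes per radio-link-protocol packet (assumed)
      APP_HEADER = 12         # application-layer header bytes (assumed)

      def app_payload_for(n_rlp_packets: int) -> int:
          """Application payload size that fills exactly n RLP packets."""
          return n_rlp_packets * RLP_PAYLOAD - APP_HEADER

      def rlp_packets_needed(app_payload: int) -> float:
          """How many RLP packets a given application payload would occupy."""
          return (app_payload + APP_HEADER) / RLP_PAYLOAD

      for n in (2, 4, 8):
          size = app_payload_for(n)
          print(f"{n} RLP packets -> app payload {size} bytes "
                f"({rlp_packets_needed(size):.1f} RLP packets, no padding)")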

  2. Wireless multimedia sensor networks on reconfigurable hardware information reduction techniques

    CERN Document Server

    Ang, Li-minn; Chew, Li Wern; Yeong, Lee Seng; Chia, Wai Chong

    2013-01-01

    Traditional wireless sensor networks (WSNs) capture scalar data such as temperature, vibration, pressure, or humidity. Motivated by the success of WSNs and also with the emergence of new technology in the form of low-cost image sensors, researchers have proposed combining image and audio sensors with WSNs to form wireless multimedia sensor networks (WMSNs).

  3. Fracture network modeling and GoldSim simulation support

    International Nuclear Information System (INIS)

    Sugita, Kenichiro; Dershowitz, William

    2003-01-01

    During Heisei-14, Golder Associates provided support for JNC Tokai through data analysis and simulation of the MIU Underground Rock Laboratory, participation in Task 6 of the Aespoe Task Force on Modelling of Groundwater Flow and Transport, and analysis of repository safety assessment technologies including cell networks for evaluation of the disturbed rock zone (DRZ) and total systems performance assessment (TSPA). MIU Underground Rock Laboratory support during H-14 involved discrete fracture network (DFN) modelling in support of the Multiple Modelling Project (MMP) and the Long Term Pumping Test (LPT). Golder developed updated DFN models for the MIU site, reflecting updated analyses of fracture data. Golder also developed scripts to support JNC simulations of flow and transport pathways within the MMP. Golder supported JNC participation in Task 6 of the Aespoe Task Force on Modelling of Groundwater Flow and Transport during H-14. Task 6A and 6B compared safety assessment (PA) and experimental time scale simulations along a pipe transport pathway. Task 6B2 extended Task 6B simulations from 1-D to 2-D. For Task 6B2, Golder carried out single fracture transport simulations on a wide variety of generic heterogeneous 2D fractures using both experimental and safety assessment boundary conditions. The heterogeneous 2D fractures were implemented according to a variety of in-plane heterogeneity patterns. Multiple immobile zones were considered including stagnant zones, infillings, altered wall rock, and intact rock. During H-14, JNC carried out extensive studies of the disturbed rock zone (DRZ) surrounding repository tunnels and drifts. Golder supported this activity by evaluating the calculation time necessary for simulating a reference heterogeneous DRZ cell network for a range of computational strategies. To support the development of JNC's total system performance assessment (TSPA) strategy, Golder carried out a review of the US DOE Yucca Mountain Project TSPA. This

  4. Coarse-grained simulation of a real-time process control network under peak load

    International Nuclear Information System (INIS)

    George, A.D.; Clapp, N.E. Jr.

    1992-01-01

    This paper presents a simulation study on the real-time process control network proposed for the new ANS reactor system at ORNL. A background discussion is provided on networks, modeling, and simulation, followed by an overview of the ANS process control network, its three peak-load models, and the results of a series of coarse-grained simulation studies carried out on these models using implementations of 802.3, 802.4, and 802.5 standard local area networks

  5. Preflight screening techniques for centrifuge-simulated suborbital spaceflight.

    Science.gov (United States)

    Pattarini, James M; Blue, Rebecca S; Castleberry, Tarah L; Vanderploeg, James M

    2014-12-01

    Historically, space has been the venue of the healthy individual. With the advent of commercial spaceflight, we face the novel prospect of routinely exposing spaceflight participants (SFPs) with multiple comorbidities to the space environment. Preflight screening procedures must be developed to identify those individuals at increased risk during flight. We examined the responses of volunteers to centrifuge accelerations mimicking commercial suborbital spaceflight profiles to evaluate how potential SFPs might tolerate such forces. We evaluated our screening process for medical approval of subjects for centrifuge participation for applicability to commercial spaceflight operations. All registered subjects completed a medical questionnaire, physical examination, and electrocardiogram. Subjects with identified concerns including cardiopulmonary disease, hypertension, and diabetes were required to provide documentation of their conditions. There were 335 subjects who registered for the study, 124 who completed all prescreening, and 86 subjects who participated in centrifuge trials. Due to prior medical history, five subjects were disqualified, most commonly for psychiatric reasons or uncontrolled medical conditions. Of the subjects approved, four individuals experienced abnormal physiological responses to centrifuge profiles, including one back strain and three with anxiety reactions. The screening methods used were judged to be sufficient to identify individuals physically capable of tolerating simulated suborbital flight. Improved methods will be needed to identify susceptibility to anxiety reactions. While severe or uncontrolled disease was excluded, many subjects successfully participated in centrifuge trials despite medical histories of disease that are disqualifying under historical spaceflight screening regimes. Such screening techniques are applicable for use in future commercial spaceflight operations.

  6. Development of joining techniques for fabrication of fuel rod simulators

    International Nuclear Information System (INIS)

    Moorhead, A.J.; McCulloch, R.W.; Reed, R.W.; Woodhouse, J.J.

    1980-10-01

    Many of the safety-related thermal-hydraulic tests on nuclear reactors are conducted not in the reactor itself, but in mockup segments of a core that uses resistance-heated fuel rod simulators (FRS) in place of the radioactive fuel rods. Laser welding and furnace brazing techniques are described for joining subassemblies for FRS that have survived up to 1000 h steady-state operation at 700 to 1100°C cladding temperatures and over 5000 thermal transients, ranging from 10 to 100°C/s. A pulsed-laser welding procedure that includes use of small-diameter filler wire is used to join one end of a resistance heating element of Pt-8 W, Fe-22 Cr-5.5 Al-0.5 Co, or 80 Ni-20 Cr (wt %) to a tubular conductor of an appropriate intermediate material. The other end of the heating element is laser welded to an end plug, which in turn is welded to a central conductor rod.

  7. Fracture network modeling and GoldSim simulation support

    International Nuclear Information System (INIS)

    Sugita, Kenichiro; Dershowitz, William

    2004-01-01

    During Heisei-15, Golder Associates provided support for JNC Tokai through discrete fracture network data analysis and simulation of the MIU Underground Rock Laboratory, participation in Task 6 of the Aespoe Task Force on Modelling of Groundwater Flow and Transport, and development of methodologies for analysis of repository site characterization strategies and safety assessment. MIU Underground Rock Laboratory support during H-15 involved development of new discrete fracture network (DFN) models for the MIU Shoba-sama Site, in the region of shaft development. Golder developed three DFN models for the site using discrete fracture network, equivalent porous medium (EPM), and nested DFN/EPM approaches. Each of these models was compared based upon criteria established for the multiple modeling project (MMP). Golder supported JNC participation in Task 6AB, 6D and 6E of the Aespoe Task Force on Modelling of Groundwater Flow and Transport during H-15. For Task 6AB, Golder implemented an updated microstructural model in GoldSim, and used this updated model to simulate the propagation of uncertainty from experimental to safety assessment time scales, for 5 m scale transport path lengths. Tasks 6D and 6E compared safety assessment (PA) and experimental time scale simulations in a 200 m scale discrete fracture network. For Task 6D, Golder implemented a DFN model using FracMan/PA Works, and determined the sensitivity of solute transport to a range of material property and geometric assumptions. For Task 6E, Golder carried out demonstration FracMan/PA Works transport calculations at a 1 million year time scale, to ensure that task specifications are realistic. The majority of work for Task 6E will be carried out during H-16. During H-15, Golder supported JNC's Total System Performance Assessment (TSPA) strategy by developing technologies for the analysis of precipitant concentration. These approaches were based on the GoldSim precipitant data management features, and were

  8. DC Collection Network Simulation for Offshore Wind Farms

    DEFF Research Database (Denmark)

    Vogel, Stephan; Rasmussen, Tonny Wederberg; El-Khatib, Walid Ziad

    2015-01-01

    The possibility to connect offshore wind turbines with a collection network based on Direct Current (DC), instead of Alternating Current (AC), gained attention in the scientific and industrial environment. There are many promising properties of DC components that could be beneficial, such as smaller dimensions, less weight, fewer conductors, no reactive power considerations, and less overall losses due to the absence of proximity and skin effects. This work describes a study about the simulation of a Medium Voltage DC (MVDC) grid in an offshore wind farm. Suitable converter concepts...

  9. Design and simulation of a nanoelectronic DG MOSFET current source using artificial neural networks

    International Nuclear Information System (INIS)

    Djeffal, F.; Dibi, Z.; Hafiane, M.L.; Arar, D.

    2007-01-01

    The double gate (DG) MOSFET has received great attention in recent years owing to the inherent suppression of short channel effects (SCEs), excellent subthreshold slope (S), improved drive current (Ids) and transconductance (gm), volume inversion for symmetric devices and excellent scalability. Therefore, simulation tools which can be applied to design nanoscale transistors in the future require new theory and modeling techniques that capture the physics of quantum transport accurately and efficiently. In this sense, this work presents the applicability of the artificial neural networks (ANN) for the design and simulation of a nanoelectronic DG MOSFET current source. The latter is based on the 2D numerical Non-Equilibrium Green's Function (NEGF) simulation of the current-voltage characteristics of an undoped symmetric DG MOSFET. Our results are discussed in order to obtain some new and useful information about the ULSI technology

  10. Neural Networks Simulation of the Transport of Contaminants in Groundwater

    Directory of Open Access Journals (Sweden)

    Enrico Zio

    2009-12-01

    Full Text Available The performance assessment of an engineered solution for the disposal of radioactive wastes is based on mathematical models of the disposal system response to predefined accidental scenarios, within a probabilistic approach to account for the involved uncertainties. As the most significant potential pathway for the return of radionuclides to the biosphere is groundwater flow, intensive computational efforts are devoted to simulating the behaviour of the groundwater system surrounding the waste deposit, for different values of its hydrogeological parameters and for different evolution scenarios. In this paper, multilayered neural networks are trained to simulate the transport of contaminants in monodimensional and bidimensional aquifers. The results obtained in two case studies indicate that the approximation errors are within the uncertainties which characterize the input data.

  11. Network Flow Simulation of Fluid Transients in Rocket Propulsion Systems

    Science.gov (United States)

    Bandyopadhyay, Alak; Hamill, Brian; Ramachandran, Narayanan; Majumdar, Alok

    2011-01-01

    Fluid transients, also known as water hammer, can have a significant impact on the design and operation of both spacecraft and launch vehicle propulsion systems. These transients often occur at system activation and shutdown. The pressure rise due to sudden opening and closing of valves of propulsion feed lines can cause serious damage during activation and shutdown of propulsion systems. During activation (valve opening) and shutdown (valve closing), pressure surges must be predicted accurately to ensure structural integrity of the propulsion system fluid network. In the current work, a network flow simulation software (Generalized Fluid System Simulation Program) based on the Finite Volume Method has been used to predict the pressure surges in the feed line due to both valve closing and valve opening using two separate geometrical configurations. The valve opening pressure surge results are compared with experimental data available in the literature and the numerical results compared very well within reasonable accuracy (< 5%) for a wide range of inlet-to-initial pressure ratios. A Fast Fourier Transform is performed on the pressure oscillations to predict the various modal frequencies of the pressure wave. For the shutdown problem, i.e. the valve closing problem, the simulation results are compared with the results of the Method of Characteristics. Most rocket engines experience a longitudinal acceleration, known as "pogo," during the later stage of engine burn. In the shutdown example problem, an accumulator has been used in the feed system to demonstrate the "pogo" mitigation effects in the propellant feed system. The simulation results using GFSSP compared very well with the results of the Method of Characteristics.

  12. Modeling and simulation of different and representative engineering problems using Network Simulation Method.

    Science.gov (United States)

    Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F

    2018-01-01

    Mathematical models simulating different representative engineering problems (atomic dry friction, moving front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear, coupled or uncoupled differential equations. For different values of the parameters that influence the solution, the problems are numerically solved by the network method, which provides all the variables of the problems. Although the models are extremely sensitive to the above parameters, no assumptions are made regarding the linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data, published in the scientific literature, to show the reliability of the model.

  13. Modeling and simulation of different and representative engineering problems using Network Simulation Method

    Science.gov (United States)

    2018-01-01

    Mathematical models simulating different representative engineering problems (atomic dry friction, moving front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear, coupled or uncoupled differential equations. For different values of the parameters that influence the solution, the problems are numerically solved by the network method, which provides all the variables of the problems. Although the models are extremely sensitive to the above parameters, no assumptions are made regarding the linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data, published in the scientific literature, to show the reliability of the model. PMID:29518121

  14. Adverse Outcome Pathway Network Analyses: Techniques and benchmarking the AOPwiki

    Science.gov (United States)

    Abstract: As the community of toxicological researchers, risk assessors, and risk managers adopt the adverse outcome pathway (AOP) paradigm for organizing toxicological knowledge, the number and diversity of adverse outcome pathways and AOP networks are continuing to grow. This ...

  15. Memory Compression Techniques for Network Address Management in MPI

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yanfei; Archer, Charles J.; Blocksome, Michael; Parker, Scott; Bland, Wesley; Raffenetti, Ken; Balaji, Pavan

    2017-05-29

    MPI allows applications to treat processes as a logical collection of integer ranks for each MPI communicator, while internally translating these logical ranks into actual network addresses. In current MPI implementations the management and lookup of such network addresses use memory sizes that are proportional to the number of processes in each communicator. In this paper, we propose a new mechanism, called AV-Rankmap, for managing such translation. AV-Rankmap takes advantage of logical patterns in rank-address mapping that most applications naturally tend to have, and it exploits the fact that some parts of network address structures are naturally more performance critical than others. It uses this information to compress the memory used for network address management. We demonstrate that AV-Rankmap can achieve performance similar to or better than that of other MPI implementations while using significantly less memory.
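
    The general idea of exploiting regular rank-address patterns to avoid storing a full per-rank table can be illustrated with the sketch below. This is only a conceptual illustration (detecting a constant-stride mapping and storing a base/stride pair), not the AV-Rankmap data structure or the MPICH implementation; the address values are hypothetical.

      from typing import List, Optional, Tuple

      def detect_stride(addresses: List[int]) -> Optional[Tuple[int, int]]:
          """If rank -> address follows base + rank * stride, return (base, stride)."""
          if len(addresses) < 2:
              return None
          base, stride = addresses[0], addresses[1] - addresses[0]
          for rank, addr in enumerate(addresses):
              if addr != base + rank * stride:
                  return None          # irregular mapping: fall back to a full table
          return base, stride

      class RankMap:
          """Store either a compressed (base, stride) pair or the explicit table."""
          def __init__(self, addresses: List[int]):
              self.pattern = detect_stride(addresses)
              self.table = None if self.pattern else list(addresses)

          def lookup(self, rank: int) -> int:
              if self.pattern:
                  base, stride = self.pattern
                  return base + rank * stride      # O(1) memory, no per-rank entry
              return self.table[rank]

      # regular communicator: ranks map to evenly spaced node addresses
      rm = RankMap([1000 + 4 * r for r in range(1024)])
      print(rm.pattern, rm.lookup(37))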

  16. Numerical study of Free Convective Viscous Dissipative flow along Vertical Cone with Influence of Radiation using Network Simulation method

    Science.gov (United States)

    Kannan, R. M.; Pullepu, Bapuji; Immanuel, Y.

    2018-04-01

    A two-dimensional mathematical model is formulated for the transient laminar free convective flow, with heat transfer, of an incompressible viscous fluid past a vertical cone with uniform surface heat flux, including the combined effects of viscous dissipation and radiation. The dimensionless boundary layer equations of the flow, which are transient, coupled and nonlinear partial differential equations, are solved using the Network Simulation Method (NSM), a powerful numerical technique which demonstrates high efficiency and accuracy by employing the network simulator computer code Pspice. The velocity and temperature profiles have been investigated and analyzed graphically for various factors, namely the viscous dissipation parameter ε, the Prandtl number Pr and the radiation parameter Rd.

  17. Coarse-graining stochastic biochemical networks: adiabaticity and fast simulations

    Energy Technology Data Exchange (ETDEWEB)

    Nemenman, Ilya [Los Alamos National Laboratory; Sinitsyn, Nikolai [Los Alamos National Laboratory; Hengartner, Nick [Los Alamos National Laboratory

    2008-01-01

    We propose a universal approach for analysis and fast simulations of stiff stochastic biochemical kinetics networks, which rests on elimination of fast chemical species without a loss of information about mesoscopic, non-Poissonian fluctuations of the slow ones. Our approach, which is similar to the Born-Oppenheimer approximation in quantum mechanics, follows from the stochastic path integral representation of the cumulant generating function of reaction events. In applications with a small number of chemical reactions, it produces analytical expressions for cumulants of chemical fluxes between the slow variables. This allows for a low-dimensional, interpretable representation and can be used for coarse-grained numerical simulation schemes with a small computational complexity and yet high accuracy. As an example, we derive the coarse-grained description for a chain of biochemical reactions, and show that the coarse-grained and the microscopic simulations are in agreement, but the coarse-grained simulations are three orders of magnitude faster.

  18. Simulating Real-Time Aspects of Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Christian Nastasi

    2010-01-01

    Full Text Available Wireless Sensor Network (WSN) technology has mainly been used in applications with low-frequency sampling and little computational complexity. Recently, new classes of WSN-based applications with different characteristics are being considered, including process control, industrial automation and visual surveillance. Such new applications usually involve relatively heavy computations and also present real-time requirements such as bounded end-to-end delay and guaranteed Quality of Service. It then becomes necessary to employ proper resource management policies, not only for communication resources but also jointly for computing resources, in the design and development of such WSN-based applications. In this context, simulation can play a critical role, together with analytical models, for validating a system design against the demanded Quality of Service parameters. In this paper, we present RTNS, a publicly available free simulation tool which includes Operating System aspects in wireless distributed applications. RTNS extends the well-known NS-2 simulator with models of the CPU, the Real-Time Operating System and the application tasks, to take into account delays due to the computation in addition to the communication. We demonstrate the benefits of RTNS by presenting our simulation study for a complex WSN-based multi-view vision system for real-time event detection.

  19. A Technique for Presenting a Deceptive Dynamic Network Topology

    Science.gov (United States)

    2013-03-01

    more complicated network topologies in our experiments. We used a Watts-Strogatz [35] model to generate a synthetic topology for experimentation due...generated Watts-Strogatz model except for the intelligent router and the web server. The actual router used does not impact the results of our experiment...library for the Python [43] programming language. NetworkX provides two features useful for this experiment. It was used to generate a Watts-Strogatz model
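
    The excerpt above mentions generating the synthetic topology with a Watts-Strogatz model via the NetworkX library for Python. A minimal example of that call is shown below; the node count, neighbour count and rewiring probability are arbitrary illustration values, not those used in the thesis.

      import networkx as nx

      # Watts-Strogatz small-world topology: 50 nodes, each joined to its 4 nearest
      # ring neighbours, with each edge rewired with probability 0.1
      G = nx.watts_strogatz_graph(n=50, k=4, p=0.1, seed=7)

      print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
      print("average clustering:", nx.average_clustering(G))
      print("connected:", nx.is_connected(G))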

  20. STEADY-STATE modeling and simulation of pipeline networks for compressible fluids

    Directory of Open Access Journals (Sweden)

    A.L.H. Costa

    1998-12-01

    Full Text Available This paper presents a model and an algorithm for the simulation of pipeline networks with compressible fluids. The model can predict pressures, flow rates, temperatures and gas compositions at any point of the network. Any network configuration can be simulated; the existence of cycles is not an obstacle. Numerical results from simulated data on a proposed network are shown for illustration. The potential of the simulator is explored by the analysis of a pressure relief network, using a stochastic procedure for the evaluation of system performance.

  1. Intrusion detection techniques for plant-wide network in a nuclear power plant

    International Nuclear Information System (INIS)

    Rajasekhar, P.; Shrikhande, S.V.; Biswas, B.B.; Patil, R.K.

    2012-01-01

    Nuclear power plants have a large amount of critical data to be sent to the operator workstations. A plant-wide integrated communication network, with high throughput, determinism and redundancy, is required between the workstations and the field. A switched Ethernet network is a promising prospect for such an integrated communication network. But for such an integrated system, intrusion is a major issue. Hence the network should have an intrusion detection system to make the network data secure and enhance the network availability. Intrusion detection is the process of monitoring the events occurring in a network and analyzing them for signs of possible incidents, which are violations or imminent threats of violation of network security policies, acceptable user policies, or standard security practices. This paper describes the various intrusion detection techniques and approaches applicable to the analysis of a plant-wide network. (author)

  2. Wireless Power Transfer Protocols in Sensor Networks: Experiments and Simulations

    Directory of Open Access Journals (Sweden)

    Sotiris Nikoletseas

    2017-04-01

    Full Text Available Rapid technological advances in the domain of Wireless Power Transfer pave the way for novel methods for power management in systems of wireless devices, and recent research works have already started considering algorithmic solutions for tackling emerging problems. In this paper, we investigate the problem of efficient and balanced Wireless Power Transfer in Wireless Sensor Networks. We employ wireless chargers that replenish the energy of network nodes. We propose two protocols that configure the activity of the chargers. One protocol performs wireless charging focused on the charging efficiency, while the other aims at proper balance of the chargers’ residual energy. We conduct detailed experiments using real devices and we validate the experimental results via larger scale simulations. We observe that, in both the experimental evaluation and the evaluation through detailed simulations, both protocols achieve their main goals. The Charging Oriented protocol achieves good charging efficiency throughout the experiment, while the Energy Balancing protocol achieves a uniform distribution of energy within the chargers.

  3. New approach for simulating groundwater flow in discrete fracture network

    Science.gov (United States)

    Fang, H.; Zhu, J.

    2017-12-01

    In this study, we develop a new approach to calculate groundwater flowrate and hydraulic head distribution in a two-dimensional discrete fracture network (DFN) where both laminar and turbulent flows co-exist in individual fractures. The cubic law is used to calculate hydraulic head distribution and flow behaviors in fractures where flow is laminar, while Forchheimer's law is used to quantify turbulent flow behaviors. The Reynolds number is used to distinguish flow characteristics in individual fractures. The combination of linear and non-linear equations is solved iteratively to determine flowrates in all fractures and hydraulic heads at all intersections. We examine potential errors in both flowrate and hydraulic head from the approach of a uniform flow assumption. Applying the cubic law in all fractures regardless of actual flow conditions overestimates the flowrate when turbulent flow may exist, while applying Forchheimer's law indiscriminately underestimates the flowrate when laminar flows exist in the network. The contrast of apertures of large and small fractures in the DFN has a significant impact on the potential errors of using only the cubic law or Forchheimer's law. Both the cubic law and Forchheimer's law simulate similar hydraulic head distributions, as the main difference between these two approaches lies in predicting different flowrates. Fracture irregularity does not significantly affect the potential errors from using only the cubic law or Forchheimer's law if the network configuration remains similar. The relative density of fractures does not significantly affect the relative performance of the cubic law and Forchheimer's law.
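
    As a rough illustration of the switching idea (not the authors' code), the sketch below computes single-fracture flow with the cubic law, checks the Reynolds number, and re-solves with a Forchheimer-type correction when the flow is judged turbulent. The formulas are simplified textbook forms, and the Forchheimer coefficient and critical Reynolds number are assumptions for the example only.

      import math

      # Simplified single-fracture flow: cubic law first, Forchheimer correction
      # if the resulting Reynolds number indicates turbulent flow.
      RHO, MU, G = 1000.0, 1.0e-3, 9.81          # water density, viscosity, gravity
      RE_CRITICAL = 2300.0                        # assumed laminar/turbulent threshold

      def fracture_flow(aperture, width, head_drop, length, beta=1.0e5):
          """Return (flowrate, regime). beta is an assumed Forchheimer coefficient."""
          gradient = RHO * G * head_drop / length          # pressure gradient [Pa/m]
          A = 12.0 * MU / (width * aperture ** 3)          # cubic-law (Darcy) term
          q_cubic = gradient / A                           # cubic-law flowrate [m^3/s]
          re = 2.0 * RHO * q_cubic / (MU * width)          # approximate Reynolds number
          if re < RE_CRITICAL:
              return q_cubic, "laminar (cubic law)"
          # Forchheimer: gradient = A*Q + beta*Q^2, keep the positive root
          q_turb = (-A + math.sqrt(A * A + 4.0 * beta * gradient)) / (2.0 * beta)
          return q_turb, "turbulent (Forchheimer)"

      for b in (1e-4, 5e-3):                               # small vs large aperture [m]
          q, regime = fracture_flow(aperture=b, width=1.0, head_drop=1.0, length=10.0)
          print(f"aperture {b:g} m: Q = {q:.3e} m^3/s, {regime}")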

  4. Validating module network learning algorithms using simulated data.

    Science.gov (United States)

    Michoel, Tom; Maere, Steven; Bonnet, Eric; Joshi, Anagha; Saeys, Yvan; Van den Bulcke, Tim; Van Leemput, Koenraad; van Remortel, Piet; Kuiper, Martin; Marchal, Kathleen; Van de Peer, Yves

    2007-05-03

    In recent years, several authors have used probabilistic graphical models to learn expression modules and their regulatory programs from gene expression data. Despite the demonstrated success of such algorithms in uncovering biologically relevant regulatory relations, further developments in the area are hampered by a lack of tools to compare the performance of alternative module network learning strategies. Here, we demonstrate the use of the synthetic data generator SynTReN for the purpose of testing and comparing module network learning algorithms. We introduce a software package for learning module networks, called LeMoNe, which incorporates a novel strategy for learning regulatory programs. Novelties include the use of a bottom-up Bayesian hierarchical clustering to construct the regulatory programs, and the use of a conditional entropy measure to assign regulators to the regulation program nodes. Using SynTReN data, we test the performance of LeMoNe in a completely controlled situation and assess the effect of the methodological changes we made with respect to an existing software package, namely Genomica. Additionally, we assess the effect of various parameters, such as the size of the data set and the amount of noise, on the inference performance. Overall, application of Genomica and LeMoNe to simulated data sets gave comparable results. However, LeMoNe offers some advantages, one of them being that the learning process is considerably faster for larger data sets. Additionally, we show that the location of the regulators in the LeMoNe regulation programs and their conditional entropy may be used to prioritize regulators for functional validation, and that the combination of the bottom-up clustering strategy with the conditional entropy-based assignment of regulators improves the handling of missing or hidden regulators. We show that data simulators such as SynTReN are very well suited for the purpose of developing, testing and improving module network

  5. Emulation of reionization simulations for Bayesian inference of astrophysics parameters using neural networks

    Science.gov (United States)

    Schmit, C. J.; Pritchard, J. R.

    2018-03-01

    Next generation radio experiments such as LOFAR, HERA, and SKA are expected to probe the Epoch of Reionization (EoR) and claim a first direct detection of the cosmic 21cm signal within the next decade. Data volumes will be enormous and can thus potentially revolutionize our understanding of the early Universe and galaxy formation. However, numerical modelling of the EoR can be prohibitively expensive for Bayesian parameter inference and how to optimally extract information from incoming data is currently unclear. Emulation techniques for fast model evaluations have recently been proposed as a way to bypass costly simulations. We consider the use of artificial neural networks as a blind emulation technique. We study the impact of training duration and training set size on the quality of the network prediction and the resulting best-fitting values of a parameter search. A direct comparison is drawn between our emulation technique and an equivalent analysis using 21CMMC. We find good predictive capabilities of our network using training sets of as low as 100 model evaluations, which is within the capabilities of fully numerical radiative transfer codes.
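
    As a concrete but much simpler illustration of the blind-emulation idea, the sketch below trains a small multilayer perceptron on a cheap toy "simulator" and then uses the network as a fast surrogate. The toy function, network size and training-set size stand in for the expensive 21cm simulations and are not taken from the paper.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Toy "simulator": maps 3 astrophysics-like parameters to a 10-bin summary
      # statistic (a stand-in for a 21cm power spectrum).
      def toy_simulator(theta):
          k = np.linspace(0.1, 1.0, 10)
          return theta[0] * np.sin(k * theta[1]) + theta[2] * k

      rng = np.random.default_rng(0)
      theta_train = rng.uniform([0.5, 1.0, -1.0], [2.0, 5.0, 1.0], size=(200, 3))
      y_train = np.array([toy_simulator(t) for t in theta_train])

      # Emulator: a small MLP learns the parameter -> summary-statistic mapping
      emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
      emulator.fit(theta_train, y_train)

      # Fast surrogate evaluation at a new parameter point
      theta_test = np.array([[1.0, 3.0, 0.2]])
      print("emulated :", np.round(emulator.predict(theta_test)[0], 3))
      print("simulated:", np.round(toy_simulator(theta_test[0]), 3))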

  6. Efficient Allocation of Resources for Defense of Spatially Distributed Networks Using Agent-Based Simulation.

    Science.gov (United States)

    Kroshl, William M; Sarkani, Shahram; Mazzuchi, Thomas A

    2015-09-01

    This article presents ongoing research that focuses on efficient allocation of defense resources to minimize the damage inflicted on a spatially distributed physical network such as a pipeline, water system, or power distribution system from an attack by an active adversary, recognizing the fundamental difference between preparing for natural disasters such as hurricanes, earthquakes, or even accidental systems failures and the problem of allocating resources to defend against an opponent who is aware of, and anticipating, the defender's efforts to mitigate the threat. Our approach is to utilize a combination of integer programming and agent-based modeling to allocate the defensive resources. We conceptualize the problem as a Stackelberg "leader follower" game where the defender first places his assets to defend key areas of the network, and the attacker then seeks to inflict the maximum damage possible within the constraints of resources and network structure. The criticality of arcs in the network is estimated by a deterministic network interdiction formulation, which then informs an evolutionary agent-based simulation. The evolutionary agent-based simulation is used to determine the allocation of resources for attackers and defenders that results in evolutionary stable strategies, where actions by either side alone cannot increase its share of victories. We demonstrate these techniques on an example network, comparing the evolutionary agent-based results to a more traditional, probabilistic risk analysis (PRA) approach. Our results show that the agent-based approach results in a greater percentage of defender victories than does the PRA-based approach. © 2015 Society for Risk Analysis.

  7. Social Structure Simulation and Inference Using Artificial Intelligence Techniques

    National Research Council Canada - National Science Library

    Tsvetovat, Maksim

    2005-01-01

    .... As available computing power grew, social network-based models have become not only an analysis tool, but also a methodology for building new theories of social behaviour and organizational evolution...

  8. Developing Visualization Techniques for Semantics-based Information Networks

    Science.gov (United States)

    Keller, Richard M.; Hall, David R.

    2003-01-01

    Information systems incorporating complex network structured information spaces with a semantic underpinning - such as hypermedia networks, semantic networks, topic maps, and concept maps - are being deployed to solve some of NASA's critical information management problems. This paper describes some of the human interaction and navigation problems associated with complex semantic information spaces and describes a set of new visual interface approaches to address these problems. A key strategy is to leverage semantic knowledge represented within these information spaces to construct abstractions and views that will be meaningful to the human user. Human-computer interaction methodologies will guide the development and evaluation of these approaches, which will benefit deployed NASA systems and also apply to information systems based on the emerging Semantic Web.

  9. Data mining techniques in sensor networks summarization, interpolation and surveillance

    CERN Document Server

    Appice, Annalisa; Fumarola, Fabio; Malerba, Donato

    2013-01-01

    Sensor networks comprise a number of sensors installed across a spatially distributed network, which gather information and periodically feed a central server with the measured data. The server monitors the data, issues possible alarms and computes fast aggregates. As data analysis requests may concern both present and past data, the server is forced to store the entire stream. But the limited storage capacity of a server may reduce the amount of data stored on the disk. One solution is to compute summaries of the data as it arrives, and to use these summaries to interpolate the real data.

  10. Simulation and prediction for energy dissipaters and stilling basins design using artificial intelligence technique

    Directory of Open Access Journals (Sweden)

    Mostafa Ahmed Moawad Abdeen

    2015-12-01

    Full Text Available Water with large velocities can cause considerable damage to channels whose beds are composed of natural earth materials. Several stilling basins and energy dissipating devices have been designed in conjunction with spillways and outlet works to avoid damage to canal structures. In addition, many experimental and traditional numerical studies have been performed to investigate the accurate design of these stilling basins and energy dissipaters. The current study is aimed toward introducing the artificial intelligence technique as a new modeling tool for predicting the accurate design of stilling basins. Specifically, artificial neural networks (ANNs) are utilized in the current study in conjunction with experimental data to predict the length of the hydraulic jumps occurring in spillways so that, consequently, the stilling basin dimensions can be designed for adequate energy dissipation. The current study showed, in a detailed fashion, the development process of different ANN models to accurately predict the hydraulic jump lengths acquired from different experimental studies. The results obtained from implementing these models showed that the ANN technique was very successful in simulating the hydraulic jump characteristics occurring in stilling basins. Therefore, it can be safely utilized in the design of these basins, as ANN involves minimal computational and financial effort compared with experimental work and traditional numerical techniques such as finite differences or finite elements.

  11. Future view of electric power information processing techniques. Architecture techniques for power supply communication network

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, Keisuke

    1988-06-20

    The present situation of power supply communication is described, and future trends in power supply information networks are reviewed. To improve the transmission efficiency, quality, and cost effectiveness of power supply communication, the introduction of digital networks has been promoted. For the protection information network, since the required communication quality of system protection information differs from that of power supply operation information, individual digital network configurations are expected; in addition, an increase in image information transmission for monitoring is also anticipated. For the business information network, the construction of a broadband switched network is expected as image transmission needs, such as video conferencing, increase. Furthermore, expansion to a power supply ISDN, capable of connecting telephones, facsimile machines and data terminals, exchanging various media, and interconnecting networks, is expected to provide higher communication services in the protection and business networks. However, for its practical use, the standardization of various interfaces will become essential. (3 figs, 1 tab)

  12. A Multilevel Adaptive Reaction-splitting Simulation Method for Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro; Tempone, Raul; Vilanova, Pedro

    2016-01-01

    In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks characterized by having simultaneously fast and slow reaction channels. To produce efficient simulations, our method adaptively classifies the reaction channels into fast and slow channels. To this end, we first introduce a state-dependent quantity named level of activity of a reaction channel. Then, we propose a low-cost heuristic that allows us to adaptively split the set of reaction channels into two subsets characterized by either a high or a low level of activity. Based on a time-splitting technique, the increments associated with high-activity channels are simulated using the tau-leap method, while those associated with low-activity channels are simulated using an exact method. This path simulation technique is amenable for coupled path generation and a corresponding multilevel Monte Carlo algorithm. To estimate expected values of observables of the system at a prescribed final time, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This goal is achieved with a computational complexity of order O(TOL^-2), the same as with a pathwise-exact method, but with a smaller constant. We also present a novel low-cost control variate technique based on the stochastic time change representation by Kurtz, showing its performance on a numerical example. We present two numerical examples extracted from the literature that show how the reaction-splitting method obtains substantial gains with respect to the standard stochastic simulation algorithm and the multilevel Monte Carlo approach by Anderson and Higham. © 2016 Society for Industrial and Applied Mathematics.
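
    The following is a much-simplified, hedged illustration of the splitting idea on a toy network; it is not the authors' coupled multilevel estimator, and the activity threshold, rates and step size are invented for the example. Channels whose expected number of firings within a step exceeds a threshold are advanced with a tau-leap (Poisson) update, the rest with an exact SSA pass over the same interval.

```python
# Hedged, much-simplified sketch of reaction splitting on a toy network.
import numpy as np

rng = np.random.default_rng(1)

# Toy network: 0) 0 -> X (fast), 1) X -> 0 (fast), 2) X -> Y (slow)
nu = np.array([[+1, 0], [-1, 0], [-1, +1]])       # stoichiometry (reaction x species)
rates = np.array([200.0, 1.0, 0.01])

def propensities(x):
    return np.array([rates[0], rates[1] * x[0], rates[2] * x[0]])

def step(x, tau, activity_threshold=5.0):
    a = propensities(x)
    fast = a * tau >= activity_threshold          # "level of activity" proxy
    # 1) tau-leap the high-activity channels over the step
    k = rng.poisson(a[fast] * tau)
    x = x + k @ nu[fast]
    # 2) exact SSA pass for the low-activity channels over the same interval
    t = 0.0
    while True:
        a_slow = propensities(x)[~fast]
        a0 = a_slow.sum()
        if a0 <= 0.0:
            break
        t += rng.exponential(1.0 / a0)
        if t >= tau:
            break
        j = rng.choice(np.flatnonzero(~fast), p=a_slow / a0)
        x = x + nu[j]
    return np.maximum(x, 0)

x = np.array([100, 0])
for _ in range(int(10.0 / 0.05)):                 # simulate up to T = 10
    x = step(x, tau=0.05)
print("final state (X, Y):", x)
```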

  13. A Multilevel Adaptive Reaction-splitting Simulation Method for Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro

    2016-07-07

    In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks characterized by having simultaneously fast and slow reaction channels. To produce efficient simulations, our method adaptively classifies the reaction channels into fast and slow channels. To this end, we first introduce a state-dependent quantity named level of activity of a reaction channel. Then, we propose a low-cost heuristic that allows us to adaptively split the set of reaction channels into two subsets characterized by either a high or a low level of activity. Based on a time-splitting technique, the increments associated with high-activity channels are simulated using the tau-leap method, while those associated with low-activity channels are simulated using an exact method. This path simulation technique is amenable for coupled path generation and a corresponding multilevel Monte Carlo algorithm. To estimate expected values of observables of the system at a prescribed final time, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This goal is achieved with a computational complexity of order O(TOL^-2), the same as with a pathwise-exact method, but with a smaller constant. We also present a novel low-cost control variate technique based on the stochastic time change representation by Kurtz, showing its performance on a numerical example. We present two numerical examples extracted from the literature that show how the reaction-splitting method obtains substantial gains with respect to the standard stochastic simulation algorithm and the multilevel Monte Carlo approach by Anderson and Higham. © 2016 Society for Industrial and Applied Mathematics.

  14. Teaching Behavioral Modeling and Simulation Techniques for Power Electronics Courses

    Science.gov (United States)

    Abramovitz, A.

    2011-01-01

    This paper suggests a pedagogical approach to teaching the subject of behavioral modeling of switch-mode power electronics systems through simulation by general-purpose electronic circuit simulators. The methodology is oriented toward electrical engineering (EE) students at the undergraduate level, enrolled in courses such as "Power…

  15. Determine the feasibility of techniques for simulating coal dust explosions

    CSIR Research Space (South Africa)

    Kirsten, JT

    1994-07-01

    Full Text Available The primary objective of this work is to assess the feasibility of reliably simulating the coal dust explosion process taking place in the Kloppersbos tunnel with a computer model. Secondary objectives are to investigate the viability of simulating...

  16. Combined techniques for network measurements at accelerator facilities

    International Nuclear Information System (INIS)

    Pschorn, I.

    1999-01-01

    Usually network measurements at GSI (Gesellschaft für Schwerionenforschung) are carried out by employing the Leica tachymeter TC2002K etc. Due to time constraints and the fact that GSI possesses only one of these selected, high-precision total stations, it suddenly became necessary to consider employing a laser tracker as the major instrument for a reference network measurement. The idea was to compare the different instruments and to prove whether it is possible at all to carry out a precise network measurement using a laser tracker. In the end the SMX Tracker4500 combined with the Leica NA3000 was applied for network measurements at GSI, Darmstadt and at BESSY II, Berlin (both located in Germany). A few results are shown in the following chapters. A new technology in 3D metrology has emerged, and some ideas for applying these new tools in the field of accelerator measurements are given. Finally, aspects of calibration and of checking the performance of the employed high-precision instrument are pointed out in this paper. (author)

  17. Address autoconfiguration in wireless ad hoc networks : Protocols and techniques

    NARCIS (Netherlands)

    Cempaka Wangi, N.I.; Prasad, R.V.; Jacobsson, M.; Niemegeers, I.

    2008-01-01

    With the advent of smaller devices having higher computational capacity and wireless communication capabilities, the world is becoming completely networked. Although the mobile nature of these devices provides ubiquitous services, it also poses many challenges. In this article, we look in depth at

  18. Evaluating Automatic Pools Distribution Techniques for Self-Configured Networks

    NARCIS (Netherlands)

    Gomes, Reinaldo; de O. Schmidt, Ricardo

    Next Generation Networks (NGN) is one of the most important research topics of the last decade. The current Internet is not capable of supporting new user and operator demands, and a new structure will be necessary to meet them. In this context many solutions might be necessary: from architectural

  19. Knapsack--TOPSIS Technique for Vertical Handover in Heterogeneous Wireless Network.

    Directory of Open Access Journals (Sweden)

    E M Malathy

    Full Text Available In a heterogeneous wireless network, handover techniques are designed to facilitate anywhere/anytime service continuity for mobile users. Consistent best-possible access to a network with widely varying network characteristics requires seamless mobility management techniques. Hence, the vertical handover process imposes important technical challenges. Handover decisions are triggered for continuous connectivity of mobile terminals. However, bad network selection and overload conditions in the chosen network can cause fallout in the form of handover failure. In order to maintain the required Quality of Service during the handover process, decision algorithms should incorporate intelligent techniques. In this paper, a new and efficient vertical handover mechanism is implemented using a dynamic programming method from the operations research discipline. This dynamic programming approach, which is integrated with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method, provides the mobile user with the best handover decisions. Moreover, in this proposed handover algorithm a deterministic approach which divides the network into zones is incorporated into the network server in order to derive an optimal solution. The study revealed that this method achieves better performance and QoS support for users and greatly reduces handover failures when compared to the traditional TOPSIS method. The decision arrived at the zone gateway using this operations research analytical method (the dynamic programming knapsack approach together with the Technique for Order Preference by Similarity to Ideal Solution) yields remarkably better results in terms of network performance measures such as throughput and delay.
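
    The core TOPSIS ranking step referred to above is standard and can be sketched compactly; the candidate networks, attribute values and weights below are illustrative only, and the knapsack/zone logic of the paper is not reproduced.

```python
# Hedged sketch of the TOPSIS ranking step used for network selection.
import numpy as np

# rows: candidate networks; columns: bandwidth (Mb/s), delay (ms), cost, load (%)
X = np.array([[54.0, 110.0, 0.8, 60.0],
              [11.0,  60.0, 0.4, 30.0],
              [100.0, 150.0, 1.0, 70.0]])
weights = np.array([0.4, 0.3, 0.2, 0.1])
benefit = np.array([True, False, False, False])   # higher-is-better flags

R = X / np.linalg.norm(X, axis=0)                 # vector normalization
V = R * weights                                   # weighted normalized matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_best  = np.linalg.norm(V - ideal, axis=1)       # distance to ideal solution
d_worst = np.linalg.norm(V - anti, axis=1)        # distance to anti-ideal solution
closeness = d_worst / (d_best + d_worst)          # relative closeness to the ideal

print("handover target:", int(np.argmax(closeness)), closeness.round(3))
```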

  20. TopoGen: A Network Topology Generation Architecture with application to automating simulations of Software Defined Networks

    CERN Document Server

    Laurito, Andres; The ATLAS collaboration

    2017-01-01

    Simulation is an important tool to validate the performance impact of control decisions in Software Defined Networks (SDN). Yet, the manual modeling of complex topologies that may change often during a design process can be a tedious, error-prone task. We present TopoGen, a general purpose architecture and tool for systematic translation and generation of network topologies. TopoGen can be used to generate network simulation models automatically by querying information available at diverse sources, notably SDN controllers. The DEVS modeling and simulation framework facilitates a systematic translation of structured knowledge about a network topology into a formal modular and hierarchical coupling of preexisting or new models of network entities (physical or logical). TopoGen can be flexibly extended with new parsers and generators to grow its scope of applicability. This permits the design of arbitrary workflows of topology transformations. We tested TopoGen in a network engineering project for the ATLAS detector ...

  1. TopoGen: A Network Topology Generation Architecture with application to automating simulations of Software Defined Networks

    CERN Document Server

    Laurito, Andres; The ATLAS collaboration

    2018-01-01

    Simulation is an important tool to validate the performance impact of control decisions in Software Defined Networks (SDN). Yet, the manual modeling of complex topologies that may change often during a design process can be a tedious, error-prone task. We present TopoGen, a general purpose architecture and tool for systematic translation and generation of network topologies. TopoGen can be used to generate network simulation models automatically by querying information available at diverse sources, notably SDN controllers. The DEVS modeling and simulation framework facilitates a systematic translation of structured knowledge about a network topology into a formal modular and hierarchical coupling of preexisting or new models of network entities (physical or logical). TopoGen can be flexibly extended with new parsers and generators to grow its scope of applicability. This permits the design of arbitrary workflows of topology transformations. We tested TopoGen in a network engineering project for the ATLAS detector ...

  2. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  3. Imaging Simulations for the Korean VLBI Network (KVN)

    Directory of Open Access Journals (Sweden)

    Tae-Hyun Jung

    2005-03-01

    Full Text Available The Korean VLBI Network (KVN) will open a new field of research in astronomy, geodesy and earth science using three new 21 m radio telescopes. This will expand our ability to look at the Universe in the millimeter regime. The imaging capability of radio interferometry is highly dependent upon the antenna configuration, source size, declination and the shape of the target. In this paper, imaging simulations are carried out with the KVN system configuration. Five test images were used: a point source, multiple point sources, a uniform sphere with two different sizes compared to the synthesized beam of the KVN, and a Very Large Array (VLA) image of Cygnus A. The declination for the full-time simulation was set to +60 degrees and the observation time range was -6 to +6 hours around transit. Simulations were done at 22 GHz, one of the KVN observation frequencies. All simulations and data reductions were run with the Astronomical Image Processing System (AIPS) software package. As the KVN array has a resolution of about 6 mas (milliarcseconds) at 22 GHz, when the model source is approximately the beam size or smaller, the ratio of peak intensity over RMS is about 10000:1 and 5000:1. When the model source is larger than the beam size, this ratio falls to about 115:1 and 34:1. This is due to the lack of short baselines and the small number of antennas. We compare the coordinates of the model images with those of the cleaned images. The result shows almost perfect correspondence except in the case of the 12 mas uniform sphere. Therefore, the main astronomical targets for the KVN will be compact sources, and the KVN will have excellent astrometric performance for these sources.

  4. QoS Provisioning Techniques for Future Fiber-Wireless (FiWi) Access Networks

    Directory of Open Access Journals (Sweden)

    Martin Maier

    2010-04-01

    Full Text Available A plethora of enabling optical and wireless access-metro network technologies have been emerging that can be used to build future-proof bimodal fiber-wireless (FiWi) networks. Hybrid FiWi networks aim at providing wired and wireless quad-play services over the same infrastructure simultaneously and hold great promise to mitigate the digital divide and change the way we live and work by replacing commuting with teleworking. After overviewing enabling optical and wireless network technologies and their QoS provisioning techniques, we elaborate on enabling radio-over-fiber (RoF) and radio-and-fiber (R&F) technologies. We describe and investigate new QoS provisioning techniques for future FiWi networks, ranging from traffic class mapping, scheduling, and resource management to advanced aggregation techniques, congestion control, and layer-2 path selection algorithms.

  5. Hydrogen adsorption and desorption with 3D silicon nanotube-network and film-network structures: Monte Carlo simulations

    International Nuclear Information System (INIS)

    Li, Ming; Kang, Zhan; Huang, Xiaobo

    2015-01-01

    Hydrogen is clean, sustainable, and renewable, and is thus viewed as a promising energy carrier. However, its industrial utilization is greatly hampered by the lack of effective hydrogen storage and release methods. Carbon nanotubes (CNTs) were viewed as one of the potential hydrogen containers, but it has been proved that pure CNTs cannot attain the desired target capacity of hydrogen storage. In this paper, we present a numerical study on the material-driven and structure-driven hydrogen adsorption of 3D silicon networks and propose a deformation-driven hydrogen desorption approach based on molecular simulations. Two types of 3D nanostructures, silicon nanotube-network (Si-NN) and silicon film-network (Si-FN), are first investigated in terms of hydrogen adsorption and desorption capacity with grand canonical Monte Carlo simulations. It is revealed that the hydrogen storage capacity is determined by the lithium doping ratio and geometrical parameters, and the maximum hydrogen uptake can be achieved by a 3D nanostructure with an optimal configuration and doping ratio obtained through a design optimization technique. For hydrogen desorption, a mechanical-deformation-driven hydrogen-release approach is proposed. Compared with temperature/pressure change-induced hydrogen desorption methods, the proposed approach is so effective that nearly complete hydrogen desorption can be achieved by Si-FN nanostructures under sufficient compression but without structural failure. The approach is also reversible, since the mechanical deformation in Si-FN nanostructures can be elastically recovered, which suggests good reusability. This study may shed light on the mechanism of hydrogen adsorption and desorption and thus provide useful guidance toward the engineering design of microstructural hydrogen (or other gas) adsorption materials.

  6. A Simulation of AI Programming Techniques in BASIC.

    Science.gov (United States)

    Mandell, Alan

    1986-01-01

    Explains the functions of and the techniques employed in expert systems. Offers the program "The Periodic Table Expert," as a model for using artificial intelligence techniques in BASIC. Includes the program listing and directions for its use on: Tandy 1000, 1200, and 2000; IBM PC; PC Jr; TRS-80; and Apple computers. (ML)

  7. Computer simulation of the Blumlein pulse forming network

    International Nuclear Information System (INIS)

    Edwards, C.B.

    1981-03-01

    A computer simulation of the Blumlein pulse-forming network is described. The model is able to treat the case of time varying loads, non-zero conductor resistance, and switch closure effects as exhibited by real systems employing non-ohmic loads such as field-emission vacuum diodes in which the impedance is strongly time and voltage dependent. The application of the code to various experimental arrangements is discussed, with particular reference to the prediction of the behaviour of the output circuit of 'ELF', the electron beam generator in operation at the Rutherford Laboratory. The output from the code is compared directly with experimentally obtained voltage waveforms applied to the 'ELF' diode. (author)

  8. 360-degree videos: a new visualization technique for astrophysical simulations

    Science.gov (United States)

    Russell, Christopher M. P.

    2017-11-01

    360-degree videos are a new type of movie that renders over all 4π steradian. Video sharing sites such as YouTube now allow this unique content to be shared via virtual reality (VR) goggles, hand-held smartphones/tablets, and computers. Creating 360° videos from astrophysical simulations is not only a new way to view these simulations as you are immersed in them, but is also a way to create engaging content for outreach to the public. We present what we believe is the first 360° video of an astrophysical simulation: a hydrodynamics calculation of the central parsec of the Galactic centre. We also describe how to create such movies, and briefly comment on what new science can be extracted from astrophysical simulations using 360° videos.

  9. Monte Carlo simulation of tomography techniques using the platform Gate

    International Nuclear Information System (INIS)

    Barbouchi, Asma

    2007-01-01

    Simulations play a key role in functional imaging, with applications ranging from scanner design to scatter correction and protocol optimisation. GATE (Geant4 Application for Tomographic Emission) is a platform for Monte Carlo simulation. It is based on Geant4 to generate and track particles and to model geometry and physics processes. Explicit modelling of time includes detector motion, time of flight and tracer kinetics. Interfaces to voxellised models and image reconstruction packages improve the integration of GATE in the global modelling cycle. In this work Monte Carlo simulations are used to understand and optimise the gamma camera's performance. We study the effect of the distance between the source and the collimator, the diameter of the holes and the thickness of the collimator on the spatial resolution, energy resolution and efficiency of the gamma camera. We also study the reduction of simulation time and implement a model of the left ventricle in GATE. (Author). 7 refs

  10. Swarm intelligence techniques for optimization and management tasks in sensor networks

    OpenAIRE

    Hernández Pibernat, Hugo

    2012-01-01

    Extraordinary doctoral award, 2011-2012 academic year, ICT Engineering area. The main contributions of this thesis are located in the domain of wireless sensor networks. More in detail, we introduce energy-aware algorithms and protocols in the context of the following topics: self-synchronized duty-cycling in networks with energy harvesting capabilities, distributed graph coloring and minimum energy broadcasting with realistic antennas. In the following, we review the research conducted...

  11. Validation techniques of agent based modelling for geospatial simulations

    OpenAIRE

    Darvishi, M.; Ahmadi, G.

    2014-01-01

    One of the most interesting aspects of modelling and simulation study is to describe real-world phenomena that have specific properties, especially those that are at large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases it is impossible. Therefore, miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understand the world. Agent...

  12. Evaluation of Techniques to Detect Significant Network Performance Problems using End-to-End Active Network Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Cottrell, R.Les; Logg, Connie; Chhaparia, Mahesh; /SLAC; Grigoriev, Maxim; /Fermilab; Haro, Felipe; /Chile U., Catolica; Nazir, Fawad; /NUST, Rawalpindi; Sandford, Mark

    2006-01-25

    End-to-end fault and performance problem detection in wide area production networks is becoming increasingly hard as the complexity of the paths, the diversity of the performance, and the dependency on the network increase. Several monitoring infrastructures have been built to monitor different network metrics and collect monitoring information from thousands of hosts around the globe. Typically there are hundreds to thousands of time-series plots of network metrics which need to be looked at to identify network performance problems or anomalous variations in the traffic. Furthermore, most commercial products rely on a comparison with user-configured static thresholds and often require access to SNMP-MIB information, to which a typical end-user does not usually have access. In this paper we propose new techniques to detect network performance problems proactively in close to real time, without relying on static thresholds and SNMP-MIB information. We describe and compare the use of several different algorithms that we have implemented to detect persistent network problems using anomalous variation analysis in real end-to-end Internet performance measurements. We also provide methods and/or guidance on how to set the user-settable parameters. The measurements are based on active probes running on 40 production network paths with bottlenecks varying from 0.5 Mbit/s to 1000 Mbit/s. For well-behaved data (no missed measurements and no very large outliers) with small seasonal changes most algorithms identify similar events. We compare the algorithms' robustness with respect to false positives and missed events, especially when there are large seasonal effects in the data. Our proposed techniques cover a wide variety of network paths and traffic patterns. We also discuss the applicability of the algorithms in terms of their intuitiveness, their speed of execution as implemented, and areas of applicability. Our encouraging results compare and evaluate the accuracy of our
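
    As a hedged sketch of the kind of threshold-free detector such studies compare, the code below flags a persistent step change when a short recent window deviates from a trailing baseline by more than k standard deviations. The window lengths, the value of k and the synthetic trace are assumptions, not the authors' algorithms or data.

```python
# Hedged sketch of a simple step-change detector for network performance series.
import numpy as np

def detect_step_changes(series, history=50, recent=10, k=3.0):
    """Return indices where the recent-window mean leaves the baseline band."""
    alarms = []
    x = np.asarray(series, dtype=float)
    for t in range(history + recent, len(x)):
        baseline = x[t - history - recent:t - recent]   # trailing baseline window
        window = x[t - recent:t]                        # most recent measurements
        mu, sigma = baseline.mean(), baseline.std(ddof=1)
        if sigma == 0.0:
            continue
        if abs(window.mean() - mu) > k * sigma:
            alarms.append(t)
    return alarms

# Synthetic throughput trace with a drop at sample 300
rng = np.random.default_rng(2)
trace = np.concatenate([rng.normal(900, 20, 300), rng.normal(600, 20, 200)])
print("first alarm at sample:", detect_step_changes(trace)[0])
```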

  13. Sensorless Speed/Torque Control of DC Machine Using Artificial Neural Network Technique

    Directory of Open Access Journals (Sweden)

    Rakan Kh. Antar

    2017-12-01

    Full Text Available In this paper, an Artificial Neural Network (ANN) technique is implemented to improve the speed and torque control of a separately excited DC machine drive. The ANN-based sensorless scheme estimates speed and torque adaptively. The proposed controller is designed to estimate rotor speed and mechanical load torque as a Model Reference Adaptive System (MRAS) method for the DC machine. The DC drive system consists of a four-quadrant DC/DC chopper with MOSFET transistors, an ANN, logic gates and routing circuits. The DC drive circuit is designed, evaluated and modeled in Matlab/Simulink in the forward and reverse operation modes as a motor and generator, respectively. The DC drive system is simulated at different speed values (±1200 rpm) and mechanical torques (±7 N.m) in steady-state and dynamic conditions. The simulation results illustrate the effectiveness of the proposed controller without speed or torque sensors.

  14. Projecting impacts of climate change on water availability using artificial neural network techniques

    Science.gov (United States)

    Swain, Eric D.; Gomez-Fragoso, Julieta; Torres-Gonzalez, Sigfredo

    2017-01-01

    Lago Loíza reservoir in east-central Puerto Rico is one of the primary sources of public water supply for the San Juan metropolitan area. To evaluate and predict the Lago Loíza water budget, an artificial neural network (ANN) technique is trained to predict river inflows. A method is developed to combine ANN-predicted daily flows with ANN-predicted 30-day cumulative flows to improve flow estimates. The ANN application trains well for representing 2007–2012 and the drier 1994–1997 periods. Rainfall data downscaled from global circulation model (GCM) simulations are used to predict 2050–2055 conditions. Evapotranspiration is estimated with the Hargreaves equation using minimum and maximum air temperatures from the downscaled GCM data. These simulated 2050–2055 river flows are input to a water budget formulation for the Lago Loíza reservoir for comparison with 2007–2012. The ANN scenarios require far less computational effort than a numerical model application, yet produce results with sufficient accuracy to evaluate and compare hydrologic scenarios. This hydrologic tool will be useful for future evaluations of the Lago Loíza reservoir and water supply to the San Juan metropolitan area.
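
    The abstract does not describe exactly how the daily and 30-day cumulative ANN predictions are combined; one plausible combination, shown below purely as an assumption, is to rescale each 30-day block of daily predictions so that it sums to the predicted cumulative flow.

```python
# Hedged sketch: reconcile ANN daily-flow predictions with an ANN 30-day
# cumulative prediction by block-wise rescaling (an assumption, not necessarily
# the method used in the study).
import numpy as np

def combine_daily_with_cumulative(daily_pred, cumulative_pred, block=30):
    """daily_pred: (n*block,) daily flows; cumulative_pred: (n,) block totals."""
    daily = np.asarray(daily_pred, dtype=float).reshape(-1, block)
    totals = daily.sum(axis=1)
    safe_totals = np.where(totals > 0, totals, 1.0)      # avoid division by zero
    scale = np.asarray(cumulative_pred, dtype=float) / safe_totals
    return (daily * scale[:, None]).ravel()

daily = np.full(60, 10.0)                  # two 30-day blocks of ANN daily flows
cumulative = np.array([360.0, 270.0])      # ANN 30-day cumulative predictions
adjusted = combine_daily_with_cumulative(daily, cumulative)
print(adjusted[:3], adjusted[30:33])       # -> [12. 12. 12.] [9. 9. 9.]
```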

  15. Enamel dose calculation by electron paramagnetic resonance spectral simulation technique

    International Nuclear Information System (INIS)

    Dong Guofu; Cong Jianbo; Guo Linchao; Ning Jing; Xian Hong; Wang Changzhen; Wu Ke

    2011-01-01

    Objective: To optimize enamel electron paramagnetic resonance (EPR) spectral processing by using an EPR spectral simulation method, in order to improve the accuracy of enamel EPR dosimetry and reduce artificial error. Methods: Multi-component superimposed EPR powder-spectrum simulation software was developed to simulate EPR spectrum models of the background signal (BS) and the radiation-induced signal (RS) of irradiated enamel, respectively. The RS was extracted from the multi-component superimposed spectrum of irradiated enamel and its amplitude was calculated. The dose-response curve was then established for calculating the doses of a group of enamel samples. The estimated doses were compared with those calculated by the traditional method. Results: The BS was simulated as a powder spectrum of Gaussian line shape with the following spectrum parameters: g=2.0035 and Hpp=0.65-1.1 mT. The RS was also simulated as a powder spectrum but with axi-symmetric spectrum characteristics. The spectrum parameters of the RS were: g⊥=2.0018, g∥=1.9965, Hpp=0.335-0.4 mT. The amplitude of the RS had a linear response to radiation dose, with the regression equation y=240.74x+76724 (R²=0.9947). The expectation of the relative error of dose estimation was 0.13. Conclusions: The EPR simulation method has somewhat improved the accuracy and reliability of enamel EPR dose estimation. (authors)
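
    Given the reported calibration line y = 240.74x + 76724 (with y the RS amplitude and x the dose, as the abstract's wording suggests), a measured amplitude can be converted back to a dose by inverting the line. The amplitude value in the example is illustrative only, and no dose unit is claimed since the abstract does not state one.

```python
# Worked inversion of the reported dose-response line y = 240.74*x + 76724
# (y: RS amplitude in arbitrary units, x: dose). Example amplitude is illustrative.
SLOPE = 240.74
INTERCEPT = 76724.0

def dose_from_amplitude(amplitude: float) -> float:
    """Invert the linear dose-response curve reported in the abstract."""
    return (amplitude - INTERCEPT) / SLOPE

print(round(dose_from_amplitude(196_000.0), 1))   # back-calculated dose
```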

  16. Redesigning rain gauges network in Johor using geostatistics and simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com [Centre of Preparatory and General Studies, TATI University College, 24000 Kemaman, Terengganu, Malaysia and Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Yusof, Fadhilah, E-mail: fadhilahy@utm.my [Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Daud, Zalina Mohd, E-mail: zalina@ic.utm.my [UTM Razak School of Engineering and Advanced Technology, Universiti Teknologi Malaysia, UTM KL, 54100 Kuala Lumpur (Malaysia); Yusop, Zulkifli, E-mail: zulyusop@utm.my [Institute of Environmental and Water Resource Management (IPASA), Faculty of Civil Engineering, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Kasno, Mohammad Afif, E-mail: mafifkasno@gmail.com [Malaysia - Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, UTM KL, 54100 Kuala Lumpur (Malaysia)

    2015-02-03

    Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without a clear scientific basis. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) of 1975 until 2008. This study used the combination of a geostatistics method (variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistics method (variance-reduction method) and simulated annealing is successful in the development of the new optimum rain gauge system.

  17. Redesigning rain gauges network in Johor using geostatistics and simulated annealing

    International Nuclear Information System (INIS)

    Aziz, Mohd Khairul Bazli Mohd; Yusof, Fadhilah; Daud, Zalina Mohd; Yusop, Zulkifli; Kasno, Mohammad Afif

    2015-01-01

    Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without a clear scientific basis. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) of 1975 until 2008. This study used the combination of a geostatistics method (variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistics method (variance-reduction method) and simulated annealing is successful in the development of the new optimum rain gauge system

  18. Redesigning rain gauges network in Johor using geostatistics and simulated annealing

    Science.gov (United States)

    Aziz, Mohd Khairul Bazli Mohd; Yusof, Fadhilah; Daud, Zalina Mohd; Yusop, Zulkifli; Kasno, Mohammad Afif

    2015-02-01

    Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without a clear scientific basis. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) of 1975 until 2008. This study used the combination of a geostatistics method (variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistics method (variance-reduction method) and simulated annealing is successful in the development of the new optimum rain gauge system.
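
    A hedged sketch of the simulated-annealing search used in these records is given below. The true objective is the geostatistical (kriging, variance-reduction) estimation variance; here a simplified proxy, the mean squared distance from a grid of prediction points to the nearest gauge, stands in for it, so only the annealing loop itself is illustrated.

```python
# Hedged sketch of simulated annealing for gauge relocation with a simplified
# proxy objective (not the actual kriging variance-reduction criterion).
import numpy as np

rng = np.random.default_rng(3)
grid = np.array([(i, j) for i in range(20) for j in range(20)], dtype=float)

def objective(gauges):
    d2 = ((grid[:, None, :] - gauges[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()          # proxy for network estimation variance

def anneal(n_gauges=10, steps=3000, t0=5.0, cooling=0.999):
    gauges = rng.uniform(0, 20, size=(n_gauges, 2))
    best, best_cost = gauges.copy(), objective(gauges)
    cost, temp = best_cost, t0
    for _ in range(steps):
        cand = gauges.copy()
        k = rng.integers(n_gauges)
        cand[k] = np.clip(cand[k] + rng.normal(0, 1.0, 2), 0, 20)   # move one gauge
        cand_cost = objective(cand)
        # Metropolis acceptance: always accept improvements, sometimes accept worse
        if cand_cost < cost or rng.random() < np.exp((cost - cand_cost) / temp):
            gauges, cost = cand, cand_cost
            if cost < best_cost:
                best, best_cost = gauges.copy(), cost
        temp *= cooling
    return best, best_cost

locations, variance_proxy = anneal()
print("optimized proxy variance:", round(variance_proxy, 3))
```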

  19. Review Of Prevention Techniques For Denial Of Service DOS Attacks In Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Poonam Rolla

    2015-08-01

    Full Text Available Wireless Sensor Networks comprise several tiny sensor nodes which are densely deployed over the region to monitor the environmental conditions. These sensor nodes have certain design issues, of which security is the predominant factor as it affects the whole lifetime of the network. A distributed denial of service (DDoS) attack floods the sensor network with unnecessary packets. A review of DDoS attacks and their prevention techniques is given in this paper.

  20. Model and simulation of Krause model in dynamic open network

    Science.gov (United States)

    Zhu, Meixia; Xie, Guangqiang

    2017-08-01

    Constructing models of opinion evolution is an effective way to reveal how group consensus forms. This study is based on the modeling paradigm of the HK (Hegselmann-Krause) model. The paper analyzes the evolution of multi-agent opinions in dynamic open networks with member mobility. The simulation results show that when the number of agents is constant, the interval of the initial opinion distribution affects the number of final opinions: the wider the spread of initial opinions, the more opinion clusters eventually form. The trust threshold has a decisive effect on the number of opinions, and there is a negative correlation between the trust threshold and the number of opinion clusters. The higher the connectivity of the initial activity group, the more easily subjective opinions converge rapidly during the evolution. A more open network is more conducive to unity of opinion; increasing or reducing the number of agents does not affect the consistency of the group opinion, but it is not conducive to stability.
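
    For reference, the bounded-confidence update at the heart of the Hegselmann-Krause model can be sketched as below; the "open network" aspect is reduced here to an optional random replacement of agents, which is an illustrative assumption rather than the paper's mobility model.

```python
# Minimal sketch of the Hegselmann-Krause bounded-confidence opinion update.
import numpy as np

rng = np.random.default_rng(4)

def hk_step(opinions, epsilon):
    """Each agent averages the opinions within its trust threshold epsilon."""
    diffs = np.abs(opinions[:, None] - opinions[None, :])
    neighbours = diffs <= epsilon
    return (neighbours @ opinions) / neighbours.sum(axis=1)

def simulate(n=100, epsilon=0.15, steps=50, churn=0.0):
    x = rng.uniform(0, 1, n)
    for _ in range(steps):
        x = hk_step(x, epsilon)
        if churn > 0:                       # crude "open network": replace agents
            leave = rng.random(n) < churn
            x[leave] = rng.uniform(0, 1, leave.sum())
    return x

final = simulate()
clusters = len(np.unique(final.round(3)))
print("opinion clusters after convergence:", clusters)
```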

  1. Simulation of an image network in a medical image information system

    International Nuclear Information System (INIS)

    Massar, A.D.A.; De Valk, J.P.J.; Reijns, G.L.; Bakker, A.R.

    1985-01-01

    The desirability of an integrated (digital) communication system for medical images is widely accepted. In the USA and in Europe several experimental projects are in progress to realize (a part of) such a system. Among these is the IMAGIS project in the Netherlands. From the conclusions of the preliminary studies performed, some requirements can be formulated that such a system should meet in order to be accepted by its users. For example, the storage resolution of the images should match the maximum resolution of the presently acquired digital images. This determines the amount of data and therefore the storage requirements. Further, the desired images should be there when needed. This time constraint determines the speed requirements to be imposed on the system. As compared to current standards, very large storage capacities and very fast communication media are needed to meet these requirements. By employing caching techniques and suitable data compression schemes for the storage, and by carefully choosing the network protocols, the bare capacity demands can be alleviated. A communication network is needed to make the imaging system available over a larger area. As the network is very likely to become a major bottleneck for system performance, the effects of varying various attributes have to be carefully studied and analysed. After interesting (although preliminary) results had been obtained using a simulation model for a layered storage structure, it was decided to apply simulation also to this problem. Effects of network topology, access protocols and buffering strategies will be tested. Changes in performance resulting from changes in various network parameters will be studied. Results of this study in its present state are presented

  2. Dynamical properties of fractal networks: Scaling, numerical simulations, and physical realizations

    International Nuclear Information System (INIS)

    Nakayama, T.; Yakubo, K.; Orbach, R.L.

    1994-01-01

    This article describes the advances that have been made over the past ten years on the problem of fracton excitations in fractal structures. The relevant systems to this subject are so numerous that focus is limited to a specific structure, the percolating network. Recent progress has followed three directions: scaling, numerical simulations, and experiment. In a happy coincidence, large-scale computations, especially those involving array processors, have become possible in recent years. Experimental techniques such as light- and neutron-scattering experiments have also been developed. Together, they form the basis for a review article useful as a guide to understanding these developments and for charting future research directions. In addition, new numerical simulation results for the dynamical properties of diluted antiferromagnets are presented and interpreted in terms of scaling arguments. The authors hope this article will bring the major advances and future issues facing this field into clearer focus, and will stimulate further research on the dynamical properties of random systems

  3. Auditing information structures in organizations: A review of data collection techniques for network analysis

    NARCIS (Netherlands)

    Koning, K.H.; de Jong, Menno D.T.

    2005-01-01

    Network analysis is one of the current techniques for investigating organizational communication. Despite the amount of how-to literature about using network analysis to assess information flows and relationships in organizations, little is known about the methodological strengths and weaknesses of

  4. Social Learning Network Analysis Model to Identify Learning Patterns Using Ontology Clustering Techniques and Meaningful Learning

    Science.gov (United States)

    Firdausiah Mansur, Andi Besse; Yusof, Norazah

    2013-01-01

    Clustering on social learning networks has still not been explored widely, especially when the network focuses on an e-learning system. Conventional methods are not really suitable for e-learning data. SNA requires content analysis, which involves human intervention and needs to be carried out manually. Some of the previous clustering techniques need…

  5. Novel anti-jamming technique for OCDMA network through FWM in SOA based wavelength converter

    Science.gov (United States)

    Jyoti, Vishav; Kaler, R. S.

    2013-06-01

    In this paper, we propose a novel anti-jamming technique for optical code division multiple access (OCDMA) networks through four wave mixing (FWM) in a semiconductor optical amplifier (SOA) based wavelength converter. An OCDMA signal can easily be jammed by a high-power jamming signal. It is shown that wavelength conversion through four wave mixing in an SOA has improved jamming resistance. It is observed that the jammer has no effect on the OCDMA network even at high jamming powers when the proposed technique is used.

  6. A signal combining technique based on channel shortening for cooperative sensor networks

    KAUST Repository

    Hussain, Syed Imtiaz; Alouini, Mohamed-Slim; Hasna, Mazen Omar

    2010-01-01

    The cooperative relaying process needs proper coordination among the communicating and the relaying nodes. This coordination and the required capabilities may not be available in some wireless systems, e.g. wireless sensor networks where the nodes are equipped with very basic communication hardware. In this paper, we consider a scenario where the source node transmits its signal to the destination through multiple relays in an uncoordinated fashion. The destination can capture the multiple copies of the transmitted signal through a Rake receiver. We analyze a situation where the number of Rake fingers N is less than that of the relaying nodes L. In this case, the receiver can combine N strongest signals out of L. The remaining signals will be lost and act as interference to the desired signal components. To tackle this problem, we develop a novel signal combining technique based on channel shortening. This technique proposes a processing block before the Rake reception which compresses the energy of L signal components over N branches while keeping the noise level at its minimum. The proposed scheme saves the system resources and makes the received signal compatible to the available hardware. Simulation results show that it outperforms the selection combining scheme. ©2010 IEEE.

  7. A signal combining technique based on channel shortening for cooperative sensor networks

    KAUST Repository

    Hussain, Syed Imtiaz

    2010-06-01

    The cooperative relaying process needs proper coordination among the communicating and the relaying nodes. This coordination and the required capabilities may not be available in some wireless systems, e.g. wireless sensor networks where the nodes are equipped with very basic communication hardware. In this paper, we consider a scenario where the source node transmits its signal to the destination through multiple relays in an uncoordinated fashion. The destination can capture the multiple copies of the transmitted signal through a Rake receiver. We analyze a situation where the number of Rake fingers N is less than that of the relaying nodes L. In this case, the receiver can combine N strongest signals out of L. The remaining signals will be lost and act as interference to the desired signal components. To tackle this problem, we develop a novel signal combining technique based on channel shortening. This technique proposes a processing block before the Rake reception which compresses the energy of L signal components over N branches while keeping the noise level at its minimum. The proposed scheme saves the system resources and makes the received signal compatible to the available hardware. Simulation results show that it outperforms the selection combining scheme. ©2010 IEEE.
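
    As a hedged sketch of the channel-shortening idea (not necessarily the authors' processing block), the code below designs a maximum-shortening-SNR style filter: it maximizes the energy of the effective channel inside an N-tap window, matching the N available Rake fingers, relative to the energy outside it, via a generalized eigenvalue problem. The channel taps, filter length and delay are illustrative.

```python
# Hedged sketch of a generic MSSNR-style channel-shortening filter design.
import numpy as np
from scipy.linalg import eigh, toeplitz

def shortening_filter(h, filt_len, window_len, delay):
    """Return w maximizing in-window over out-of-window energy of conv(h, w)."""
    h = np.asarray(h, dtype=float)
    full_len = len(h) + filt_len - 1
    # Convolution matrix H such that conv(h, w) = H @ w
    H = toeplitz(np.r_[h, np.zeros(filt_len - 1)],
                 np.r_[h[0], np.zeros(filt_len - 1)])
    inside = np.zeros(full_len, dtype=bool)
    inside[delay:delay + window_len] = True
    B = H[inside].T @ H[inside]                               # in-window energy
    A = H[~inside].T @ H[~inside] + 1e-9 * np.eye(filt_len)   # out-of-window energy
    vals, vecs = eigh(B, A)                                   # generalized eigenproblem
    return vecs[:, -1]                                        # largest-ratio eigenvector

# Example: a 6-path channel shortened so most energy fits in N = 3 Rake fingers
h = np.array([0.2, 1.0, 0.7, 0.5, 0.3, 0.2])
w = shortening_filter(h, filt_len=8, window_len=3, delay=2)
c = np.convolve(h, w)
ratio = (c[2:5] ** 2).sum() / (c ** 2).sum()
print("fraction of effective-channel energy in the 3-tap window:", round(ratio, 3))
```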

  8. Combining neural networks and signed particles to simulate quantum systems more efficiently

    Science.gov (United States)

    Sellier, Jean Michel

    2018-04-01

    Recently a new formulation of quantum mechanics has been suggested which describes systems by means of ensembles of classical particles provided with a sign. This novel approach mainly consists of two steps: the computation of the Wigner kernel, a multi-dimensional function describing the effects of the potential over the system, and the field-less evolution of the particles, which eventually create new signed particles in the process. Although this method has proved to be extremely advantageous in terms of computational resources - as a matter of fact, it is able to simulate many-body systems in a time-dependent fashion on relatively small machines - the Wigner kernel can represent the bottleneck of simulations of certain systems. Moreover, storing the kernel can be another issue, as the amount of memory needed is cursed by the dimensionality of the system. In this work, we introduce a new technique which drastically reduces the computation time and memory requirements for simulating time-dependent quantum systems, based on the use of an appropriately tailored neural network combined with the signed particle formalism. In particular, the suggested neural network is able to compute the Wigner kernel efficiently and reliably without any training, as its entire set of weights and biases is specified by analytical formulas. As a consequence, the amount of memory for quantum simulations radically drops, since the kernel does not need to be stored anymore; it is now computed by the neural network itself, only on the cells of the (discretized) phase-space which are occupied by particles. As is clearly shown in the final part of this paper, not only does this novel approach drastically reduce the computational time, it also remains accurate. The author believes this work opens the way towards the effective design of quantum devices, with incredible practical implications.

  9. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    Energy Technology Data Exchange (ETDEWEB)

    Durlofsky, Louis J.

    2000-08-28

    This project targets the development of (1) advanced reservoir simulation techniques for modeling non-conventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and well index (for use in simulation models), including the effects of wellbore flow; and (3) accurate approaches to account for heterogeneity in the near-well region.

  10. Simulation technologies in networking and communications selecting the best tool for the test

    CERN Document Server

    Pathan, Al-Sakib Khan; Khan, Shafiullah

    2014-01-01

    Simulation is a widely used mechanism for validating the theoretical models of networking and communication systems. Although the claims made based on simulations are considered to be reliable, how reliable they really are is best determined with real-world implementation trials.Simulation Technologies in Networking and Communications: Selecting the Best Tool for the Test addresses the spectrum of issues regarding the different mechanisms related to simulation technologies in networking and communications fields. Focusing on the practice of simulation testing instead of the theory, it presents

  11. OpenFlow Switching Performance using Network Simulator - 3

    OpenAIRE

    Sriram Prashanth, Naguru

    2016-01-01

    Context. In today's innovative networking world, there is a rapid expansion of switches and protocols, which are used to cope with increasing customer requirements on the network. To meet increasing demands for higher bandwidth and lower latency, new network paths are introduced. Reducing the network load in present switching networks requires the development of new, innovative switching. These required results can be achieved by Software Defined Networking or Trad...

  12. Simulation techniques for determining reliability and availability of technical systems

    International Nuclear Information System (INIS)

    Lindauer, E.

    1975-01-01

    The system is described in the form of a fault tree with components representing partial functions of the system and connections which reproduce the logical structure of the system. Both have the states intact or failed; they are defined here as in the programme FESIVAR of the IRS. For the simulation of components according to the given probabilities, pseudo-random numbers are applied; these are numbers whose sequence is determined by the generating algorithm, but which for the given purpose sufficiently exhibit the behaviour of randomly successive numbers. This method of simulation is compared with deterministic methods. (HP/LH) [de
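
    The simulation idea described above can be sketched as follows: draw random intact/failed states for the components from their failure probabilities and evaluate the fault-tree logic to estimate the probability of the top event. The two-pump/valve tree and the probabilities are purely illustrative; this is not the FESIVAR model referred to in the abstract.

```python
# Hedged sketch of Monte Carlo fault-tree evaluation with pseudo-random numbers.
import random

failure_prob = {"pump_a": 0.01, "pump_b": 0.01, "valve": 0.005}

def system_failed(state):
    """Top event: valve failed OR both pumps failed (example logic only)."""
    return state["valve"] or (state["pump_a"] and state["pump_b"])

def estimate_unavailability(trials=200_000, seed=5):
    rng = random.Random(seed)              # pseudo-random numbers, as in the text
    failures = 0
    for _ in range(trials):
        state = {c: rng.random() < p for c, p in failure_prob.items()}
        failures += system_failed(state)
    return failures / trials

print("estimated system unavailability:", estimate_unavailability())
```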

  13. Development of a technique for inflight jet noise simulation. I, II

    Science.gov (United States)

    Clapper, W. S.; Stringas, E. J.; Mani, R.; Banerian, G.

    1976-01-01

    Several possible noise simulation techniques were evaluated, including closed circuit wind tunnels, free jets, rocket sleds and high speed trains. The free jet technique was selected for demonstration and verification. The first paper describes the selection and development of the technique and presents results for simulation and in-flight tests of the Learjet, F106, and Bertin Aerotrain. The second presents a theoretical study relating the two sets of noise signatures. It is concluded that the free jet simulation technique provides a satisfactory assessment of in-flight noise.

  14. The Virtual Brain: a simulator of primate brain network dynamics.

    Science.gov (United States)

    Sanz Leon, Paula; Knock, Stuart A; Woodman, M Marmaduke; Domide, Lia; Mersmann, Jochen; McIntosh, Anthony R; Jirsa, Viktor

    2013-01-01

    We present The Virtual Brain (TVB), a neuroinformatics platform for full brain network simulations using biologically realistic connectivity. This simulation environment enables the model-based inference of neurophysiological mechanisms across different brain scales that underlie the generation of macroscopic neuroimaging signals including functional MRI (fMRI), EEG and MEG. Researchers from different backgrounds can benefit from an integrative software platform including a supporting framework for data management (generation, organization, storage, integration and sharing) and a simulation core written in Python. TVB allows the reproduction and evaluation of personalized configurations of the brain by using individual subject data. This personalization facilitates an exploration of the consequences of pathological changes in the system, permitting to investigate potential ways to counteract such unfavorable processes. The architecture of TVB supports interaction with MATLAB packages, for example, the well known Brain Connectivity Toolbox. TVB can be used in a client-server configuration, such that it can be remotely accessed through the Internet thanks to its web-based HTML5, JS, and WebGL graphical user interface. TVB is also accessible as a standalone cross-platform Python library and application, and users can interact with the scientific core through the scripting interface IDLE, enabling easy modeling, development and debugging of the scientific kernel. This second interface makes TVB extensible by combining it with other libraries and modules developed by the Python scientific community. In this article, we describe the theoretical background and foundations that led to the development of TVB, the architecture and features of its major software components as well as potential neuroscience applications.

  15. The Virtual Brain: a simulator of primate brain network dynamics

    Science.gov (United States)

    Sanz Leon, Paula; Knock, Stuart A.; Woodman, M. Marmaduke; Domide, Lia; Mersmann, Jochen; McIntosh, Anthony R.; Jirsa, Viktor

    2013-01-01

    We present The Virtual Brain (TVB), a neuroinformatics platform for full brain network simulations using biologically realistic connectivity. This simulation environment enables the model-based inference of neurophysiological mechanisms across different brain scales that underlie the generation of macroscopic neuroimaging signals including functional MRI (fMRI), EEG and MEG. Researchers from different backgrounds can benefit from an integrative software platform including a supporting framework for data management (generation, organization, storage, integration and sharing) and a simulation core written in Python. TVB allows the reproduction and evaluation of personalized configurations of the brain by using individual subject data. This personalization facilitates an exploration of the consequences of pathological changes in the system, permitting to investigate potential ways to counteract such unfavorable processes. The architecture of TVB supports interaction with MATLAB packages, for example, the well known Brain Connectivity Toolbox. TVB can be used in a client-server configuration, such that it can be remotely accessed through the Internet thanks to its web-based HTML5, JS, and WebGL graphical user interface. TVB is also accessible as a standalone cross-platform Python library and application, and users can interact with the scientific core through the scripting interface IDLE, enabling easy modeling, development and debugging of the scientific kernel. This second interface makes TVB extensible by combining it with other libraries and modules developed by the Python scientific community. In this article, we describe the theoretical background and foundations that led to the development of TVB, the architecture and features of its major software components as well as potential neuroscience applications. PMID:23781198

  16. Classification of remotely sensed data using OCR-inspired neural network techniques. [Optical Character Recognition]

    Science.gov (United States)

    Kiang, Richard K.

    1992-01-01

    Neural networks have been applied to classifications of remotely sensed data with some success. To improve the performance of this approach, an examination was made of how neural networks are applied to the optical character recognition (OCR) of handwritten digits and letters. A three-layer, feedforward network, along with techniques adopted from OCR, was used to classify Landsat-4 Thematic Mapper data. Good results were obtained. To overcome the difficulties that are characteristic of remote sensing applications and to attain significant improvements in classification accuracy, a special network architecture may be required.

  17. Future planning: default network activity couples with frontoparietal control network and reward-processing regions during process and outcome simulations.

    Science.gov (United States)

    Gerlach, Kathy D; Spreng, R Nathan; Madore, Kevin P; Schacter, Daniel L

    2014-12-01

    We spend much of our daily lives imagining how we can reach future goals and what will happen when we attain them. Despite the prevalence of such goal-directed simulations, neuroimaging studies on planning have mainly focused on executive processes in the frontal lobe. This experiment examined the neural basis of process simulations, during which participants imagined themselves going through steps toward attaining a goal, and outcome simulations, during which participants imagined events they associated with achieving a goal. In the scanner, participants engaged in these simulation tasks and an odd/even control task. We hypothesized that process simulations would recruit default and frontoparietal control network regions, and that outcome simulations, which allow us to anticipate the affective consequences of achieving goals, would recruit default and reward-processing regions. Our analysis of brain activity that covaried with process and outcome simulations confirmed these hypotheses. A functional connectivity analysis with posterior cingulate, dorsolateral prefrontal cortex and anterior inferior parietal lobule seeds showed that their activity was correlated during process simulations and associated with a distributed network of default and frontoparietal control network regions. During outcome simulations, medial prefrontal cortex and amygdala seeds covaried together and formed a functional network with default and reward-processing regions. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  18. Multivariate correlation analysis technique based on euclidean distance map for network traffic characterization

    NARCIS (Netherlands)

    Tan, Zhiyuan; Jamdagni, Aruna; He, Xiangjian; Nanda, Priyadarsi; Liu, Ren Ping; Qing, Sihan; Susilo, Willy; Wang, Guilin; Liu, Dongmei

    2011-01-01

    The quality of features has a significant impact on the performance of detection techniques used for Denial-of-Service (DoS) attacks. Features that fail to provide an accurate characterization of network traffic records make the techniques suffer from low detection accuracy. Although researches
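
    A hedged sketch of distance-map style traffic features is given below: the pairwise differences between the features of a record form a map, and records whose map deviates strongly from a normal-traffic profile are flagged. This only illustrates the general idea; it is not the paper's exact multivariate correlation analysis, and the feature set is invented.

```python
# Hedged sketch of distance-map traffic features and profile-based flagging.
import numpy as np

def distance_map(record):
    """Pairwise absolute differences between the features of one record."""
    r = np.asarray(record, dtype=float)
    return np.abs(r[:, None] - r[None, :])

def build_profile(normal_records):
    maps = np.stack([distance_map(r) for r in normal_records])
    return maps.mean(axis=0), maps.std(axis=0) + 1e-9

def anomaly_score(record, profile_mean, profile_std):
    return np.abs(distance_map(record) - profile_mean) / profile_std

rng = np.random.default_rng(6)
normal = rng.normal([100, 50, 5], [5, 3, 1], size=(500, 3))   # e.g. pkts, bytes/pkt, flows
mean_map, std_map = build_profile(normal)

legit = np.array([102, 49, 5])
flood = np.array([900, 49, 5])                                # DoS-like burst in one feature
for name, rec in [("legit", legit), ("flood", flood)]:
    print(name, "max deviation:", round(anomaly_score(rec, mean_map, std_map).max(), 1))
```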

  19. Two dimensional numerical simulation of gas discharges: comparison between particle-in-cell and FCT techniques

    Energy Technology Data Exchange (ETDEWEB)

    Soria-Hoyo, C; Castellanos, A [Departamento de Electronica y Electromagnetismo, Facultad de Fisica, Universidad de Sevilla, Avda. Reina Mercedes s/n, 41012 Sevilla (Spain); Pontiga, F [Departamento de Fisica Aplicada II, EUAT, Universidad de Sevilla, Avda. Reina Mercedes s/n, 41012 Sevilla (Spain)], E-mail: cshoyo@us.es

    2008-10-21

    Two different numerical techniques have been applied to the numerical integration of equations modelling gas discharges: a finite-difference flux corrected transport (FD-FCT) technique and a particle-in-cell (PIC) technique. The PIC technique here implemented has been specifically designed for the simulation of 2D electrical discharges using cylindrical coordinates. The development and propagation of a streamer between two parallel electrodes has been used as a convenient test to compare the performance of both techniques. In particular, the phase velocity of the cathode directed streamer has been used to check the internal consistency of the numerical simulations. The results obtained from the two techniques are in reasonable agreement with each other, and both techniques have proved their ability to follow the high gradients of charge density and electric field present in this type of problems. Moreover, the streamer velocities predicted by the simulation are in accordance with the typical experimental values.

  20. Two dimensional numerical simulation of gas discharges: comparison between particle-in-cell and FCT techniques

    International Nuclear Information System (INIS)

    Soria-Hoyo, C; Castellanos, A; Pontiga, F

    2008-01-01

    Two different numerical techniques have been applied to the numerical integration of equations modelling gas discharges: a finite-difference flux corrected transport (FD-FCT) technique and a particle-in-cell (PIC) technique. The PIC technique here implemented has been specifically designed for the simulation of 2D electrical discharges using cylindrical coordinates. The development and propagation of a streamer between two parallel electrodes has been used as a convenient test to compare the performance of both techniques. In particular, the phase velocity of the cathode directed streamer has been used to check the internal consistency of the numerical simulations. The results obtained from the two techniques are in reasonable agreement with each other, and both techniques have proved their ability to follow the high gradients of charge density and electric field present in this type of problems. Moreover, the streamer velocities predicted by the simulation are in accordance with the typical experimental values.

  1. Simulation tools for industrial applications of phased array inspection techniques

    International Nuclear Information System (INIS)

    Mahaut, St.; Roy, O.; Chatillon, S.; Calmon, P.

    2001-01-01

    Ultrasonic phased array techniques have been developed at the French Atomic Energy Commission in order to improve defect characterization and adaptability to various inspection configurations (complex-geometry specimens). Such transducers allow 'standard' techniques (adjustable beam-steering and focusing) or more 'advanced' techniques (for instance, self-focusing on defects). To estimate the performance of these techniques, models have been developed which allow computation of the ultrasonic field radiated by an arbitrary phased array transducer through any complex specimen, and prediction of the ultrasonic response of various defects inspected with a known beam. Both modeling applications are gathered in the CIVA software, dedicated to NDT expertise. The use of these complementary models makes it possible to evaluate the ability of a phased array to steer and focus the ultrasonic beam, and therefore its relevance for detecting and characterizing defects. These models are specifically developed to give accurate solutions to realistic inspection applications. This paper briefly describes the CIVA models and presents some applications dedicated to the inspection of complex specimens containing various defects with a phased array used to steer and focus the beam. Defect detection and characterization performances are discussed for the various configurations. Experimental validations of both models are also presented. (authors)
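
    For a linear array, the 'standard' beam-steering mentioned above reduces to applying a per-element time delay. The short sketch below computes such a delay law for plane-wave steering; the element count, pitch, sound speed and angle are illustrative values, and CIVA's actual delay computation for complex geometries is far more general.

```python
import numpy as np

def steering_delays(n_elements, pitch_mm, angle_deg, c_mm_per_us):
    """Per-element delays (in microseconds) to steer a plane wave by angle_deg."""
    x = np.arange(n_elements) * pitch_mm              # element positions along the array
    delays = x * np.sin(np.radians(angle_deg)) / c_mm_per_us
    return delays - delays.min()                      # earliest element fires at t = 0

# 16-element array, 0.6 mm pitch, steel longitudinal waves (~5.9 mm/us), 30 degree steering.
print(np.round(steering_delays(16, 0.6, 30.0, 5.9), 3))
```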

  2. Estimation of fracture aperture using simulation technique; Simulation wo mochiita fracture kaiko haba no suitei

    Energy Technology Data Exchange (ETDEWEB)

    Kikuchi, T [Geological Survey of Japan, Tsukuba (Japan); Abe, M [Tohoku University, Sendai (Japan). Faculty of Engineering

    1996-10-01

    Characteristics of the amplitude variation around fractures have been investigated using a simulation technique, for cases in which the fracture aperture is changed. Four models were used. Model-1 was a fracture model having a horizontal fracture at Z=0. For model-2, the fracture was replaced by a group of small fractures. Model-3 had a borehole diameter extended at Z=0 in the shape of a wedge. Model-4 had a low velocity layer at Z=0. The maximum amplitudes were compared with each other for each depth and for each model. For model-1, the amplitude became larger at the depth of the fracture, and smaller above the fracture. For model-2, when the cross width D increased to 4 cm, the amplitude approached that of model-1. For model-3, with the extended borehole diameter, when the extension of the borehole diameter ranged between 1 cm and 2 cm, hardly any change of amplitude was observed above and below the fracture. However, when the extension of the borehole diameter was 4 cm, the amplitude became smaller above the extended part of the borehole. 3 refs., 4 figs., 1 tab.

  3. Modeling and Simulation Techniques for Large-Scale Communications Modeling

    National Research Council Canada - National Science Library

    Webb, Steve

    1997-01-01

    .... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.

  4. Assessing suturing techniques using a virtual reality surgical simulator.

    Science.gov (United States)

    Kazemi, Hamed; Rappel, James K; Poston, Timothy; Hai Lim, Beng; Burdet, Etienne; Leong Teo, Chee

    2010-09-01

    Advantages of virtual-reality simulators for surgical skill assessment and training include more training time, no risk to patients, repeatable difficulty levels and reliable feedback, without the resource demands and ethical issues of animal-based training. We tested this for a key subtask and showed a strong link between skill in the simulator and in reality. Suturing performance was assessed for four groups of participants, including experienced surgeons and naive subjects, on a custom-made virtual-reality simulator. Each subject repeated the experiment 30 times using five different types of needles to perform a standardized suture placement task. Traditional metrics of performance as well as new metrics enabled by our system were proposed, and the data indicate a difference between trained and untrained performance. In all traditional parameters such as time, number of attempts, and motion quantity, the medical surgeons outperformed the other three groups, though the differences were not significant. However, motion smoothness, penetration and exit angles, tear size areas, and orientation change differed significantly between the trained and untrained groups. This suggests that these parameters can be used in virtual microsurgery training.

  5. Measurement and Simulation Techniques For Piezoresistive Microcantilever Biosensor Applications

    Directory of Open Access Journals (Sweden)

    Aan Febriansyah

    2012-12-01

    Full Text Available Applications of microcantilevers as biosensors have been explored by many researchers for applications in medicine, biology, chemistry, and environmental monitoring. This research discusses the design of a measurement method and simulations for a piezoresistive microcantilever biosensor, consisting of the design of a Wheatstone bridge circuit as the object detector, simulation of the resonance frequency shift based on the Euler-Bernoulli beam equation, and simulation of the microcantilever vibration using COMSOL Multiphysics 3.5. The piezoresistive microcantilever used here is a Seiko Instrument Technology (Japan) product with a length of 110 μm, width of 50 μm, and thickness of 1 μm. The microcantilever mass is 12.815 ng, including the receptor mass. The sample object in this research is the bacterium E. coli, and the mass of one bacterium is assumed to be 0.3 pg. Simulation results show that the mass of one bacterium causes a deflection of 0.03053 nm and a resonance frequency of 118.90 kHz, while four bacteria cause a deflection of 0.03054 nm and a resonance frequency of 118.68 kHz. These data indicate that increasing the bacterial mass increases the deflection and reduces the resonance frequency.
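
    The direction of the reported frequency shift follows from the lumped spring-mass relation f = (1/(2*pi)) * sqrt(k/m): added mass lowers the resonance. The sketch below evaluates that relation with the masses quoted above and a placeholder spring constant chosen to land near the quoted frequency; being a lumped approximation, it will not reproduce the COMSOL figures exactly.

```python
import numpy as np

def resonance_hz(k_newton_per_m, mass_kg):
    """Fundamental resonance of a lumped spring-mass model: f = sqrt(k/m) / (2*pi)."""
    return np.sqrt(k_newton_per_m / mass_kg) / (2.0 * np.pi)

m0 = 12.815e-12           # cantilever plus receptor mass from the abstract, in kg (12.815 ng)
k = 7.15                  # placeholder effective spring constant (N/m), chosen to give ~119 kHz
dm = 0.3e-15              # one bacterium, assumed 0.3 pg

f0 = resonance_hz(k, m0)
f1 = resonance_hz(k, m0 + 4 * dm)       # four adsorbed bacteria
print(f"unloaded: {f0/1e3:.2f} kHz, loaded: {f1/1e3:.2f} kHz, shift: {f0 - f1:.2f} Hz")
```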

  6. Filament winding technique, experiment and simulation analysis on tubular structure

    Science.gov (United States)

    Quanjin, Ma; Rejab, M. R. M.; Kaige, Jiang; Idris, M. S.; Harith, M. N.

    2018-04-01

    The filament winding process has emerged as one of the lower-cost composite fabrication processes. Filament wound products include classic axisymmetric parts (pipes, rings, driveshafts, high-pressure vessels and storage tanks) and non-axisymmetric parts (prismatic non-round sections and pipe fittings). Since the 3-axis filament winding machine was designed with an inexpensive control system, it is necessary to make a comparison between experiment and simulation on a tubular structure. The aim of this technical paper is to perform a dry winding experiment using the 3-axis filament winding machine and to simulate the winding process on the tubular structure using CADWIND software with 30°, 45° and 60° winding angles. The main result indicates that the 3-axis filament winding machine can produce tubular structures with high winding pattern performance at different winding angles. The developed 3-axis winding machine still has weaknesses compared with the CADWIND simulation results for higher-axis winding machines, regarding winding pattern, turnaround impact, process error, thickness, friction impact, etc. In conclusion, the comparison results for the 3-axis filament winding machine lead to improvements and recommendations, and give an intuitive understanding of its limitations and characteristics.

  7. The computer simulation of the resonant network for the B-factory model power supply

    International Nuclear Information System (INIS)

    Zhou, W.; Endo, K.

    1993-07-01

    A high repetition model power supply and the resonant magnet network are simulated with the computer in order to check and improve the design of the power supply for the B-factory booster. We focus on the transient behavior of the power supply and the resonant magnet network. The results of the simulation are given. (author)

  8. ns-2 extension to simulate localization system in wireless sensor networks

    CSIR Research Space (South Africa)

    Abu-Mahfouz, Adnan M

    2011-09-01

    Full Text Available The ns-2 network simulator is one of the most widely used tools by researchers to investigate the characteristics of wireless sensor networks. Academic papers focus on results and rarely include details of how ns-2 simulations are implemented...

  9. Novel Machine Learning-Based Techniques for Efficient Resource Allocation in Next Generation Wireless Networks

    KAUST Repository

    AlQuerm, Ismail A.

    2018-02-21

    There is a large demand for applications of high data rates in wireless networks. These networks are becoming more complex and challenging to manage due to the heterogeneity of users and applications specifically in sophisticated networks such as the upcoming 5G. Energy efficiency in the future 5G network is one of the essential problems that needs consideration due to the interference and heterogeneity of the network topology. Smart resource allocation, environmental adaptivity, user-awareness and energy efficiency are essential features in the future networks. It is important to support these features at different networks topologies with various applications. Cognitive radio has been found to be the paradigm that is able to satisfy the above requirements. It is a very interdisciplinary topic that incorporates flexible system architectures, machine learning, context awareness and cooperative networking. Mitola’s vision about cognitive radio intended to build context-sensitive smart radios that are able to adapt to the wireless environment conditions while maintaining quality of service support for different applications. Artificial intelligence techniques including heuristics algorithms and machine learning are the shining tools that are employed to serve the new vision of cognitive radio. In addition, these techniques show a potential to be utilized in an efficient resource allocation for the upcoming 5G networks’ structures such as heterogeneous multi-tier 5G networks and heterogeneous cloud radio access networks due to their capability to allocate resources according to real-time data analytics. In this thesis, we study cognitive radio from a system point of view focusing closely on architectures, artificial intelligence techniques that can enable intelligent radio resource allocation and efficient radio parameters reconfiguration. We propose a modular cognitive resource management architecture, which facilitates a development of flexible control for

  10. Unified Approach to Modeling and Simulation of Space Communication Networks and Systems

    Science.gov (United States)

    Barritt, Brian; Bhasin, Kul; Eddy, Wesley; Matthews, Seth

    2010-01-01

    Network simulator software tools are often used to model the behaviors and interactions of applications, protocols, packets, and data links in terrestrial communication networks. Other software tools that model the physics, orbital dynamics, and RF characteristics of space systems have matured to allow for rapid, detailed analysis of space communication links. However, the absence of a unified toolset that integrates the two modeling approaches has encumbered the systems engineers tasked with the design, architecture, and analysis of complex space communication networks and systems. This paper presents the unified approach and describes the motivation, challenges, and our solution - the customization of the network simulator to integrate with astronautical analysis software tools for high-fidelity end-to-end simulation. Keywords space; communication; systems; networking; simulation; modeling; QualNet; STK; integration; space networks

  11. Modelisation et simulation d'un PON (Passive Optical Network) base ...

    African Journals Online (AJOL)

    English Title: Modeling and simulation of a PON (Passive Optical Network) based on hybrid WDM/TDM technology. English Abstract: This work is part of a design effort for a model combining WDM and TDM multiplexing in a PON (Passive Optical Network) type optical network, in order to satisfy the high bit ...

  12. Broadcast Expenses Controlling Techniques in Mobile Ad-hoc Networks: A Survey

    Directory of Open Access Journals (Sweden)

    Naeem Ahmad

    2016-07-01

    Full Text Available The blind flooding of query packets during route discovery often gives rise to the broadcast storm problem, exponentially increasing the energy consumption of intermediate nodes and congesting the entire network. In such a congested network, the task of establishing a path between resources may become very complex and unwieldy. Extensive research has been done in this area to improve the route discovery phase of routing protocols by reducing broadcast expenses. The purpose of this study is to provide a comparative analysis of existing broadcasting techniques for the route discovery phase, in order to arrive at an efficient broadcasting technique that determines the route with the minimum number of conveying nodes in ad-hoc networks. The study is designed to highlight the collective merits and demerits of such broadcasting techniques, along with certain conclusions that would contribute to the choice of broadcasting techniques.

  13. Improved importance sampling technique for efficient simulation of digital communication systems

    Science.gov (United States)

    Lu, Dingqing; Yao, Kung

    1988-01-01

    A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed derivations of the simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these derivations are applied to the specific, previously known conventional importance sampling (CIS) technique and to the new IIS technique. The derivation for a linear system with no signal random memory is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and of IIS over CIS for simulations of digital communication systems.
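
    As a worked toy example of why biasing the sampling density pays off, the snippet below estimates a small Gaussian tail probability (a stand-in for a bit-error rate) by plain Monte Carlo and by importance sampling with a translated, mean-shifted density, which is the kind of translation-parameter biasing the IIS technique optimizes. The threshold and sample sizes are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
t, n = 4.0, 100_000                       # threshold and sample size; P(X > 4) is ~3.17e-5

# Plain Monte Carlo: very few samples ever exceed the threshold, so the estimate is noisy.
x = rng.standard_normal(n)
mc = np.mean(x > t)

# Importance sampling: draw from N(t, 1) and reweight by the likelihood ratio
# w(y) = phi(y) / phi(y - t) = exp(-t*y + t^2/2).
y = rng.standard_normal(n) + t
w = np.exp(-t * y + 0.5 * t * t)
is_est = np.mean((y > t) * w)

print(f"exact ~3.17e-5 | MC: {mc:.2e} | IS: {is_est:.2e}")
```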

  14. Modelling Altitude Information in Two-Dimensional Traffic Networks for Electric Mobility Simulation

    Directory of Open Access Journals (Sweden)

    Diogo Santos

    2016-06-01

    Full Text Available Elevation data is important for electric vehicle simulation. However, traffic simulators are often two-dimensional and do not offer the capability of modelling urban networks taking elevation into account. Specifically, SUMO - Simulation of Urban Mobility, a popular microscopic traffic simulator, relies on networks previously modelled with elevation data as to provide this information during simulations. This work tackles the problem of adding elevation data to urban network models - particularly for the case of the Porto urban network, in Portugal. With this goal in mind, a comparison between different altitude information retrieval approaches is made and a simple tool to annotate network models with altitude data is proposed. The work starts by describing the methodological approach followed during research and development, then describing and analysing its main findings. This description includes an in-depth explanation of the proposed tool. Lastly, this work reviews some related work to the subject.

  15. FEM Techniques for High Stress Detection in Accelerated Fatigue Simulation

    Science.gov (United States)

    Veltri, M.

    2016-09-01

    This work presents the theory and a numerical validation study in support to a novel method for a priori identification of fatigue critical regions, with the aim to accelerate durability design in large FEM problems. The investigation is placed in the context of modern full-body structural durability analysis, where a computationally intensive dynamic solution could be required to identify areas with potential for fatigue damage initiation. The early detection of fatigue critical areas can drive a simplification of the problem size, leading to sensible improvement in solution time and model handling while allowing processing of the critical areas in higher detail. The proposed technique is applied to a real life industrial case in a comparative assessment with established practices. Synthetic damage prediction quantification and visualization techniques allow for a quick and efficient comparison between methods, outlining potential application benefits and boundaries.

  16. Flash floods warning technique based on wireless communication networks data

    Science.gov (United States)

    David, Noam; Alpert, Pinhas; Messer, Hagit

    2010-05-01

    Flash floods can occur throughout or subsequent to rainfall events, particularly in cases where the precipitation is of high-intensity. Unfortunately, each year these floods cause severe property damage and heavy casualties. At present, there are no sufficient real time flash flood warning facilities found to cope with this phenomenon. Here we show the tremendous potential of flash floods advanced warning based on precipitation measurements of commercial microwave links. As was recently shown, wireless communication networks supply high resolution precipitation measurements at ground level while often being situated in flood prone areas, covering large parts of these hazardous regions. We present the flash flood warning potential of the wireless communication system for two different cases when floods occurred at the Judean desert and at the northern Negev in Israel. In both cases, an advanced warning regarding the hazard could have been announced based on this system. • This research was supported by THE ISRAEL SCIENCE FOUNDATION (grant No. 173/08). This work was also supported by a grant from the Yeshaya Horowitz Association, Jerusalem. Additional support was given by the PROCEMA-BMBF project and by the GLOWA-JR BMBF project.
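
    The physical basis for treating a commercial microwave link as a rain gauge is the near power-law relation between specific attenuation and rain rate, A = a * R**b in dB/km; inverting it gives a path-averaged rain rate. The coefficients below are rough placeholders for a link around 20 GHz, not values used in this study.

```python
def rain_rate_mm_per_h(attenuation_db, link_length_km, a=0.09, b=1.06):
    """Invert the power law A = a * R**b, with A the specific attenuation in dB/km."""
    specific_attenuation = attenuation_db / link_length_km
    return (specific_attenuation / a) ** (1.0 / b)

# Example: 12 dB of excess attenuation observed on a 5 km link during a convective cell.
print(f"{rain_rate_mm_per_h(12.0, 5.0):.1f} mm/h")
```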

  17. Simulation of land mine detection processes using nuclear techniques

    International Nuclear Information System (INIS)

    Aziz, M.

    2005-01-01

    Computer models were designed to study the processes of land mine detection using nuclear techniques. Parameters that affect the detection were analyzed. Mines of different masses at different depths in the soil are considered using two types of sources, a 252Cf source and a 14 MeV neutron source. The capability to differentiate between mines and other objects such as concrete, iron, wood, aluminum, water and polyethylene was analyzed and studied.

  18. Hybrid Multilevel Monte Carlo Simulation of Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro

    2015-01-07

    Stochastic reaction networks (SRNs) are a class of continuous-time Markov chains intended to describe, from the kinetic point of view, the time-evolution of chemical systems in which molecules of different chemical species undergo a finite set of reaction channels. This talk is based on articles [4, 5, 6], where we are interested in the following problem: given a SRN, X, defined through its set of reaction channels and its initial state, x0, estimate E(g(X(T))); that is, the expected value of a scalar observable, g, of the process, X, at a fixed time, T. This problem leads us to define a series of Monte Carlo estimators, M, that with high probability produce values close to the quantity of interest, E(g(X(T))). More specifically, given a user-selected tolerance, TOL, and a small confidence level, η, find an estimator, M, based on approximate sampled paths of X, such that P(|E(g(X(T))) − M| ≤ TOL) ≥ 1 − η; even more, we want to achieve this objective with near optimal computational work. We first introduce a hybrid path-simulation scheme based on the well-known stochastic simulation algorithm (SSA) [3] and the tau-leap method [2]. Then, we introduce a Multilevel Monte Carlo strategy that allows us to achieve a computational complexity of order O(TOL^-2); this is the same computational complexity as an exact method, but with a smaller constant. We provide numerical examples to show our results.
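
    For reference, the exact path-simulation building block mentioned above, Gillespie's SSA, fits in a few lines; the birth-death reaction network below is a toy example, not one of the systems treated in the cited articles.

```python
import numpy as np

def ssa(x0, rates, stoich, t_end, rng):
    """Gillespie's stochastic simulation algorithm for a small reaction network."""
    t, x = 0.0, np.array(x0, dtype=int)
    path = [(0.0, x.copy())]
    while t < t_end:
        a = np.array([r(x) for r in rates])       # propensity of each reaction channel
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)            # time to the next reaction
        j = rng.choice(len(a), p=a / a0)          # which reaction fires
        x = x + stoich[j]
        path.append((t, x.copy()))
    return path

# Toy birth-death process: 0 -> X at rate 10, X -> 0 at rate 0.1 * X.
rates = [lambda x: 10.0, lambda x: 0.1 * x[0]]
stoich = np.array([[+1], [-1]])
path = ssa([0], rates, stoich, t_end=50.0, rng=np.random.default_rng(0))
print("copy number after the last event:", path[-1][1])
```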

  19. Dielectric properties of proteins from simulations: tools and techniques

    Science.gov (United States)

    Simonson, Thomas; Perahia, David

    1995-09-01

    Tools and techniques to analyze the dielectric properties of proteins are described. Microscopic dielectric properties are determined by a susceptibility tensor of order 3 n, where n is the number of protein atoms. For perturbing charges not too close to the protein, the dielectric relaxation free energy is directly related to the dipole-dipole correlation matrix of the unperturbed protein, or equivalently to the covariance matrix of its atomic displacements. These are straightforward to obtain from existing molecular dynamics packages such as CHARMM or X- PLOR. Macroscopic dielectric properties can be derived from the dipolar fluctuations of the protein, by idealizing the protein as one or more spherical media. The dipolar fluctuations are again directly related to the covariance matrix of the atomic displacements. An interesting consequence is that the quasiharmonic approximation, which by definition exactly reproduces this covariance matrix, gives the protein dielectric constant exactly. Finally a technique is reviewed to obtain normal or quasinormal modes of vibration of symmetric protein assemblies. Using elementary group theory, and eliminating the high-frequency modes of vibration of each monomer, the limiting step in terms of memory and computation is finding the normal modes of a single monomer, with the other monomers held fixed. This technique was used to study the dielectric properties of the Tobacco Mosaic Virus protein disk.

  20. Purpose compliant visual simulation: towards effective and selective methods and techniques of visualisation and simulation

    NARCIS (Netherlands)

    Daru, R.; Venemans, P.

    1998-01-01

    Visualisation, simulation and communication were always intimately interconnected. Visualisations and simulations impersonate existing or virtual realities. Without those tools it is arduous to communicate mental depictions about virtual objects and events. A communication model is presented to

  1. Variance reduction techniques in the simulation of Markov processes

    International Nuclear Information System (INIS)

    Lessi, O.

    1987-01-01

    We study a functional r of the stationary distribution of a homogeneous Markov chain. It is often difficult or impossible to perform the analytical calculation of r and so it is reasonable to estimate r by a simulation process. A consistent estimator r(n) of r is obtained with respect to a chain with a countable state space. Suitably modifying the estimator r(n) of r one obtains a new consistent estimator which has a smaller variance than r(n). The same is obtained in the case of finite state space

  2. Integrating atomistic molecular dynamics simulations, experiments and network analysis to study protein dynamics: strength in unity

    Directory of Open Access Journals (Sweden)

    Elena ePapaleo

    2015-05-01

    Full Text Available In the last years, we have been observing remarkable improvements in the field of protein dynamics. Indeed, we can now study protein dynamics in atomistic details over several timescales with a rich portfolio of experimental and computational techniques. On one side, this provides us with the possibility to validate simulation methods and physical models against a broad range of experimental observables. On the other side, it also allows a complementary and comprehensive view on protein structure and dynamics. What is needed now is a better understanding of the link between the dynamic properties that we observe and the functional properties of these important cellular machines. To make progresses in this direction, we need to improve the physical models used to describe proteins and solvent in molecular dynamics, as well as to strengthen the integration of experiments and simulations to overcome their own limitations. Moreover, now that we have the means to study protein dynamics in great details, we need new tools to understand the information embedded in the protein ensembles and in their dynamic signature. With this aim in mind, we should enrich the current tools for analysis of biomolecular simulations with attention to the effects that can be propagated over long distances and are often associated to important biological functions. In this context, approaches inspired by network analysis can make an important contribution to the analysis of molecular dynamics simulations.

  3. Simulation of California's Major Reservoirs Outflow Using Data Mining Technique

    Science.gov (United States)

    Yang, T.; Gao, X.; Sorooshian, S.

    2014-12-01

    The reservoir's outflow is controlled by reservoir operators, which makes it different from the upstream inflow, and the outflow is more important than the inflow for downstream water users. In order to simulate the complicated reservoir operation and extract the outflow decision-making patterns for California's 12 major reservoirs, we build a data-driven, computer-based ("artificially intelligent") reservoir decision-making tool, using a decision regression and classification tree approach. This is a well-developed statistical and graphical modeling methodology in the field of data mining. A shuffled cross-validation approach is also employed to extract the outflow decision-making patterns and rules based on the selected decision variables (inflow amount, precipitation, timing, water year type, etc.). To show the accuracy of the model, a verification study is carried out comparing the model-generated outflow decisions ("artificially intelligent" decisions) with those made by reservoir operators (human decisions). The simulation results show that the machine-generated outflow decisions are very similar to the real reservoir operators' decisions. This conclusion is based on statistical evaluations using the Nash-Sutcliffe test. The proposed model is able to detect the most influential variables and their weights when the reservoir operators make an outflow decision. While the proposed approach was first applied and tested on California's 12 major reservoirs, the method is universally adaptable to other reservoir systems.
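
    A minimal sketch of the described workflow, a regression tree trained on operation-related predictors followed by a Nash-Sutcliffe check of simulated against observed releases, is given below. The synthetic data and predictor names are placeholders, not records from the 12 reservoirs.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 3000
inflow = rng.gamma(2.0, 50.0, n)                  # synthetic daily inflow
storage = rng.uniform(0.2, 1.0, n)                # fraction of capacity
month = rng.integers(1, 13, n)
X = np.column_stack([inflow, storage, month])
outflow = 0.6 * inflow + 40.0 * storage + rng.normal(0, 5, n)   # stand-in "operator decisions"

X_tr, X_te, y_tr, y_te = train_test_split(X, outflow, random_state=0)
tree = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X_tr, y_tr)
sim = tree.predict(X_te)

# Nash-Sutcliffe efficiency: 1 means perfect agreement, 0 means no better than the mean.
nse = 1.0 - np.sum((y_te - sim) ** 2) / np.sum((y_te - np.mean(y_te)) ** 2)
print(f"Nash-Sutcliffe efficiency: {nse:.3f}")
```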

  4. Environmental regulation in a network of simulated microbial ecosystems.

    Science.gov (United States)

    Williams, Hywel T P; Lenton, Timothy M

    2008-07-29

    The Earth possesses a number of regulatory feedback mechanisms involving life. In the absence of a population of competing biospheres, it has proved hard to find a robust evolutionary mechanism that would generate environmental regulation. It has been suggested that regulation must require altruistic environmental alterations by organisms and, therefore, would be evolutionarily unstable. This need not be the case if organisms alter the environment as a selectively neutral by-product of their metabolism, as in the majority of biogeochemical reactions, but a question then arises: Why should the combined by-product effects of the biota have a stabilizing, rather than destabilizing, influence on the environment? Under certain conditions, selection acting above the level of the individual can be an effective adaptive force. Here we present an evolutionary simulation model in which environmental regulation involving higher-level selection robustly emerges in a network of interconnected microbial ecosystems. Spatial structure creates conditions for a limited form of higher-level selection to act on the collective environment-altering properties of local communities. Local communities that improve their environmental conditions achieve larger populations and are better colonizers of available space, whereas local communities that degrade their environment shrink and become susceptible to invasion. The spread of environment-improving communities alters the global environment toward the optimal conditions for growth and tends to regulate against external perturbations. This work suggests a mechanism for environmental regulation that is consistent with evolutionary theory.

  5. CaloGAN: Simulating 3D high energy particle showers in multilayer electromagnetic calorimeters with generative adversarial networks

    Science.gov (United States)

    Paganini, Michela; de Oliveira, Luke; Nachman, Benjamin

    2018-01-01

    The precise modeling of subatomic particle interactions and propagation through matter is paramount for the advancement of nuclear and particle physics searches and precision measurements. The most computationally expensive step in the simulation pipeline of a typical experiment at the Large Hadron Collider (LHC) is the detailed modeling of the full complexity of physics processes that govern the motion and evolution of particle showers inside calorimeters. We introduce CaloGAN, a new fast simulation technique based on generative adversarial networks (GANs). We apply these neural networks to the modeling of electromagnetic showers in a longitudinally segmented calorimeter and achieve speedup factors comparable to or better than existing full simulation techniques on CPU (100x to 1000x) and even faster on GPU (up to ~10^5x). There are still challenges for achieving precision across the entire phase space, but our solution can reproduce a variety of geometric shower shape properties of photons, positrons, and charged pions. This represents a significant stepping stone toward a full neural network-based detector simulation that could save significant computing time and enable many analyses now and in the future.
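
    Stripped to its core, the adversarial setup is a generator mapping latent noise to a (here drastically simplified, flattened) shower image and a discriminator trained to separate real from generated samples. The PyTorch sketch below shows one training step of a generic GAN on random stand-in data; it is not the CaloGAN architecture, its locally connected layers or its physics-aware losses.

```python
import torch
import torch.nn as nn

latent_dim, img_dim, batch = 16, 12 * 12, 32        # toy flattened "calorimeter layer"

# ReLU on the generator output keeps the fake energy deposits non-negative.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, img_dim), nn.ReLU())
D = nn.Sequential(nn.Linear(img_dim, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(batch, img_dim)                   # stand-in for real shower images
z = torch.randn(batch, latent_dim)

# Discriminator step: real samples labelled 1, generated samples labelled 0.
fake = G(z).detach()
loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to make the discriminator call generated samples real.
loss_g = bce(D(G(z)), torch.ones(batch, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(f"loss_d = {loss_d.item():.3f}, loss_g = {loss_g.item():.3f}")
```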

  6. BWR-plant simulator and its neural network companion with programming under MATLAB environment

    International Nuclear Information System (INIS)

    Ghenniwa, Fatma Suleiman

    2008-01-01

    Stand-alone nuclear power plant simulators, as well as nuclear power plant simulators based on building blocks, are available from different companies throughout the world. In this work, a review of both types of simulators has been carried out, and a survey of the possible authoring tools for the development of such simulators has been performed. It was decided, in this research, to develop a prototype simulator based on component building blocks. Furthermore, the authoring tool (MATLAB software) was selected for programming; it has all the basic tools required for simulator development, similar to those offered by tools developed by specialized simulator companies, such as MMS, APROS and others. Component simulations, as well as integrated components for power plant simulation, have been demonstrated. A preliminary neural network reactor model, part of a prepared neural network module library, has been used to demonstrate module order shuffling during simulation. The developed component library can be refined and extended for further development. (author)

  7. A Hybrid Communications Network Simulation-Independent Toolkit

    National Research Council Canada - National Science Library

    Dines, David M

    2008-01-01

    .... Evolving a grand design of the enabling network will require a flexible evaluation platform to try and select the right combination of network strategies and protocols in the realms of topology control and routing...

  8. BioNessie(G) - a grid enabled biochemical networks simulation environment

    OpenAIRE

    Liu, X; Jiang, J; Ajayi, O; Gu, X; Gilbert, D; Sinnott, R

    2008-01-01

    The simulation of biochemical networks provides insight and understanding about the underlying biochemical processes and pathways used by cells and organisms. BioNessie is a biochemical network simulator which has been developed at the University of Glasgow. This paper describes the simulator and focuses in particular on how it has been extended to benefit from a wide variety of high performance compute resources across the UK through Grid technologies to support larger scal...

  9. Social Networks and Smoking: Exploring the Effects of Influence and Smoker Popularity through Simulations

    Science.gov (United States)

    Schaefer, David R.; adams, jimi; Haas, Steven A.

    2015-01-01

    Adolescent smoking and friendship networks are related in many ways that can amplify smoking prevalence. Understanding and developing interventions within such a complex system requires new analytic approaches. We draw upon recent advances in dynamic network modeling to develop a technique that explores the implications of various intervention strategies targeted toward micro-level processes. Our approach begins by estimating a stochastic actor-based model using data from one school in the National Longitudinal Study of Adolescent Health. The model provides estimates of several factors predicting friendship ties and smoking behavior. We then use estimated model parameters to simulate the co-evolution of friendship and smoking behavior under potential intervention scenarios. Namely, we manipulate the strength of peer influence on smoking and the popularity of smokers relative to nonsmokers. We measure how these manipulations affect smoking prevalence, smoking initiation, and smoking cessation. Results indicate that both peer influence and smoking-based popularity affect smoking behavior, and that their joint effects are nonlinear. This study demonstrates how a simulation-based approach can be used to explore alternative scenarios that may be achievable through intervention efforts and offers new hypotheses about the association between friendship and smoking. PMID:24084397

  10. Social networks and smoking: exploring the effects of peer influence and smoker popularity through simulations.

    Science.gov (United States)

    Schaefer, David R; Adams, Jimi; Haas, Steven A

    2013-10-01

    Adolescent smoking and friendship networks are related in many ways that can amplify smoking prevalence. Understanding and developing interventions within such a complex system requires new analytic approaches. We draw on recent advances in dynamic network modeling to develop a technique that explores the implications of various intervention strategies targeted toward micro-level processes. Our approach begins by estimating a stochastic actor-based model using data from one school in the National Longitudinal Study of Adolescent Health. The model provides estimates of several factors predicting friendship ties and smoking behavior. We then use estimated model parameters to simulate the coevolution of friendship and smoking behavior under potential intervention scenarios. Namely, we manipulate the strength of peer influence on smoking and the popularity of smokers relative to nonsmokers. We measure how these manipulations affect smoking prevalence, smoking initiation, and smoking cessation. Results indicate that both peer influence and smoking-based popularity affect smoking behavior and that their joint effects are nonlinear. This study demonstrates how a simulation-based approach can be used to explore alternative scenarios that may be achievable through intervention efforts and offers new hypotheses about the association between friendship and smoking.

  11. Hybrid simulation techniques applied to the earth's bow shock

    Science.gov (United States)

    Winske, D.; Leroy, M. M.

    1985-01-01

    The application of a hybrid simulation model, in which the ions are treated as discrete particles and the electrons as a massless charge-neutralizing fluid, to the study of the earth's bow shock is discussed. The essentials of the numerical methods are described in detail; movement of the ions, solution of the electromagnetic fields and electron fluid equations, and imposition of appropriate boundary and initial conditions. Examples of results of calculations for perpendicular shocks are presented which demonstrate the need for a kinetic treatment of the ions to reproduce the correct ion dynamics and the corresponding shock structure. Results for oblique shocks are also presented to show how the magnetic field and ion motion differ from the perpendicular case.

  12. Drift simulation of MH370 debris using superensemble techniques

    Science.gov (United States)

    Jansen, Eric; Coppini, Giovanni; Pinardi, Nadia

    2016-07-01

    On 7 March 2014 (UTC), Malaysia Airlines flight 370 vanished without a trace. The aircraft is believed to have crashed in the southern Indian Ocean, but despite extensive search operations the location of the wreckage is still unknown. The first tangible evidence of the accident was discovered almost 17 months after the disappearance. On 29 July 2015, a small piece of the right wing of the aircraft was found washed up on the island of Réunion, approximately 4000 km from the assumed crash site. Since then a number of other parts have been found in Mozambique, South Africa and on Rodrigues Island. This paper presents a numerical simulation using high-resolution oceanographic and meteorological data to predict the movement of floating debris from the accident. Multiple model realisations are used with different starting locations and wind drag parameters. The model realisations are combined into a superensemble, adjusting the model weights to best represent the discovered debris. The superensemble is then used to predict the distribution of marine debris at various moments in time. This approach can be easily generalised to other drift simulations where observations are available to constrain unknown input parameters. The distribution at the time of the accident shows that the discovered debris most likely originated from the wide search area between 28 and 35° S. This partially overlaps with the current underwater search area, but extends further towards the north. Results at later times show that the most probable locations to discover washed-up debris are along the African east coast, especially in the area around Madagascar. The debris remaining at sea in 2016 is spread out over a wide area and its distribution changes only slowly.
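
    The superensemble step amounts to weighting each model realisation by how well it reproduces the observed debris findings and combining the member predictions with those weights. The schematic below uses invented misfit scores and member predictions purely to illustrate the mechanics, not the study's actual likelihood model.

```python
import numpy as np

# Each ensemble member gets a misfit score against the observed beachings
# (e.g. mismatch in location and timing at Reunion, Mozambique, ...). Values are invented.
misfit = np.array([3.2, 1.1, 0.7, 2.5, 4.0])

# Turn misfits into normalised weights: better members (smaller misfit) weigh more.
weights = np.exp(-misfit)
weights /= weights.sum()

# Member predictions of, say, the fraction of debris north of 30 S at some date.
member_prediction = np.array([0.20, 0.55, 0.60, 0.35, 0.10])
superensemble = float(weights @ member_prediction)
print(f"weights: {np.round(weights, 3)}  combined estimate: {superensemble:.2f}")
```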

  13. Compression and Combining Based on Channel Shortening and Rank Reduction Technique for Cooperative Wireless Sensor Networks

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2013-12-18

    This paper investigates and compares the performance of wireless sensor networks where sensors operate on the principles of cooperative communications. We consider a scenario where the source transmits signals to the destination with the help of L sensors. As the destination has the capacity of processing only U out of these L signals, the strongest U signals are selected while the remaining (L-U) signals are suppressed. A preprocessing block similar to channel-shortening is proposed in this contribution. However, this preprocessing block employs a rank-reduction technique instead of channel-shortening. By employing this preprocessing, we are able to decrease the computational complexity of the system without affecting the bit error rate (BER) performance. From our simulations, it can be shown that these schemes outperform the channel-shortening schemes in terms of computational complexity. In addition, the proposed schemes have a superior BER performance as compared to channel-shortening schemes when sensors employ fixed gain amplification. However, for sensors which employ variable gain amplification, a tradeoff exists in terms of BER performance between the channel-shortening and these schemes. These schemes outperform the channel-shortening scheme at lower signal-to-noise ratios.
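
    The selection step described above, keeping the strongest U of the L relayed signals and suppressing the rest, is easy to sketch. The combining shown here is a plain maximum-ratio style weighting on synthetic scalars and stands in for, rather than reproduces, the paper's rank-reduction preprocessing.

```python
import numpy as np

rng = np.random.default_rng(0)
L, U = 8, 3                                    # relaying sensors and destination capacity
symbol = 1.0                                   # transmitted BPSK symbol (+1)

h = rng.standard_normal(L)                     # per-sensor channel gains
noise = 0.3 * rng.standard_normal(L)
received = h * symbol + noise                  # what the destination sees from each sensor

strongest = np.argsort(np.abs(h))[-U:]         # keep the U strongest links, suppress the rest
combined = np.sum(h[strongest] * received[strongest])   # MRC-style weighting of the kept links

print("selected sensors:", sorted(strongest.tolist()), "detected symbol:", int(np.sign(combined)))
```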

  14. Simulation Study on the Application of the Generalized Entropy Concept in Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Krzysztof Gajowniczek

    2018-04-01

    Full Text Available Artificial neural networks are currently one of the most commonly used classifiers, and over recent years they have been successfully applied in many practical areas, including banking and finance, health and medicine, and engineering and manufacturing. A large number of error functions have been proposed in the literature to achieve better predictive power. However, only a few works employ Tsallis statistics, although the method itself has been successfully applied in other machine learning techniques. This paper undertakes the effort to examine the q-generalized function based on Tsallis statistics as an alternative error measure in neural networks. In order to validate different performance aspects of the proposed function and to enable identification of its strengths and weaknesses, an extensive simulation was prepared based on an artificial benchmarking dataset. The results indicate that the Tsallis entropy error function can be successfully introduced in neural networks, yielding satisfactory results and handling class imbalance, noise in the data, or the use of non-informative predictors.
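
    One common way to build such a q-generalized error measure is to replace the natural logarithm in cross-entropy with the Tsallis q-logarithm, ln_q(p) = (p**(1-q) - 1) / (1 - q), which recovers ordinary log-loss as q approaches 1. The snippet below shows that substitution in isolation; it illustrates the idea and is not necessarily the exact functional form used in the paper.

```python
import numpy as np

def q_log(p, q):
    """Tsallis q-logarithm; reduces to np.log(p) as q -> 1."""
    p = np.clip(p, 1e-12, 1.0)
    return np.log(p) if abs(q - 1.0) < 1e-9 else (p ** (1.0 - q) - 1.0) / (1.0 - q)

def q_cross_entropy(y_true, p_pred, q):
    """q-generalized cross-entropy for binary targets."""
    p_pred = np.clip(p_pred, 1e-12, 1.0 - 1e-12)
    return -np.mean(y_true * q_log(p_pred, q) + (1 - y_true) * q_log(1 - p_pred, q))

y = np.array([1, 0, 1, 1, 0])
p = np.array([0.9, 0.2, 0.6, 0.8, 0.4])
for q in (0.5, 1.0, 2.0):
    print(f"q = {q}: loss = {q_cross_entropy(y, p, q):.4f}")
```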

  15. Modeling and Simulation of Handover Scheme in Integrated EPON-WiMAX Networks

    DEFF Research Database (Denmark)

    Yan, Ying; Dittmann, Lars

    2011-01-01

    In this paper, we tackle the seamless handover problem in integrated optical-wireless networks. Our model applies to the converged network of EPON and WiMAX, and a mobility-aware signaling protocol is proposed. The proposed handover scheme, the Integrated Mobility Management Scheme (IMMS), is assisted by enhancing the traditional MPCP signaling protocol, which cooperatively collects mobility information from the front-end wireless network and makes centralized bandwidth allocation decisions in the backhaul optical network. The integrated network architecture and the joint handover scheme are simulated using the OPNET modeler. Results validate the protocol, i.e., the integrated handover scheme achieves better network performance.

  16. Greening the networks: a comparative analysis of different energy efficient techniques

    International Nuclear Information System (INIS)

    Arshad, M.J.; Saeed, S.S.

    2014-01-01

    From a single electric bulb in a room to gigantic backbone networks, energy savings have become a matter of considerable concern. Issues such as resource depletion, global warming, high energy consumption and environmental threats gave birth to the idea of green networking, and serious efforts have been made in this regard on a large scale in the ICT sector. In this work, we first give an idea of how and why this modern technology emerged, and we formulate a precise definition of the term green technology. We then discuss some leading techniques which promise green results when implemented on real network systems. These technologies are viewed from different perspectives, e.g. hardware implementations, software mechanisms and protocol changes. We then compare these techniques based on some pivotal points. The main conclusion is that a detailed comparison is needed when selecting a technology to implement on a network system. (author)

  17. Limits of validity of photon-in-cell simulation techniques

    International Nuclear Information System (INIS)

    Reitsma, A. J. W.; Jaroszynski, D. A.

    2008-01-01

    A comparison is made between two reduced models for studying laser propagation in underdense plasma; namely, photon kinetic theory and the slowly varying envelope approximation. Photon kinetic theory is a wave-kinetic description of the electromagnetic field where the motion of quasiparticles in photon coordinate-wave number phase space is described by the ray-tracing equations. Numerically, the photon kinetic theory is implemented with standard particle-in-cell techniques, which results in a so-called photon-in-cell code. For all the examples presented in this paper, the slowly varying envelope approximation is accurate and therefore discrepancies indicate the failure of photon kinetic approximation for these cases. Possible remedies for this failure are discussed at the end of the paper

  18. On a New Variance Reduction Technique: Neural Network Biasing-a Study of Two Test Cases with the Monte Carlo Code Tripoli4

    International Nuclear Information System (INIS)

    Dumonteil, E.

    2009-01-01

    Various variance-reduction techniques are used in Monte Carlo particle transport. Most of them rely either on a hypothesis made by the user (parameters of the exponential biasing, mesh and weight bounds for weight windows, etc.) or on a previous calculation of the system with, for example, a deterministic solver. This paper deals with a new acceleration technique, namely, auto-adaptive neural network biasing. Indeed, instead of using any a priori knowledge of the system, it is possible, at a given point in a simulation, to use the Monte Carlo histories previously simulated to train a neural network which, in turn, should be able to provide an estimation of the adjoint flux, used then for biasing the simulation. We describe this method, detail its implementation in the Monte Carlo code Tripoli4, and discuss its results on two test cases. (author)

  19. Coherent network analysis technique for discriminating gravitational-wave bursts from instrumental noise

    International Nuclear Information System (INIS)

    Chatterji, Shourov; Lazzarini, Albert; Stein, Leo; Sutton, Patrick J.; Searle, Antony; Tinto, Massimo

    2006-01-01

    The sensitivity of current searches for gravitational-wave bursts is limited by non-Gaussian, nonstationary noise transients which are common in real detectors. Existing techniques for detecting gravitational-wave bursts assume the output of the detector network to be the sum of a stationary Gaussian noise process and a gravitational-wave signal. These techniques often fail in the presence of noise nonstationarities by incorrectly identifying such transients as possible gravitational-wave bursts. Furthermore, consistency tests currently used to try to eliminate these noise transients are not applicable to general networks of detectors with different orientations and noise spectra. In order to address this problem we introduce a fully coherent consistency test that is robust against noise nonstationarities and allows one to distinguish between gravitational-wave bursts and noise transients in general detector networks. This technique does not require any a priori knowledge of the putative burst waveform

  20. Visualization and simulation techniques for surgical simulators using actual patient's data.

    Science.gov (United States)

    Radetzky, Arne; Nürnberger, Andreas

    2002-11-01

    Because of the increasing complexity of surgical interventions, research in surgical simulation has become more and more important over recent years. However, the simulation of tissue deformation is still a challenging problem, mainly due to the short response times that are required for real-time interaction. The demands on hardware and software are even greater if not only modeled human anatomy but the anatomy of actual patients is used. This is required if the surgical simulator is to be used as a training medium for expert surgeons rather than students. In this article, suitable visualization and simulation methods for surgical simulation utilizing actual patients' datasets are described. The advantages and disadvantages of direct and indirect volume rendering for the visualization are discussed, and a neuro-fuzzy system is described which can be used for the simulation of interactive tissue deformations. The neuro-fuzzy system makes it possible to define the deformation behavior based on a linguistic description of the tissue characteristics, or to learn the dynamics by using measured data of real tissue. Furthermore, a simulator for minimally invasive neurosurgical interventions is presented that utilizes the described visualization and simulation methods. The structure of the simulator is described in detail and the results of a system evaluation by an experienced neurosurgeon are given: a quantitative comparison between different methods of virtual endoscopy as well as a comparison between real brain images and virtual endoscopies. The evaluation showed that the simulator provides more realistic visualization and simulation than other currently available simulators. Copyright 2002 Elsevier Science B.V.

  1. IMPLEMENTATION OF IMPROVED NETWORK LIFETIME TECHNIQUE FOR WSN USING CLUSTER HEAD ROTATION AND SIMULTANEOUS RECEPTION

    Directory of Open Access Journals (Sweden)

    Arun Vasanaperumal

    2015-11-01

    Full Text Available There are a number of potential applications of Wireless Sensor Networks (WSNs), such as wild habitat monitoring, forest fire detection and military surveillance. All these applications are constrained to draw power from a stand-alone battery source, so it becomes of paramount importance to conserve the energy drawn from this source. A lot of effort has gone into this area recently and it remains one of the hot research areas. In order to improve network lifetime and reduce average power consumption, this study proposes a novel cluster head selection algorithm. Clustering is the preferred architecture when the number of nodes is large, because it results in considerable power savings for large networks compared to other architectures such as tree or star. Since the majority of applications generally involve more than 30 nodes, clustering has gained widespread importance and is the most used network architecture. The optimum number of clusters is first selected based on the number of nodes in the network. When the network is in operation, the cluster heads in a cluster are rotated periodically, based on the proposed cluster head selection algorithm, to increase the network lifetime. Single-hop communication is assumed throughout the network. This work will serve as an encouragement for further advances in low-power techniques for implementing Wireless Sensor Networks (WSNs).
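
    The paper's exact selection rule is not reproduced here, but the general pattern of periodic cluster-head rotation can be sketched as below: in every round each cluster elects the member with the most residual energy as its head, spreading the relaying cost over time. This LEACH-style simplification, with invented energy costs, is purely illustrative.

```python
import random

random.seed(0)
N_NODES, N_CLUSTERS, ROUNDS = 30, 5, 4
nodes = [{"id": i, "cluster": i % N_CLUSTERS, "energy": 2.0} for i in range(N_NODES)]

for rnd in range(ROUNDS):
    heads = {}
    for c in range(N_CLUSTERS):
        members = [n for n in nodes if n["cluster"] == c]
        heads[c] = max(members, key=lambda n: n["energy"])   # richest node leads this round
    for n in nodes:
        head = heads[n["cluster"]]
        if n is head:
            n["energy"] -= 0.10 + 0.01 * random.random()     # heads pay aggregation + long-haul cost
        else:
            n["energy"] -= 0.01                              # members only send one hop to the head
    print(f"round {rnd}: heads = {[heads[c]['id'] for c in range(N_CLUSTERS)]}")
```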

  2. Monte Carlo simulation techniques for predicting annual power production

    International Nuclear Information System (INIS)

    Cross, J.P.; Bulandr, P.J.

    1991-01-01

    As the owner and operator of a number of small to mid-sized hydroelectric sites, STS HydroPower has been faced with the need to accurately predict anticipated hydroelectric revenues over a period of years. The typical approach to this problem has been to look at each site from a mathematical deterministic perspective and evaluate the annual production from historic streamflows. Average annual production is simply taken to be the area under the flow duration curve defined by the operating and design characteristics of the selected turbines. Minimum annual production is taken to be a historic dry year scenario and maximum production is viewed as power generated under the most ideal of conditions. Such an approach creates two problems. First, in viewing the characteristics of a single site, it does not take into account the probability of such an event occurring. Second, in viewing all sites in a single organization's portfolio together, it does not reflect the varying flow conditions at the different sites. This paper attempts to address the first of these two concerns, that being the creation of a simulation model utilizing the Monte Carlo method at a single site. The result of the analysis is a picture of the production at the site that is both a better representation of anticipated conditions and defined probabilistically
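
    A bare-bones version of such a site model: sample daily flows from a fitted distribution, convert flow to power subject to the turbine's minimum and rated flows, and repeat over many synthetic years to obtain a probability distribution of annual energy rather than a single deterministic figure. Every plant parameter and distribution below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
RHO_G = 9810.0                      # rho * g in N/m^3: power (W) = rho * g * Q * H * eff
HEAD, EFF = 12.0, 0.85              # net head (m) and overall efficiency (invented)
Q_MIN, Q_RATED = 2.0, 15.0          # turbine operating range (m^3/s)

def annual_energy_mwh(daily_flows):
    q = np.clip(daily_flows, 0.0, Q_RATED)         # spill anything above rated flow
    q[daily_flows < Q_MIN] = 0.0                   # below minimum flow the unit shuts down
    power_w = RHO_G * q * HEAD * EFF
    return power_w.sum() * 24.0 / 1e6              # daily W values -> MWh over the year

# Monte Carlo over synthetic years: daily flows drawn from a fitted lognormal (parameters invented).
years = np.array([annual_energy_mwh(rng.lognormal(mean=1.8, sigma=0.7, size=365))
                  for _ in range(5000)])
print(f"mean {years.mean():.0f} MWh, 10th pct {np.percentile(years, 10):.0f}, "
      f"90th pct {np.percentile(years, 90):.0f}")
```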

  3. Experimental Evaluation of Simulation Abstractions for Wireless Sensor Network MAC Protocols

    NARCIS (Netherlands)

    Halkes, G.P.; Langendoen, K.G.

    2010-01-01

    The evaluation of MAC protocols for Wireless Sensor Networks (WSNs) is often performed through simulation. These simulations necessarily abstract away from reality in many ways. However, the impact of these abstractions on the results of the simulations has received only limited attention. Moreover,

  4. How Crime Spreads Through Imitation in Social Networks: A Simulation Model

    Science.gov (United States)

    Punzo, Valentina

    In this chapter an agent-based model for investigating how crime spreads through social networks is presented. Some theoretical issues related to the sociological explanation of crime are tested through simulation. The agent-based simulation allows us to investigate the relative impact of some mechanisms of social influence on crime, within a set of controlled simulated experiments.

  5. Limits to high-speed simulations of spiking neural networks using general-purpose computers.

    Science.gov (United States)

    Zenke, Friedemann; Gerstner, Wulfram

    2014-01-01

    To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.

  6. Assessing Uncertainty in Deep Learning Techniques that Identify Atmospheric Rivers in Climate Simulations

    Science.gov (United States)

    Mahesh, A.; Mudigonda, M.; Kim, S. K.; Kashinath, K.; Kahou, S.; Michalski, V.; Williams, D. N.; Liu, Y.; Prabhat, M.; Loring, B.; O'Brien, T. A.; Collins, W. D.

    2017-12-01

    Atmospheric rivers (ARs) can be the difference between California facing drought or hurricane-level storms. ARs are a form of extreme weather defined as long, narrow columns of moisture which transport water vapor outside the tropics. When they make landfall, they release the vapor as rain or snow. Convolutional neural networks (CNNs), a machine learning technique that uses filters to recognize features, are the leading computer vision mechanism for classifying multichannel images. CNNs have been proven to be effective in identifying extreme weather events in climate simulation output (Liu et al. 2016, ABDA'16, http://bit.ly/2hlrFNV). Here, we compare different CNN architectures, tuned with different hyperparameters and training schemes. Specifically, we compare two-layer, three-layer, four-layer, and sixteen-layer CNNs' ability to recognize ARs in Community Atmospheric Model version 5 output, and we explore the ability of data augmentation and pre-trained models to increase the accuracy of the classifier. Because pre-training the model with regular images (i.e. benches, stoves, and dogs) yielded the highest accuracy rate, this strategy, also known as transfer learning, may be vital in future scientific CNNs, which likely will not have access to a large labelled training dataset. By choosing the most effective CNN architecture, climate scientists can build an accurate historical database of ARs, which can be used to develop a predictive understanding of these phenomena.

  7. Hybrid Network Simulation for the ATLAS Trigger and Data Acquisition (TDAQ) System

    CERN Document Server

    Bonaventura, Matias Alejandro; The ATLAS collaboration; Castro, Rodrigo Daniel; Foguelman, Daniel Jacob

    2015-01-01

    The poster shows the ongoing research in the ATLAS TDAQ group, in collaboration with the University of Buenos Aires, in the area of hybrid data network simulations. The Data Network and Processing Cluster filters data in real-time, achieving a rejection factor in the order of 40000x, and has real-time latency constraints. The dataflow between the processing units (TPUs) and the Readout System (ROS) presents a “TCP Incast”-type network pathology which TCP cannot handle efficiently. A credits system is in place which limits the rate of queries and reduces latency. This large computer network and its complex dataflow have been modelled and simulated using PowerDEVS, a DEVS-based simulator. The simulation has been validated and used to produce what-if scenarios in the real network. Network Simulation with Hybrid Flows: Speedups and accuracy, combined • For intensive network traffic, Discrete Event simulation models (packet-level granularity) soon become prohibitive: too high computing demands. • Fluid Flow simul...

  8. A novel wavelet neural network based pathological stage detection technique for an oral precancerous condition

    Science.gov (United States)

    Paul, R R; Mukherjee, A; Dutta, P K; Banerjee, S; Pal, M; Chatterjee, J; Chaudhuri, K; Mukkerjee, K

    2005-01-01

    Aim: To describe a novel neural network based oral precancer (oral submucous fibrosis; OSF) stage detection method. Method: The wavelet coefficients of transmission electron microscopy images of collagen fibres from normal oral submucosa and OSF tissues were used to choose the feature vector which, in turn, was used to train the artificial neural network. Results: The trained network was able to classify normal and oral precancer stages (less advanced and advanced) after obtaining the image as an input. Conclusions: The results obtained from this proposed technique were promising and suggest that with further optimisation this method could be used to detect and stage OSF, and could be adapted for other conditions. PMID:16126873

  9. Optical transmission testing based on asynchronous sampling techniques: images analysis containing chromatic dispersion using convolutional neural network

    Science.gov (United States)

    Mrozek, T.; Perlicki, K.; Tajmajer, T.; Wasilewski, P.

    2017-08-01

    The article presents an image analysis method, obtained from an asynchronous delay tap sampling (ADTS) technique, which is used for simultaneous monitoring of various impairments occurring in the physical layer of the optical network. The ADTS method enables the visualization of the optical signal in the form of characteristics (so-called phase portraits) that change their shape under the influence of impairments such as chromatic dispersion, polarization mode dispersion and ASE noise. Using this method, a simulation model was built with OptSim 4.0. After the simulation study, data were obtained in the form of images that were further analyzed using a convolutional neural network. The main goal of the study was to train a convolutional neural network to recognize the selected impairment (distortion), then to test its accuracy and estimate the impairment for the selected set of test images. The input data consisted of processed binary images in the form of two-dimensional matrices indexed by pixel position. This article focuses only on the analysis of images containing chromatic dispersion.

  10. Shower library technique for fast simulation of showers in calorimeters of the H1 experiment

    International Nuclear Information System (INIS)

    Raičević, N.; Glazov, A.; Zhokin, A.

    2013-01-01

    Fast simulation of showers in calorimeters is very important for particle physics analysis, since shower simulation typically takes a significant amount of the total simulation time. At the same time, a simulation must reproduce experimental data in the best possible way. In this paper, a fast simulation of showers in two calorimeters of the H1 experiment is presented. High speed and good quality of shower simulation are achieved by using a shower library technique, in which the detector response is simulated using a collection of stored showers for different particle types and topologies. The library is created using the GEANT programme. The fast simulation based on the shower library is compared to the data collected by the H1 experiment.
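
    The shower library idea can be sketched in a few lines: showers produced once by the detailed (e.g. GEANT) simulation are stored in bins of particle type and energy, and the fast simulation simply draws a stored shower from the matching bin. The binning, class names and toy deposits below are illustrative assumptions, not the H1 implementation.

```python
import random
from bisect import bisect_right
from collections import defaultdict

# Hypothetical energy bin edges in GeV (illustrative, not the H1 binning).
ENERGY_EDGES = [1, 2, 5, 10, 20, 50, 100]

def energy_bin(energy_gev):
    """Index of the energy bin the particle falls into."""
    return bisect_right(ENERGY_EDGES, energy_gev)

class ShowerLibrary:
    """Store showers from the detailed simulation and replay them cheaply."""

    def __init__(self):
        # (particle_type, energy_bin) -> list of stored calorimeter responses
        self._bins = defaultdict(list)

    def add(self, particle_type, energy_gev, cell_deposits):
        self._bins[(particle_type, energy_bin(energy_gev))].append(cell_deposits)

    def sample(self, particle_type, energy_gev):
        """Fast-simulation step: draw a pre-computed shower for this particle."""
        candidates = self._bins[(particle_type, energy_bin(energy_gev))]
        if not candidates:
            raise KeyError("no stored shower for this particle type / energy bin")
        return random.choice(candidates)

# Fill the library once from the slow, detailed (GEANT-like) simulation ...
library = ShowerLibrary()
library.add("pi+", 7.3, {"cell_12": 4.1, "cell_13": 2.7})   # toy deposits [GeV]
library.add("pi+", 8.9, {"cell_12": 5.0, "cell_14": 3.1})
# ... then replay stored showers during the fast simulation.
print(library.sample("pi+", 8.0))
```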

  11. NCC simulation model. Phase 2: Simulating the operations of the Network Control Center and NCC message manual

    Science.gov (United States)

    Benjamin, Norman M.; Gill, Tepper; Charles, Mary

    1994-01-01

    The network control center (NCC) provides scheduling, monitoring, and control of services to the NASA space network. The space network provides tracking and data acquisition services to many low-earth orbiting spacecraft. This report describes the second phase in the development of simulation models for the NCC. Phase one concentrated on the computer systems and interconnecting network. Phase two focuses on the implementation of the network message dialogs and the resources controlled by the NCC. Performance measures were developed along with selected indicators of the NCC's operational effectiveness. The NCC performance indicators were defined in terms of the following: (1) transfer rate, (2) network delay, (3) channel establishment time, (4) line turn around time, (5) availability, (6) reliability, (7) accuracy, (8) maintainability, and (9) security. An NCC internal and external message manual is appended to this report.

  12. Application of artificial neural networks with backpropagation technique in the financial data

    Science.gov (United States)

    Jaiswal, Jitendra Kumar; Das, Raja

    2017-11-01

    The application of neural networks has proliferated across multiple disciplines over the past few decades because of their powerful, parameter-controlled ability to perform pattern recognition and classification. They are also widely applied for forecasting in numerous domains. Since financial data have become readily available through the involvement of computers and computing systems in stock markets throughout the world, researchers have developed numerous techniques and algorithms to analyze data from this sector. In this paper we apply a neural network with the backpropagation technique to find patterns in financial data and to predict stock values.
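
    A small, self-contained illustration of the backpropagation technique on financial-style data: a one-hidden-layer network is trained with plain gradient descent to predict the next value of a price series from its previous few values. The synthetic series, lag count and hyperparameters are made-up placeholders, not the setup used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "price" series standing in for real financial data (illustrative only).
t = np.arange(400)
prices = 100 + 0.05 * t + 2.0 * np.sin(t / 15.0) + rng.normal(0, 0.3, t.size)

# Build (lagged inputs -> next value) training pairs and normalise them.
LAGS = 5
X = np.array([prices[i:i + LAGS] for i in range(len(prices) - LAGS)])
y = prices[LAGS:].reshape(-1, 1)
x_mu, x_sd, y_mu, y_sd = X.mean(), X.std(), y.mean(), y.std()
X, y = (X - x_mu) / x_sd, (y - y_mu) / y_sd

# One hidden layer with tanh activation, trained by batch backpropagation.
HIDDEN, LR, EPOCHS = 8, 0.05, 2000
W1 = rng.normal(0, 0.5, (LAGS, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.5, (HIDDEN, 1));    b2 = np.zeros(1)

for _ in range(EPOCHS):
    h = np.tanh(X @ W1 + b1)                     # forward pass
    pred = h @ W2 + b2
    err = pred - y                               # gradient of 0.5*MSE w.r.t. pred
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)        # backward pass
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh / len(X);  db1 = dh.mean(axis=0)
    W1 -= LR * dW1; b1 -= LR * db1; W2 -= LR * dW2; b2 -= LR * db2

# Predict the value that follows the last observed window.
last = (prices[-LAGS:] - x_mu) / x_sd
forecast = (np.tanh(last @ W1 + b1) @ W2 + b2) * y_sd + y_mu
print("next-step forecast:", forecast.item())
```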

  13. Applied techniques for high bandwidth data transfers across wide area networks

    International Nuclear Information System (INIS)

    Lee, J.; Gunter, D.; Tierney, B.; Allcock, B.; Bester, J.; Bresnahan, J.; Tuecke, S.

    2001-01-01

    Large distributed systems such as Computational/Data Grids require large amounts of data to be co-located with the computing facilities for processing. From their work developing a scalable distributed network cache, the authors have gained experience with techniques necessary to achieve high data throughput over high bandwidth Wide Area Networks (WAN). The authors discuss several hardware and software design techniques, and then describe their application to an implementation of an enhanced FTP protocol called GridFTP. The authors describe results from the Supercomputing 2000 conference

  14. Increasing data distribution in BitTorrent networks by using network coding techniques

    DEFF Research Database (Denmark)

    Braun, Patrik János; Sipos, Marton A.; Ekler, Péter

    2015-01-01

    Abstract: Peer-to-peer networks are well known for their benefits when used for sharing data among multiple users. One of the most common protocols for shared data distribution is BitTorrent. Despite its popularity, it has some inefficiencies that affect the speed of the content distribution. In ...

  15. Simulation techniques for spatially evolving instabilities in compressible flow over a flat plate

    NARCIS (Netherlands)

    Wasistho, B.; Geurts, Bernardus J.; Kuerten, Johannes G.M.

    1997-01-01

    In this paper we present numerical techniques suitable for a direct numerical simulation in the spatial setting. We demonstrate the application to the simulation of compressible flat plate flow instabilities. We compare second and fourth order accurate spatial discretization schemes in combination

  16. A Low Power 2.4 GHz CMOS Mixer Using Forward Body Bias Technique for Wireless Sensor Network

    Science.gov (United States)

    Yin, C. J.; Murad, S. A. Z.; Harun, A.; Ramli, M. M.; Zulkifli, T. Z. A.; Karim, J.

    2018-03-01

    Wireless sensor networks (WSNs) have been in high demand since the evolution of wireless communication technology. A radio frequency (RF) transceiver in a WSN should have low power consumption to support the long operating times of mobile devices. A down-conversion mixer is responsible for frequency translation in a receiver. By operating a down-conversion mixer at a low supply voltage, the power consumed by a WSN receiver can be greatly reduced. This paper presents the development of a low power CMOS mixer using a forward body bias technique for wireless sensor networks. The proposed mixer is implemented in the CMOS 0.13 μm Silterra technology. The forward body bias technique is adopted to obtain low power consumption. The simulation results indicate that a low power consumption of 0.91 mW is achieved at a 1.6 V supply voltage. Moreover, a conversion gain (CG) of 21.83 dB, a noise figure (NF) of 16.51 dB and an input-referred third-order intercept point (IIP3) of 8.0 dB at 2.4 GHz are obtained. The proposed mixer is suitable for wireless sensor networks.

  17. On the Simulation-Based Reliability of Complex Emergency Logistics Networks in Post-Accident Rescues.

    Science.gov (United States)

    Wang, Wei; Huang, Li; Liang, Xuedong

    2018-01-06

    This paper investigates the reliability of complex emergency logistics networks, as reliability is crucial to reducing environmental and public health losses in post-accident emergency rescues. Such networks' statistical characteristics are analyzed first. After the connected reliability and evaluation indices for complex emergency logistics networks are effectively defined, simulation analyses of network reliability are conducted under two different attack modes using a particular emergency logistics network as an example. The simulation analyses obtain the varying trends in emergency supply times and the ratio of effective nodes and validate the effects of network characteristics and different types of attacks on network reliability. The results demonstrate that this emergency logistics network is both a small-world and a scale-free network. When facing random attacks, the emergency logistics network changes steadily, whereas it is very fragile when facing selective attacks. Therefore, special attention should be paid to the protection of supply nodes and nodes with high connectivity. The simulation method provides a new tool for studying emergency logistics networks and a reference for similar studies.
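
    The two attack modes can be illustrated on a toy scale-free graph standing in for the emergency logistics network: nodes are removed either in random order or in decreasing order of degree, and the ratio of nodes remaining in the largest connected component is tracked. The graph model, sizes and step counts below are assumptions for illustration only.

```python
import random
import networkx as nx

N = 500
G = nx.barabasi_albert_graph(N, 2, seed=1)   # toy scale-free network

def effective_node_ratio(g):
    """Fraction of the original nodes still in the largest connected component."""
    if g.number_of_nodes() == 0:
        return 0.0
    return len(max(nx.connected_components(g), key=len)) / N

def attack(order, steps=10):
    """Remove nodes in the given order, reporting the network degradation."""
    g = G.copy()
    ratios, chunk = [], len(order) // steps
    for i in range(steps):
        g.remove_nodes_from(order[i * chunk:(i + 1) * chunk])
        ratios.append(round(effective_node_ratio(g), 3))
    return ratios

random_order = list(G.nodes())
random.Random(1).shuffle(random_order)                                        # random attack
targeted_order = sorted(G.nodes(), key=lambda n: G.degree[n], reverse=True)   # attack hubs first

print("random attack:   ", attack(random_order))
print("selective attack:", attack(targeted_order))
```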

  18. CoSimulating Communication Networks and Electrical System for Performance Evaluation in Smart Grid

    Directory of Open Access Journals (Sweden)

    Hwantae Kim

    2018-01-01

    Full Text Available In the smart grid research domain, simulation studies are the first choice, since the analytic complexity is too high and constructing a testbed is very expensive. However, since the communication infrastructure and the power grid are tightly coupled with each other in the smart grid, a well-defined combination of simulation tools for the two systems is required for such studies. Therefore, in this paper, we propose a cosimulation framework called OOCoSim, which consists of OPNET (a network simulation tool) and OpenDSS (a power system simulation tool). By employing these simulation tools, an organic and dynamic cosimulation can be realized, since both simulators operate on the same computing platform and provide external interfaces through which the simulation can be managed dynamically. In this paper, we provide the OOCoSim design principles, including a synchronization scheme, and detailed descriptions of its implementation. To present the effectiveness of OOCoSim, we define a smart grid application model and conduct a simulation study to see the impact of the defined application and the underlying network system on the distribution system. The simulation results show that the proposed OOCoSim can successfully simulate the integrated scenario of the power and network systems and produce accurate effects of the networked control in the smart grid.

  19. Real-time distributed simulation using the Modular Modeling System interfaced to a Bailey NETWORK 90 system

    International Nuclear Information System (INIS)

    Edwards, R.M.; Turso, J.A.; Garcia, H.E.; Ghie, M.H.; Dharap, S.; Lee, S.

    1991-01-01

    The Modular Modeling System was adapted for real-time simulation testing of diagnostic expert systems in 1987. The early approach utilized an available general purpose mainframe computer which operated the simulation and diagnostic program in the multitasking environment of the mainframe. That research program was subsequently expanded to intelligent distributed control applications incorporating microprocessor based controllers with the aid of an equipment grant from the National Science Foundation (NSF). The Bailey NETWORK 90 microprocessor-based control system, acquired with the NSF grant, has been operational since April of 1990 and has been interfaced to both VAX mainframe and PC simulations of power plant processes in order to test and demonstrate advanced control and diagnostic concepts. This paper discusses the variety of techniques that have been used and which are under development to interface simulations and other distributed control functions to the Penn State Bailey system

  20. Progress in the development of a video-based wind farm simulation technique

    OpenAIRE

    Robotham, AJ

    1992-01-01

    The progress in the development of a video-based wind farm simulation technique is reviewed. While improvements have been achieved in the quality of the composite picture created by combining computer generated animation sequences of wind turbines with background scenes of the wind farm site, extending the technique to include camera movements has proved troublesome.

  1. Parallel Reservoir Simulations with Sparse Grid Techniques and Applications to Wormhole Propagation

    KAUST Repository

    Wu, Yuanqing

    2015-09-08

    In this work, two topics of reservoir simulations are discussed. The first topic is the two-phase compositional flow simulation in hydrocarbon reservoir. The major obstacle that impedes the applicability of the simulation code is the long run time of the simulation procedure, and thus speeding up the simulation code is necessary. Two means are demonstrated to address the problem: parallelism in physical space and the application of sparse grids in parameter space. The parallel code can gain satisfactory scalability, and the sparse grids can remove the bottleneck of flash calculations. Instead of carrying out the flash calculation in each time step of the simulation, a sparse grid approximation of all possible results of the flash calculation is generated before the simulation. Then the constructed surrogate model is evaluated to approximate the flash calculation results during the simulation. The second topic is the wormhole propagation simulation in carbonate reservoir. In this work, different from the traditional simulation technique relying on the Darcy framework, we propose a new framework called Darcy-Brinkman-Forchheimer framework to simulate wormhole propagation. Furthermore, to process the large quantity of cells in the simulation grid and shorten the long simulation time of the traditional serial code, standard domain-based parallelism is employed, using the Hypre multigrid library. In addition to that, a new technique called “experimenting field approach” to set coefficients in the model equations is introduced. In the 2D dissolution experiments, different configurations of wormholes and a series of properties simulated by both frameworks are compared. We conclude that the numerical results of the DBF framework are more like wormholes and more stable than the Darcy framework, which is a demonstration of the advantages of the DBF framework. The scalability of the parallel code is also evaluated, and good scalability can be achieved. Finally, a mixed
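
    The surrogate idea used for the flash calculation can be sketched as follows: an expensive function is tabulated once over its parameter space and replaced inside the time loop by a cheap interpolant. The thesis uses sparse grids; the sketch below substitutes a dense regular grid with linear interpolation for brevity, and the stand-in "flash" function and parameter ranges are purely illustrative.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def expensive_flash(pressure_mpa, z_light):
    """Stand-in for a costly flash calculation (illustrative closed form)."""
    return z_light * np.exp(-pressure_mpa / 20.0) + 0.05 * pressure_mpa

# Offline: tabulate the expensive model once over the parameter space.
# (The thesis uses sparse grids; a dense regular grid is used here for brevity.)
pressures = np.linspace(5.0, 50.0, 46)
fractions = np.linspace(0.0, 1.0, 21)
P, Z = np.meshgrid(pressures, fractions, indexing="ij")
surrogate = RegularGridInterpolator((pressures, fractions), expensive_flash(P, Z))

# Online: inside the reservoir time loop, evaluate the cheap surrogate instead.
cells = np.array([[23.7, 0.42], [31.2, 0.55]])   # (pressure, composition) per cell
print("surrogate:", surrogate(cells))
print("exact:    ", expensive_flash(cells[:, 0], cells[:, 1]))
```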

  2. Simulation of sustainability aspects within the industrial environment and their implication on the simulation technique

    OpenAIRE

    Rabe, M.; Jäkel, F.-W.; Weinaug, H.

    2010-01-01

    Simulation is a broadly accepted analytic instrument and planning tool. Today, industrial simulation is mainly applied for engineering and physical purposes and covers a short time horizon compared to intergenerational justice. In parallel, sustainability is gaining more importance for industrial planning, because themes like global warming, child labour, and compliance with social and environmental standards have to be taken into account. Sustainability is characterized by comprehensively...

  3. Simulating large-scale spiking neuronal networks with NEST

    OpenAIRE

    Schücker, Jannis; Eppler, Jochen Martin

    2014-01-01

    The Neural Simulation Tool NEST [1, www.nest-simulator.org] is the simulator for spiking neural network models of the HBP that focuses on the dynamics, size and structure of neural systems rather than on the exact morphology of individual neurons. Its simulation kernel is written in C++ and it runs on computing hardware ranging from simple laptops to clusters and supercomputers with thousands of processor cores. The development of NEST is coordinated by the NEST Initiative [www.nest-initiative.or...

  4. Enterprise Networks for Competences Exchange: A Simulation Model

    Science.gov (United States)

    Remondino, Marco; Pironti, Marco; Pisano, Paola

    A business process is a set of logically related tasks performed to achieve a defined business outcome and is related to improving organizational processes. Process innovation can happen at various levels: incrementally, through redesign of existing processes, or through entirely new processes. The knowledge behind process innovation can be shared, acquired, changed and increased by the enterprises inside a network. An enterprise can decide to exploit the innovative processes it owns, thus potentially gaining competitive advantage, but risking, in turn, that other players could reach the same technological levels. Or it could decide to share them, in exchange for other competencies or money. These activities could be the basis for a network formation and/or impact the topology of an existing network. In this work an agent based model (E3) is introduced, aiming to explore how a process innovation can facilitate network formation, affect its topology, induce new players to enter the market and spread over the network by being shared or developed by new players.

  5. Evaluation and Simulation of Common Video Conference Traffics in Communication Networks

    Directory of Open Access Journals (Sweden)

    Farhad faghani

    2014-01-01

    Full Text Available Multimedia traffic is the basic traffic in data communication networks, and video conferencing in particular is among the most demanding traffic types in large networks (wired, wireless, etc.). Traffic modeling can help us evaluate real networks, and QoS is very important for data communication networks that provide multimedia services. In this research we develop an exact traffic model design and simulation to overcome QoS challenges. We also predict bandwidth with a Kalman filter in Ethernet networks.
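
    As a sketch of the bandwidth-prediction step, the snippet below runs a scalar Kalman filter with a random-walk state model over a synthetic bandwidth trace and reports the one-step-ahead prediction error. The trace and the noise variances are assumed values, not measurements from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic per-second bandwidth trace [Mbit/s] standing in for measured traffic.
true_bw = 40 + 5 * np.sin(np.arange(300) / 30.0)
measured = true_bw + rng.normal(0, 2.0, true_bw.size)

# Scalar Kalman filter with a random-walk state model: x_k = x_{k-1} + w_k.
Q, R = 0.05, 4.0              # assumed process / measurement noise variances
x_hat, p = measured[0], 1.0   # initial state estimate and covariance
predictions = []

for z in measured[1:]:
    # predict step (random walk: the estimate carries over, uncertainty grows)
    x_pred, p_pred = x_hat, p + Q
    predictions.append(x_pred)            # one-step-ahead bandwidth prediction
    # update step with the new measurement
    k = p_pred / (p_pred + R)             # Kalman gain
    x_hat = x_pred + k * (z - x_pred)
    p = (1 - k) * p_pred

rmse = np.sqrt(np.mean((np.array(predictions) - true_bw[1:]) ** 2))
print(f"one-step prediction RMSE: {rmse:.2f} Mbit/s")
```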

  6. On the Simulation-Based Reliability of Complex Emergency Logistics Networks in Post-Accident Rescues

    Science.gov (United States)

    Wang, Wei; Huang, Li; Liang, Xuedong

    2018-01-01

    This paper investigates the reliability of complex emergency logistics networks, as reliability is crucial to reducing environmental and public health losses in post-accident emergency rescues. Such networks’ statistical characteristics are analyzed first. After the connected reliability and evaluation indices for complex emergency logistics networks are effectively defined, simulation analyses of network reliability are conducted under two different attack modes using a particular emergency logistics network as an example. The simulation analyses obtain the varying trends in emergency supply times and the ratio of effective nodes and validate the effects of network characteristics and different types of attacks on network reliability. The results demonstrate that this emergency logistics network is both a small-world and a scale-free network. When facing random attacks, the emergency logistics network changes steadily, whereas it is very fragile when facing selective attacks. Therefore, special attention should be paid to the protection of supply nodes and nodes with high connectivity. The simulation method provides a new tool for studying emergency logistics networks and a reference for similar studies. PMID:29316614

  7. Computer Networks E-learning Based on Interactive Simulations and SCORM

    Directory of Open Access Journals (Sweden)

    Francisco Andrés Candelas

    2011-05-01

    Full Text Available This paper introduces a new set of compact interactive simulations developed for the constructive learning of computer networks concepts. These simulations, which compose a virtual laboratory implemented as portable Java applets, have been created by combining EJS (Easy Java Simulations) with the KivaNS API. Furthermore, in this work, the skills and motivation level acquired by the students are evaluated and measured when these simulations are combined with Moodle and SCORM (Sharable Content Object Reference Model) documents. This study has been developed to improve and stimulate autonomous constructive learning in addition to providing timetable flexibility for a Computer Networks subject.

  8. A control technique for integration of DG units to the electrical networks

    DEFF Research Database (Denmark)

    Pouresmaeil, Edris; Miguel-Espinar, Carlos; Massot-Campos, Miquel

    2013-01-01

    This paper deals with a multiobjective control technique for integration of distributed generation (DG) resources to the electrical power network. The proposed strategy provides compensation for active, reactive, and harmonic load current components during connection of DG link to the grid...

  9. Mechanical properties of the collagen network in human articular cartilage as measured by osmotic stress technique

    NARCIS (Netherlands)

    Basser, P.J.; Schneiderman, R.; Bank, R.A.; Wachtel, E.; Maroudas, A.

    1998-01-01

    We have used an isotropic osmotic stress technique to assess the swelling pressures of human articular cartilage over a wide range of hydrations in order to determine from these measurements, for the first time, the tensile stress in the collagen network, P(c), as a function of hydration. Osmotic

  10. Overview of the neural network based technique for monitoring of road condition via reconstructed road profiles

    CSIR Research Space (South Africa)

    Ngwangwa, HM

    2008-07-01

    Full Text Available on the road and driver to assess the integrity of road and vehicle infrastructure. In this paper, vehicle vibration data are applied to an artificial neural network to reconstruct the corresponding road surface profiles. The results show that the technique...

  11. Fault diagnosis in nuclear power plants using an artificial neural network technique

    International Nuclear Information System (INIS)

    Chou, H.P.; Prock, J.; Bonfert, J.P.

    1993-01-01

    Application of artificial intelligence (AI) computational techniques, such as expert systems, fuzzy logic, and neural networks, has taken place extensively in diverse areas. In the nuclear industry, the intended goal for these AI techniques is to improve power plant operational safety and reliability. As a computerized operator support tool, the artificial neural network (ANN) approach is an emerging technology that currently attracts a large amount of interest. The ability of ANNs to extract the input/output relation of a complicated process and the superior execution speed of a trained ANN motivated this study. The goal was to develop neural networks for sensor and process fault diagnosis, with the potential of being implemented as a component of the real-time operator support system LYDIA for early sensor and process fault detection and diagnosis.

  12. Impact of Loss Synchronization on Reliable High Speed Networks: A Model Based Simulation

    Directory of Open Access Journals (Sweden)

    Suman Kumar

    2014-01-01

    Full Text Available The contemporary nature of network evolution demands simulation models which are flexible, scalable, and easily implementable. In this paper, we propose a fluid based model for performance analysis of reliable high speed networks. In particular, this paper aims to study the dynamic relationship between congestion control algorithms and queue management schemes, in order to develop a better understanding of the causal linkages between the two. We propose a loss synchronization module which is user configurable. We validate our model through simulations under controlled settings. Also, we present a performance analysis to provide insights into two important issues concerning 10 Gbps high speed networks: (i) the impact of bottleneck buffer size on the performance of 10 Gbps high speed networks and (ii) the impact of the level of loss synchronization on link utilization-fairness tradeoffs. The practical impact of the proposed work is to provide design guidelines along with a powerful simulation tool to protocol designers and network developers.

  13. An Extended N-Player Network Game and Simulation of Four Investment Strategies on a Complex Innovation Network.

    Directory of Open Access Journals (Sweden)

    Wen Zhou

    Full Text Available As computer science and complex network theory develop, non-cooperative games and their formation and application on complex networks have been important research topics. In the inter-firm innovation network, it is a typical game behavior for firms to invest in their alliance partners. Accounting for the possibility that firms can be resource constrained, this paper analyzes a coordination game using the Nash bargaining solution as allocation rules between firms in an inter-firm innovation network. We build an extended inter-firm n-player game based on nonidealized conditions, describe four investment strategies and simulate the strategies on an inter-firm innovation network in order to compare their performance. By analyzing the results of our experiments, we find that our proposed greedy strategy is the best-performing in most situations. We hope this study provides a theoretical insight into how firms make investment decisions.

  14. An Extended N-Player Network Game and Simulation of Four Investment Strategies on a Complex Innovation Network.

    Science.gov (United States)

    Zhou, Wen; Koptyug, Nikita; Ye, Shutao; Jia, Yifan; Lu, Xiaolong

    2016-01-01

    As computer science and complex network theory develop, non-cooperative games and their formation and application on complex networks have been important research topics. In the inter-firm innovation network, it is a typical game behavior for firms to invest in their alliance partners. Accounting for the possibility that firms can be resource constrained, this paper analyzes a coordination game using the Nash bargaining solution as allocation rules between firms in an inter-firm innovation network. We build an extended inter-firm n-player game based on nonidealized conditions, describe four investment strategies and simulate the strategies on an inter-firm innovation network in order to compare their performance. By analyzing the results of our experiments, we find that our proposed greedy strategy is the best-performing in most situations. We hope this study provides a theoretical insight into how firms make investment decisions.

  15. Simulation of Supply-Chain Networks: A Source of Innovation and Competitive Advantage for Small and Medium-Sized Enterprises

    Directory of Open Access Journals (Sweden)

    Giacomo Liotta

    2012-11-01

    Full Text Available On a daily basis, enterprises of all sizes cope with the turbulence and volatility of market demands, cost variability, and severe pressure from globally distributed competitors. Managing uncertainty about future demand requirements and volumes in supply-chain networks has become a priority. One of the ways to deal with uncertainty is the utilization of simulation techniques and tools, which provide greater predictability of decision-making outcomes. For example, simulation has been widely applied in decision-making processes related to global logistics and production networks at the strategic, tactical, and operational levels, where it is used to predict the impact of decisions before their implementation in complex and uncertain environments. Large enterprises are inclined to use simulation tools whereas small and medium-sized enterprises seem to underestimate its advantages. The objective of this article is to emphasize the relevance of simulation for the design and management of supply-chain networks from the perspective of small and medium-sized firms.

  16. Video-based peer feedback through social networking for robotic surgery simulation: a multicenter randomized controlled trial.

    Science.gov (United States)

    Carter, Stacey C; Chiang, Alexander; Shah, Galaxy; Kwan, Lorna; Montgomery, Jeffrey S; Karam, Amer; Tarnay, Christopher; Guru, Khurshid A; Hu, Jim C

    2015-05-01

    To examine the feasibility and outcomes of video-based peer feedback through social networking to facilitate robotic surgical skill acquisition. The acquisition of surgical skills may be challenging for novel techniques and/or those with prolonged learning curves. Randomized controlled trial involving 41 resident physicians performing the Tubes (Da Vinci Intuitive Surgical, Sunnyvale, CA) simulator exercise with versus without peer feedback of video-recorded performance through a social networking Web page. Data collected included simulator exercise score, time to completion, and comfort and satisfaction with robotic surgery simulation. There were no baseline differences between the intervention group (n = 20) and controls (n = 21). The intervention group showed improvement in mean scores from session 1 to sessions 2 and 3 (60.7 vs 75.5). Peer feedback subjects were more comfortable with robotic surgery than controls (90% vs 62%, P = 0.021) and expressed greater satisfaction with the learning experience (100% vs 67%, P = 0.014). Of the intervention subjects, 85% found that peer feedback was useful and 100% found it effective. Video-based peer feedback through social networking appears to be an effective paradigm for surgical education and accelerates the robotic surgery learning curve during simulation.

  17. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks.

    Science.gov (United States)

    Vestergaard, Christian L; Génois, Mathieu

    2015-10-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling.
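
    For contrast with the temporal Gillespie algorithm, the snippet below shows the kind of straightforward per-contact scheme it is designed to outperform: an SIS process is run over a time-stamped contact list, drawing one random number per contact for transmission and one per time step per infected node for recovery. The contact list, rates and seed node are toy assumptions, not data or parameters from the paper.

```python
import random

random.seed(7)

# Time-stamped contact list (t, i, j): toy data, not an empirical network.
contacts = sorted(
    (random.randint(0, 999), random.randrange(50), random.randrange(50))
    for _ in range(5000)
)

BETA = 0.3    # per-contact transmission probability
MU = 0.005    # per-time-step recovery probability

infected = {0}          # seed node
current_t = 0
for t, i, j in contacts:
    # advance time, letting infected nodes recover step by step
    while current_t < t:
        infected = {n for n in infected if random.random() > MU}
        current_t += 1
    # transmission attempt over this contact: one random number per contact,
    # which is exactly the per-contact cost the temporal Gillespie algorithm avoids
    if (i in infected) != (j in infected) and random.random() < BETA:
        infected |= {i, j}

print("infected nodes at the end of the trace:", len(infected))
```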

  18. Development of a technique for level measurement in pressure vessels using thermal probes and artificial neural networks

    International Nuclear Information System (INIS)

    Torres, Walmir Maximo

    2008-01-01

    A technique for level measurement in pressure vessels was developed using thermal probes with internal cooling and artificial neural networks (ANNs). This new concept of thermal probe was experimentally tested in an experimental facility (BETSNI) with two test sections, ST1 and ST2. Two different thermal probes were designed and constructed: a concentric tube probe and a U tube probe. A data acquisition system (DAS) was assembled to record the experimental data during the tests. Steady state and transient level tests were carried out, and the experimental data obtained were used as learning and recall data sets in the ANN program RETRO-05, which simulates a multilayer perceptron with backpropagation. The results of the analysis show that the technique can be applied to level measurement in pressure vessels. The technique works with fewer input temperature data than originally designed for the probes, is robust, and can be used in the case of missing temperature data. Experimental data available in the literature from an electrically heated thermal probe were also used in the ANN analysis, producing good results. The results of the ANN analysis show that the technique can be improved and applied to level measurements in pressure vessels. (author)

  19. Impact of stoichiometry representation on simulation of genotype-phenotype relationships in metabolic networks

    DEFF Research Database (Denmark)

    Brochado, Ana Rita; Andrejev, Sergej; Maranas, Costas D.

    2012-01-01

    the formulation of the desired objective functions, by casting objective functions using metabolite turnovers rather than fluxes. By simulating perturbed metabolic networks, we demonstrate that the use of stoichiometry representation independent algorithms is fundamental for unambiguously linking modeling results...

  20. FERN - a Java framework for stochastic simulation and evaluation of reaction networks.

    Science.gov (United States)

    Erhard, Florian; Friedel, Caroline C; Zimmer, Ralf

    2008-08-29

    Stochastic simulation can be used to illustrate the development of biological systems over time and the stochastic nature of these processes. Currently available programs for stochastic simulation, however, are limited in that they either a) do not provide the most efficient simulation algorithms and are difficult to extend, b) cannot be easily integrated into other applications or c) do not allow the user to monitor and intervene during the simulation process in an easy and intuitive way. Thus, in order to use stochastic simulation in innovative high-level modeling and analysis approaches, more flexible tools are necessary. In this article, we present FERN (Framework for Evaluation of Reaction Networks), a Java framework for the efficient simulation of chemical reaction networks. FERN is subdivided into three layers for network representation, simulation, and visualization of the simulation results, each of which can be easily extended. It provides efficient and accurate state-of-the-art stochastic simulation algorithms for well-mixed chemical systems and a powerful observer system, which makes it possible to track and control the simulation progress on every level. To illustrate how FERN can be easily integrated into other systems biology applications, plugins to Cytoscape and CellDesigner are included. These plugins make it possible to run simulations and to observe the simulation progress in a reaction network in real-time from within the Cytoscape or CellDesigner environment. FERN addresses shortcomings of currently available stochastic simulation programs in several ways. First, it provides a broad range of efficient and accurate algorithms both for exact and approximate stochastic simulation and a simple interface for extending to new algorithms. FERN's implementations are considerably faster than the C implementations of gillespie2 or the Java implementations of ISBJava. Second, it can be used in a straightforward way both as a stand-alone program and within new
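
    The classic exact algorithm implemented by simulators of this kind is the Gillespie direct method; a minimal sketch on a toy reaction network (production, degradation and dimerisation of one species) is shown below. The reactions and rate constants are illustrative assumptions, not taken from FERN.

```python
import random

random.seed(1)

# Toy reaction network: 0 -> A, A -> 0, A + A -> B (rates are illustrative).
state = {"A": 0, "B": 0}
reactions = [
    # (propensity function, state change)
    (lambda s: 5.0,                           {"A": +1}),           # production
    (lambda s: 0.1 * s["A"],                  {"A": -1}),           # degradation
    (lambda s: 0.005 * s["A"] * (s["A"] - 1), {"A": -2, "B": +1}),  # dimerisation
]

t, t_end = 0.0, 100.0
while t < t_end:
    propensities = [f(state) for f, _ in reactions]
    total = sum(propensities)
    if total == 0.0:
        break
    t += random.expovariate(total)            # exponentially distributed waiting time
    # choose a reaction with probability proportional to its propensity
    r, acc = random.random() * total, 0.0
    for prop, (_, change) in zip(propensities, reactions):
        acc += prop
        if r <= acc:
            for species, delta in change.items():
                state[species] += delta
            break

print("t =", round(t, 2), "state =", state)
```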

  1. The Watts-Strogatz network model developed by including degree distribution: theory and computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y W [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Zhang, L F [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Huang, J P [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China)

    2007-07-20

    By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property.

  2. The Watts-Strogatz network model developed by including degree distribution: theory and computer simulation

    International Nuclear Information System (INIS)

    Chen, Y W; Zhang, L F; Huang, J P

    2007-01-01

    By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property
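
    The degree-distribution extension is the paper's own contribution and is not reproduced here; the sketch below only illustrates the original Watts-Strogatz baseline it builds on and the two quantities discussed (characteristic path length and clustering coefficient), using networkx with arbitrary network size, neighbourhood size and rewiring probabilities.

```python
import networkx as nx

N, K, SEED = 500, 10, 42   # nodes, ring neighbours per node, RNG seed

for p in (0.0, 0.01, 0.1, 1.0):   # rewiring probability
    g = nx.connected_watts_strogatz_graph(N, K, p, seed=SEED)
    # small-world regime: the path length drops quickly while clustering stays high
    L = nx.average_shortest_path_length(g)
    C = nx.average_clustering(g)
    print(f"p={p:<5} characteristic path length L={L:6.2f}  clustering C={C:.3f}")
```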

  3. Development of neural network simulating power distribution of a BWR fuel bundle

    International Nuclear Information System (INIS)

    Tanabe, A.; Yamamoto, T.; Shinfuku, K.; Nakamae, T.

    1992-01-01

    A neural network model was developed to simulate a precise nuclear physics analysis code for quick scoping survey calculations. The relation between enrichment and the local power distribution of BWR fuel bundles was learned using a two-layer neural network (ENET). A new feature of the model is the treatment of burnable neutron absorber (Gadolinia), which is added to several fuel rods to decrease the initial reactivity of a fresh bundle. A second-stage, three-layer neural network (GNET) is added on top of the first-stage network ENET; GNET learns the difference in the local power distribution caused by Gadolinia. Using this method, it becomes possible to survey the gradients of the sigmoid functions and the backpropagation constants in reasonable time. Using 99 learning patterns at zero burnup, a good error convergence curve was obtained after many trials. The neural network model is able to simulate unlearned cases nearly as well as the learned cases. The computing time of this neural network model is about 100 times shorter than that of the precise analysis model. (author)

  4. An introduction to network modeling and simulation for the practicing engineer

    CERN Document Server

    Burbank, Jack; Ward, Jon

    2011-01-01

    This book provides the practicing engineer with a concise listing of commercial and open-source modeling and simulation tools currently available including examples of implementing those tools for solving specific Modeling and Simulation examples. Instead of focusing on the underlying theory of Modeling and Simulation and fundamental building blocks for custom simulations, this book compares platforms used in practice, and gives rules enabling the practicing engineer to utilize available Modeling and Simulation tools. This book will contain insights regarding common pitfalls in network Modeling and Simulation and practical methods for working engineers.

  5. Discrimination of Cylinders with Different Wall Thicknesses using Neural Networks and Simulated Dolphin Sonar Signals

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Au, Whitlow; Larsen, Jan

    1999-01-01

    This paper describes a method integrating neural networks into a system for recognizing underwater objects. The system is based on a combination of simulated dolphin sonar signals, simulated auditory filters and artificial neural networks. The system is tested in a cylinder wall thickness difference experiment and demonstrates high accuracy for small wall thickness differences. Results from the experiment are compared with results obtained by a false killer whale (Pseudorca crassidens).

  6. Renewal of Road Networks Using Map-matching Technique of Trajectories

    Directory of Open Access Journals (Sweden)

    WU Tao

    2017-04-01

    Full Text Available The road network, with complete and accurate information, is one of the key foundations of the Smart City and bears significance in fields like urban planning, traffic management and public travel, among others. However, the long production cycle of road network data based on traditional surveying methods often leaves the data inconsistent with the latest situation on the ground. Recently, the positioning techniques ubiquitously available in mobile devices have been gradually coming into focus for domestic and overseas scholars. Currently, most approaches that generate or update road networks from mobile location information compute directly on GPS trajectory data with various algorithms, which leads to expensive consumption of computational resources in the case of mass GPS data covering large-scale areas. For this reason, we propose a spiral update strategy for road network data based on map-matching technology, which follows an “identify→analyze→extract→update” process. The main idea is to detect condemned road segments of the existing road network data with the help of an HMM for each input trajectory, and to repair them, on the local scale, by extracting new road information from the trajectory data. The proposed approach avoids computing on the entire trajectory dataset for road segments. Instead, it updates the existing road network data by focusing on the minimum range of potentially condemned segments. We evaluated the performance of our proposal using GPS traces collected from taxis and OpenStreetMap (OSM) road networks covering the urban areas of Wuhan City.

  7. High capacity fiber optic sensor networks using hybrid multiplexing techniques and their applications

    Science.gov (United States)

    Sun, Qizhen; Li, Xiaolei; Zhang, Manliang; Liu, Qi; Liu, Hai; Liu, Deming

    2013-12-01

    Fiber optic sensor networks are the development trend of fiber sensor technologies and industries. In this paper, I discuss recent research progress on high capacity fiber sensor networks with hybrid multiplexing techniques and their applications in the fields of security monitoring, environment monitoring, Smart eHome, etc. Firstly, I present the architecture of the hybrid multiplexing sensor passive optical network (HSPON), and the key technologies for integrated access and intelligent management of massive fiber sensor units. Two typical hybrid WDM/TDM fiber sensor networks for perimeter intrusion monitoring and cultural relics security are introduced. Secondly, we propose the concept of the "Microstructure-Optical X Domain Reflector (M-OXDR)" for fiber sensor network expansion. By fabricating smart microstructures with multidimensional encoding and low insertion loss along the fiber, a fiber sensor network of simple structure and huge capacity, with more than one thousand nodes, could be achieved. Assisted by the WDM/TDM and WDM/FDM decoding methods respectively, we built verification systems for long-haul and real-time temperature sensing. Finally, I show a high-capacity and flexible fiber sensor network with IPv6-protocol-based hybrid fiber/wireless access. By developing fiber optic sensors with an embedded IPv6 protocol conversion module and an IPv6 router, huge numbers of fiber optic sensor nodes can be uniquely addressed. Meanwhile, various kinds of sensing information can be integrated and connected to the Next Generation Internet.

  8. Simulation of noise-assisted transport via optical cavity networks

    International Nuclear Information System (INIS)

    Caruso, Filippo; Plenio, Martin B.; Spagnolo, Nicolo; Vitelli, Chiara; Sciarrino, Fabio

    2011-01-01

    Recently, the presence of noise has been found to play a key role in assisting the transport of energy and information in complex quantum networks and even in biomolecular systems. Here we propose an experimentally realizable optical network scheme for the demonstration of the basic mechanisms underlying noise-assisted transport. The proposed system consists of a network of coupled quantum-optical cavities, injected with a single photon, whose transmission efficiency can be measured. Introducing dephasing in the photon path, this system exhibits a characteristic enhancement of the transport efficiency that can be observed with presently available technology.

  9. Smart Grid: Network simulator for smart grid test-bed

    International Nuclear Information System (INIS)

    Lai, L C; Ong, H S; Che, Y X; Do, N Q; Ong, X J

    2013-01-01

    As the smart grid becomes more popular, a small-scale smart grid test-bed has been set up at UNITEN to investigate its performance and to identify future enhancements of the smart grid in Malaysia. The fundamental requirement in this project is to design a network with low delay, no packet drops and a high data rate. Each type of traffic has its own characteristics and is suitable for different types of networks and requirements; however, the nature of traffic in the smart grid is not yet well understood. This paper presents a comparison between different types of traffic to find the most suitable traffic for optimal network performance.

  10. Quality comparison between DEF-10 digital image from simulation technique and computed radiography (CR) technique in industrial radiography

    International Nuclear Information System (INIS)

    Siti Nur Syatirah Ismail

    2012-01-01

    The study was conducted to compare the digital image quality of DEF-10 obtained from simulation and from computed radiography (CR). The sample used is DEF-10 steel with a thickness of 15.28 mm. In this study, the sample is exposed to radiation from an X-ray machine (ISOVOLT Titan E) with specified parameters: the current and distance are fixed at 3 mA and 700 mm respectively, while the applied voltage is varied at 140, 160, 180 and 200 kV. The exposure time is reduced by 0, 20, 40, 60 and 80 % for each sample exposure. The digital image from simulation is produced with the aRTist software, whereas the digital image from computed radiography is produced from an imaging plate. Both images were then compared qualitatively (sensitivity) and quantitatively (signal-to-noise ratio, SNR; basic spatial resolution, SRb; and LOP size) using the Isee software. Radiographic sensitivity is indicated by the Image Quality Indicator (IQI), i.e. the ability of the CR system and the aRTist software to identify the wire-type IQI when the exposure time is reduced by up to 80 % according to the exposure chart (D7; ISOVOLT Titan E). The thinnest wire resolved in the radiographs from both simulation and CR is wire number 7, rather than wire number 8 as required by the standard. In the quantitative comparison, this study shows that the SNR values decrease with reduced exposure time, the SRb values increase for simulation and decrease for CR as the exposure time decreases, and a good image quality can still be achieved at 80 % reduced exposure time. High SNR and SRb values produced good image quality in the CR and simulation techniques respectively. (author)

  11. Low-mass molecular dynamics simulation: A simple and generic technique to enhance configurational sampling

    Energy Technology Data Exchange (ETDEWEB)

    Pang, Yuan-Ping, E-mail: pang@mayo.edu

    2014-09-26

    Highlights: • Reducing atomic masses by 10-fold vastly improves sampling in MD simulations. • CLN025 folded in 4 of 10 × 0.5-μs MD simulations when masses were reduced by 10-fold. • CLN025 folded as early as 96.2 ns in 1 of the 4 simulations that captured folding. • CLN025 did not fold in 10 × 0.5-μs MD simulations when standard masses were used. • Low-mass MD simulation is a simple and generic sampling enhancement technique. - Abstract: CLN025 is one of the smallest fast-folding proteins. Until now it has not been reported that CLN025 can autonomously fold to its native conformation in a classical, all-atom, and isothermal–isobaric molecular dynamics (MD) simulation. This article reports the autonomous and repeated folding of CLN025 from a fully extended backbone conformation to its native conformation in explicit solvent in multiple 500-ns MD simulations at 277 K and 1 atm with the first folding event occurring as early as 66.1 ns. These simulations were accomplished by using AMBER forcefield derivatives with atomic masses reduced by 10-fold on Apple Mac Pros. By contrast, no folding event was observed when the simulations were repeated using the original AMBER forcefields of FF12SB and FF14SB. The results demonstrate that low-mass MD simulation is a simple and generic technique to enhance configurational sampling. This technique may propel autonomous folding of a wide range of miniature proteins in classical, all-atom, and isothermal–isobaric MD simulations performed on commodity computers—an important step forward in quantitative biology.

  12. Importance Sampling Simulation of Population Overflow in Two-node Tandem Networks

    NARCIS (Netherlands)

    Nicola, V.F.; Zaburnenko, T.S.; Baier, C; Chiola, G.; Smirni, E.

    2005-01-01

    In this paper we consider the application of importance sampling in simulations of Markovian tandem networks in order to estimate the probability of rare events, such as network population overflow. We propose a heuristic methodology to obtain a good approximation to the 'optimal' state-dependent
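
    The flavour of the approach can be shown on a single M/M/1 queue, a deliberate simplification of the two-node tandem case treated in the paper: the classic state-independent heuristic interchanges arrival and service rates on the embedded jump chain and reweights each sampled path by its likelihood ratio. The rates, overflow level and run count below are illustrative assumptions.

```python
import random

random.seed(5)

LAM, MU, LEVEL = 0.3, 1.0, 20     # arrival rate, service rate, overflow level
N_RUNS = 100_000

def overflow_probability(importance_sampling):
    """Estimate P(queue reaches LEVEL before emptying | one initial customer)."""
    # Under importance sampling, interchange the arrival and service rates.
    lam, mu = (MU, LAM) if importance_sampling else (LAM, MU)
    p_up_sim = lam / (lam + mu)      # up-jump probability used for simulation
    p_up_true = LAM / (LAM + MU)     # up-jump probability of the real system
    total = 0.0
    for _ in range(N_RUNS):
        n, weight = 1, 1.0
        while 0 < n < LEVEL:
            if random.random() < p_up_sim:                       # arrival
                weight *= p_up_true / p_up_sim
                n += 1
            else:                                                # departure
                weight *= (1 - p_up_true) / (1 - p_up_sim)
                n -= 1
        if n == LEVEL:
            total += weight          # likelihood ratio of the sampled path
    return total / N_RUNS

# Exact value from the gambler's-ruin formula with p = LAM / (LAM + MU).
p = LAM / (LAM + MU)
exact = (1 - (1 - p) / p) / (1 - ((1 - p) / p) ** LEVEL)
print("crude Monte Carlo  :", overflow_probability(False))
print("importance sampling:", overflow_probability(True))
print("exact              :", exact)
```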

  13. Simulated epidemics in an empirical spatiotemporal network of 50,185 sexual contacts.

    Directory of Open Access Journals (Sweden)

    Luis E C Rocha

    2011-03-01

    Full Text Available Sexual contact patterns, both in their temporal and network structure, can influence the spread of sexually transmitted infections (STI. Most previous literature has focused on effects of network topology; few studies have addressed the role of temporal structure. We simulate disease spread using SI and SIR models on an empirical temporal network of sexual contacts in high-end prostitution. We compare these results with several other approaches, including randomization of the data, classic mean-field approaches, and static network simulations. We observe that epidemic dynamics in this contact structure have well-defined, rather high epidemic thresholds. Temporal effects create a broad distribution of outbreak sizes, even if the per-contact transmission probability is taken to its hypothetical maximum of 100%. In general, we conclude that the temporal correlations of our network accelerate outbreaks, especially in the early phase of the epidemics, while the network topology (apart from the contact-rate distribution slows them down. We find that the temporal correlations of sexual contacts can significantly change simulated outbreaks in a large empirical sexual network. Thus, temporal structures are needed alongside network topology to fully understand the spread of STIs. On a side note, our simulations further suggest that the specific type of commercial sex we investigate is not a reservoir of major importance for HIV.

  14. Transport link scanner: simulating geographic transport network expansion through individual investments

    NARCIS (Netherlands)

    Koopmans, C.C.; Jacobs, C.G.W.

    2016-01-01

    This paper introduces a GIS-based model that simulates the geographic expansion of transport networks by several decision-makers with varying objectives. The model progressively adds extensions to a growing network by choosing the most attractive investments from a limited choice set. Attractiveness

  15. Evaluation Technique of Chloride Penetration Using Apparent Diffusion Coefficient and Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    Yun-Yong Kim

    2014-01-01

    Full Text Available The diffusion coefficient from the chloride migration test is currently used; however, it cannot provide a conventional solution like total chloride content, since it describes only the ion migration velocity in an electrical field. This paper proposes a simple analysis technique for chloride behavior using an apparent diffusion coefficient obtained from a neural network algorithm together with time-dependent diffusion phenomena. For this work, thirty mix proportions of high performance concrete are prepared and their diffusion coefficients are obtained after long-term NaCl submersion tests. Considering a time-dependent diffusion coefficient based on Fick's 2nd law and an NNA (neural network algorithm), an analysis technique for chloride penetration is proposed. The applicability of the proposed technique is verified through results from accelerated tests, long-term submersion tests, and field investigations.
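
    The time-dependent diffusion model referred to above is conventionally written as the error-function solution of Fick's 2nd law with an apparent diffusion coefficient that decays with concrete age; a sketch is given below. The surface concentration, reference coefficient and age factor are assumed illustrative values, not results from the paper.

```python
import math

def chloride_content(x_mm, t_years, d_ref=8.0e-12, m=0.3, t_ref=0.077, c_s=0.5):
    """Chloride content [% binder] at depth x after t years of exposure.

    Error-function solution of Fick's 2nd law with an apparent diffusion
    coefficient that decays with age, D(t) = d_ref * (t_ref / t)**m.
    All parameter values are illustrative assumptions, not paper results.
    """
    d_app = d_ref * (t_ref / t_years) ** m            # apparent D [m^2/s]
    x = x_mm / 1000.0                                 # depth [m]
    t = t_years * 365.25 * 24 * 3600                  # exposure time [s]
    return c_s * (1.0 - math.erf(x / (2.0 * math.sqrt(d_app * t))))

# Chloride profile after 10 years of exposure at several cover depths.
for depth in (10, 20, 30, 40, 50):
    print(f"x = {depth:2d} mm -> C = {chloride_content(depth, 10):.3f} % binder")
```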

  16. Smart techniques in the dynamic spectrum allocation for cognitive wireless networks

    Directory of Open Access Journals (Sweden)

    Camila Salgado

    2016-09-01

    Full Text Available Objective: The objective of this work is to study the application of different artificial intelligence and autonomous learning techniques to dynamic spectrum allocation in cognitive wireless networks, especially distributed ones. Method: The work was developed through the study and analysis of some of the most relevant publications in the current literature, retrieved by searching international journals indexed in ISI and Scopus. Results: The most relevant artificial intelligence and autonomous learning techniques were identified, as were those with the greatest applicability to spectrum allocation in cognitive wireless networks. Conclusions: The implementation of a technique, or a set of them, depends on the needs in signal processing, trade-offs in response times, sample availability, storage capacity, learning ability and robustness, among others.

  17. Applied techniques for high bandwidth data transfers across wide area networks

    International Nuclear Information System (INIS)

    Lee, Jason; Gunter, Dan; Tierney, Brian; Allcock, Bill; Bester, Joe; Bresnahan, John; Tuecke, Steve

    2001-01-01

    Large distributed systems such as Computational/Data Grids require large amounts of data to be co-located with the computing facilities for processing. Ensuring that the data is there in time for the computation in today's Internet is a massive problem. From our work developing a scalable distributed network cache, we have gained experience with techniques necessary to achieve high data throughput over high bandwidth Wide Area Networks (WAN). In this paper, we discuss several hardware and software design techniques and issues, and then describe their application to an implementation of an enhanced FTP protocol called GridFTP. We also describe results from two applications using these techniques, which were obtained at the Supercomputing 2000 conference
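
    One of the standard software techniques in this line of work is sizing TCP socket buffers to at least the bandwidth-delay product so that a single stream can fill a long, fat pipe (GridFTP additionally uses parallel streams). The sketch below shows buffer sizing only; the host, port and link figures are illustrative assumptions.

```python
# Minimal sketch of one technique used for high-throughput WAN transfers:
# sizing TCP socket buffers to (at least) the bandwidth-delay product.
# Host, port and link figures are illustrative assumptions.
import socket

def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product in bytes."""
    return int(bandwidth_bps * rtt_s / 8)

def connect_with_big_buffers(host, port, bandwidth_bps=1e9, rtt_s=0.08):
    bdp = bdp_bytes(bandwidth_bps, rtt_s)          # e.g. 1 Gb/s * 80 ms = 10 MB
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Request buffers of at least one BDP; the kernel may clamp these values.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp)
    s.connect((host, port))
    return s

if __name__ == "__main__":
    print("BDP for 1 Gb/s, 80 ms RTT:", bdp_bytes(1e9, 0.08), "bytes")
```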

  18. Simulation into Reality: Some Effects of Simulation Techniques on Organizational Communication Students.

    Science.gov (United States)

    Allen, Richard K.

    In an attempt to discover improved classroom teaching methods, a class was turned into a business organization as a way of bringing life to the previously covered lectures and textual materials. The simulated games were an attempt to get people to work toward a common goal with all of the power plays, secret meetings, brainstorming, anger, and…

  19. Developing Simulated Cyber Attack Scenarios Against Virtualized Adversary Networks

    Science.gov (United States)

    2017-03-01

    enclave, as shown in Figure 11, is a common design for many secure networks. Different variations of a cyber-attack scenario can be rehearsed based...achieved a greater degree of success against multiple variations of an enemy network. E. ATTACK TYPES A primary goal of this thesis is to define and...2013. [33] R. Goldberg, “Architectural principles for virtual computer systems,” Ph.D. dissertation, Dept. of Comp. Sci., Harvard Univ., Cambridge

  20. Computational Intelligence based techniques for islanding detection of distributed generation in distribution network: A review

    International Nuclear Information System (INIS)

    Laghari, J.A.; Mokhlis, H.; Karimi, M.; Bakar, A.H.A.; Mohamad, Hasmaini

    2014-01-01

    Highlights: • Unintentional and intentional islanding, their causes, and solutions are presented. • Remote, passive, active and hybrid islanding detection techniques are discussed. • The limitations of these techniques in accurately detecting islanding are discussed. • The ability of computational intelligence techniques to detect islanding is discussed. • A review of ANN, fuzzy logic control, ANFIS and decision tree techniques is provided. - Abstract: Accurate and fast islanding detection of distributed generation is highly important for its successful operation in distribution networks. Up to now, various islanding detection techniques based on communication, passive, active and hybrid methods have been proposed. However, each technique suffers from certain demerits that cause inaccuracies in islanding detection. Computational intelligence based techniques, due to their robustness and flexibility in dealing with complex nonlinear systems, are an option that might solve this problem. This paper aims to provide a comprehensive review of computational intelligence based techniques applied to islanding detection of distributed generation. Moreover, the paper compares the accuracy of computational intelligence based techniques with that of existing techniques, to provide useful information for industry and utility researchers in determining the best method for their respective system
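
    As a hedged illustration of how a computational intelligence technique can be applied to this problem (not an example taken from the review), the sketch below trains a small neural-network classifier on passive measurements such as frequency deviation, voltage deviation and rate of change of frequency. The feature distributions are synthetic stand-ins for simulated or measured relay-point signals, and scikit-learn is an assumed library choice.

```python
# Illustrative sketch: an ANN classifier labelling "islanded" vs "grid-connected"
# from passive measurements.  The feature data are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# columns: |df| (Hz), |dV| (pu), ROCOF (Hz/s) -- grid-connected cases (assumed distributions)
grid = np.abs(rng.normal([0.02, 0.01, 0.05], [0.01, 0.005, 0.03], size=(n, 3)))
# islanded cases drift further from nominal (assumed distributions)
island = np.abs(rng.normal([0.4, 0.08, 1.2], [0.2, 0.04, 0.6], size=(n, 3)))
X = np.vstack([grid, island])
y = np.array([0] * n + [1] * n)            # 1 = islanding

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```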

  1. COMPUTER DYNAMICS SIMULATION OF DRUG DEPENDENCE THROUGH ARTIFICIAL NEURONAL NETWORK: PEDAGOGICAL AND CLINICAL IMPLICATIONS

    Directory of Open Access Journals (Sweden)

    G. SANTOS

    2008-05-01

    Full Text Available The main goal and challenge of this work was to develop and evaluate the efficiency of software able to simulate a virtual patient at different stages of addiction. We developed the software in the Borland™ Delphi 5® programming language. Artificial intelligence techniques, neuronal networks and expert systems, were responsible for modeling the neurobiological structures and mechanisms of their interaction with the drugs used. Dynamic simulation and hypermedia were designed to increase the software’s interactivity, which was able to show graphical information from virtual instrumentation and from a realistic functional magnetic resonance imaging display. The program was initially designed to be used by undergraduate students to improve their learning of neurophysiology, based not only on the interaction of membrane receptors with drugs but on a larger behavioral simulation. The experimental manipulation of the software was accomplished by: (i) creating a virtual patient progressing from a normal mood to behavioral addiction by gradually increasing alcohol, opiate or cocaine doses; (ii) designing an approach to treat the patient and obtain total or partial remission of the behavioral disorder by combining psychopharmacology and psychotherapy. The integration of dynamic simulation with hypermedia and artificial intelligence was able to point out behavioral details such as tolerance, sensitization and level of addiction to drugs of abuse, turning the software into a potentially useful tool for teaching activities as well as clinical skills, in which it could assist patients, families and health care providers to improve and test their knowledge and skills about different facets of drug dependency. Those features are currently under investigation.

  2. Simulation of the fissureless technique for thoracoscopic segmentectomy using rapid prototyping.

    Science.gov (United States)

    Akiba, Tadashi; Nakada, Takeo; Inagaki, Takuya

    2015-01-01

    The fissureless lobectomy or anterior fissureless technique is a novel surgical technique, which avoids dissection of the lung parenchyma over the pulmonary artery during lobectomy by open thoracotomy approach or direct vision thoracoscopic surgery. This technique is indicated for fused lobes. We present two cases where thoracoscopic pulmonary segmentectomy was performed using the fissureless technique simulated by three-dimensional (3D) pulmonary models. The 3D model and rapid prototyping provided an accurate anatomical understanding of the operative field in both cases. We believe that the construction of these models is useful for thoracoscopic and other complicated surgeries of the chest.

  3. Simulation of the Fissureless Technique for Thoracoscopic Segmentectomy Using Rapid Prototyping

    Science.gov (United States)

    Nakada, Takeo; Inagaki, Takuya

    2014-01-01

    The fissureless lobectomy or anterior fissureless technique is a novel surgical technique, which avoids dissection of the lung parenchyma over the pulmonary artery during lobectomy by open thoracotomy approach or direct vision thoracoscopic surgery. This technique is indicated for fused lobes. We present two cases where thoracoscopic pulmonary segmentectomy was performed using the fissureless technique simulated by three-dimensional (3D) pulmonary models. The 3D model and rapid prototyping provided an accurate anatomical understanding of the operative field in both cases. We believe that the construction of these models is useful for thoracoscopic and other complicated surgeries of the chest. PMID:24633132

  4. Performance improvement of optical CDMA networks with stochastic artificial bee colony optimization technique

    Science.gov (United States)

    Panda, Satyasen

    2018-05-01

    This paper proposes a modified artificial bee colony (ABC) optimization algorithm based on Lévy-flight swarm intelligence, referred to as artificial bee colony Lévy-flight stochastic walk (ABC-LFSW) optimization, for optical code division multiple access (OCDMA) networks. The ABC-LFSW algorithm is used to solve the asset assignment problem based on signal-to-noise ratio (SNR) optimization in OCDMA networks with quality of service constraints. The proposed optimization using the ABC-LFSW algorithm provides methods for minimizing various noises and interferences, regulating the transmitted power and optimizing the network design to improve the power efficiency of the optical code path (OCP) from source node to destination node. In this regard, an optical system model is proposed for improving network performance with optimized input parameters. A detailed discussion and simulation results based on transmitted power allocation and power efficiency of OCPs are included. The experimental results prove the superiority of the proposed network in terms of power efficiency and spectral efficiency in comparison to networks without any power allocation approach.
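
    The sketch below illustrates the general shape of artificial bee colony search combined with Lévy-flight steps (generated with Mantegna's algorithm); the onlooker phase is simplified and the objective is a toy surrogate, not the paper's SNR model with quality-of-service constraints.

```python
# Sketch of ABC search with Levy-flight steps (onlooker phase simplified,
# scout phase replaced by a Levy jump).  The objective is a toy surrogate.
import math
import random

def levy_step(beta=1.5, rng=random):
    # Mantegna's algorithm for a symmetric Levy-stable step length.
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
               (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = rng.gauss(0, sigma_u), rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def abc_levy(objective, dim, bounds, n_food=20, n_iter=200, limit=20, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [objective(f) for f in foods]
    trials = [0] * n_food
    for _ in range(n_iter):
        for i in range(n_food):                       # employed-bee phase
            k, j = rng.randrange(n_food), rng.randrange(dim)
            cand = foods[i][:]
            cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
            cand[j] = min(max(cand[j], lo), hi)
            f = objective(cand)
            if f > fits[i]:
                foods[i], fits[i], trials[i] = cand, f, 0
            else:
                trials[i] += 1
        for i in range(n_food):                       # scout phase with Levy jump
            if trials[i] > limit:
                foods[i] = [min(max(x + 0.1 * levy_step(rng=rng), lo), hi)
                            for x in foods[i]]
                fits[i], trials[i] = objective(foods[i]), 0
    best = max(range(n_food), key=lambda i: fits[i])
    return foods[best], fits[best]

# toy "SNR-like" objective: peak at the centre of the search box
toy = lambda x: -sum((xi - 0.5) ** 2 for xi in x)
print(abc_levy(toy, dim=4, bounds=(0.0, 1.0)))
```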

  5. Application of signal processing techniques for islanding detection of distributed generation in distribution network: A review

    International Nuclear Information System (INIS)

    Raza, Safdar; Mokhlis, Hazlie; Arof, Hamzah; Laghari, J.A.; Wang, Li

    2015-01-01

    Highlights: • Pros & cons of conventional islanding detection techniques (IDTs) are discussed. • The ability of signal processing techniques (SPTs) to detect islanding is discussed. • The ability of SPTs to improve the performance of passive techniques is discussed. • Fourier, s-transform, wavelet, HHT & tt-transform based IDTs are reviewed. • The application of intelligent classifiers (ANN, ANFIS, Fuzzy, SVM) in SPTs is discussed. - Abstract: High penetration of distributed generation resources (DGR) in distribution networks provides many benefits in terms of high power quality, efficiency, and low carbon emissions in the power system. However, efficient islanding detection and immediate disconnection of DGR are critical in order to avoid equipment damage, grid protection interference, and personnel safety hazards. Islanding detection techniques are mainly classified into remote, passive, active, and hybrid techniques. Of these, passive techniques are more advantageous due to lower power quality degradation, lower cost, and widespread usage by power utilities. However, the main limitations of these techniques are that they possess large non-detection zones and require threshold settings. Various signal processing techniques and intelligent classifiers have been used to overcome the limitations of passive islanding detection. Signal processing techniques, in particular, are adopted due to their versatility, stability, cost effectiveness, and ease of modification. This paper presents a comprehensive overview of signal processing techniques used to improve common passive islanding detection techniques. A performance comparison between the signal processing based islanding detection techniques and existing techniques is also provided. Finally, this paper outlines the relative advantages and limitations of the signal processing techniques in order to provide basic guidelines for researchers and field engineers in determining the best method for their system

  6. Simulation and Statistical Inference of Stochastic Reaction Networks with Applications to Epidemic Models

    KAUST Repository

    Moraes, Alvaro

    2015-01-01

    Epidemics have shaped, sometimes more than wars and natural disasters, demographic aspects of human populations around the world, their health habits and their economies. Ebola and the Middle East Respiratory Syndrome (MERS) are clear and current examples of potential hazards at planetary scale. During the spread of an epidemic disease, there are phenomena, like the sudden extinction of the epidemic, that can not be captured by deterministic models. As a consequence, stochastic models have been proposed during the last decades. A typical forward problem in the stochastic setting could be the approximation of the expected number of infected individuals found in one month from now. On the other hand, a typical inverse problem could be, given a discretely observed set of epidemiological data, infer the transmission rate of the epidemic or its basic reproduction number. Markovian epidemic models are stochastic models belonging to a wide class of pure jump processes known as Stochastic Reaction Networks (SRNs), that are intended to describe the time evolution of interacting particle systems where one particle interacts with the others through a finite set of reaction channels. SRNs have been mainly developed to model biochemical reactions but they also have applications in neural networks, virus kinetics, and dynamics of social networks, among others. This PhD thesis is focused on novel fast simulation algorithms and statistical inference methods for SRNs. Our novel Multi-level Monte Carlo (MLMC) hybrid simulation algorithms provide accurate estimates of expected values of a given observable of SRNs at a prescribed final time. They are designed to control the global approximation error up to a user-selected accuracy and up to a certain confidence level, and with near optimal computational work. We also present novel dual-weighted residual expansions for fast estimation of weak and strong errors arising from the MLMC methodology. Regarding the statistical inference
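
    The multilevel Monte Carlo and inference machinery of the thesis is not reproduced here, but the single-level building block, Gillespie's stochastic simulation algorithm applied to an SIR epidemic viewed as a stochastic reaction network with two channels, can be sketched briefly. Parameter values are illustrative.

```python
# Minimal sketch: Gillespie's stochastic simulation algorithm (SSA) for an SIR
# epidemic as a stochastic reaction network with two reaction channels,
#   infection  S + I -> 2I   (propensity beta * S * I / N)
#   recovery       I -> R    (propensity gamma * I)
# Parameter values are illustrative.
import random

def ssa_sir(S=990, I=10, R=0, beta=0.3, gamma=0.1, t_end=160.0, seed=0):
    rng = random.Random(seed)
    N, t = S + I + R, 0.0
    path = [(t, S, I, R)]
    while t < t_end and I > 0:
        a_inf = beta * S * I / N
        a_rec = gamma * I
        a_tot = a_inf + a_rec
        t += rng.expovariate(a_tot)              # exponential time to next reaction
        if rng.random() < a_inf / a_tot:         # choose which channel fires
            S, I = S - 1, I + 1
        else:
            I, R = I - 1, R + 1
        path.append((t, S, I, R))
    return path

print("final state (t, S, I, R):", ssa_sir()[-1])
```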

  7. Modeling a secular trend by Monte Carlo simulation of height biased migration in a spatial network.

    Science.gov (United States)

    Groth, Detlef

    2017-04-01

    Background: In a recent Monte Carlo simulation, the clustering of body height of Swiss military conscripts within a spatial network with characteristic features of the natural Swiss geography was investigated. In this study I examined the effect of migration of tall individuals into network hubs on the dynamics of body height within the whole spatial network. The aim of this study was to simulate height trends. Material and methods: Three networks were used for modeling: a regular rectangular fishing-net-like network, a real-world example based on the geographic map of Switzerland, and a random network. All networks contained between 144 and 148 districts and between 265 and 307 road connections. Around 100,000 agents were initially released with an average height of 170 cm and a height standard deviation of 6.5 cm. The simulation was started with the a priori assumption that height variation within a district is limited and also depends on the height of neighboring districts (community effect on height). In addition to a neighborhood influence factor, which simulates a community effect, body-height-dependent migration of conscripts between adjacent districts in each Monte Carlo simulation was used to re-calculate next-generation body heights. In order to determine the direction of migration for taller individuals, various centrality measures for the evaluation of district importance within the spatial network were applied. Taller individuals were favored to migrate into network hubs; backward migration of the same number of individuals was random, not biased towards body height. Network hubs were defined by the importance of a district within the spatial network, evaluated by various centrality measures. In the null model there were no road connections, so height information could not be passed between the districts. Results: Due to the favored migration of tall individuals into network hubs, average body height of the hubs, and later
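
    A heavily simplified sketch of this simulation idea is given below: agents with normally distributed heights occupy the districts of a grid network, the tallest agents of each district migrate to its most central neighbour, and the same number of randomly chosen agents migrates back. Grid size, group sizes and the use of betweenness centrality with networkx are illustrative assumptions, not the study's exact settings.

```python
# Heavily simplified sketch of height-biased migration on a spatial network
# (not the study's full model).  All sizes and parameters are illustrative.
import random
import networkx as nx

rng = random.Random(0)
G = nx.grid_2d_graph(12, 12)                          # 144 districts
centrality = nx.betweenness_centrality(G)             # district "importance"
heights = {d: [rng.gauss(170, 6.5) for _ in range(700)] for d in G}

for generation in range(20):
    for d in list(G):
        hub = max(G.neighbors(d), key=centrality.get)       # most central neighbour
        heights[d].sort()
        movers, heights[d] = heights[d][-10:], heights[d][:-10]   # 10 tallest emigrate
        back = [heights[hub].pop(rng.randrange(len(heights[hub])))
                for _ in range(10)]                          # random backward migration
        heights[hub].extend(movers)
        heights[d].extend(back)

hub_district = max(G, key=centrality.get)
print("mean height in most central district:",
      sum(heights[hub_district]) / len(heights[hub_district]))
```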

  8. Fuzzy-Based Adaptive Hybrid Burst Assembly Technique for Optical Burst Switched Networks

    Directory of Open Access Journals (Sweden)

    Abubakar Muhammad Umaru

    2014-01-01

    Full Text Available The optical burst switching (OBS) paradigm is perceived as an intermediate switching technology for future all-optical networks. Burst assembly, the first process in OBS, is the focus of this paper. In this paper, an intelligent hybrid burst assembly algorithm based on fuzzy logic is proposed. The new algorithm is evaluated against the traditional hybrid burst assembly algorithm and the fuzzy adaptive threshold (FAT) burst assembly algorithm via simulation. Simulation results show that the proposed algorithm outperforms the hybrid and the FAT algorithms in terms of burst end-to-end delay, packet end-to-end delay, and packet loss ratio.
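
    The fuzzy-adaptive logic of the proposed algorithm is not reproduced here, but the baseline it is compared against, the traditional hybrid rule that releases a burst when either a byte threshold or a timer limit is reached, can be sketched as follows. Thresholds are illustrative, and the timer is checked only on packet arrivals for simplicity.

```python
# Sketch of traditional hybrid burst assembly: release a burst when either the
# accumulated size reaches a byte threshold or the assembly timer expires.
# Thresholds are illustrative; the timer is checked only on packet arrivals.
import random

class HybridBurstAssembler:
    def __init__(self, max_bytes=50_000, timeout=0.005):
        self.max_bytes, self.timeout = max_bytes, timeout
        self._reset(start_time=0.0)

    def _reset(self, start_time):
        self.packets, self.size, self.start = [], 0, start_time

    def add_packet(self, arrival_time, length):
        """Feed one packet; return the list of bursts completed by this event."""
        done = []
        if self.packets and arrival_time - self.start >= self.timeout:
            done.append(self.packets)               # timer expired: release burst
            self._reset(start_time=arrival_time)
        if not self.packets:
            self.start = arrival_time               # first packet of a new burst
        self.packets.append(length)
        self.size += length
        if self.size >= self.max_bytes:             # size threshold reached
            done.append(self.packets)
            self._reset(start_time=arrival_time)
        return done

# toy traffic: Poisson packet arrivals with random Ethernet-sized payloads
rng, t, bursts = random.Random(0), 0.0, 0
asm = HybridBurstAssembler()
for _ in range(10_000):
    t += rng.expovariate(2000)                      # ~2000 packets/s (assumed)
    bursts += len(asm.add_packet(t, rng.randint(40, 1500)))
print("bursts assembled:", bursts)
```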

  9. A 3D technique for simulation of irregular electron treatment fields using a digital camera

    International Nuclear Information System (INIS)

    Bassalow, Roustem; Sidhu, Narinder P.

    2003-01-01

    Cerrobend inserts, which define electron field apertures, are manufactured at our institution using perspex templates. Contours are reproduced manually on these templates at the simulator from the field outlines drawn on the skin or mask of a patient. A previously reported technique for simulation of electron treatment fields uses a digital camera to eliminate the need for such templates. However, avoidance of the image distortions introduced by non-flat surfaces on which the electron field outlines were drawn could only be achieved by limiting the application of this technique to surfaces which were flat or near flat. We present a technique that employs a digital camera and allows simulation of electron treatment fields contoured on an anatomical surface of an arbitrary three-dimensional (3D) shape, such as that of the neck, extremities, face, or breast. The procedure is fast, accurate, and easy to perform

  10. An overview of uncertainty quantification techniques with application to oceanic and oil-spill simulations

    KAUST Repository

    Iskandarani, Mohamed; Wang, Shitao; Srinivasan, Ashwanth; Carlisle Thacker, W.; Winokur, Justin; Knio, Omar

    2016-01-01

    We give an overview of four different ensemble-based techniques for uncertainty quantification and illustrate their application in the context of oil plume simulations. These techniques share the common paradigm of constructing a model proxy that efficiently captures the functional dependence of the model output on uncertain model inputs. This proxy is then used to explore the space of uncertain inputs using a large number of samples, so that reliable estimates of the model's output statistics can be calculated. Three of these techniques use polynomial chaos (PC) expansions to construct the model proxy, but they differ in their approach to determining the expansions' coefficients; the fourth technique uses Gaussian Process Regression (GPR). An integral plume model for simulating the Deepwater Horizon oil-gas blowout provides examples for illustrating the different techniques. A Monte Carlo ensemble of 50,000 model simulations is used for gauging the performance of the different proxies. The examples illustrate how regression-based techniques can outperform projection-based techniques when the model output is noisy. They also demonstrate that robust uncertainty analysis can be performed at a fraction of the cost of the Monte Carlo calculation.
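
    As a minimal sketch of the proxy-based workflow (using the Gaussian Process Regression variant, with scikit-learn as an assumed library choice), the example below fits a surrogate to a handful of runs of a toy "expensive" model and then estimates output statistics by sampling the cheap surrogate.

```python
# Minimal sketch of the proxy-based workflow: fit a GPR surrogate to a small
# number of "expensive" model runs, then estimate output statistics by sampling
# the cheap surrogate.  The model below is a toy stand-in, not an oil-plume code.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_model(x):
    """Toy model: output versus two uncertain inputs (e.g. flow rate, droplet size)."""
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(40, 2))            # 40 "expensive" runs
y_train = expensive_model(X_train)

gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                               normalize_y=True).fit(X_train, y_train)

X_mc = rng.uniform(0, 1, size=(50_000, 2))           # cheap sampling of the proxy
y_proxy = gpr.predict(X_mc)
print("proxy-based mean/std:", y_proxy.mean(), y_proxy.std())
print("direct MC mean/std:  ", expensive_model(X_mc).mean(), expensive_model(X_mc).std())
```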

  11. An overview of uncertainty quantification techniques with application to oceanic and oil-spill simulations

    KAUST Repository

    Iskandarani, Mohamed

    2016-04-22

    We give an overview of four different ensemble-based techniques for uncertainty quantification and illustrate their application in the context of oil plume simulations. These techniques share the common paradigm of constructing a model proxy that efficiently captures the functional dependence of the model output on uncertain model inputs. This proxy is then used to explore the space of uncertain inputs using a large number of samples, so that reliable estimates of the model's output statistics can be calculated. Three of these techniques use polynomial chaos (PC) expansions to construct the model proxy, but they differ in their approach to determining the expansions' coefficients; the fourth technique uses Gaussian Process Regression (GPR). An integral plume model for simulating the Deepwater Horizon oil-gas blowout provides examples for illustrating the different techniques. A Monte Carlo ensemble of 50,000 model simulations is used for gauging the performance of the different proxies. The examples illustrate how regression-based techniques can outperform projection-based techniques when the model output is noisy. They also demonstrate that robust uncertainty analysis can be performed at a fraction of the cost of the Monte Carlo calculation.

  12. Comparison of Available Bandwidth Estimation Techniques in Packet-Switched Mobile Networks

    DEFF Research Database (Denmark)

    López Villa, Dimas; Ubeda Castellanos, Carlos; Teyeb, Oumer Mohammed

    2006-01-01

    The relative contribution of the transport network towards the per-user capacity in mobile telecommunication systems is becoming very important due to the ever increasing air-interface data rates. Thus, resource management procedures such as admission, load and handover control can make use...... of information regarding the available bandwidth in the transport network, as it could end up being the bottleneck rather than the air interface. This paper provides a comparative study of three well known available bandwidth estimation techniques, i.e. TOPP, SLoPS and pathChirp, taking into account...

  13. Development of neural network techniques for the analysis of JET ECE data

    International Nuclear Information System (INIS)

    Bartlett, D.V.; Bishop, C.M.

    1993-01-01

    This paper reports on a project currently in progress to develop neural network techniques for the conversion of JET ECE spectra to electron temperature profiles. The aim is to obtain profiles with reduced measurement uncertainties by incorporating data from the LIDAR Thomson scattering diagnostic in the analysis, while retaining the faster time resolution of the ECE measurements. The properties of neural networks are briefly reviewed, and the reasons for using them in this application are explained. Some preliminary results are presented and the direction of future work is outlined. (orig.)

  14. Wireless network development for the automatic registration of parameters in laboratories of nuclear analytical techniques

    International Nuclear Information System (INIS)

    Tincopa, Jean Pierre; Baltuano, Oscar; Bedregal, Patricia

    2015-01-01

    This paper presents in detail the development of a low-cost wireless network for automatic recording of temperature and relative humidity parameters in the laboratory of nuclear analytical techniques. The prototype uses a DHT22 sensor, which provides both parameters with high precision; they are automatically read and displayed by an ATmega328P microcontroller. The data are then transmitted through XBee Pro S2B transceivers forming a mesh network, for real-time storage using an RTC (Real Time Clock). We present the experimental results obtained in its implementation. (author)

  15. Simulations in Cyber-Security: A Review of Cognitive Modeling of Network Attackers, Defenders, and Users

    Directory of Open Access Journals (Sweden)

    Vladislav D. Veksler

    2018-05-01

    Full Text Available Computational models of cognitive processes may be employed in cyber-security tools, experiments, and simulations to address human agency and effective decision-making in keeping computational networks secure. Cognitive modeling can address multi-disciplinary cyber-security challenges requiring cross-cutting approaches over the human and computational sciences such as the following: (a) adversarial reasoning and behavioral game theory to predict attacker subjective utilities and decision likelihood distributions, (b) human factors of cyber tools to address human system integration challenges, estimation of defender cognitive states, and opportunities for automation, (c) dynamic simulations involving attacker, defender, and user models to enhance studies of cyber epidemiology and cyber hygiene, and (d) training effectiveness research and training scenarios to address human cyber-security performance, maturation of cyber-security skill sets, and effective decision-making. Models may be initially constructed at the group-level based on mean tendencies of each subject's subgroup, based on known statistics such as specific skill proficiencies, demographic characteristics, and cultural factors. For more precise and accurate predictions, cognitive models may be fine-tuned to each individual attacker, defender, or user profile, and updated over time (based on recorded behavior) via techniques such as model tracing and dynamic parameter fitting.

  16. Simulations in Cyber-Security: A Review of Cognitive Modeling of Network Attackers, Defenders, and Users

    Science.gov (United States)

    Veksler, Vladislav D.; Buchler, Norbou; Hoffman, Blaine E.; Cassenti, Daniel N.; Sample, Char; Sugrim, Shridat

    2018-01-01

    Computational models of cognitive processes may be employed in cyber-security tools, experiments, and simulations to address human agency and effective decision-making in keeping computational networks secure. Cognitive modeling can address multi-disciplinary cyber-security challenges requiring cross-cutting approaches over the human and computational sciences such as the following: (a) adversarial reasoning and behavioral game theory to predict attacker subjective utilities and decision likelihood distributions, (b) human factors of cyber tools to address human system integration challenges, estimation of defender cognitive states, and opportunities for automation, (c) dynamic simulations involving attacker, defender, and user models to enhance studies of cyber epidemiology and cyber hygiene, and (d) training effectiveness research and training scenarios to address human cyber-security performance, maturation of cyber-security skill sets, and effective decision-making. Models may be initially constructed at the group-level based on mean tendencies of each subject's subgroup, based on known statistics such as specific skill proficiencies, demographic characteristics, and cultural factors. For more precise and accurate predictions, cognitive models may be fine-tuned to each individual attacker, defender, or user profile, and updated over time (based on recorded behavior) via techniques such as model tracing and dynamic parameter fitting. PMID:29867661

  17. Simulations in Cyber-Security: A Review of Cognitive Modeling of Network Attackers, Defenders, and Users.

    Science.gov (United States)

    Veksler, Vladislav D; Buchler, Norbou; Hoffman, Blaine E; Cassenti, Daniel N; Sample, Char; Sugrim, Shridat

    2018-01-01

    Computational models of cognitive processes may be employed in cyber-security tools, experiments, and simulations to address human agency and effective decision-making in keeping computational networks secure. Cognitive modeling can address multi-disciplinary cyber-security challenges requiring cross-cutting approaches over the human and computational sciences such as the following: (a) adversarial reasoning and behavioral game theory to predict attacker subjective utilities and decision likelihood distributions, (b) human factors of cyber tools to address human system integration challenges, estimation of defender cognitive states, and opportunities for automation, (c) dynamic simulations involving attacker, defender, and user models to enhance studies of cyber epidemiology and cyber hygiene, and (d) training effectiveness research and training scenarios to address human cyber-security performance, maturation of cyber-security skill sets, and effective decision-making. Models may be initially constructed at the group-level based on mean tendencies of each subject's subgroup, based on known statistics such as specific skill proficiencies, demographic characteristics, and cultural factors. For more precise and accurate predictions, cognitive models may be fine-tuned to each individual attacker, defender, or user profile, and updated over time (based on recorded behavior) via techniques such as model tracing and dynamic parameter fitting.

  18. An analytical simulation technique for cone-beam CT and pinhole SPECT

    International Nuclear Information System (INIS)

    Zhang Xuezhu; Qi Yujin

    2011-01-01

    This study aimed at developing an efficient simulation technique that runs on an ordinary PC. The work involved the derivation of mathematical operators, analytical phantom generation, and the development of effective analytical projectors for cone-beam CT and pinhole SPECT imaging. The computer simulations based on the analytical projectors were developed using a ray-tracing method for cone-beam CT and a voxel-driven method with degrading blurring for pinhole SPECT. The 3D Shepp-Logan, Jaszczak and Defrise phantoms were used for simulation evaluations and image reconstructions. The reconstructed images agreed well with the phantoms. The results showed that the analytical simulation technique is an efficient tool for studying cone-beam CT and pinhole SPECT imaging. (authors)
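
    A much simplified, two-dimensional stand-in for such an analytical projector is sketched below: line integrals through a pixelised phantom are approximated by sampling with bilinear interpolation along each source-to-detector ray of a fan beam (one slice of the cone-beam geometry). The geometry values and phantom are illustrative assumptions, not the paper's operators.

```python
# Simplified 2D sketch of a ray-driven projector: line integrals through a
# pixelised phantom, sampled with bilinear interpolation along each
# source-to-detector ray.  Geometry and phantom are illustrative.
import numpy as np
from scipy.ndimage import map_coordinates

def fan_beam_projection(image, src, detector_pts, n_samples=512):
    """Return one view: the line integral from src (x, y) to every detector point."""
    sino = np.empty(len(detector_pts))
    for i, det in enumerate(detector_pts):
        ts = np.linspace(0.0, 1.0, n_samples)
        xs = src[0] + ts * (det[0] - src[0])          # sample points along the ray
        ys = src[1] + ts * (det[1] - src[1])
        step = np.hypot(det[0] - src[0], det[1] - src[1]) / (n_samples - 1)
        vals = map_coordinates(image, np.vstack([ys, xs]), order=1, mode="constant")
        sino[i] = vals.sum() * step                   # approximate line integral
    return sino

# toy phantom: a disc of value 1 inside a 256x256 image (pixel units)
n = 256
yy, xx = np.mgrid[0:n, 0:n]
phantom = ((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < 60 ** 2).astype(float)

src = (-300.0, n / 2)                                         # point source left of the image
detector = [(n + 300.0, y) for y in np.linspace(0, n, 128)]   # flat detector on the right
view = fan_beam_projection(phantom, src, detector)
print("max line integral (pixel units):", view.max())
```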

  19. Pithy Review on Routing Protocols in Wireless Sensor Networks and Least Routing Time Opportunistic Technique in WSN

    Science.gov (United States)

    Salman Arafath, Mohammed; Rahman Khan, Khaleel Ur; Sunitha, K. V. N.

    2018-01-01

    Nowadays, most telecommunication standards development organizations are focusing on device-to-device communication so that they can provide proximity-based services and add-on services on top of the available cellular infrastructure. Oppnets and wireless sensor networks play a prominent role here. Routing in these networks plays a significant role in fields such as traffic management, packet delivery, etc. Routing is a prodigious research area with diverse unresolved issues. This paper first focuses on the importance of opportunistic routing and its concept; the focus then shifts to a prime aspect, the packet reception ratio, which is one of the most important QoS-awareness parameters. This paper discusses two important functions of routing in wireless sensor networks (WSN), namely route selection using the least routing time algorithm (LRTA) and data forwarding using a clustering technique. Finally, the simulation results reveal that LRTA performs relatively better than the existing system in terms of average packet reception ratio and connectivity.
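
    The abstract does not spell out LRTA, so the sketch below shows only the generic ingredient of least-routing-time selection: Dijkstra's algorithm run over per-link delay estimates. The topology and delay figures are made up for illustration.

```python
# Illustrative sketch of least-time route selection: Dijkstra's algorithm over
# per-link delay estimates.  Topology and delays are made up.
import heapq

def least_time_path(links, src, dst):
    """links: {node: [(neighbour, delay_ms), ...]}; returns (total_delay, path)."""
    best, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:                               # first pop of dst is optimal
            path = [u]
            while u != src:
                u = prev[u]
                path.append(u)
            return d, path[::-1]
        if d > best.get(u, float("inf")):
            continue                               # stale heap entry
        for v, w in links.get(u, []):
            nd = d + w
            if nd < best.get(v, float("inf")):
                best[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return float("inf"), []

links = {                                          # delays in milliseconds (assumed)
    "A": [("B", 4.0), ("C", 2.5)],
    "B": [("D", 3.0)],
    "C": [("B", 1.0), ("D", 6.0)],
    "D": [],
}
print(least_time_path(links, "A", "D"))            # -> (6.5, ['A', 'C', 'B', 'D'])
```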

  20. Multi-agent: a technique to implement geo-visualization of networked virtual reality

    Science.gov (United States)

    Lin, Zhiyong; Li, Wenjing; Meng, Lingkui

    2007-06-01

    Networked Virtual Reality (NVR) is a system based on network connectivity and shared spatial information, whose demands cannot be fully met by the existing architectures and application patterns of VR. In this paper, we propose a new architecture for NVR based on a Multi-Agent framework, which includes detailed definitions of the various agents and their functions and a full description of the collaboration mechanism. Through a prototype system test with DEM data and 3D model data, the advantages of the Multi-Agent based Networked Virtual Reality system in terms of data loading time, user response time and scene construction time, etc., are verified. First, we introduce the characteristics of Networked Virtual Reality and of the Multi-Agent technique in Section 1. Then we give the architectural design of Networked Virtual Reality based on Multi-Agent in Section 2. Section 2 covers the rules of task division, the Multi-Agent architecture designed to implement Networked Virtual Reality, and the functions of the agents. Section 3 shows the prototype implementation according to the design. Finally, Section 4 discusses the benefits of using Multi-Agent techniques to implement geo-visualization of Networked Virtual Reality.