WorldWideScience

Sample records for network simulation testbed

  1. Network testbed creation and validation

    Energy Technology Data Exchange (ETDEWEB)

    Thai, Tan Q.; Urias, Vincent; Van Leeuwen, Brian P.; Watts, Kristopher K.; Sweeney, Andrew John

    2017-03-21

    Embodiments of network testbed creation and validation processes are described herein. A "network testbed" is a replicated environment used to validate a target network or an aspect of its design. Embodiments describe a network testbed that comprises virtual testbed nodes executed via a plurality of physical infrastructure nodes. The virtual testbed nodes utilize these hardware resources as a network "fabric," thereby enabling rapid configuration and reconfiguration of the virtual testbed nodes without requiring reconfiguration of the physical infrastructure nodes. Thus, in contrast to prior art solutions, which require that a tester manually build an emulated environment of physically connected network devices, embodiments receive or derive a target network description and build out a replica of this description using virtual testbed nodes executed via the physical infrastructure nodes. This process allows for the creation of very large (e.g., tens of thousands of network elements) and/or very topologically complex test networks.
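
    The abstract above describes deriving a target network description and instantiating virtual testbed nodes across a pool of physical infrastructure nodes. A minimal sketch of that placement idea follows; the data layout and the simple round-robin strategy are illustrative assumptions, not the patent's actual mechanism.

```python
# Sketch: place virtual testbed nodes (taken from a target network
# description) onto physical infrastructure nodes. Names and the simple
# round-robin policy are illustrative assumptions, not the patent's API.
from itertools import cycle

def build_virtual_testbed(target_description, physical_hosts):
    """Assign each virtual node to a physical host and list virtual links."""
    placement = {}
    hosts = cycle(physical_hosts)
    for node in target_description["nodes"]:
        placement[node["name"]] = next(hosts)
    links = [(link["a"], link["b"]) for link in target_description["links"]]
    return placement, links

target = {
    "nodes": [{"name": "r1"}, {"name": "r2"}, {"name": "sw1"}],
    "links": [{"a": "r1", "b": "sw1"}, {"a": "r2", "b": "sw1"}],
}
placement, links = build_virtual_testbed(target, ["host-a", "host-b"])
print(placement)  # {'r1': 'host-a', 'r2': 'host-b', 'sw1': 'host-a'}
print(links)      # [('r1', 'sw1'), ('r2', 'sw1')]
```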

  2. Network testbed creation and validation

    Science.gov (United States)

    Thai, Tan Q.; Urias, Vincent; Van Leeuwen, Brian P.; Watts, Kristopher K.; Sweeney, Andrew John

    2017-04-18

    Embodiments of network testbed creation and validation processes are described herein. A "network testbed" is a replicated environment used to validate a target network or an aspect of its design. Embodiments describe a network testbed that comprises virtual testbed nodes executed via a plurality of physical infrastructure nodes. The virtual testbed nodes utilize these hardware resources as a network "fabric," thereby enabling rapid configuration and reconfiguration of the virtual testbed nodes without requiring reconfiguration of the physical infrastructure nodes. Thus, in contrast to prior art solutions, which require that a tester manually build an emulated environment of physically connected network devices, embodiments receive or derive a target network description and build out a replica of this description using virtual testbed nodes executed via the physical infrastructure nodes. This process allows for the creation of very large (e.g., tens of thousands of network elements) and/or very topologically complex test networks.

  3. Wireless Sensor Networks TestBed: ASNTbed

    CSIR Research Space (South Africa)

    Dludla, AG

    2013-05-01

    Full Text Available Wireless sensor networks (WSNs) have been used in different types of applications and deployed within various environments. Simulation tools are essential for studying WSNs, especially for exploring large-scale networks. However, WSN testbeds...

  4. Optical Network Testbeds Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Joe Mambretti

    2007-06-01

    This is the summary report of the third annual Optical Networking Testbed Workshop (ONT3), which brought together leading members of the international advanced research community to address major challenges in creating next generation communication services and technologies. Networking research and development (R&D) communities throughout the world continue to discover new methods and technologies that are enabling breakthroughs in advanced communications. These discoveries are keystones for building the foundation of the future economy, which requires the sophisticated management of extremely large quantities of digital information through high performance communications. This innovation is made possible by basic research and experiments within laboratories and on specialized testbeds. Initial network research and development initiatives are driven by diverse motives, including attempts to solve existing complex problems, the desire to create powerful new technologies that do not exist using traditional methods, and the need to create tools to address specific challenges, including those mandated by large-scale science or government agency mission agendas. Many new discoveries related to communications technologies transition to widespread deployment through standards organizations and commercialization. These transition paths allow for new communications capabilities that drive many sectors of the digital economy. In the last few years, networking R&D has increasingly focused on advancing multiple new capabilities enabled by next generation optical networking. Both US Federal networking R&D and other national R&D initiatives, such as those organized by the National Institute of Information and Communications Technology (NICT) of Japan, are creating optical networking technologies that allow for new, powerful communication services. Among the most promising services are those based on new types of multi-service or hybrid networks, which use new optical networking

  5. Use of Tabu Search in a Solver to Map Complex Networks onto Emulab Testbeds

    National Research Council Canada - National Science Library

    MacDonald, Jason E

    2007-01-01

    The University of Utah's solver for the testbed mapping problem uses a simulated annealing metaheuristic algorithm to map a researcher's experimental network topology onto available testbed resources...
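
    The record above contrasts a tabu search solver with the simulated annealing used by Emulab's mapper. A generic simulated-annealing skeleton for such a mapping problem is sketched below; the cost function (virtual links split across physical hosts) and all parameter values are illustrative assumptions, not Emulab's actual objective.

```python
# Sketch: simulated annealing for mapping virtual nodes onto physical hosts.
# The objective and parameters are illustrative, not the Emulab solver's.
import math
import random

def anneal(vnodes, vlinks, pnodes, steps=5000, t0=1.0, cooling=0.999):
    """Map each virtual node to a physical node, minimizing 'cut' links."""
    mapping = {v: random.choice(pnodes) for v in vnodes}

    def cost(m):
        # Illustrative objective: virtual links whose endpoints land on
        # different physical hosts (each consumes inter-host bandwidth).
        return sum(1 for a, b in vlinks if m[a] != m[b])

    cur = cost(mapping)
    best, best_cost, t = dict(mapping), cur, t0
    for _ in range(steps):
        v = random.choice(vnodes)
        old = mapping[v]
        mapping[v] = random.choice(pnodes)      # propose a random re-mapping
        new = cost(mapping)
        if new <= cur or random.random() < math.exp((cur - new) / max(t, 1e-9)):
            cur = new                            # accept (possibly uphill) move
            if cur < best_cost:
                best, best_cost = dict(mapping), cur
        else:
            mapping[v] = old                     # reject the move
        t *= cooling
    return best, best_cost

# Toy example: a 4-node chain mapped onto 2 physical hosts.
nodes = ["a", "b", "c", "d"]
links = [("a", "b"), ("b", "c"), ("c", "d")]
print(anneal(nodes, links, ["p1", "p2"]))
```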

  6. Reproducing and Extending Real Testbed Evaluation of GeoNetworking Implementation in Simulated Networks

    OpenAIRE

    Tao, Ye; Tsukada, Manabu; LI, Xin; Kakiuchi, Masatoshi; Esaki, Hiroshi

    2016-01-01

    International audience; Vehicular Ad-hoc Network (VANET) is a type of Mobile Ad-hoc Network (MANET) which is specialized for vehicle communication. GeoNetworking is a newly standardized network layer protocol for VANETs which employs geolocation-based routing. However, conducting large-scale experiments with GeoNetworking software is extremely difficult, since it requires many additional resources such as vehicles, staff, places, terrain, etc. In this paper, we propose a method to reproduce realistic res...

  7. Technology Developments Integrating a Space Network Communications Testbed

    Science.gov (United States)

    Kwong, Winston; Jennings, Esther; Clare, Loren; Leang, Dee

    2006-01-01

    As future manned and robotic space explorations missions involve more complex systems, it is essential to verify, validate, and optimize such systems through simulation and emulation in a low cost testbed environment. The goal of such a testbed is to perform detailed testing of advanced space and ground communications networks, technologies, and client applications that are essential for future space exploration missions. We describe the development of new technologies enhancing our Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) that enable its integration in a distributed space communications testbed. MACHETE combines orbital modeling, link analysis, and protocol and service modeling to quantify system performance based on comprehensive considerations of different aspects of space missions. It can simulate entire networks and can interface with external (testbed) systems. The key technology developments enabling the integration of MACHETE into a distributed testbed are the Monitor and Control module and the QualNet IP Network Emulator module. Specifically, the Monitor and Control module establishes a standard interface mechanism to centralize the management of each testbed component. The QualNet IP Network Emulator module allows externally generated network traffic to be passed through MACHETE to experience simulated network behaviors such as propagation delay, data loss, orbital effects and other communications characteristics, including entire network behaviors. We report a successful integration of MACHETE with a space communication testbed modeling a lunar exploration scenario. This document is the viewgraph slides of the presentation.
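
    The QualNet IP Network Emulator module described above passes externally generated traffic through simulated network behaviors such as propagation delay and loss. The sketch below shows that general idea in a few lines; the delay and loss values are illustrative assumptions and this is not MACHETE's or QualNet's interface.

```python
# Sketch: the kind of behavior an IP network emulator layer imposes on
# pass-through traffic -- one-way propagation delay and random loss.
import random

def emulate_link(packets, one_way_delay_s, loss_prob):
    """Return (arrival_time, payload) pairs for packets that survive the link."""
    delivered = []
    for send_time, payload in packets:
        if random.random() < loss_prob:
            continue  # packet dropped on the emulated link
        delivered.append((send_time + one_way_delay_s, payload))
    return delivered

# Earth-Moon one-way light time is roughly 1.3 s; 1% loss assumed here.
traffic = [(t, f"pkt-{t}") for t in range(5)]
print(emulate_link(traffic, one_way_delay_s=1.3, loss_prob=0.01))
```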

  8. Simulation and experimental testbed for adaptive video streaming in ad hoc networks

    OpenAIRE

    Gonzalez-Martinez, Santiago Renan; Castellanos Hernández, Wilder Eduardo; Guzman Castillo, Paola Fernanda; Arce Vila, Pau; Guerri Cebollada, Juan Carlos

    2016-01-01

    This paper presents a performance evaluation of the scalable video streaming over mobile ad hoc networks. In particular, we focus on the rate-adaptive method for streaming scalable video (H.264/SVC). For effective adaptation a new cross-layer routing protocol is introduced. This protocol provides an efficient algorithm for available bandwidth estimation. With this information, the video source adjusts its bit rate during the video transmission according to the network state. We also propose a...

  9. Cognitive Optical Network Testbed: EU Project CHRON

    DEFF Research Database (Denmark)

    Borkowski, Robert; Duran, Ramon J.; Kachris, Christoforos

    2015-01-01

    The aim of cognition in optical networks is to introduce intelligence into the control plane that allows for autonomous end-to-end performance optimization and minimization of required human intervention, particularly targeted at heterogeneous network scenarios. A cognitive network observes, learns, and makes informed decisions based on its current status and knowledge about past decisions and their results. To test the operation of cognitive algorithms in real time, we created the first operational testbed of a cognitive optical network based on the Cognitive Heterogeneous Reconfigurable Optical Network (CHRON) architecture. In this experiment, an intelligent control plane, enabled by a cognitive decision system (CDS), was successfully combined with a flexible data plane. The testbed was used to test and validate different scenarios, demonstrating benefits obtained by network cognition...

  10. SSERVI Analog Regolith Simulant Testbed Facility

    Science.gov (United States)

    Minafra, J.; Schmidt, G. K.

    2016-12-01

    SSERVI's goals include supporting planetary researchers within NASA, other government agencies; private sector and hardware developers; competitors in focused prize design competitions; and academic sector researchers. The SSERVI Analog Regolith Simulant Testbed provides opportunities for research scientists and engineers to study the effects of regolith analog testbed research in the planetary exploration field. This capability is essential to help understand the basic effects of continued long-term exposure to a simulated analog test environment. The current facility houses approximately eight tons of JSC-1A lunar regolith simulant in a test bin consisting of a 4 meter by 4 meter area. SSERVI provides a bridge between several groups, joining together researchers from: 1) scientific and exploration communities, 2) multiple disciplines across a wide range of planetary sciences, and 3) domestic and international communities and partnerships. This testbed provides a means of consolidating the tasks of acquisition, storage and safety mitigation in handling large quantities of regolith simulant. Facility hardware and environment testing scenarios include, but are not limited to, the following: Lunar surface mobility, Dust exposure and mitigation, Regolith handling and excavation, Solar-like illumination, Lunar surface compaction profile, Lofted dust, Mechanical properties of lunar regolith, and Surface features (i.e., grades and rocks). Benefits range from easy access to a controlled analog regolith simulant testbed, and planetary exploration activities at NASA Research Park, to academia and expanded commercial opportunities in California's Silicon Valley, as well as public outreach and education opportunities.

  11. SSERVI Analog Regolith Simulant Testbed Facility

    Science.gov (United States)

    Minafra, Joseph; Schmidt, Gregory; Bailey, Brad; Gibbs, Kristina

    2016-10-01

    The Solar System Exploration Research Virtual Institute (SSERVI) at NASA's Ames Research Center in California's Silicon Valley was founded in 2013 to act as a virtual institute that provides interdisciplinary research centered on the goals of its supporting directorates: NASA Science Mission Directorate (SMD) and the Human Exploration & Operations Mission Directorate (HEOMD). Primary research goals of the Institute revolve around the integration of science and exploration to gain knowledge required for the future of human space exploration beyond low Earth orbit. SSERVI intends to leverage existing JSC-1A regolith simulant resources into the creation of a regolith simulant testbed facility. The purpose of this testbed concept is to provide the planetary exploration community with a readily available capability to test hardware and conduct research in a large simulant environment. SSERVI's goals include supporting planetary researchers within NASA, other government agencies; private sector and hardware developers; competitors in focused prize design competitions; and academic sector researchers. SSERVI provides opportunities for research scientists and engineers to study the effects of regolith analog testbed research in the planetary exploration field. This capability is essential to help to understand the basic effects of continued long-term exposure to a simulated analog test environment. The current facility houses approximately eight tons of JSC-1A lunar regolith simulant in a test bin consisting of a 4 meter by 4 meter area, including dust mitigation and safety oversight. Facility hardware and environment testing scenarios could include: Lunar surface mobility, Dust exposure and mitigation, Regolith handling and excavation, Solar-like illumination, Lunar surface compaction profile, Lofted dust, Mechanical properties of lunar regolith, Surface features (i.e., grades and rocks). Numerous benefits vary from easy access to a controlled analog regolith simulant testbed, and

  12. Easy as Pi: A Network Coding Raspberry Pi Testbed

    DEFF Research Database (Denmark)

    W. Sørensen, Chres; Hernandez Marcano, Nestor Javier; Cabrera Guerrero, Juan A.

    2016-01-01

    ...not only due to the inherent costs of the hardware, but also due to maintenance challenges. In this paper, we present the required key steps to design, set up and maintain an inexpensive testbed using Raspberry Pi devices for communications and storage networks with network coding capabilities. This testbed can be utilized for any applications...

  13. Termite: Emulation Testbed for Encounter Networks

    Directory of Open Access Journals (Sweden)

    Rodrigo Bruno

    2015-08-01

    Full Text Available Cutting-edge mobile devices like smartphones and tablets are equipped with various infrastructureless wireless interfaces, such as WiFi Direct and Bluetooth. Such technologies allow for novel mobile applications that take advantage of casual encounters between co-located users. However, the need to mimic the behavior of real-world encounter networks makes testing and debugging of such applications hard tasks. We present Termite, an emulation testbed for encounter networks. Our system allows developers to run their applications on a virtual encounter network emulated by software. Developers can model arbitrary encounter networks and specify user interactions on the emulated virtual devices. To facilitate testing and debugging, developers can place breakpoints, inspect the runtime state of virtual nodes, and run experiments in a stepwise fashion. Termite defines its own Petri Net variant to model the dynamically changing topology and synthesize user interactions with virtual devices. The system is designed to efficiently multiplex an underlying emulation hosting infrastructure across multiple developers, and to support heterogeneous mobile platforms. Our current system implementation supports virtual Android devices communicating over WiFi Direct networks and runs on top of a local cloud infrastructure. We evaluated our system using emulator network traces, and found that Termite is expressive and performs well.
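
    Termite models dynamically changing topologies with its own Petri Net variant, whose details the abstract does not give. For intuition only, a minimal classical Petri net step (places, transitions, token firing) is sketched below, with an encounter between two co-located devices modeled as a transition; this is an assumption for illustration, not Termite's actual formalism.

```python
# Sketch: a minimal classical Petri net (places, transitions, token firing),
# used here only to give intuition about modeling device encounters.
def enabled(marking, transition):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Consume input tokens and produce output tokens."""
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# Two devices A and B: a token in "near_AB" enables an encounter that moves
# a message token from A to B.
marking = {"near_AB": 1, "msg_at_A": 1}
encounter = {"in": {"near_AB": 1, "msg_at_A": 1}, "out": {"msg_at_B": 1}}
if enabled(marking, encounter):
    marking = fire(marking, encounter)
print(marking)  # {'near_AB': 0, 'msg_at_A': 0, 'msg_at_B': 1}
```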

  14. Virtual Network Computing Testbed for Cybersecurity Research

    Science.gov (United States)

    2015-08-17


  15. In-Space Networking on NASA's SCAN Testbed

    Science.gov (United States)

    Brooks, David E.; Eddy, Wesley M.; Clark, Gilbert J.; Johnson, Sandra K.

    2016-01-01

    The NASA Space Communications and Navigation (SCaN) Testbed, an external payload onboard the International Space Station, is equipped with three software defined radios and a flight computer for supporting in-space communication research. New technologies being studied using the SCaN Testbed include advanced networking, coding, and modulation protocols designed to support the transition of NASA's mission systems from primarily point-to-point data links and preplanned routes towards adaptive, autonomous internetworked operations needed to meet future mission objectives. Networking protocols implemented on the SCaN Testbed include the Advanced Orbiting Systems (AOS) link-layer protocol, Consultative Committee for Space Data Systems (CCSDS) Encapsulation Packets, Internet Protocol (IP), Space Link Extension (SLE), CCSDS File Delivery Protocol (CFDP), and Delay-Tolerant Networking (DTN) protocols including the Bundle Protocol (BP) and Licklider Transmission Protocol (LTP). The SCaN Testbed end-to-end system provides three S-band data links and one Ka-band data link to exchange space and ground data through NASA's Tracking Data Relay Satellite System or a direct-to-ground link to ground stations. The multiple data links and nodes provide several upgradable elements on both the space and ground systems. This paper will provide a general description of the testbed's system design and capabilities, discuss in detail the design and lessons learned in the implementation of the network protocols, and describe future plans for continuing research to meet the communication needs for evolving global space systems.

  16. Towards a Perpetual Sensor Network Testbed without Backchannel

    DEFF Research Database (Denmark)

    Johansen, Aslak; Bonnet, Philippe; Sørensen, Thomas

    2012-01-01

    The sensor network testbeds available today rely on a communication channel different from the mote radio - a backchannel - to facilitate mote reprogramming, health monitoring and performance analysis. Such backchannels are either supported as wired communication channels (USB or Ethernet), or via an extra board coupled to the mote under test, and thus introduce significant constraints on the testbed setup and on the mote power budget. With Greenlab, we wish to study the performance behavior of outdoor sensor networks where energy is harvested from the environment. We thus cannot rely on a backchannel ... participating in a user-defined experiment. We evaluate performance by analyzing the overhead our approach introduces.

  17. Delay Tolerant Networking on NASA's Space Communication and Navigation Testbed

    Science.gov (United States)

    Johnson, Sandra; Eddy, Wesley

    2016-01-01

    This presentation covers the status of the implementation of open source software that implements the specifications developed by the CCSDS working group. Interplanetary Overlay Network (ION) is open source software that implements specifications developed by two international working groups, the IETF and CCSDS. ION was implemented on the SCaN Testbed, a testbed located on an external pallet on the ISS, by the GRC team. The presentation will cover the architecture of the system, high-level implementation details, and issues porting ION to VxWorks.

  18. Easy as Pi: A Network Coding Raspberry Pi Testbed

    Directory of Open Access Journals (Sweden)

    Chres W. Sørensen

    2016-10-01

    Full Text Available In the near future, upcoming communications and storage networks are expected to tolerate major difficulties produced by huge amounts of data being generated from the Internet of Things (IoT). For these types of networks, strategies and mechanisms based on network coding have appeared as an alternative to overcome these difficulties in a holistic manner, e.g., without sacrificing the benefit of a given network metric when improving another. There have been recurrent issues with: (i) making large-scale deployments akin to the Internet of Things; (ii) assessing and (iii) replicating the obtained results in preliminary studies. Therefore, finding testbeds that can deal with large-scale deployments and not lose historic data in order to evaluate these mechanisms is greatly needed and desirable from a research perspective. However, this can be hard to manage, not only due to the inherent costs of the hardware, but also due to maintenance challenges. In this paper, we present the required key steps to design, set up and maintain an inexpensive testbed using Raspberry Pi devices for communications and storage networks with network coding capabilities. This testbed can be utilized for any applications requiring results replicability.
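
    As a rough illustration of the network coding such a testbed exercises, the sketch below performs random linear network coding over GF(2), i.e., XOR combinations of the packets in a generation. Generation size and payloads are illustrative assumptions; practical testbeds typically use a larger field such as GF(2^8), which is not shown here.

```python
# Sketch: random linear network coding over GF(2) -- each coded packet is the
# XOR of a randomly chosen subset of the source packets in a generation.
import random

def rlnc_encode_gf2(generation, num_coded):
    """Produce coded packets as (coefficient_vector, xor_payload) pairs."""
    coded = []
    size = len(generation)
    for _ in range(num_coded):
        coeffs = [random.randint(0, 1) for _ in range(size)]
        if not any(coeffs):
            coeffs[random.randrange(size)] = 1  # avoid the all-zero combination
        payload = bytes(len(generation[0]))     # zero buffer of packet length
        for c, pkt in zip(coeffs, generation):
            if c:
                payload = bytes(a ^ b for a, b in zip(payload, pkt))
        coded.append((coeffs, payload))
    return coded

gen = [b"\x01\x02", b"\x03\x04", b"\x05\x06"]   # a generation of 3 packets
for coeffs, payload in rlnc_encode_gf2(gen, num_coded=4):
    print(coeffs, payload.hex())
```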

  19. Development of a Testbed for Wireless Underground Sensor Networks

    Directory of Open Access Journals (Sweden)

    Mehmet C. Vuran

    2010-01-01

    Full Text Available Wireless Underground Sensor Networks (WUSNs) constitute one of the promising application areas of the recently developed wireless sensor networking techniques. A WUSN is a specialized kind of Wireless Sensor Network (WSN) that mainly focuses on the use of sensors that communicate through soil. Recent models for the wireless underground communication channel have been proposed, but few field experiments have been carried out to verify their accuracy. The realization of field WUSN experiments proved to be extremely complex and time-consuming in comparison with the traditional wireless environment. To the best of our knowledge, this is the first work that proposes guidelines for the development of an outdoor WUSN testbed with the goals of improving the accuracy of, and reducing the time required for, WUSN experiments. Although the work mainly targets WUSNs, many of the presented practices can also be applied to generic WSN testbeds.

  20. ASE-BAN, a Wireless Body Area Network Testbed

    DEFF Research Database (Denmark)

    Madsen, Jens Kargaard; Karstoft, Henrik; Toftegaard, Thomas Skjødeberg

    2010-01-01

    Miniature Body Area Networks used in health care support greater mobility to patients and reduce actual hospitalization. This paper presents the preliminary implementation of a wireless body area network gateway. It is designed to implement the gateway functionality between sensors/actuators attached to the body and a host server application. The gateway uses the BlackFin BF533 processor from Analog Devices, and uses Bluetooth for wireless communication. Two types of sensors are attached to the network: an electro-cardio-gram sensor and an oximeter sensor. The testbed has been successfully tested for electro-cardio-gram data collection, and using wireless communication in a battery powered configuration.

  1. A MIMO-OFDM Testbed for Wireless Local Area Networks

    Directory of Open Access Journals (Sweden)

    Conrat Jean-Marc

    2006-01-01

    Full Text Available We describe the design steps and final implementation of a MIMO OFDM prototype platform developed to enhance the performance of wireless LAN standards such as HiperLAN/2 and 802.11, using multiple transmit and multiple receive antennas. We first describe the channel measurement campaign used to characterize the indoor operational propagation environment, and analyze the influence of the channel on code design through a ray-tracing channel simulator. We also comment on some antenna and RF issues which are of importance for the final realization of the testbed. Multiple coding, decoding, and channel estimation strategies are discussed and their respective performance-complexity trade-offs are evaluated over the realistic channel obtained from the propagation studies. Finally, we present the design methodology, including cross-validation of the Matlab, C++, and VHDL components, and the final demonstrator architecture. We highlight the increased measured performance of the MIMO testbed over the single-antenna system.

  2. Evolution of a Simulation Testbed into an Operational Tool

    Science.gov (United States)

    Sheth, Kapil; Bilimoria, Karl D.; Sridhar, Banavar; Sterenchuk, Mike; Niznik, Tim; O'Neill, Tom; Clymer, Alexis; Gutierrez Nolasco, Sebastian; Edholm, Kaj; Shih, Fu-Tai

    2017-01-01

    This paper describes the evolution over a 20-year period of the Future ATM (Air Traffic Management) Concepts Evaluation Tool (FACET) from a National Airspace System (NAS) based simulation testbed into an operational tool. FACET was developed as a testbed for assessing futuristic ATM concepts, e.g., automated conflict detection and resolution. NAS Constraint Evaluation and Notification Tool (NASCENT) is an application, within FACET, for alerting airspace users of inefficiencies in flight operations and advising time- and fuel-saving reroutes. It is currently in use at American Airlines Integrated Operations Center in Fort Worth, TX. The concepts assessed, research conducted, and the operational capability developed, along with the NASA support and achievements, are presented in this paper.

  3. A Dynamically Reconfigurable Wireless Sensor Network Testbed for Multiple Routing Protocols

    Directory of Open Access Journals (Sweden)

    Wenxian Jiang

    2017-01-01

    Full Text Available Because wireless sensor networks (WSNs) are complex and difficult to deploy and manage, appropriate structures are required to make these networks more flexible. In this paper, a reconfigurable testbed is presented, which supports dynamic protocol switching through a novel architecture and is evaluated in experiments with several different protocols. The separation of the control and data planes in this testbed means that routing configuration and data transmission are independent. A programmable flow table provides the testbed with the ability to switch protocols dynamically. We experiment on various aspects of the testbed to analyze its functionality and performance. The results demonstrate that sensors in the testbed are easy to manage and can support multiple protocols. We then raise some important issues that should be investigated in future work concerning the testbed.
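
    The programmable flow table mentioned above can be pictured as a mapping from a flow identifier to the routing protocol that handles it, which the control plane rewrites at run time. The sketch below is an illustrative assumption of that idea (field names and protocol labels are invented), not the paper's implementation.

```python
# Sketch: a programmable "flow table" that maps a packet's flow identifier to
# the routing protocol handling it, so protocols can be switched at run time.
flow_table = {"temperature": "ctp", "alarm": "flooding"}

def handle(packet):
    """Data plane: look up the flow and forward with the configured protocol."""
    protocol = flow_table.get(packet["flow"], "default")
    return f"packet from node {packet['src']} forwarded via {protocol}"

print(handle({"flow": "temperature", "src": 7}))
flow_table["temperature"] = "aodv"      # control plane reconfigures the flow
print(handle({"flow": "temperature", "src": 7}))
```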

  4. Wireless sensor network testbed: A gateway for future solutions

    CSIR Research Space (South Africa)

    Abu-Mahfouz, Adnan M

    2012-10-01

    Full Text Available MI2T-WSN is a multi-level infrastructure of interconnected testbeds of large-scale WSNs. MI2T-WSN consists of 1000 sensor motes that will be distributed into four different testbeds. The variations of these testbeds will allow for the implementation...

  5. High Precision Testbed to Evaluate Ethernet Performance for In-Car Networks

    DEFF Research Database (Denmark)

    Revsbech, Kasper; Madsen, Tatiana Kozlova; Schiøler, Henrik

    2012-01-01

    Validating safety-critical real-time systems such as in-car networks often involves a model-based performance analysis of the network. An important issue when performing such an analysis is providing precise model parameters that match the actual equipment. One way to obtain such parameters is to derive them by measurements of the equipment. In this work we describe the design of a testbed enabling active measurements on up to 1 Gb/s copper-based Ethernet switches. By use of the testbed itself, we conduct a series of tests where the precision of the testbed is estimated. We find a maximum error...

  6. Decision Support Tool and Simulation Testbed for Airborne Spacing and Merging in Super Dense Operations Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The key innovation in this effort is the development of a decision support tool and simulation testbed for Airborne Spacing and Merging (ASM). We focus on concepts...

  7. A Simulation Testbed for Dynamic Air Corridors within the Next Generation Air Transportation System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The key innovation in this effort is the development of a simulation testbed for identifying dynamic air corridors that can increase aircraft throughput in and...

  8. Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Programs - San Mateo Testbed Analysis Plan [supporting datasets - San Mateo Testbed]

    Science.gov (United States)

    2017-06-26

    This zip file contains files of data to support FHWA-JPO-16-370, Analysis, Modeling, and Simulation (AMS) Testbed Development and Evaluation to Support Dynamic Mobility Applications (DMA) and Active Transportation and Demand Management (ATDM) Program...

  9. Real-Time Emulation of Heterogeneous Wireless Networks with End-to-Edge Quality of Service Guarantees: The AROMA Testbed

    Directory of Open Access Journals (Sweden)

    Anna Umbert

    2010-01-01

    Full Text Available This work presents and describes the real-time testbed for all-IP Beyond 3G (B3G) heterogeneous wireless networks that has been developed in the framework of the European IST AROMA project. The main objective of the AROMA testbed is to provide a highly accurate and realistic framework where the performance of algorithms, policies, protocols, services, and applications for a complete heterogeneous wireless network can be fully assessed and evaluated before bringing them to a real system. The complexity of the interaction between all-IP B3G systems and user applications, while dealing with the Quality of Service (QoS) concept, motivates the development of this kind of emulation platform, where different solutions can be tested in realistic conditions that could not be achieved by means of simple offline simulations. This work provides an in-depth description of the AROMA testbed, emphasizing many interesting implementation details and lessons learned during the development of the tool that may prove helpful to other researchers and system engineers in the development of similar emulation platforms. Several case studies are also presented in order to illustrate the full potential and capabilities of the presented emulation platform.

  10. Deployment of a Testbed in a Brazilian Research Network using IPv6 and Optical Access Technologies

    Science.gov (United States)

    Martins, Luciano; Ferramola Pozzuto, João; Olimpio Tognolli, João; Chaves, Niudomar Siqueira De A.; Reggiani, Atilio Eduardo; Hortêncio, Claudio Antonio

    2012-04-01

    This article presents the implementation of a testbed and the experimental results obtained with it on the Brazilian Experimental Network of the government-sponsored "GIGA Project." The use of IPv6 integrated with current and emerging optical architectures and technologies, such as dense wavelength division multiplexing and 10-gigabit Ethernet in the core and gigabit-capable passive optical network and optical distribution network in the access, was tested. These protocols, architectures, and optical technologies are promising and part of a brand new worldwide technological scenario that is being adopted in the networks of enterprises and providers around the world.

  11. ASE-BAN, a Wireless Body Area Network Testbed

    DEFF Research Database (Denmark)

    Madsen, Jens Kargaard; Karstoft, Henrik; Hansen, Finn Overgaard

    2010-01-01

    It is designed to implement the gateway functionality between sensors/actuators attached to the body and a host server application. The gateway uses the BlackFin BF533 processor from Analog Devices, and uses Bluetooth for wireless communication. Two types of sensors are attached to the network: an electro-cardio-gram sensor and an oximeter sensor. The testbed has been successfully tested for electro-cardio-gram data collection, and using wireless communication in a battery powered configuration.

  12. On-line Configuration of Network Emulator for Intelligent Energy System Testbed Applications

    DEFF Research Database (Denmark)

    Kemal, Mohammed Seifu; Iov, Florin; Olsen, Rasmus Løvenstein

    2015-01-01

    ...a mechanism for on-line configuration and monitoring of heterogeneous communication technologies implemented at the smart energy system testbed of Aalborg University. It proposes a model with three main components: a network emulator used to emulate the communication scenarios using KauNet, a graphical user interface for visualizing, configuring and monitoring the emulated scenarios, and a network socket linking the graphic server and network emulation server on-line. Specifically, our focus area is to build a model that gives us the ability to look at some of the challenges of implementing inter...

  13. A smart grid simulation testbed using Matlab/Simulink

    Science.gov (United States)

    Mallapuram, Sriharsha; Moulema, Paul; Yu, Wei

    2014-06-01

    The smart grid is the integration of computing and communication technologies into a power grid with a goal of enabling real time control, and a reliable, secure, and efficient energy system [1]. With the increased interest of the research community and stakeholders towards the smart grid, a number of solutions and algorithms have been developed and proposed to address issues related to smart grid operations and functions. Those technologies and solutions need to be tested and validated before implementation using software simulators. In this paper, we developed a general smart grid simulation model in the MATLAB/Simulink environment, which integrates renewable energy resources, energy storage technology, load monitoring and control capability. To demonstrate and validate the effectiveness of our simulation model, we created simulation scenarios and performed simulations using a real-world data set provided by the Pecan Street Research Institute.

  14. A Wireless Testbed Development for a Telediagnosis and Telemammography Network

    Science.gov (United States)

    2007-01-01

    detect microcalcifications in mammograms to help doctors detect breast cancer. The technologies that needed to be researched included creating GUIs in ... engineering in the cure of breast cancer, specifically, developing communications networks and image processing techniques for the early detection ... and diagnosis of breast cancer. Providing mammographic services to women in underserved areas via telemammography is very important. With remote

  15. Investigation of Asymmetric Thrust Detection with Demonstration in a Real-Time Simulation Testbed

    Science.gov (United States)

    Chicatelli, Amy K.; Rinehart, Aidan W.; Sowers, T. Shane; Simon, Donald L.

    2016-01-01

    The purpose of this effort is to develop, demonstrate, and evaluate three asymmetric thrust detection approaches to aid in the reduction of asymmetric thrust-induced aviation accidents. This paper presents the results from that effort and their evaluation in simulation studies, including those from a real-time flight simulation testbed. Asymmetric thrust is recognized as a contributing factor in several Propulsion System Malfunction plus Inappropriate Crew Response (PSM+ICR) aviation accidents. As an improvement over the state-of-the-art, providing annunciation of asymmetric thrust to alert the crew may hold safety benefits. For this, the reliable detection and confirmation of asymmetric thrust conditions is required. For this work, three asymmetric thrust detection methods are presented along with their results obtained through simulation studies. Representative asymmetric thrust conditions are modeled in simulation based on failure scenarios similar to those reported in aviation incident and accident descriptions. These simulated asymmetric thrust scenarios, combined with actual aircraft operational flight data, are then used to conduct a sensitivity study regarding the detection capabilities of the three methods. Additional evaluation results are presented based on pilot-in-the-loop simulation studies conducted in the NASA Glenn Research Center (GRC) flight simulation testbed. Data obtained from this flight simulation facility are used to further evaluate the effectiveness and accuracy of the asymmetric thrust detection approaches. Generally, the asymmetric thrust conditions are correctly detected and confirmed.
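
    One simple way to detect and confirm asymmetric thrust, in the spirit of the approaches evaluated above, is a persistence-checked threshold on the normalized difference between left and right engine thrust estimates. The sketch below uses illustrative threshold and persistence values; it is not the paper's tuned detection logic.

```python
# Sketch: flag asymmetric thrust once the normalized left/right difference
# stays above a threshold for several consecutive samples (confirmation).
def detect_asymmetry(left, right, threshold=0.15, persist=5):
    """Return the sample index at which asymmetry is confirmed, else None."""
    count = 0
    for i, (l, r) in enumerate(zip(left, right)):
        ref = max(abs(l), abs(r), 1e-6)          # normalize by the larger thrust
        if abs(l - r) / ref > threshold:
            count += 1
            if count >= persist:
                return i
        else:
            count = 0                             # reset on a normal sample
    return None

left = [1.0] * 20
right = [1.0] * 8 + [0.6] * 12                   # simulated thrust loss on one engine
print(detect_asymmetry(left, right))             # confirmed a few samples after onset
```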

  16. James Webb Space Telescope Optical Simulation Testbed: Segmented Mirror Phase Retrieval Testing

    Science.gov (United States)

    Laginja, Iva; Egron, Sylvain; Brady, Greg; Soummer, Remi; Lajoie, Charles-Philippe; Bonnefois, Aurélie; Long, Joseph; Michau, Vincent; Choquet, Elodie; Ferrari, Marc; Leboulleux, Lucie; Mazoyer, Johan; N’Diaye, Mamadou; Perrin, Marshall; Petrone, Peter; Pueyo, Laurent; Sivaramakrishnan, Anand

    2018-01-01

    The James Webb Space Telescope (JWST) Optical Simulation Testbed (JOST) is a hardware simulator designed to produce JWST-like images. A model of the JWST three-mirror anastigmat is realized with three lenses in the form of a Cooke triplet, which provides JWST-like optical quality over a field equivalent to a NIRCam module, and an Iris AO segmented mirror with hexagonal elements stands in for the JWST segmented primary. This setup successfully produces images extremely similar to NIRCam images from cryotesting in terms of the PSF morphology and sampling relative to the diffraction limit. The testbed is used for staff training of the wavefront sensing and control (WFS&C) team and for independent analysis of WFS&C scenarios of the JWST. Algorithms like geometric phase retrieval (GPR) that may be used in flight and potential upgrades to JWST WFS&C will be explored. We report on the current status of the testbed after alignment, implementation of the segmented mirror, and testing of phase retrieval techniques. This optical bench complements other work at the Makidon laboratory at the Space Telescope Science Institute, including the investigation of coronagraphy for segmented aperture telescopes. Beyond JWST, we intend to use JOST for WFS&C studies for future large segmented space telescopes such as LUVOIR.

  17. LTE-Enhanced Cognitive Radio Network Testbed (LTE-CORNET)

    Science.gov (United States)

    2016-11-01

    Software-defined LTE systems, Amarisoft's LTE100, are installed on two PCs and one mobile workstation. The open-source LTE library srsLTE is available on a third PC. A fourth PC can be used to implement interference waveforms, among others. (The source report also includes figures of an omni-directional, ceiling-mounted antenna, the UEs, and the internal network layout.)

  18. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs - San Mateo testbed analysis plan : final report.

    Science.gov (United States)

    2016-06-29

    The primary objective of this project is to develop multiple simulation testbeds/transportation models to : evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation : management (ATDM) strategies. The outpu...

  19. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs -- calibration report for San Mateo testbed.

    Science.gov (United States)

    2016-08-22

    The primary objective of this project is to develop multiple simulation Testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active and dynamic transportation management (ATDM) strategies. The outputs (...

  20. Developing a virtualised testbed environment in preparation for testing of network based attacks

    CSIR Research Space (South Africa)

    Van Heerden, RP

    2013-11-01

    Full Text Available the authors to reset the simulation environment before each test and mitigated against the damage that an attack potentially inflicts on the test network. Without simulated network traffic, the virtualised network was too sterile. This resulted in any network...

  1. Packet Tracer network simulator

    CERN Document Server

    Jesin, A

    2014-01-01

    A practical, fast-paced guide that gives you all the information you need to successfully create networks and simulate them using Packet Tracer. Packet Tracer Network Simulator is aimed at students, instructors, and network administrators who wish to use this simulator to learn how to perform networking instead of investing in expensive, specialized hardware. This book assumes that you have a good amount of Cisco networking knowledge, and it will focus more on Packet Tracer rather than networking.

  2. Experimental validation of optical layer performance monitoring using an all-optical network testbed

    Science.gov (United States)

    Vukovic, Alex; Savoie, Michel J.; Hua, Heng

    2004-11-01

    Communication transmission systems continue to evolve towards higher data rates, increased wavelength densities, longer transmission distances and more intelligence. Further development of dense wavelength division multiplexing (DWDM) and all-optical networks (AONs) will demand ever-tighter monitoring to assure a specified quality of service (QoS). Traditional monitoring methods have proven to be insufficient. The higher degree of self-control, intelligence and optimization required for functions within next-generation networks demands that new monitoring schemes be developed and deployed. Both the perspective on and the challenges of performance monitoring, together with its techniques, requirements and drivers, are discussed. It is pointed out that optical layer monitoring is a key enabler for self-control of next-generation optical networks. Aside from its real-time feedback and the safeguarding of neighbouring channels, optical performance monitoring ensures the ability to build and control complex network topologies while efficiently maintaining a high QoS. Within an all-optical network testbed environment, key performance monitoring parameters are identified, assessed through real-time proof-of-concept, and proposed for network applications for the safeguarding of neighbouring channels in WDM systems.

  3. MSFC Robotic Lunar Lander Testbed and Current Status of the International Lunar Network (ILN) Anchor Nodes Mission

    Science.gov (United States)

    Cohen, Barbara; Bassler, Julie; Harris, Danny; Morse, Brian; Reed, Cheryl; Kirby, Karen; Eng, Douglas

    2009-01-01

    The lunar lander robotic exploration testbed at Marshall Space Flight Center provides a test environment for robotic lander test articles, components and algorithms to reduce the risk to airless-body designs during lunar landing. Also included is a chart comparing the two different types of anchor nodes for the International Lunar Network (ILN): solar/battery and the Advanced Stirling Radioisotope Generator (ASRG).

  4. Optimizing Electric Vehicle Coordination Over a Heterogeneous Mesh Network in a Scaled-Down Smart Grid Testbed

    DEFF Research Database (Denmark)

    Bhattarai, Bishnu Prasad; Lévesque, Martin; Maier, Martin

    2015-01-01

    ...smart grid (SG) is still at the developmental stage to address those issues. In this regard, a smart grid testbed (SGT) is desirable to develop, analyze, and demonstrate various novel SG solutions, namely demand response, real-time pricing, and congestion management. In this paper, a novel SGT is developed in a laboratory by scaling a 250 kVA, 0.4 kV real low-voltage distribution feeder down to 1 kVA, 0.22 kV. Information and communication technology is integrated in the scaled-down network to establish real-time monitoring and control. The novelty of the developed testbed is demonstrated...
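
    The paper scales a 250 kVA, 0.4 kV feeder down to 1 kVA, 0.22 kV. One standard way to preserve network behavior under such scaling is to keep impedances equal in per-unit, so physical impedances scale by the ratio of base impedances; the arithmetic is sketched below as an assumption about the general approach, not a statement of the authors' exact method.

```python
# Sketch: per-unit scaling from a 250 kVA / 0.4 kV feeder to a 1 kVA / 0.22 kV
# laboratory testbed, keeping per-unit impedances equal.
def base_impedance(s_va, v_volts):
    """Base impedance Z_base = V^2 / S."""
    return v_volts ** 2 / s_va

z_base_full = base_impedance(250e3, 400)   # ohms, full-scale feeder (0.64 ohm)
z_base_lab = base_impedance(1e3, 220)      # ohms, laboratory testbed (48.4 ohm)
scale = z_base_lab / z_base_full
print(f"impedance scale factor: {scale:.1f}x")      # ~75.6x larger in the lab

# Example: a 0.05-ohm full-scale line segment becomes roughly 3.78 ohm.
print(f"lab impedance: {0.05 * scale:.2f} ohm")
```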

  5. A Rural Next Generation Network (R-NGN and Its Testbed

    Directory of Open Access Journals (Sweden)

    Armein Z. R. Langi

    2013-09-01

    Full Text Available Rural Next Generation Networks (R-NGN) technology allows Internet protocol (IP) based systems to be used in rural areas. This paper reports a testbed of R-NGN that uses low-cost Ethernet radio links, combined with media gateways and a softswitch. The network consists of a point-to-point IP Ethernet 2.4 GHz wireless link, IP switches and gateways in each community, and standard copper wires and telephone sets for users. It has low power consumption and is suitable for low-density users. This combination allows low-cost systems as well as multiservices (voice, data, and multimedia) for rural communications. An infrastructure has been deployed in two communities in Cipicung Girang, a village 10 km outside Bandung city, Indonesia. Two towers link the communities with a network of the Institut Teknologi Bandung (ITB) campus. In addition, local wirelines connect community houses to the network. Currently there are four houses connected to each community node (for a total of eight houses), upon which we can perform various tests and measurements.

  6. A Rural Next Generation Network (R-NGN and Its Testbed

    Directory of Open Access Journals (Sweden)

    Armein Z. R. Langi

    2007-05-01

    Full Text Available Rural Next Generation Networks (R-NGN) technology allows Internet protocol (IP) based systems to be used in rural areas. This paper reports a testbed of R-NGN that uses low-cost Ethernet radio links, combined with media gateways and a softswitch. The network consists of a point-to-point IP Ethernet 2.4 GHz wireless link, IP switches and gateways in each community, and standard copper wires and telephone sets for users. It has low power consumption and is suitable for low-density users. This combination allows low-cost systems as well as multiservices (voice, data, and multimedia) for rural communications. An infrastructure has been deployed in two communities in Cipicung Girang, a village 10 km outside Bandung city, Indonesia. Two towers link the communities with a network of the Institut Teknologi Bandung (ITB) campus. In addition, local wirelines connect community houses to the network. Currently there are four houses connected to each community node (for a total of eight houses), upon which we can perform various tests and measurements.

  7. Building a ROS-Based Testbed for Realistic Multi-Robot Simulation: Taking the Exploration as an Example

    Directory of Open Access Journals (Sweden)

    Zhi Yan

    2017-09-01

    Full Text Available While the robotics community agrees that benchmarking is of high importance for objectively comparing different solutions, there are only a few, limited tools to support it. To address this issue in the context of multi-robot systems, we have defined a benchmarking process based on experimental designs, which aims at improving the reproducibility of experiments by making explicit all elements of a benchmark, such as parameters, measurements and metrics. We have also developed a ROS (Robot Operating System) based testbed with the goal of making it easy for users to validate, benchmark, and compare different algorithms, including coordination strategies. Our testbed uses the MORSE (Modular OpenRobots Simulation Engine) simulator for realistic simulation and a computer cluster for decentralized computation. In this paper, we present our testbed in detail, covering the architecture and infrastructure, the issues encountered in implementing the infrastructure, and the automation of the deployment. We also report a series of experiments on multi-robot exploration, in order to demonstrate the capabilities of our testbed.

  8. Implementing Universal Priority DBA Algorithm in PIC based EPON Testbed

    Science.gov (United States)

    Radzi, N. A. M.; Din, N. M.; Al-Mansoori, M. H.; Majid, M. S. A.; Abdullah, F.

    2013-12-01

    Ethernet passive optical network (EPON) is becoming one of the best schemes in the broadband access network, as it combines Ethernet and passive optical network technology. In order to avoid collisions in upstream EPON, a Universal Dynamic Bandwidth Allocation (UDBA) algorithm is proposed in this paper, which can support a universal priority across the entire EPON system. As a proof of concept, the UDBA algorithm is implemented inside a proposed peripheral interface controller (PIC) based EPON testbed in order to evaluate the communication protocol involved. To the best of our knowledge, this is the first time a PIC has been used in an EPON testbed for this purpose. To ensure that the testbed is valid, we compare the results achieved via the testbed with the results achieved via MATLAB simulation in terms of throughput, delay and fairness. The simulation results show good agreement with the experimental results, with some minor expected differences.
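
    A dynamic bandwidth allocation cycle of the kind discussed above boils down to the OLT turning per-ONU queue reports into upstream grants. The sketch below shows a plain limited-service grant-sizing step; the priority handling that makes UDBA "universal" is deliberately omitted, so this is only an illustrative baseline, not the paper's algorithm.

```python
# Sketch: limited-service grant sizing -- each ONU is granted the lesser of
# its reported queue backlog and a fixed per-cycle maximum.
def grant_cycle(reports_bytes, max_grant_bytes):
    """Map ONU queue reports (bytes) to upstream grants (bytes)."""
    return {onu: min(req, max_grant_bytes) for onu, req in reports_bytes.items()}

reports = {"onu1": 1500, "onu2": 12000, "onu3": 300}
print(grant_cycle(reports, max_grant_bytes=5000))
# {'onu1': 1500, 'onu2': 5000, 'onu3': 300}
```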

  9. Implementation of quantum key distribution network simulation module in the network simulator NS-3

    Science.gov (United States)

    Mehic, Miralem; Maurhart, Oliver; Rass, Stefan; Voznak, Miroslav

    2017-10-01

    As the research in quantum key distribution (QKD) technology grows larger and becomes more complex, the need for highly accurate and scalable simulation technologies becomes important to assess the practical feasibility and foresee difficulties in the practical implementation of theoretical achievements. Due to the specificity of a QKD link, which requires optical and Internet connections between the network nodes, deploying a complete testbed containing multiple network hosts and links to validate and verify a certain network algorithm or protocol would be very costly. Network simulators in these circumstances save vast amounts of money and time in accomplishing such a task. The simulation environment offers the creation of complex network topologies, a high degree of control and repeatable experiments, which in turn allows researchers to conduct experiments and confirm their results. In this paper, we describe the design of the QKD network simulation module, which was developed for the network simulator NS-3. The module supports simulation of the QKD network in an overlay mode or in a single TCP/IP mode. Therefore, it can be used to simulate other network technologies regardless of QKD.
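
    A core quantity any QKD network model must track is the key buffer on each link: key material is generated at a limited rate and consumed by the traffic it protects. The sketch below illustrates that bookkeeping with assumed rates and units; it is not the NS-3 module's actual API or parameters.

```python
# Sketch: key-buffer bookkeeping on a QKD link -- keys accumulate at a fixed
# generation rate (up to a capacity) and are consumed by encrypted traffic;
# a step is "blocked" when the buffer cannot cover the demand.
def simulate_key_buffer(gen_rate_bps, demand_bps_per_step, steps, capacity_bits):
    buffer_bits, served, blocked = 0, 0, 0
    for demand in demand_bps_per_step[:steps]:
        buffer_bits = min(buffer_bits + gen_rate_bps, capacity_bits)
        if demand <= buffer_bits:
            buffer_bits -= demand
            served += 1
        else:
            blocked += 1
    return served, blocked, buffer_bits

demand = [500, 2000, 500, 3000, 500]
print(simulate_key_buffer(gen_rate_bps=1000, demand_bps_per_step=demand,
                          steps=5, capacity_bits=5000))
# (4, 1, 500): four steps served, one blocked, 500 bits of key left over
```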

  10. XUNET experimental high-speed network testbed CRADA 1136, DOE TTI No. 92-MULT-020-B2

    Energy Technology Data Exchange (ETDEWEB)

    Palmer, R.E.

    1996-04-01

    XUNET is a research program with AT&T and other partners to study high-speed wide area communication between local area networks over a backbone using Asynchronous Transfer Mode (ATM) switches. Important goals of the project are to develop software techniques for network control and management, and applications for high-speed networks. The project entails building a testbed between member sites to explore performance issues for mixed network traffic such as congestion control, multimedia communications protocols, segmentation and reassembly of ATM cells, and overall data throughput rates.

  11. Virtual Factory Testbed

    Data.gov (United States)

    Federal Laboratory Consortium — The Virtual Factory Testbed (VFT) is comprised of three physical facilities linked by a standalone network (VFNet). The three facilities are the Smart and Wireless...

  12. Implementation of Motion Simulation Software and Visual-Auditory Electronics for Use in a Low Gravity Robotic Testbed

    Science.gov (United States)

    Martin, William Campbell

    2011-01-01

    The Jet Propulsion Laboratory (JPL) is developing the All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) to assist in manned space missions. One of the proposed targets for this robotic vehicle is a near-Earth asteroid (NEA), which would typically exhibit a surface gravity of only a few micro-g. In order to properly test ATHLETE in such an environment, the development team has constructed an inverted Stewart platform testbed that acts as a robotic motion simulator. This project focused on creating physical simulation software that is able to predict how ATHLETE will function on and around a NEA. The corresponding platform configurations are calculated and then passed to the testbed to control ATHLETE's motion. In addition, imitation attitude control thrusters were designed and fabricated for use on ATHLETE. These utilize a combination of high-power LEDs and audio amplifiers to provide visual and auditory cues that correspond to the physics simulation.
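
    Driving an inverted Stewart platform from a physics simulation ultimately requires inverse kinematics: each actuator length is the distance between its base anchor and the corresponding platform anchor after the commanded pose is applied. The sketch below computes those lengths for an assumed, illustrative geometry, not the actual ATHLETE testbed dimensions or interface.

```python
# Sketch: Stewart platform inverse kinematics -- actuator lengths for a
# commanded platform pose (translation + roll/pitch/yaw rotation).
import numpy as np

def leg_lengths(base_pts, plat_pts, translation, rpy):
    """base_pts, plat_pts: (6, 3) anchor arrays; rpy: roll, pitch, yaw in radians."""
    r, p, y = rpy
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx                               # combined rotation matrix
    moved = (R @ plat_pts.T).T + np.asarray(translation)
    return np.linalg.norm(moved - base_pts, axis=1)

# Illustrative geometry: base anchors on a unit ring, platform anchors on a
# smaller ring, commanded to hover 1 m up with a small yaw.
ang = np.deg2rad([0, 60, 120, 180, 240, 300])
base = np.c_[np.cos(ang), np.sin(ang), np.zeros(6)]
plat = np.c_[0.5 * np.cos(ang), 0.5 * np.sin(ang), np.zeros(6)]
print(leg_lengths(base, plat, translation=[0, 0, 1.0], rpy=[0, 0, 0.1]))
```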

  13. Report of the Interagency Optical Network Testbeds Workshop 2 September 12-14, 2006 NASA Ames Research Center

    Energy Technology Data Exchange (ETDEWEB)

    Joe Mambretti Richard desJardins

    2006-05-01

    A new generation of optical networking services and technologies is rapidly changing the world of communications. National and international networks are implementing optical services to supplement traditional packet routed services. On September 12-14, 2005, the Optical Network Testbeds Workshop 2 (ONT2), an invitation-only forum hosted by the NASA Research and Engineering Network (NREN) and co-sponsored by the Department of Energy (DOE), was held at NASA Ames Research Center in Mountain View, California. The aim of ONT2 was to help the Federal Large Scale Networking Coordination Group (LSN) and its Joint Engineering Team (JET) to coordinate testbed and network roadmaps describing agency and partner organization views and activities for moving toward next generation communication services based on leading edge optical networks in the 3-5 year time frame. ONT2 was conceived and organized as a sequel to the first Optical Network Testbeds Workshop (ONT1, August 2004, www.nren.nasa.gov/workshop7). ONT1 resulted in a series of recommendations to LSN. ONT2 was designed to move beyond recommendations to agree on a series of “actionable objectives” that would proactively help federal and partner optical network testbeds and advanced research and education (R&E) networks to begin incorporating technologies and services representing the next generation of advanced optical networks in the next 1-3 years. Participants in ONT2 included representatives from innovative prototype networks (Panel A), basic optical network research testbeds (Panel B), and production R&D networks (Panels C and D), including “JETnets,” selected regional optical networks (RONs), international R&D networks, commercial network technology and service providers (Panel F), and senior engineering and R&D managers from LSN agencies and partner organizations. The overall goal of ONT2 was to identify and coordinate short and medium term activities and milestones for researching, developing, identifying

  14. WFIRST Coronagraph Technology Development Testbeds: Status and Recent Testbed Results

    Science.gov (United States)

    Shi, Fang; An, Xin; Balasubramanian, Kunjithapatham; cady, eric; Gordon, Brian; Greer, Frank; Kasdin, N. Jeremy; Kern, Brian; Lam, Raymond; Marx, David; Moody, Dwight; Patterson, Keith; Poberezhskiy, Ilya; mejia prada, camilo; Gersh-Range, Jessica; Eldorado Riggs, A. J.; Seo, Byoung-Joon; Shields, Joel; Sidick, Erkin; Tang, Hong; Trauger, John Terry; Truong, Tuan; White, Victor; Wilson, Daniel; Zhou, Hanying; JPL WFIRST Testbed Team, Princeton University

    2018-01-01

    As part of the technology development for the WFIRST coronagraph instrument (CGI), dedicated testbeds are built and commissioned at JPL. The coronagraph technology development testbeds include the Occulting Mask Coronagraph (OMC) testbed, the Shaped Pupil Coronagraph/Integral Field Spectrograph (SPC/IFS) testbed, and the Vacuum Surface Gauge (VSG) testbed. With a configuration similar to the WFIRST flight coronagraph instrument, the OMC testbed consists of two coronagraph modes, Shaped Pupil Coronagraph (SPC) and Hybrid Lyot Coronagraph (HLC), a low-order wavefront sensor (LOWFS), and an optical telescope assembly (OTA) simulator which can generate realistic LoS drift and jitter as well as low-order wavefront error that would be induced by the WFIRST telescope's vibration and thermal changes. The SPC/IFS testbed is a dedicated testbed to test the IFS working with a Shaped Pupil Coronagraph, while the VSG testbed is for measuring and calibrating the deformable mirrors, a key component used for WFIRST CGI's wavefront control. In this poster, we describe the testbed functions and status as well as highlights of the latest testbed results from the OMC, SPC/IFS and VSG testbeds.

  15. Networked seduction: a test-bed for the study of strategic communication on the Internet.

    Science.gov (United States)

    Mantovani, F

    2001-02-01

    One of the emerging features of the Internet is its relational and communicative nature: the initial centrality of the information exchange is moving to the building of online relationships, from friendship to romantic and even sexual relationships. The main goal of this paper is to define a theoretical model for the study of seductive processes on the Internet. In particular, taking up the perspective of the user, a shift of focus is proposed: from the description of the development of interpersonal attraction to the investigation of seduction, considered as a strategic communication process. According to the presented model, the key effort of the subjects involved in a computer-mediated seductive interaction is the negotiation of the meaning of the situation they are involved in. This process usually requires two tasks: the analysis of the characteristics of the communicative environment in which the play of interpersonal attraction develops, and the exploitation of the affordances offered by the communicative environment according to specific strategic goals. The main features of the model are both the focus on the communicative tools employed by the users to reach their relational goals, and the ergonomic characteristics of the networked environment. This approach can be used as a test-bed for the definition of specific hypotheses concerning the development of seductive interaction online.

  16. Context-aware local Intrusion Detection in SCADA systems : a testbed and two showcases

    NARCIS (Netherlands)

    Chromik, Justyna Joanna; Haverkort, Boudewijn R.H.M.; Remke, Anne Katharina Ingrid; Pilch, Carina; Brackmann, Pascal; Duhme, Christof; Everinghoff, Franziska; Giberlein, Artur; Teodorowicz, Thomas; Wieland, Julian

    2017-01-01

    This paper illustrates the use of a testbed that we have developed for context-aware local intrusion detection. This testbed is based on the co-simulation framework Mosaik and allows for the validation of local intrusion detection mechanisms at field stations in power distribution networks. For two

  17. Analysis, modeling, and simulation (AMS) testbed development and evaluation to support dynamic mobility applications (DMA) and active transportation and demand management (ATDM) programs - Pasadena calibration report : draft report.

    Science.gov (United States)

    2017-03-01

    The primary objective of this project is to develop multiple simulation testbeds/transportation models to evaluate the impacts of DMA connected vehicle applications and the active transportation and demand management (ATDM) strategies. The primar...

  18. Computer graphics testbed to simulate and test vision systems for space applications

    Science.gov (United States)

    Cheatham, John B.; Wu, Chris K.; Lin, Y. H.

    1991-01-01

    A system was developed for displaying computer graphics images of space objects, and its use was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense are involved in building accurate physical models of space objects. Also, precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite were created for which the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.

  19. James Webb Space Telescope optical simulation testbed IV: linear control alignment of the primary segmented mirror

    Science.gov (United States)

    Egron, Sylvain; Soummer, Rémi; Lajoie, Charles-Philippe; Bonnefois, Aurélie; Long, Joseph; Michau, Vincent; Choquet, Elodie; Ferrari, Marc; Leboulleux, Lucie; Levecq, Olivier; Mazoyer, Johan; N'Diaye, Mamadou; Perrin, Marshall; Petrone, Peter; Pueyo, Laurent; Sivaramakrishnan, Anand

    2017-09-01

    The James Webb Space Telescope (JWST) Optical Simulation Testbed (JOST) is a tabletop experiment designed to study wavefront sensing and control for a segmented space telescope, such as JWST. With the JWST Science and Operations Center co-located at STScI, JOST was developed to provide both a platform for staff training and a means to test alternate wavefront sensing and control strategies for independent validation or future improvements beyond the baseline operations. The design of JOST reproduces the physics of JWST's three-mirror anastigmat (TMA) using three custom aspheric lenses. It provides image quality similar to JWST (80% Strehl ratio) over a field equivalent to a NIRCam module, but at 633 nm. An Iris AO segmented mirror stands in for the segmented primary mirror of JWST. Actuators allow us to control (1) the 18 segments of the segmented mirror in piston, tip, and tilt and (2) the second lens, which stands in for the secondary mirror, in tip, tilt and x, y, z positions. We present the most recent experimental results for the segmented mirror alignment. Our implementation of the wavefront sensing (WFS) algorithms using phase diversity is tested in simulation and experimentally. The wavefront control (WFC) algorithms, which rely on a linear model for optical aberrations induced by misalignment of the secondary lens and the segmented mirror, are tested and validated both in simulations and experimentally. In this proceeding, we present the performance of the full active optics control loop in the presence of perturbations on the segmented mirror, and we detail the quality of the alignment correction.

  20. Airport Network Flow Simulator

    Science.gov (United States)

    1978-10-01

    The Airport Network Flow Simulator is a FORTRAN IV simulation of the flow of air traffic in the nation's 600 commercial airports. It calculates for any group of selected airports: (a) the landing and take-off (Type A) delays; and (b) the gate departu...

  1. Tower-based greenhouse gas measurement network design—The National Institute of Standards and Technology North East Corridor Testbed

    Science.gov (United States)

    Lopez-Coto, Israel; Ghosh, Subhomoy; Prasad, Kuldeep; Whetstone, James

    2017-09-01

    The North-East Corridor (NEC) Testbed project is the third of three NIST (National Institute of Standards and Technology) greenhouse gas emissions testbeds designed to advance greenhouse gas measurement capabilities. A design approach for a dense observing network combined with atmospheric inversion methodologies is described. The Advanced Research Weather Research and Forecasting Model and the Stochastic Time-Inverted Lagrangian Transport model were used to derive the sensitivity of hypothetical observations to surface greenhouse gas emissions (footprints). Unlike other network design algorithms, an iterative selection algorithm, based on a k-means clustering method, was applied to minimize the similarities between the temporal responses of the sites and maximize sensitivity to the urban emissions contribution. Once a network was selected, a synthetic inversion Bayesian Kalman filter was used to evaluate observing system performance. We present the performance of various measurement network configurations consisting of differing numbers of towers and tower locations. Results show that an overly compact network has decreased spatial coverage, as the spatial information added per site is then suboptimal for covering the largest possible area, whilst networks dispersed too broadly lose the capability to constrain flux uncertainties. In addition, we explore the possibility of using a very dense network of lower-cost, lower-performance sensors characterized by larger uncertainties and temporal drift. Analysis convergence is faster with a large number of observing locations, reducing the response time of the filter. Larger uncertainties in the observations imply lower values of uncertainty reduction. On the other hand, the drift is a bias in nature, which is added to the observations and therefore biases the retrieved fluxes.
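
    As a rough illustration of the selection step described above (not the NIST algorithm itself), the sketch below clusters synthetic candidate-site footprint time series with k-means and keeps one representative site per cluster, so that the chosen sites are temporally dissimilar; all sizes, the footprint matrix and the "keep the most sensitive member" rule are invented for the example.

        # Minimal sketch (not the NIST algorithm): cluster candidate tower sites by the
        # similarity of their synthetic "footprint" time series and keep one
        # representative per cluster, so selected sites are temporally dissimilar.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)

        n_candidates, n_hours = 40, 24 * 30      # hypothetical candidate sites, one month of hourly footprints
        footprints = rng.gamma(2.0, 1.0, size=(n_candidates, n_hours))  # stand-in for transport-model sensitivities

        n_towers = 8                              # network size to select
        km = KMeans(n_clusters=n_towers, n_init=10, random_state=0).fit(footprints)

        selected = []
        for k in range(n_towers):
            members = np.where(km.labels_ == k)[0]
            # within each cluster, keep the site with the largest total sensitivity
            best = members[np.argmax(footprints[members].sum(axis=1))]
            selected.append(int(best))

        print("selected candidate sites:", sorted(selected))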

  2. Simulated Associating Polymer Networks

    Science.gov (United States)

    Billen, Joris

    Telechelic associating polymer networks consist of polymer chains terminated by endgroups that have a different chemical composition than the polymer backbone. When dissolved in a solution, the endgroups cluster together to form aggregates. At low temperature, a strongly connected reversible network is formed and the system behaves like a gel. Telechelic networks are of interest since they are representative of biopolymer networks (e.g. F-actin) and are widely used in medical applications (e.g. hydrogels for tissue engineering, wound dressings) and consumer products (e.g. contact lenses, paint thickeners). In this thesis such systems are studied by means of molecular dynamics/Monte Carlo simulation. First, the system at rest is studied by means of graph theory, and the changes in network topology upon cooling to the gel state are characterized. To this end, an extensive study of the eigenvalue spectrum of the gel network is performed. As a result, an in-depth investigation of the eigenvalue spectra of spatial Erdős–Rényi (ER), scale-free, and small-world networks is carried out. Next, the gel under the application of a constant shear is studied, with a focus on shear banding and the changes in topology under shear. Finally, the relation between the gel transition and percolation is discussed.
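
    The eigenvalue-spectrum analysis mentioned above can be illustrated with a toy computation; the sketch below builds an Erdős–Rényi random graph and histograms the spectrum of its adjacency matrix. The graph size and edge probability are arbitrary stand-ins, not values from the thesis.

        # Toy illustration: eigenvalue spectrum of the adjacency matrix of an
        # Erdos-Renyi random graph (parameters are illustrative only).
        import numpy as np
        import networkx as nx

        G = nx.erdos_renyi_graph(n=500, p=0.02, seed=1)   # random network
        A = nx.to_numpy_array(G)
        eigvals = np.linalg.eigvalsh(A)                   # symmetric matrix -> real spectrum

        hist, edges = np.histogram(eigvals, bins=40, density=True)
        for h, lo, hi in zip(hist, edges[:-1], edges[1:]):
            print(f"[{lo:6.2f}, {hi:6.2f}): {h:.3f}")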

  3. Radiation beamline testbeds for the simulation of planetary and spacecraft environments for human and robotic mission risk assessment

    Science.gov (United States)

    Wilkins, Richard

    The Center for Radiation Engineering and Science for Space Exploration (CRESSE) at Prairie View A&M University, Prairie View, Texas, USA, is establishing an integrated, multi-disciplinary research program on the scientific and engineering challenges faced by NASA and the international space community caused by space radiation. CRESSE focuses on space radiation research directly applicable to astronaut health and safety during future long-term, deep space missions, including Martian, lunar, and other planetary body missions beyond low Earth orbit. The research approach will consist of experimental and theoretical radiation modeling studies utilizing particle accelerator facilities including: 1. NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory; 2. Proton Synchrotron at Loma Linda University Medical Center; and 3. Los Alamos Neutron Science Center (LANSCE) at Los Alamos National Laboratory. Specifically, CRESSE investigators are designing, developing, and building experimental testbeds that simulate the lunar and Martian radiation environments for experiments focused on risk assessment for astronauts and instrumentation. The testbeds have been designated the Bioastronautics Experimental Research Testbeds for Environmental Radiation Nostrum Investigations and Education (BERT and ERNIE). The designs of BERT and ERNIE will allow for a high degree of flexibility and adaptability to modify experimental configurations to simulate planetary surface environments, planetary habitats, and spacecraft interiors. In the nominal configuration, BERT and ERNIE will consist of a set of experimental zones that will simulate the planetary atmosphere (solid CO2 in the case of the Martian surface), the planetary surface, and sub-surface regions. These experimental zones can be used for dosimetry, shielding, biological, and electronic effects radiation studies in support of space exploration missions. BERT and ERNIE are designed to be compatible with the

  4. Radiation Beamline Testbeds for the Simulation of Planetary and Spacecraft Environments for Human and Robotic Mission Risk Assessment

    Science.gov (United States)

    Wilkins, Richard

    2010-01-01

    The Center for Radiation Engineering and Science for Space Exploration (CRESSE) at Prairie View A&M University, Prairie View, Texas, USA, is establishing an integrated, multi-disciplinary research program on the scientific and engineering challenges faced by NASA and the international space community caused by space radiation. CRESSE focuses on space radiation research directly applicable to astronaut health and safety during future long-term, deep space missions, including Martian, lunar, and other planetary body missions beyond low Earth orbit. The research approach will consist of experimental and theoretical radiation modeling studies utilizing particle accelerator facilities including: 1. NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory; 2. Proton Synchrotron at Loma Linda University Medical Center; and 3. Los Alamos Neutron Science Center (LANSCE) at Los Alamos National Laboratory. Specifically, CRESSE investigators are designing, developing, and building experimental testbeds that simulate the lunar and Martian radiation environments for experiments focused on risk assessment for astronauts and instrumentation. The testbeds have been designated the Bioastronautics Experimental Research Testbeds for Environmental Radiation Nostrum Investigations and Education (BERT and ERNIE). The designs of BERT and ERNIE will allow for a high degree of flexibility and adaptability to modify experimental configurations to simulate planetary surface environments, planetary habitats, and spacecraft interiors. In the nominal configuration, BERT and ERNIE will consist of a set of experimental zones that will simulate the planetary atmosphere (solid CO2 in the case of the Martian surface), the planetary surface, and sub-surface regions. These experimental zones can be used for dosimetry, shielding, biological, and electronic effects radiation studies in support of space exploration missions. BERT and ERNIE are designed to be compatible with the

  5. Report of the Interagency Optical Network Testbeds Workshop 2 (ONT2)

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — ...Develop a common vision of the optical network technologies, services, infrastructure, and organizations needed to enable widespread use of optical networks...

  6. An information technology enabled sustainability test-bed (ITEST) for occupancy detection through an environmental sensing network

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Bing; Lam, Khee Poh; Zhang, Rui; Chiou, Yun-Shang [Center for Building Performance and Diagnostics, Carnegie Mellon University, Pittsburgh, PA 15213 (United States); Andrews, Burton; Hoeynck, Michael; Benitez, Diego [Research and Technology Center, Robert BOSCH LLC, Pittsburgh, PA 15212 (United States)

    2010-07-15

    This paper describes a large-scale wireless and wired environmental sensor network test-bed and its application to occupancy detection in an open-plan office building. Detection of occupant presence has been used extensively in built environments for applications such as demand-controlled ventilation and security; however, the ability to discern the actual number of people in a room is beyond the scope of current sensing techniques. To address this problem, a complex sensor network is deployed in the Robert L. Preger Intelligent Workplace comprising a wireless ambient-sensing system, a wired carbon dioxide sensing system, and a wired indoor air quality sensing system. A wired camera network is implemented as well for establishing true occupancy levels to be used as ground truth information for deriving algorithmic relationships with the environmental conditions. To our knowledge, the extensive and diverse ambient-sensing infrastructure of the ITEST setup, as well as its continuous data-collection capability, is unprecedented. Final results indicate that there are significant correlations between measured environmental conditions and occupancy status. An average of 73% accuracy in detecting the number of occupants was achieved by Hidden Markov Models during testing periods. This paper serves as an exploration of the use of ITEST for occupancy detection in offices. In addition, its utility extends to a wide variety of other building technology research areas such as human-centered environmental control, security, and energy-efficient and sustainable green buildings. (author)
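
    A hedged sketch of the general idea, assuming a Gaussian hidden Markov model over a single synthetic CO2-like signal (the paper's actual feature set and models are richer); it uses the third-party hmmlearn package, and every number is invented for the example.

        # Minimal sketch (not the paper's model): fit a Gaussian HMM to synthetic
        # CO2-like sensor readings and decode a hidden "occupancy level" sequence.
        import itertools
        import numpy as np
        from hmmlearn import hmm

        rng = np.random.default_rng(0)

        # synthetic day: empty (~420 ppm), low (~600 ppm) and high (~900 ppm) occupancy
        true_states = np.repeat([0, 1, 2, 1, 0], 200)
        means = np.array([420.0, 600.0, 900.0])
        co2 = means[true_states] + rng.normal(0.0, 30.0, size=true_states.size)
        X = co2.reshape(-1, 1)

        model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=100, random_state=0)
        model.fit(X)
        decoded = model.predict(X)

        # HMM state labels are arbitrary, so report agreement up to relabelling
        agreement = max(np.mean(decoded == np.array(perm)[true_states])
                        for perm in itertools.permutations(range(3)))
        print(f"state agreement after relabelling: {agreement:.2%}")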

  7. Use of Tabu Search in a Solver to Map Complex Networks onto Emulab Testbeds

    Science.gov (United States)

    2007-03-01

    large number of chemical interactions between genes and proteins. Complex networks often display a non-uniform distribution of connectivity among...

  8. GNS3 network simulation guide

    CERN Document Server

    Welsh, Chris

    2013-01-01

    GNS3 Network Simulation Guide is an easy-to-follow yet comprehensive guide which is written in a tutorial format helping you grasp all the things you need for accomplishing your certification or simulation goal. If you are a networking professional who wants to learn how to simulate networks using GNS3, this book is ideal for you. The introductory examples within the book only require minimal networking knowledge, but as the book progresses onto more advanced topics, users will require knowledge of TCP/IP and routing.

  9. Future space-based direct imaging platforms: high fidelity simulations and instrument testbed development

    Science.gov (United States)

    Hicks, Brian A.; Eberhardt, Andrew; SAINT, VNC, LUVOIR

    2017-06-01

    The direct detection and characterization of habitable zone (HZ) Earth-like exoplanets is predicated on the light gathering power of a large telescope operating with tens of milliarcseconds of angular resolution, and at contrast scales on the order of 0.1 ppb. Accessing a statistically significant sample of planets to search for habitable worlds will likely build on the knowledge and infrastructure gained through JWST, later advancing to assembly in space or formation flying approaches that may eventually be used to achieve even greater photometric sensitivity or resolution. In order to address contrast, a means of starlight suppression is needed that contends with complex aperture diffraction. The Visible Nulling Coronagraph (VNC) is one such approach that destructively interferes starlight to enable detection and characterization of extrasolar objects. The VNC is being incorporated into an end-to-end telescope-coronagraph system demonstrator called the Segmented Aperture Interferometric Nulling Testbed (SAINT). Development of the VNC has a rich legacy, and successfully demonstrating its capability with SAINT will mark milestones towards meeting the high-contrast direct imaging needs of future large space telescopes. SAINT merges the VNC with an actively-controlled segmented aperture telescope via a fine pointing system and aims to demonstrate 1e-8 contrast nulling of a segmented aperture at an inner working angle of four diffraction radii over a 20 nm visible bandpass. The system comprises four detectors for wavefront sensing, one of which is the high-contrast focal plane. The detectors provide feedback to control the segmented telescope primary mirror, a fast steering mirror, a segmented deformable mirror, and a delay stage. All of these components must work in concert with passive optical elements that are designed, fabricated, and aligned pairwise to achieve the requisite wavefront symmetry needed to push the state of the art in broadband destructive interferometric

  10. WING/WORLD: An Open Experimental Toolkit for the Design and Deployment of IEEE 802.11-Based Wireless Mesh Networks Testbeds

    Directory of Open Access Journals (Sweden)

    Daniele Miorandi

    2010-01-01

    Wireless Mesh Networks represent an interesting instance of light-infrastructure wireless networks. Due to their flexibility and resiliency to network failures, wireless mesh networks are particularly suitable for incremental and rapid deployments of wireless access networks in both metropolitan and rural areas. This paper illustrates the design and development of an open toolkit aimed at supporting the design of different solutions for wireless mesh networking by enabling real evaluation, validation, and demonstration. The resulting testbed is based on off-the-shelf hardware components and open-source software and is focused on IEEE 802.11 commodity devices. The software toolkit is based on an “open” philosophy and aims at providing the scientific community with a tool for effective and reproducible performance analysis of WMNs. The paper describes the architecture of the toolkit, and its core functionalities, as well as its potential evolutions.

  11. Inter-Vehicular Ad Hoc Networks: From the Ground Truth to Algorithm Design and Testbed Architecture

    Science.gov (United States)

    Giordano, Eugenio

    2011-01-01

    Many of the devices we interact with on a daily basis are currently equipped with wireless connectivity. Soon this will be extended to the vehicles we drive/ride every day. Wirelessly connected vehicles will form a new kind of network that will enable a wide set of innovative applications ranging from enhanced safety to entertainment. To…

  12. Data systems and computer science space data systems: Onboard networking and testbeds

    Science.gov (United States)

    Dalton, Dan

    1991-01-01

    The technical objectives are to develop high-performance, space-qualifiable, onboard computing, storage, and networking technologies. The topics are presented in viewgraph form and include the following: justification; technology challenges; program description; and state-of-the-art assessment.

  13. CyberChild - A simulation test-bed for consciousness studies

    DEFF Research Database (Denmark)

    Cotterill, Rodney M J

    2003-01-01

    The simulated nervous system includes just two senses - hearing and touch - and it drives a set of muscles that serve vocalisation, feeding and bladder control. These functions were chosen because of their relevance to the earliest stages of human life, and the simulation has been given the name Cyber...... of CyberChild behaviour and from the monitoring of its ability to ontogenetically acquire novel reflexes. The author has suggested that this ability is the crucial evolutionary advantage of possessing consciousness. The project is still in its very early stages, and although no suggestion of consciousness...

  14. Adjustable Autonomy Testbed

    Science.gov (United States)

    Malin, Jane T.; Schrenkenghost, Debra K.

    2001-01-01

    The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.

  15. FY 2011 Second Quarter: Demonstration of New Aerosol Measurement Verification Testbed for Present-Day Global Aerosol Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Koch, D

    2011-03-20

    The regional-scale Weather Research and Forecasting (WRF) model is being used by a DOE Earth System Modeling (ESM) project titled “Improving the Characterization of Clouds, Aerosols and the Cryosphere in Climate Models” to evaluate the performance of atmospheric process modules that treat aerosols and aerosol radiative forcing in the Arctic. We are using a regional-scale modeling framework for three reasons: (1) It is easier to produce a useful comparison to observations with a high resolution model; (2) We can compare the behavior of the CAM parameterization suite with some of the more complex and computationally expensive parameterizations used in WRF; (3) we can explore the behavior of this parameterization suite at high resolution. Climate models like the Community Atmosphere Model version 5 (CAM5) being used within the Community Earth System Model (CESM) will not likely be run at mesoscale spatial resolutions (10–20 km) until 5–10 years from now. The performance of the current suite of physics modules in CAM5 at such resolutions is not known, and current computing resources do not permit high-resolution global simulations to be performed routinely. We are taking advantage of two tools recently developed under PNNL Laboratory Directed Research and Development (LDRD) projects for this activity. The first is the Aerosol Modeling Testbed (Fast et al., 2011b), a new computational framework designed to streamline the process of testing and evaluating aerosol process modules over a range of spatial and temporal scales. The second is the CAM5 suite of physics parameterizations that have been ported into WRF so that their performance and scale dependency can be quantified at mesoscale spatial resolutions (Gustafson et al., 2010; with more publications in preparation).

  16. Cross-Network Information Dissemination in Vehicular Ad hoc Networks (VANETs: Experimental Results from a Smartphone-Based Testbed

    Directory of Open Access Journals (Sweden)

    Gianluigi Ferrari

    2013-08-01

    In this work, we present an innovative approach for effective cross-network information dissemination, with applications to vehicular ad hoc networks (VANETs). The proposed approach, denoted as "Cross-Network Effective Traffic Alert Dissemination" (X-NETAD), leverages the spontaneous formation of local WiFi (IEEE 802.11b) VANETs, with direct connections between neighboring vehicles, in order to disseminate, very quickly and inexpensively, traffic alerts received from the cellular network. The proposed communication architecture has been implemented on Android smartphones. The obtained experimental results show that an effective cross-network information dissemination service can entirely rely on smartphone-based communications. This paves the way to future Internet architectures, where vehicles will play a key role as information destinations and sources.

  17. Cross-Network Information Dissemination in Vehicular Ad hoc Networks (VANETs): Experimental Results from a Smartphone-Based Testbed

    National Research Council Canada - National Science Library

    Stefano Busanelli; Filippo Rebecchi; Marco Picone; Nicola Iotti; Gianluigi Ferrari

    2013-01-01

    .... The proposed communication architecture has been implemented on Android smartphones. The obtained experimental results show that an effective cross-network information dissemination service can entirely rely on smartphone-based communications...

  18. DARPA Quantum Network Testbed

    Science.gov (United States)

    2007-07-01

    BB84 running at a wavelength suitable for installed telco fibers (1550 nm).
    QKD Detector, 1 Team: an opto-electronic suite that detects frames of ... running at a wavelength suitable for installed telco fibers (1550 nm) to match the weak-coherent QKD source.
    QKD Protocols, 1 Team: a full implementation ... installed telco fibers (1550 nm).
    QKD Switch, 3 Team: a passive optical switch suitable for establishing, maintaining, and tearing down virtual circuits

  19. Embedded Data Processor and Portable Computer Technology testbeds

    Science.gov (United States)

    Alena, Richard; Liu, Yuan-Kwei; Goforth, Andre; Fernquist, Alan R.

    1993-01-01

    Attention is given to current activities in the Embedded Data Processor and Portable Computer Technology testbed configurations that are part of the Advanced Data Systems Architectures Testbed at the Information Sciences Division at NASA Ames Research Center. The Embedded Data Processor Testbed evaluates advanced microprocessors for potential use in mission and payload applications within the Space Station Freedom Program. The Portable Computer Technology (PCT) Testbed integrates and demonstrates advanced portable computing devices and data system architectures. The PCT Testbed uses both commercial and custom-developed devices to demonstrate the feasibility of functional expansion and networking for portable computers in flight missions.

  20. Fast Physics Testbed for the FASTER Project

    Energy Technology Data Exchange (ETDEWEB)

    Lin, W.; Liu, Y.; Hogan, R.; Neggers, R.; Jensen, M.; Fridlind, A.; Lin, Y.; Wolf, A.

    2010-03-15

    This poster describes the Fast Physics Testbed for the new FAst-physics System Testbed and Research (FASTER) project. The overall objective is to provide a convenient and comprehensive platform for fast turn-around model evaluation against ARM observations and to facilitate development of parameterizations for cloud-related fast processes represented in global climate models. The testbed features three major components: a single column model (SCM) testbed, an NWP-Testbed, and high-resolution modeling (HRM). The web-based SCM-Testbed features multiple SCMs from major climate modeling centers and aims to maximize the potential of the SCM approach to enhance and accelerate the evaluation and improvement of fast physics parameterizations through continuous evaluation of existing and evolving models against historical as well as new/improved ARM and other complementary measurements. The NWP-Testbed aims to capitalize on the large pool of operational numerical weather prediction products. Continuous evaluations of NWP forecasts against observations at ARM sites are carried out to systematically identify the biases and skills of physical parameterizations under all weather conditions. The high-resolution modeling (HRM) activities aim to simulate the fast processes at high resolution to aid in the understanding of the fast processes and their parameterizations. A four-tier HRM framework is established to augment the SCM- and NWP-Testbeds towards eventual improvement of the parameterizations.

  1. Experiments Program for NASA's Space Communications Testbed

    Science.gov (United States)

    Chelmins, David; Reinhart, Richard

    2012-01-01

    NASA developed a testbed for communications and navigation that was launched to the International Space Station in 2012. The testbed promotes new software defined radio (SDR) technologies and addresses associated operational concepts for space-based SDRs, enabled by this first flight of NASA's Space Telecommunications Radio System (STRS) architecture standard. The experiments program consists of a mix of in-house and external experiments from partners in industry, academia, and government. The experiments will investigate key challenges in communications, networking, and global positioning system navigation both on the ground and on orbit. This presentation will discuss some of the key opportunities and challenges for the testbed experiments program.

  2. CoSimulating Communication Networks and Electrical System for Performance Evaluation in Smart Grid

    Directory of Open Access Journals (Sweden)

    Hwantae Kim

    2018-01-01

    In the smart grid research domain, simulation is usually the first choice for study, since the analytic complexity is too high and constructing a testbed is very expensive. However, since the communication infrastructure and the power grid are tightly coupled in the smart grid, a well-defined combination of simulation tools for the two systems is required. Therefore, in this paper, we propose a cosimulation framework called OOCoSim, which consists of OPNET (a network simulation tool) and OpenDSS (a power system simulation tool). An organic and dynamic cosimulation can be realized since both simulators operate on the same computing platform and provide external interfaces through which the simulation can be managed dynamically. In this paper, we provide the OOCoSim design principles, including a synchronization scheme, and detailed descriptions of its implementation. To demonstrate the effectiveness of OOCoSim, we define a smart grid application model and conduct a simulation study to see the impact of the defined application and the underlying network system on the distribution system. The simulation results show that the proposed OOCoSim can successfully simulate the integrated scenario of the power and network systems and accurately capture the effects of networked control in the smart grid.
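
    The record does not detail the synchronization scheme, so the sketch below only illustrates generic lock-step co-simulation: two toy simulators (a stand-in "power" model and a stand-in "network" model with latency) advance in fixed steps and exchange values at each synchronization point. It does not use the OPNET or OpenDSS interfaces.

        # Generic lock-step co-simulation sketch (not OOCoSim's actual interfaces).
        class ToyPowerSim:
            def __init__(self):
                self.voltage = 1.0                      # per-unit feeder voltage

            def step(self, control_setpoint):
                # drift toward the setpoint received over the "network"
                self.voltage += 0.5 * (control_setpoint - self.voltage)
                return self.voltage

        class ToyNetworkSim:
            def __init__(self, delay_steps=2):
                self.queue = [1.0] * delay_steps        # models communication latency

            def step(self, measurement):
                # controller pushes a corrected setpoint; messages are delayed by the queue
                self.queue.append(1.0 + 0.3 * (1.0 - measurement))
                return self.queue.pop(0)

        def cosimulate(n_steps=10, dt=1.0):
            power, net = ToyPowerSim(), ToyNetworkSim()
            setpoint = 1.0
            for k in range(n_steps):
                v = power.step(setpoint)                # advance power system by dt
                setpoint = net.step(v)                  # advance network by dt, get delayed control
                print(f"t={k * dt:4.1f}s  voltage={v:.4f}  next setpoint={setpoint:.4f}")

        if __name__ == "__main__":
            cosimulate()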

  3. Introduction to Network Simulator NS2

    CERN Document Server

    Issariyakul, Teerawat

    2008-01-01

    A beginners' guide for network simulator NS2, an open-source discrete event simulator designed mainly for networking research. It presents two fundamental NS2 concepts: how objects are assembled to create a network and how a packet flows from one object to another

  4. Trace Replay and Network Simulation Tool

    Energy Technology Data Exchange (ETDEWEB)

    2017-09-22

    TraceR is a trace replay tool built upon the ROSS-based CODES simulation framework. TraceR can be used for predicting network performance and understanding network behavior by simulating messaging in High Performance Computing applications on interconnection networks.

  5. Terrestrial Plume Impingement Testbed Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Masten Space Systems proposes to create a terrestrial plume impingement testbed for generating novel datasets for extraterrestrial robotic missions. This testbed...

  6. The DataTAG transatlantic testbed

    CERN Document Server

    Martin, O; Martin-Flatin, J P; Moroni, P; Nae, D; Newman, H; Ravot, S

    2005-01-01

    Wide area network testbeds allow researchers and engineers to test out new equipment, protocols and services in real-life situations, without jeopardizing the stability and reliability of production networks. The Data TransAtlantic Grid (DataTAG) testbed, deployed in 2002 between CERN, Geneva, Switzerland and StarLight, Chicago, IL, USA, is probably the largest testbed built to date. Jointly managed by CERN and Caltech, it is funded by the European Commission, the U.S. Department of Energy and the U.S. National Science Foundation. The main objectives of this testbed are to improve the Grid community's understanding of the networking issues posed by data-intensive Grid applications over transoceanic gigabit networks, design and develop new Grid middleware services, and improve the interoperability of European and U.S. Grid applications in High-Energy and Nuclear Physics. In this paper, we give an overview of this testbed, describe its various topologies over time, and summarize the main lessons learned after...

  7. SDL-based network performance simulation

    Science.gov (United States)

    Yang, Yang; Lu, Yang; Lin, Xiaokang

    2005-11-01

    Specification and Description Language (SDL) is an object-oriented formal language standardized by ITU-T. Though SDL is mainly used for describing communication protocols, in our experience SDL tools also provide an efficient way to simulate network performance. This paper presents our methodology for SDL-based network performance simulation, covering the simulation platform, the simulation modes and the integrated simulation environment. The Telelogic Tau 4.3 SDL suite is used here as the simulation environment, though our methodology is not limited to that software. Finally, an SDL-based simulation of Open Shortest Path First (OSPF) performance in a wireless private network is presented as an example of our methodology, indicating that SDL is indeed an efficient language for network performance simulation.

  8. Introduction to Network Simulator NS2

    CERN Document Server

    Issariyakul, Teerawat

    2012-01-01

    "Introduction to Network Simulator NS2" is a primer providing materials for NS2 beginners, whether students, professors, or researchers for understanding the architecture of Network Simulator 2 (NS2) and for incorporating simulation modules into NS2. The authors discuss the simulation architecture and the key components of NS2 including simulation-related objects, network objects, packet-related objects, and helper objects. The NS2 modules included within are nodes, links, SimpleLink objects, packets, agents, and applications. Further, the book covers three helper modules: timers, ra

  9. Biological transportation networks: Modeling and simulation

    KAUST Repository

    Albi, Giacomo

    2015-09-15

    We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation and angiogenesis) and ion transportation networks (e.g., neural networks) is explained in detail and basic analytical features like the gradient flow structure of the fluid transportation network model and the impact of the model parameters on the geometry and topology of network formation are analyzed. We also present a numerical finite-element based discretization scheme and discuss sample cases of network formation simulations.
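
    A minimal discrete-graph analogue of flow-adapted network formation, under assumed dynamics and parameters (edge conductances are reinforced by the flow they carry and otherwise decay); it is not the paper's continuum model or its finite-element scheme.

        # Discrete analogue (assumed dynamics): solve Kirchhoff flows on a grid graph,
        # reinforce conducting edges, let unused edges decay, and count survivors.
        import numpy as np
        import networkx as nx

        G = nx.convert_node_labels_to_integers(nx.grid_2d_graph(6, 6))
        edges = list(G.edges())
        n, m = G.number_of_nodes(), len(edges)

        C = np.ones(m)                                     # edge conductances
        source, sink = 0, n - 1
        b = np.zeros(n); b[source], b[sink] = 1.0, -1.0    # unit current in/out

        gamma, dt = 0.5, 0.1
        for _ in range(200):
            # weighted graph Laplacian from current conductances
            L = np.zeros((n, n))
            for (u, v), c in zip(edges, C):
                L[u, u] += c; L[v, v] += c
                L[u, v] -= c; L[v, u] -= c
            p = np.zeros(n)
            p[1:] = np.linalg.solve(L[1:, 1:], b[1:])      # ground node 0
            Q = np.array([c * (p[u] - p[v]) for (u, v), c in zip(edges, C)])
            C += dt * (np.abs(Q) ** gamma - C)             # reinforcement minus decay
            C = np.maximum(C, 1e-6)

        kept = sum(1 for c in C if c > 0.1)
        print(f"{kept} of {m} edges survive the adaptation")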

  10. Hierarchical Network Design Using Simulated Annealing

    DEFF Research Database (Denmark)

    Thomadsen, Tommy; Clausen, Jens

    2002-01-01

    networks are described and a mathematical model is proposed for a two level version of the hierarchical network problem. The problem is to determine which edges should connect nodes, and how demand is routed in the network. The problem is solved heuristically using simulated annealing which as a sub...
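
    A generic simulated annealing sketch applied to a toy two-level design problem (choose k hub nodes minimizing total node-to-hub distance); the cost function, neighborhood move and cooling schedule are illustrative, not the authors' formulation.

        # Generic simulated annealing on a toy hub-selection problem.
        import math, random

        random.seed(1)
        nodes = [(random.random(), random.random()) for _ in range(60)]
        k = 5

        def cost(hubs):
            # total distance from every node to its nearest hub
            return sum(min(math.dist(p, nodes[h]) for h in hubs) for p in nodes)

        current = random.sample(range(len(nodes)), k)
        best, best_cost = list(current), cost(current)
        T = 1.0
        for it in range(5000):
            cand = list(current)
            cand[random.randrange(k)] = random.randrange(len(nodes))  # move one hub
            d = cost(cand) - cost(current)
            if d < 0 or random.random() < math.exp(-d / T):           # Metropolis acceptance
                current = cand
                if cost(current) < best_cost:
                    best, best_cost = list(current), cost(current)
            T *= 0.999                                                # geometric cooling

        print("best hubs:", sorted(best), " cost:", round(best_cost, 3))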

  11. Lessons from Korea Telecom's VoIP testbed

    Science.gov (United States)

    Lee, Ki Jong; Yang, Junhwan; Kim, Dongkweon

    2001-07-01

    This paper describes the results and lessons from the voice over IP trial service on Korea Telecom's VoIP testbed. The testbed was made up of four different vendors' systems and solutions, constituting four separate zones. Even though the backbone network of the testbed was not a commercial IP network, we were able to identify engineering parameters essential to packetized voice QoS, and we gained practical know-how. These results will be of much help to traditional telcos confronted with many difficult issues, especially on packet voice networks.

  12. Vectorized algorithms for spiking neural network simulation.

    Science.gov (United States)

    Brette, Romain; Goodman, Dan F M

    2011-06-01

    High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages.
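
    A minimal example of the vectorized style described above: one NumPy update loop for a network of leaky integrate-and-fire neurons, with spike propagation done as a single matrix operation. It is illustrative only; Brian's internal data structures and equation handling are more general, and all parameters here are invented.

        # Vectorized leaky integrate-and-fire network update (illustrative only).
        import numpy as np

        rng = np.random.default_rng(0)
        N, dt, T = 1000, 1e-3, 0.5               # neurons, time step (s), duration (s)
        tau, v_th, v_reset = 20e-3, 1.0, 0.0
        W = (rng.random((N, N)) < 0.02) * 0.05   # sparse random synaptic weights

        v = rng.random(N)                        # membrane potentials
        I_ext = 1.2                              # constant external drive
        spike_count = 0

        for _ in range(int(T / dt)):
            v += dt / tau * (I_ext - v)          # leak + drive, all neurons at once
            spiked = v >= v_th                   # boolean spike vector
            spike_count += int(spiked.sum())
            v += W[:, spiked].sum(axis=1)        # propagate spikes in one matrix op
            v[spiked] = v_reset                  # reset spiking neurons

        print(f"mean firing rate: {spike_count / (N * T):.1f} Hz")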

  13. A user oriented active network simulator

    Science.gov (United States)

    Rao, K. S.; Swamy, M. N. S.

    1980-07-01

    A digital computer simulator for the frequency response and tolerance analysis of an electrical network comprising RLCM elements, ideal operational amplifiers and controlled sources is presented in this tutorial paper. The simulator is based on the 'tableau approach'. Reordering of the sparse tableau matrix is done using the Markowitz criterion, and the diagonal pivots are chosen for simplicity. The simulator also employs dynamic allocation for maximum utilization of memory and faster turnaround time. Three networks are simulated and their results are presented in this paper. A network in which the operational amplifiers are assumed to have single-pole behaviour is also analyzed.
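
    A much-simplified sketch of the same end goal, computing the frequency response of a two-stage RC low-pass filter by assembling and solving a small sparse complex nodal system; it does not reproduce the paper's full tableau formulation or Markowitz pivot ordering, and the component values are arbitrary.

        # Frequency response of a two-stage RC low-pass filter via nodal analysis.
        import numpy as np
        from scipy.sparse import csc_matrix
        from scipy.sparse.linalg import spsolve

        R1 = R2 = 1e3          # ohms
        C1 = C2 = 1e-6         # farads

        for f in (10, 100, 1_000, 10_000):
            w = 2 * np.pi * f
            yR1, yR2 = 1 / R1, 1 / R2
            yC1, yC2 = 1j * w * C1, 1j * w * C2
            # unknowns: node 1 and node 2 voltages; the 1 V source node is eliminated
            Y = csc_matrix(np.array([[yR1 + yR2 + yC1, -yR2],
                                     [-yR2,            yR2 + yC2]], dtype=complex))
            I = np.array([yR1 * 1.0, 0.0], dtype=complex)   # 1 V source injected through R1
            v = spsolve(Y, I)
            print(f"{f:>6} Hz: |Vout/Vin| = {abs(v[1]):.4f}")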

  14. Design, Development and Pre-Flight Testing of the Communications, Navigation, and Networking Reconfigurable Testbed (Connect) to Investigate Software Defined Radio Architecture on the International Space Station

    Science.gov (United States)

    Over, Ann P.; Barrett, Michael J.; Reinhart, Richard C.; Free, James M.; Cikanek, Harry A., III

    2011-01-01

    The Communication Navigation and Networking Reconfigurable Testbed (CoNNeCT) is a NASA-sponsored mission, which will investigate the usage of Software Defined Radios (SDRs) as a multi-function communication system for space missions. A software-defined radio system is a communication system in which typical components of the system (e.g., modulators) are incorporated into software. The software-defined capability allows flexibility and experimentation in different modulation, coding and other parameters to understand their effects on performance. This flexibility builds inherent redundancy and flexibility into the system for improved operational efficiency, real-time changes to space missions and enhanced reliability/redundancy. The CoNNeCT Project is a collaboration between industrial radio providers and NASA. The industrial radio providers are providing the SDRs and NASA is designing, building and testing the entire flight system. The flight system will be integrated on the Express Logistics Carrier (ELC) on the International Space Station (ISS) after launch on the H-IIB Transfer Vehicle in 2012. This paper provides an overview of the technology research objectives, payload description, design challenges and pre-flight testing results.

  15. Independent Peer Review of Communications, Navigation, and Networking re-Configurable Testbed (CoNNeCT) Project Antenna Pointing Subsystem (APS) Integrated Gimbal Assembly (IGA) Structural Analysis

    Science.gov (United States)

    Raju, Ivatury S.; Larsen, Curtis E.; Pellicciotti, Joseph W.

    2010-01-01

    Glenn Research Center Chief Engineer's Office requested an independent review of the structural analysis and modeling of the Communications, Navigation, and Networking re-Configurable Testbed (CoNNeCT) Project Antenna Pointing Subsystem (APS) Integrated Gimbal Assembly (IGA) to be conducted by the NASA Engineering and Safety Center (NESC). At this time, the IGA had completed its critical design review (CDR). The assessment was to be a peer review of the NEi-NASTRAN model of the APS Antenna, and not a peer review of the design and the analysis that had been completed by the GRC team for CDR. Thus, only a limited amount of information was provided on the structural analysis. However, the NESC team had difficulty separating analysis concerns from modeling issues. The team studied the NASTRAN model, but did not fully investigate how the model was used by the CoNNeCT Project and how the Project was interpreting the results. The team's findings, observations, and NESC recommendations are contained in this report.

  16. Multi-agent testbed for distributed space systems

    NARCIS (Netherlands)

    Osuman, A.; Guo, J.; Gill, E.K.A.

    2010-01-01

    Several industries are involved in the development of distributed systems, and testbeds are needed to simulate the real-world challenges that face such systems. Presently, there are a number of testbeds in the world with very distinctive characteristics. Delft University of Technology is

  17. Design and deployment of an elastic network test-bed in IHEP data center based on SDN

    Science.gov (United States)

    Zeng, Shan; Qi, Fazhi; Chen, Gang

    2017-10-01

    High energy physics experiments produce huge amounts of raw data, but because the network resources are shared, the bandwidth available to each experiment cannot be guaranteed, which may cause link congestion problems. On the other hand, with the development of cloud computing technologies, IHEP has established a cloud platform based on OpenStack which can ensure the flexibility of the computing and storage resources, and more and more computing applications have been deployed on virtual machines created by OpenStack. However, under the traditional network architecture, network capacity cannot be requested elastically, which becomes a bottleneck restricting the flexible use of cloud computing. In order to solve the above problems, we propose an elastic cloud data center network architecture based on SDN, and we design a high performance controller cluster based on OpenDaylight. Finally, we present our current test results.

  18. Program Aids Simulation Of Neural Networks

    Science.gov (United States)

    Baffes, Paul T.

    1990-01-01

    Computer program NETS - Tool for Development and Evaluation of Neural Networks - provides simulation of neural-network algorithms plus software environment for development of such algorithms. Enables user to customize patterns of connections between layers of network, and provides features for saving weight values of network, providing for more precise control over learning process. Consists of translating problem into format using input/output pairs, designing network configuration for problem, and finally training network with input/output pairs until acceptable error reached. Written in C.
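
    A tiny stand-in for the workflow the abstract describes (express the problem as input/output pairs, pick a configuration, train until the error is acceptable), written as a NumPy backpropagation loop on XOR; it is not NETS itself, which is written in C.

        # Toy NETS-style workflow: input/output pairs, small network, train to error target.
        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
        Y = np.array([[0], [1], [1], [0]], dtype=float)               # target outputs (XOR)

        W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
        W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        lr, target_error = 1.0, 0.01

        for epoch in range(20000):
            h = sigmoid(X @ W1 + b1)                 # forward pass
            out = sigmoid(h @ W2 + b2)
            err = np.mean((out - Y) ** 2)
            if err < target_error:                   # "acceptable error reached"
                break
            d_out = (out - Y) * out * (1 - out)      # backpropagation
            d_h = d_out @ W2.T * h * (1 - h)
            W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
            W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

        print(f"stopped at epoch {epoch} with mean squared error {err:.4f}")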

  19. The Soil Moisture Active Passive Mission (SMAP) Science Data Products: Results of Testing with Field Experiment and Algorithm Testbed Simulation Environment Data

    Science.gov (United States)

    Entekhabi, Dara; Njoku, Eni E.; O'Neill, Peggy E.; Kellogg, Kent H.; Entin, Jared K.

    2010-01-01

    Talk outline: (1) derivation of SMAP basic and applied science requirements from the NRC Earth Science Decadal Survey applications; (2) data products and latencies; (3) algorithm highlights; (4) SMAP Algorithm Testbed; (5) SMAP Working Groups and community engagement.

  20. Buffer Management Simulation in ATM Networks

    Science.gov (United States)

    Yaprak, E.; Xiao, Y.; Chronopoulos, A.; Chow, E.; Anneberg, L.

    1998-01-01

    This paper presents a simulation of a new dynamic buffer allocation management scheme in ATM networks. To achieve this objective, an algorithm that detects congestion and updates the dynamic buffer allocation scheme was developed for the OPNET simulation package via the creation of a new ATM module.

  1. Network simulations of optical illusions

    Science.gov (United States)

    Shinbrot, Troy; Lazo, Miguel Vivar; Siu, Theo

    We examine a dynamical network model of visual processing that reproduces several aspects of a well-known optical illusion, including subtle dependencies on curvature and scale. The model uses a genetic algorithm to construct the percept of an image, and we show that this percept evolves dynamically so as to produce the illusions reported. We find that the perceived illusions are hardwired into the model architecture and we propose that this approach may serve as an archetype to distinguish behaviors that are due to nature (i.e. a fixed network architecture) from those subject to nurture (that can be plastically altered through learning).
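
    The record does not give the model's equations, so the sketch below only shows a generic genetic-algorithm skeleton, evolving a binary "percept" toward a fixed target pattern to illustrate the selection, crossover and mutation steps such a model relies on; the target and parameters are invented.

        # Generic genetic-algorithm skeleton (not the paper's model).
        import random

        random.seed(0)
        TARGET = [1, 0] * 16                     # stand-in for the percept being constructed
        L, POP, GENS, MUT = len(TARGET), 60, 200, 0.02

        def fitness(ind):
            return sum(a == b for a, b in zip(ind, TARGET))

        pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
        for gen in range(GENS):
            pop.sort(key=fitness, reverse=True)
            if fitness(pop[0]) == L:
                break
            parents = pop[:POP // 2]             # truncation selection
            children = []
            while len(children) < POP - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, L)     # one-point crossover
                child = a[:cut] + b[cut:]
                child = [bit ^ (random.random() < MUT) for bit in child]   # mutation
                children.append(child)
            pop = parents + children

        print(f"generation {gen}: best fitness {fitness(pop[0])}/{L}")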

  2. Network Simulation of Technical Architecture

    National Research Council Canada - National Science Library

    Cave, William

    1998-01-01

    ..., and development of the Army Battle Command System (ABCS). PSI delivered a hierarchical iconic modeling facility that can be used to structure and restructure both models and scenarios, interactively, while simulations are running...

  3. The Airport Network Flow Simulator.

    Science.gov (United States)

    1976-05-01

    The impact of investment at an individual airport is felt through-out the National Airport System by reduction of delays at other airports in the the system. A GPSS model was constructed to simulate the propagation of delays through a nine-airport sy...

  4. Comparison of ONIX simulation results with experimental data from the BATMAN testbed for the study of negative ion extraction

    Science.gov (United States)

    Mochalskyy, Serhiy; Fantz, Ursel; Wünderlich, Dirk; Minea, Tiberiu

    2016-10-01

    The development of negative ion (NI) sources for the ITER neutral beam injector is strongly accompanied by modelling activities. The ONIX (Orsay Negative Ion eXtraction) code simulates the formation and extraction of negative hydrogen ions and co-extracted electrons produced in caesiated sources. In this paper the 3D geometry of the BATMAN extraction system, and the source characteristics such as the extraction and bias potential, and the 3D magnetic field were integrated in the model. Calculations were performed using plasma parameters experimentally obtained on BATMAN. The comparison of the ONIX calculated extracted NI density with the experimental results suggests that predictive calculations of the extraction of NIs are possible. The results show that for an ideal status of the Cs conditioning the extracted hydrogen NI current density could reach ~30 mA cm-2 at 10 kV and ~20 mA cm-2 at 5 kV extraction potential, with an electron/NI current density ratio of about 1, as measured in the experiments under the same plasma and source conditions. The dependency of the extracted NI current on the NI density in the bulk plasma region from both the modeling and the experiment was investigated. The separate distributions composing the NI beam originating from the plasma bulk region and the PG surface are presented for different NI plasma volume densities and NI emission rates from the plasma grid (PG) wall, respectively. The extracted current from the NIs produced at the Cs covered PG surface, initially moving towards the bulk plasma and then being bent towards the extraction surfaces, is lower compared to the extracted NI current from directly extracted surface produced ions.

  5. Dynamic simulation of regulatory networks using SQUAD

    Directory of Open Access Journals (Sweden)

    Xenarios Ioannis

    2007-11-01

    Background: The ambition of most molecular biologists is the understanding of the intricate network of molecular interactions that control biological systems. As scientists uncover the components and the connectivity of these networks, it becomes possible to study their dynamical behavior as a whole and discover what is the specific role of each of their components. Since the behavior of a network is by no means intuitive, it becomes necessary to use computational models to understand its behavior and to be able to make predictions about it. Unfortunately, most current computational models describe small networks due to the scarcity of kinetic data available. To overcome this problem, we previously published a methodology to convert a signaling network into a dynamical system, even in the total absence of kinetic information. In this paper we present a software implementation of such methodology. Results: We developed SQUAD, a software for the dynamic simulation of signaling networks using the standardized qualitative dynamical systems approach. SQUAD converts the network into a discrete dynamical system, and it uses a binary decision diagram algorithm to identify all the steady states of the system. Then, the software creates a continuous dynamical system and localizes its steady states, which are located near the steady states of the discrete system. The software permits simulations to be run on the continuous system, allowing for the modification of several parameters. Importantly, SQUAD includes a framework for perturbing networks in a manner similar to what is performed in experimental laboratory protocols, for example by activating receptors or knocking out molecular components. Using this software we have been able to successfully reproduce the behavior of the regulatory network implicated in T-helper cell differentiation. Conclusion: The simulation of regulatory networks aims at predicting the behavior of a whole system when subject
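
    A minimal sketch of the general idea, assuming a two-node toggle switch rather than the T-helper network: enumerate the Boolean steady states, then integrate a continuous relaxation with sigmoidal regulation functions. The equations are generic, not SQUAD's exact standardized qualitative system, and it enumerates states rather than using binary decision diagrams.

        # Boolean steady states by enumeration, then a continuous sigmoidal relaxation.
        import itertools
        import numpy as np
        from scipy.integrate import odeint

        # toy toggle switch: A inhibits B and B inhibits A
        def boolean_update(state):
            a, b = state
            return (int(not b), int(not a))

        steady = [s for s in itertools.product([0, 1], repeat=2) if boolean_update(s) == s]
        print("Boolean steady states:", steady)          # expect (0, 1) and (1, 0)

        def continuous(x, t, h=10.0):
            a, b = x
            sig = lambda w: 1.0 / (1.0 + np.exp(-h * (w - 0.5)))
            return [sig(1.0 - b) - a,                    # A activated by "not B"
                    sig(1.0 - a) - b]                    # B activated by "not A"

        traj = odeint(continuous, [0.9, 0.2], np.linspace(0, 50, 500))
        print("continuous steady state near:", np.round(traj[-1], 3))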

  6. Efficient simulation of a tandem Jackson network

    NARCIS (Netherlands)

    Kroese, Dirk; Nicola, V.F.

    2002-01-01

    The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimate the probability that the content of the second buffer exceeds
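
    As a baseline for the rare-event setting described above, the sketch below runs a crude Monte Carlo simulation of the two-node tandem Jackson network and estimates the probability that the second queue exceeds a level during a busy period; the rates and level are invented, and the paper's point is precisely that naive estimation like this becomes infeasible as the event grows rarer.

        # Crude Monte Carlo baseline for the two-node tandem Jackson network.
        import random

        random.seed(0)
        lam, mu1, mu2, B = 1.0, 2.0, 2.0, 8      # arrival rate, service rates, target level

        def busy_period_hits_level():
            q1, q2 = 1, 0                        # busy period starts with one customer
            while q1 > 0 or q2 > 0:
                rates = [lam, mu1 if q1 else 0.0, mu2 if q2 else 0.0]
                u = random.random() * sum(rates)
                if u < rates[0]:                 # external arrival to queue 1
                    q1 += 1
                elif u < rates[0] + rates[1]:    # service completion at queue 1
                    q1 -= 1; q2 += 1
                else:                            # service completion at queue 2
                    q2 -= 1
                if q2 > B:
                    return True
            return False

        n = 200_000
        hits = sum(busy_period_hits_level() for _ in range(n))
        print(f"estimated overflow probability: {hits / n:.2e}")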

  7. Parameter estimation in channel network flow simulation

    Directory of Open Access Journals (Sweden)

    Han Longxi

    2008-03-01

    Simulations of water flow in channel networks require estimated values of roughness for all the individual channel segments that make up a network. When the number of individual channel segments is large, the parameter calibration workload is substantial and a high level of uncertainty in estimated roughness cannot be avoided. In this study, all the individual channel segments are graded according to the factors determining the value of roughness, and it is assumed that channel segments with the same grade have the same value of roughness. Based on observed hydrological data, an optimization model for roughness estimation is built, and the procedure for solving the optimization problem is described. In a test of its efficacy, this estimation method was applied successfully in the simulation of tidal water flow in a large, complicated channel network in the lower reach of the Yangtze River in China.
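
    A toy single-channel illustration of roughness calibration (not the paper's graded network model): fit Manning's n so that computed discharges match synthetic "observed" ones, using a bounded scalar least-squares search; channel geometry and slope are invented.

        # Calibrate Manning's n for one rectangular channel against synthetic observations.
        import numpy as np
        from scipy.optimize import minimize_scalar

        b, S = 50.0, 2e-4                        # channel width (m) and bed slope
        depths = np.array([1.0, 1.5, 2.0, 2.5])  # observed flow depths (m)

        def manning_Q(n, h):
            A, R = b * h, (b * h) / (b + 2 * h)  # flow area and hydraulic radius
            return (1.0 / n) * A * R ** (2.0 / 3.0) * np.sqrt(S)

        n_true = 0.030
        noise = 1 + 0.02 * np.random.default_rng(0).normal(size=depths.size)
        Q_obs = manning_Q(n_true, depths) * noise   # synthetic "observed" discharges

        objective = lambda n: np.sum((manning_Q(n, depths) - Q_obs) ** 2)
        result = minimize_scalar(objective, bounds=(0.01, 0.1), method="bounded")
        print(f"calibrated Manning n = {result.x:.4f} (true value {n_true})")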

  8. Simulation of Stimuli-Responsive Polymer Networks

    Directory of Open Access Journals (Sweden)

    Thomas Gruhn

    2013-11-01

    The structure and material properties of polymer networks can depend sensitively on changes in the environment. There is a great deal of progress in the development of stimuli-responsive hydrogels for applications like sensors, self-repairing materials or actuators. Biocompatible, smart hydrogels can be used for applications such as controlled drug delivery and release, or for artificial muscles. Numerical studies have been performed on different length scales and levels of detail. Macroscopic theories that describe the network systems with the help of continuous fields are suited to study effects like the stimuli-induced deformation of hydrogels on large scales. In this article, we discuss various macroscopic approaches and describe, in more detail, our phase field model, which allows the calculation of the hydrogel dynamics with the help of a free energy that considers physical and chemical impacts. On a mesoscopic level, polymer systems can be modeled with the help of the self-consistent field theory, which includes the interactions, connectivity, and the entropy of the polymer chains, and does not depend on constitutive equations. We present our recent extension of the method that allows the study of the formation of nanodomains in reversibly crosslinked block copolymer networks. Molecular simulations of polymer networks allow the investigation of the behavior of specific systems on a microscopic scale. As an example of microscopic modeling of stimuli-sensitive polymer networks, we present our Monte Carlo simulations of a filament network system with crosslinkers.

  9. Realistic computer network simulation for network intrusion detection dataset generation

    Science.gov (United States)

    Payer, Garrett

    2015-05-01

    The KDD-99 Cup dataset is dead. While it can continue to be used as a toy example, the age of this dataset makes it all but useless for intrusion detection research and data mining. Many of the attacks used within the dataset are obsolete and do not reflect the features important for intrusion detection in today's networks. Creating a new dataset encompassing a large cross section of the attacks found on the Internet today could be useful, but would eventually fall to the same problem as the KDD-99 Cup; its usefulness would diminish after a period of time. To continue research into intrusion detection, the generation of new datasets needs to be as dynamic and as quick as the attacker. Simply examining existing network traffic and using domain experts such as intrusion analysts to label traffic is inefficient, expensive, and not scalable. The only viable methodology is simulation using technologies including virtualization, attack-toolsets such as Metasploit and Armitage, and sophisticated emulation of threat and user behavior. Simulating actual user behavior and network intrusion events dynamically not only allows researchers to vary scenarios quickly, but enables online testing of intrusion detection mechanisms by interacting with data as it is generated. As new threat behaviors are identified, they can be added to the simulation to make quicker determinations as to the effectiveness of existing and ongoing network intrusion technology, methodology and models.

  10. Simulating Autonomous Telecommunication Networks for Space Exploration

    Science.gov (United States)

    Segui, John S.; Jennings, Esther H.

    2008-01-01

    Currently, most interplanetary telecommunication systems require human intervention for command and control. However, considering the range from near Earth to deep space missions, combined with the increase in the number of nodes and advancements in processing capabilities, the benefits from communication autonomy will be immense. Likewise, greater mission science autonomy brings the need for unscheduled, unpredictable communication and network routing. While the terrestrial Internet protocols are highly developed, their suitability for space exploration has been questioned. JPL has developed the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) tool to help characterize network designs and protocols. The results will allow future mission planners to better understand the trade-offs of communication protocols. This paper discusses various issues with interplanetary networking and presents simulation results for interplanetary networking protocols.

  11. Sending policies in dynamic wireless mesh using network coding

    DEFF Research Database (Denmark)

    Pandi, Sreekrishna; Fitzek, Frank; Pihl, Jeppe

    2015-01-01

    This paper demonstrates the quick prototyping capabilities of the Python-Kodo library for network coding based performance evaluation and investigates the problem of data redundancy in a network coded wireless mesh with opportunistic overhearing. By means of several wireless meshed architectures...... simulated on the constructed test-bed, the advantage of network coding over state of the art routing schemes and the challenges of this new technology are shown. By providing maximum control of the network coding parameters and the simulation environment to the user, the test-bed facilitates quick...... construction of simulation setups on top of it. The paper highlights the problem of redundant transmission of data by the overhearing nodes in a wireless mesh and by means of three simple simulation setups that are built on top of the test-bed, the paper provides a brief insight into the selection...
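
    The test-bed itself is built on the Python-Kodo library, whose API is not shown in the record; the sketch below is only a generic, self-contained illustration of random linear network coding over GF(2), including the redundancy-versus-loss trade-off the record refers to.

      # Generic random linear network coding over GF(2); this is NOT the Kodo/Python-Kodo
      # API, just an illustration of the principle behind the test-bed.
      import numpy as np

      rng = np.random.default_rng(0)

      def encode(packets, n_coded):
          """Produce n_coded random GF(2) combinations of the source packets."""
          k = len(packets)
          coeffs = rng.integers(0, 2, size=(n_coded, k), dtype=np.uint8)
          payload = (coeffs @ np.asarray(packets, dtype=np.uint8)) % 2
          return coeffs, payload

      def decode(coeffs, payload, k):
          """Gaussian elimination over GF(2); returns the k source packets if solvable."""
          A = np.concatenate([coeffs, payload], axis=1) % 2
          row = 0
          for col in range(k):
              pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
              if pivot is None:
                  raise ValueError("not enough innovative packets")
              A[[row, pivot]] = A[[pivot, row]]
              for r in range(len(A)):
                  if r != row and A[r, col]:
                      A[r] = (A[r] + A[row]) % 2
              row += 1
          return A[:k, k:]

      source = rng.integers(0, 2, size=(4, 8), dtype=np.uint8)   # 4 packets of 8 bits each
      coeffs, coded = encode(source, n_coded=8)                  # some redundancy against losses
      received = rng.random(len(coded)) > 0.25                   # simulate 25% packet loss
      try:
          recovered = decode(coeffs[received], coded[received], k=4)
          print("decoded OK:", np.array_equal(recovered, source))
      except ValueError as err:
          print("decoding failed:", err)                         # too much loss, too few innovative packets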

  12. Torpedo and countermeasures modelling in the Torpedo Defence System Testbed

    NARCIS (Netherlands)

    Benders, F.P.A.; Witberg, R.R.; Grootendorst, H.J.

    2002-01-01

    Several years ago, TNO-FEL started the development of the Torpedo Defence System Testbed (TDSTB) based on the TORpedo SIMulation (TORSIM) model and the Maritime Operations Simulation and Evaluation System (MOSES). MOSES provides the simulation and modelling environment for the evaluation and

  13. Modeling and Simulation Network Data Standards

    Science.gov (United States)

    2011-09-30

    [Extract from the data standards document: data element definitions for routing protocols, including Open Shortest Path First (OSPF), a protocol commonly used to find the shortest path between two nodes; Border Gateway Protocol (BGP); and RFC-1256 router discovery. Section 13.0 defines OSPF sub-element parameters. An acronym list follows (OSPF: open shortest path first; OV: operational view; PEO-I: Program Executive Office - Information).]

  14. Resilience Simulation for Water, Power & Road Networks

    Science.gov (United States)

    Clark, S. S.; Seager, T. P.; Chester, M.; Eisenberg, D. A.; Sweet, D.; Linkov, I.

    2014-12-01

    The increasing frequency, scale, and damages associated with recent catastrophic events have called for a shift in focus from evading losses through risk analysis to improving threat preparation, planning, absorption, recovery, and adaptation through resilience. However, neither underlying theory nor analytic tools have kept pace with resilience rhetoric. As a consequence, current approaches to engineering resilience analysis often conflate resilience and robustness or collapse into a deeper commitment to the risk analytic paradigm proven problematic in the first place. This research seeks a generalizable understanding of resilience that is applicable in multiple disciplinary contexts. We adopt a unique investigative perspective by coupling social and technical analysis with human subjects research to discover the adaptive actions, ideas and decisions that contribute to resilience in three socio-technical infrastructure systems: electric power, water, and roadways. Our research integrates physical models representing network objects with examination of the knowledge systems and social interactions revealed by human subjects making decisions in a simulated crisis environment. To ensure a diversity of contexts, we model electric power, water, roadway and knowledge networks for Phoenix, AZ and Indianapolis, IN. We synthesize this in a new computer-based Resilient Infrastructure Simulation Environment (RISE) to allow individuals, groups (including students) and experts to test different network design configurations and crisis response approaches. By observing simulated failures and best performances, we expect a generalizable understanding of resilience to emerge that yields a measurable understanding of the sensing, anticipating, adapting, and learning processes that are essential to resilient organizations.

  15. Simulation of developing human neuronal cell networks.

    Science.gov (United States)

    Lenk, Kerstin; Priwitzer, Barbara; Ylä-Outinen, Laura; Tietz, Lukas H B; Narkilahti, Susanna; Hyttinen, Jari A K

    2016-08-30

    Microelectrode arrays (MEAs) are a widely used technique to study, for example, the functional properties of neuronal networks derived from human embryonic stem cells (hESC-NN). With hESC-NN, we can investigate the earliest developmental stages of neuronal network formation in the human brain. In this paper, we propose an in silico model of maturating hESC-NNs based on a phenomenological model called INEX. We focus on simulations of the development of bursts in hESC-NNs, which are the main feature of neuronal activation patterns. The model was developed with data from recordings of developing hESC-NNs on MEAs, which showed an increase in neuronal activity across the six investigated measurement time points in both the experimental and simulated data. Our simulations suggest that the maturation process of hESC-NNs, resulting in the formation of bursts, can be explained by the development of synapses. Moreover, spike and burst rates both decreased at the last measurement time point, suggesting a pruning of synapses as the weak ones are removed. To conclude, our model reflects the assumption that the interaction between excitatory and inhibitory neurons during the maturation of a neuronal network, and the spontaneous emergence of bursts, are due to increased connectivity caused by the formation of new synapses.

  16. Single link flexible beam testbed project. Thesis

    Science.gov (United States)

    Hughes, Declan

    1992-01-01

    This thesis describes the single link flexible beam testbed at the CLaMS laboratory in terms of its hardware, software, and linear model, and presents two controllers: one consisting of a hub angle proportional-derivative (PD) feedback compensator, and one augmenting it with a second static gain full state feedback loop, based upon a synthesized strictly positive real (SPR) output, that increases specific flexible mode pole damping ratios with respect to the PD-only case and hence reduces unwanted residual oscillation effects. Restricting full state feedback gains so as to produce an SPR open loop transfer function ensures that the associated compensator has an infinite gain margin and a phase margin of at least (-90, 90) degrees. Both experimental and simulation data are evaluated in order to compare the performance of different observers when applied to the real testbed and to the linear model when uncompensated flexible modes are included.

  17. Spiking network simulation code for petascale computers

    Science.gov (United States)

    Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M.; Plesser, Hans E.; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz

    2014-01-01

    Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today. PMID:25346682

  18. Spiking network simulation code for petascale computers.

    Science.gov (United States)

    Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M; Plesser, Hans E; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz

    2014-01-01

    Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.

  19. C Library for Simulated Evolution of Biological Networks

    OpenAIRE

    Chandran, Deepak; Sauro, Herbert M.

    2010-01-01

    Simulated evolution of biological networks can be used to generate functional networks as well as investigate hypotheses regarding natural evolution. A handful of studies have shown how simulated evolution can be used for studying the functional space spanned by biochemical networks, studying natural evolution, or designing new synthetic networks. If there was a method for easily performing such studies, it can allow the community to further experiment with simulated evolution and explore all...

  20. Motorway Network Simulation Using Bluetooth Data

    Directory of Open Access Journals (Sweden)

    Karakikes Ioannis

    2016-09-01

    Full Text Available This paper describes a systematic calibration process for a Vissim model, based on data derived from BT detectors. It also provides instructions on how to calibrate and validate a highway network model based upon a case study, establishing an example for practitioners interested in designing highway networks with microsimulation tools. Within this case study, a proper calibration of 94.5% across all segments was achieved. First, an overview of the systematic calibration approach that will be followed is presented. A description of the given datasets follows. Finally, the model's systematic calibration and validation based on BT data from segments under free-flow conditions is thoroughly explained. The delivered calibrated Vissim model acts as a test bed, which in combination with other analysis tools can be used for potential future exploitation for transportation-related purposes.

  1. Learning in innovation networks: Some simulation experiments

    Science.gov (United States)

    Gilbert, Nigel; Ahrweiler, Petra; Pyka, Andreas

    2007-05-01

    According to the organizational learning literature, the greatest competitive advantage a firm has is its ability to learn. In this paper, a framework for modeling learning competence in firms is presented to improve the understanding of managing innovation. Firms with different knowledge stocks attempt to improve their economic performance by engaging in radical or incremental innovation activities and through partnerships and networking with other firms. In trying to vary and/or to stabilize their knowledge stocks by organizational learning, they attempt to adapt to environmental requirements while the market strongly selects on the results. The simulation experiments show the impact of different learning activities, underlining the importance of innovation and learning.

  2. Mobile-ip Aeronautical Network Simulation Study

    Science.gov (United States)

    Ivancic, William D.; Tran, Diepchi T.

    2001-01-01

    NASA is interested in applying mobile Internet protocol (mobile-ip) technologies to its space and aeronautics programs. In particular, mobile-ip will play a major role in the Advanced Aeronautic Transportation Technology (AATT), the Weather Information Communication (WINCOMM), and the Small Aircraft Transportation System (SATS) aeronautics programs. This report presents the results of a simulation study of mobile-ip for an aeronautical network. The study was performed to determine the performance of the transmission control protocol (TCP) in a mobile-ip environment and to gain an understanding of how long delays, handoffs, and noisy channels affect mobile-ip performance.

  3. A novel GPON-based transmission hierarchy for metropolitan-area network

    Science.gov (United States)

    Wang, Wei; Zhang, Yongjun; Cao, Chang; Cao, Yang; Chen, Rui; Ma, Zheng; Zhao, Yongli; Gu, Wanyi

    2010-12-01

    In this paper, we propose a GPON-based transmission hierarchy (GTH) for metropolitan-area networks. A simulation testbed is then constructed to study the GTH's performance in supporting wireless backhaul services. Simulation results show that, compared with MSTP, GTH reduces the cost of network expansion significantly owing to its higher transmission efficiency.

  4. Cyber-Physical Test Platform for Microgrids: Combining Hardware, Hardware-in-the-Loop, and Network-Simulator-in-the-Loop

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, Austin; Chakraborty, Sudipta; Wang, Dexin; Singh, Pawan; Cui, Qiang; Yang, Liuqing; Suryanarayanan, Siddharth

    2016-11-14

    This paper presents a cyber-physical testbed, developed to investigate the complex interactions between emerging microgrid technologies such as grid-interactive power sources, control systems, and a wide variety of communication platforms and bandwidths. The cyber-physical testbed consists of three major components for testing and validation: real time models of a distribution feeder model with microgrid assets that are integrated into the National Renewable Energy Laboratory's (NREL) power hardware-in-the-loop (PHIL) platform; real-time capable network-simulator-in-the-loop (NSIL) models; and physical hardware including inverters and a simple system controller. Several load profiles and microgrid configurations were tested to examine the effect on system performance with increasing channel delays and router processing delays in the network simulator. Testing demonstrated that the controller's ability to maintain a target grid import power band was severely diminished with increasing network delays and laid the foundation for future testing of more complex cyber-physical systems.

  5. The design of a network emulation and simulation laboratory

    CSIR Research Space (South Africa)

    Von Solms, S

    2015-07-01

    Full Text Available The development of the Network Emulation and Simulation Laboratory is motivated by the drive to contribute to the enhancement of the security and resilience of South Africa's critical information infrastructure. The goal of the Network Emulation...

  6. The design and implementation of a network simulation platform

    CSIR Research Space (South Africa)

    Von Solms, S

    2013-11-01

    Full Text Available In this paper we discuss the development of a network simulation environment, called a network simulator (NS), which provides a realistic platform that is isolated, more controlled and more predictable than implementation across live networks [4]. The developed NS comprises multiple network sections, namely Internal User Networks/Local Area Networks (LANs), connected ... A discussion of the various aspects of the NS, including its topology, follows.

  7. Characterization of Background Traffic in Hybrid Network Simulation

    National Research Council Canada - National Science Library

    Lauwens, Ben; Scheers, Bart; Van de Capelle, Antoine

    2006-01-01

    .... Two approaches are common: discrete event simulation and fluid approximation. A discrete event simulation generates a huge amount of events for a full-blown battlefield communication network resulting in a very long runtime...

  8. Creating real network with expected degree distribution: A statistical simulation

    OpenAIRE

    WenJun Zhang; GuangHua Liu

    2012-01-01

    The degree distribution of known networks is one of the focuses in network analysis. However, its inverse problem, i.e., creating a network from a known degree distribution, has not yet been reported. In the present study, a statistical simulation algorithm was developed to create a real network with an expected degree distribution. It is an iteration procedure in which a real network, with the least deviation of the actual degree distribution from the expected degree distribution, is created. Random assignment was...
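
    The authors' iteration procedure is not given in detail in the truncated record; as a hedged sketch of the same goal, the code below repeatedly builds a random graph from a degree sequence sampled from the expected distribution (simple stub matching, dropping loops and multi-edges) and keeps the realization whose actual degree distribution deviates least from the expected one. The distribution and all counts are illustrative.

      # Create a network approximating an expected degree distribution by repeated
      # stub matching, keeping the closest realization (not the authors' exact algorithm).
      import random
      from collections import Counter

      def build_graph(degree_sequence):
          """Random simple graph from a degree sequence; loops and multi-edges are dropped."""
          stubs = [node for node, d in enumerate(degree_sequence) for _ in range(d)]
          random.shuffle(stubs)
          edges = set()
          for u, v in zip(stubs[::2], stubs[1::2]):
              if u != v:
                  edges.add((min(u, v), max(u, v)))
          return edges

      def degree_counts(edges, n):
          deg = [0] * n
          for u, v in edges:
              deg[u] += 1
              deg[v] += 1
          return Counter(deg)

      def deviation(actual, expected, n):
          return sum(abs(actual.get(k, 0) / n - p) for k, p in expected.items())

      n = 200
      expected = {1: 0.4, 2: 0.3, 3: 0.2, 4: 0.1}        # expected degree distribution
      best = None
      for _ in range(50):                                # iterate, keep the best realization
          seq = random.choices(list(expected), weights=list(expected.values()), k=n)
          if sum(seq) % 2:                               # stub matching needs an even stub count
              seq[0] += 1
          edges = build_graph(seq)
          dev = deviation(degree_counts(edges, n), expected, n)
          if best is None or dev < best[0]:
              best = (dev, edges)
      print("best deviation from expected distribution:", round(best[0], 3))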

  9. Information diversity in structure and dynamics of simulated neuronal networks.

    Science.gov (United States)

    Mäki-Marttunen, Tuomo; Aćimović, Jugoslava; Nykter, Matti; Kesseli, Juha; Ruohonen, Keijo; Yli-Harja, Olli; Linne, Marja-Leena

    2011-01-01

    Neuronal networks exhibit a wide diversity of structures, which contributes to the diversity of the dynamics therein. The presented work applies an information theoretic framework to simultaneously analyze structure and dynamics in neuronal networks. Information diversity within the structure and dynamics of a neuronal network is studied using the normalized compression distance. To describe the structure, a scheme for generating distance-dependent networks with identical in-degree distribution but variable strength of dependence on distance is presented. The resulting network structure classes possess differing path length and clustering coefficient distributions. In parallel, comparable realistic neuronal networks are generated with the NETMORPH simulator and similar analysis is done on them. To describe the dynamics, network spike trains are simulated using different network structures and their bursting behaviors are analyzed. For the simulation of the network activity the Izhikevich model of spiking neurons is used together with the Tsodyks model of dynamical synapses. We show that the structure of the simulated neuronal networks affects the spontaneous bursting activity when measured with bursting frequency and a set of intraburst measures: the more locally connected networks produce more and longer bursts than the more random networks. The information diversity of the structure of a network is greatest in the most locally connected networks, smallest in random networks, and somewhere in between in the networks between order and disorder. As for the dynamics, the most locally connected networks and some of the in-between networks produce the most complex intraburst spike trains. The same result also holds for the sparser of the two considered network densities in the case of full spike trains.
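
    The normalized compression distance used in this study has a compact standard definition, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C is the length of a compressed string. Below is a minimal sketch with zlib as the compressor, applied to two made-up binary spike-train strings.

      # Normalized compression distance (NCD) with zlib as the compressor.
      import zlib

      def c(data: bytes) -> int:
          return len(zlib.compress(data, 9))

      def ncd(x: bytes, y: bytes) -> float:
          cx, cy, cxy = c(x), c(y), c(x + y)
          return (cxy - min(cx, cy)) / max(cx, cy)

      # Illustrative use on two binary spike trains (strings of 0s and 1s):
      regular  = b"10" * 500
      bursting = b"1111100000" * 100
      print(ncd(regular, regular))    # close to 0: identical information content
      print(ncd(regular, bursting))   # larger: structurally different activity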

  10. Information Diversity in Structure and Dynamics of Simulated Neuronal Networks

    Directory of Open Access Journals (Sweden)

    Tuomo eMäki-Marttunen

    2011-06-01

    Full Text Available Neuronal networks exhibit a wide diversity of structures, which contributes to the diversity of the dynamics therein. The presented work applies an information theoretic framework to simultaneously analyze structure and dynamics in neuronal networks. Information diversity within the structure and dynamics of a neuronal network is studied using the normalized compression distance (NCD). To describe the structure, a scheme for generating distance-dependent networks with identical in-degree distribution but variable strength of dependence on distance is presented. The resulting network structure classes possess differing path length and clustering coefficient distributions. In parallel, comparable realistic neuronal networks are generated with the NETMORPH simulator and similar analysis is done on them. To describe the dynamics, network spike trains are simulated using different network structures and their bursting behaviours are analyzed. For the simulation of the network activity the Izhikevich model of spiking neurons is used together with the Tsodyks model of dynamical synapses. We show that the structure of the simulated neuronal networks affects the spontaneous bursting activity when measured with bursting frequency and a set of intraburst measures: the more locally connected networks produce more and longer bursts than the more random networks. The information diversity of the structure of a network is greatest in the most locally connected networks, smallest in random networks, and somewhere in between in the networks between order and disorder. As for the dynamics, the most locally connected networks and some of the in-between networks produce the most complex intraburst spike trains. The same result also holds for the sparser of the two considered network densities in the case of full spike trains.

  11. Simulation Of Networking Protocols On Software Emulated Network Stack

    Directory of Open Access Journals (Sweden)

    Hrushikesh Nimkar

    2015-08-01

    Full Text Available With the increasing number and complexity of network-based applications, the need for easy configuration, development and integration of network applications has taken high precedence. Trivial activities such as configuration can be carried out efficiently if network services are software based rather than hardware based. The project aims at enabling network engineers to easily include network functionalities in their configuration and to define their own network stack without using the kernel network stack. With this in mind, we have implemented two functionalities: UPnP and mDNS. The multicast Domain Name System (mDNS) resolves host names to IP addresses within small ad-hoc networks without the need for a special DNS server and its configuration. The mDNS application provides every host with the functionality to register itself with the router, make a multicast DNS request, and resolve it. To make adding network devices and networked programs to a network as easy as plugging a piece of hardware into a PC, we make use of UPnP. The devices and programs find out about the network setup and other networked devices and programs through discovery and advertisements of services, and configure themselves accordingly. The UPnP application provides every host with the functionality of discovering the services of other hosts and serving requests on demand. To implement these applications we have used the snabbswitch framework, which is an open-source virtualized Ethernet networking stack.

  12. Simulation and Evaluation of Ethernet Passive Optical Network

    Directory of Open Access Journals (Sweden)

    Salah A. Jaro Alabady

    2013-05-01

    Full Text Available This paper studies the simulation and evaluation of an Ethernet Passive Optical Network (EPON) system, based on IEEE 802.3ah, using the OPTISM 3.6 simulation program. The simulation program is used in this paper to build a typical Ethernet passive optical network and to evaluate the network performance when using the (1580, 1625) nm wavelengths instead of the (1310, 1490) nm wavelengths used in the Optical Line Terminal (OLT) and Optical Network Units (ONUs) of the EPON system architecture, at different bit rates and different fiber optic lengths. The results showed an enhancement in network performance, with an increase in the number of nodes (subscribers) connected to the network, an increase in the transmission distance, a reduction in the received power and a reduction in the Bit Error Rate (BER).

  13. Advanced Artificial Intelligence Technology Testbed

    Science.gov (United States)

    Anken, Craig S.

    1993-01-01

    The Advanced Artificial Intelligence Technology Testbed (AAITT) is a laboratory testbed for the design, analysis, integration, evaluation, and exercising of large-scale, complex, software systems, composed of both knowledge-based and conventional components. The AAITT assists its users in the following ways: configuring various problem-solving application suites; observing and measuring the behavior of these applications and the interactions between their constituent modules; gathering and analyzing statistics about the occurrence of key events; and flexibly and quickly altering the interaction of modules within the applications for further study.

  14. Integrated Testbed for Environmental Analysis of NextGen Concepts using ACES Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose the development of an analysis testbed to integrate simulation tools, such as ACES, with aviation environmental effects models, such as the Aviation...

  15. Integrated Testbed for Environmental Analysis of NextGen Concepts using ACES Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The key innovation in this effort is the development of an industrial-grade analysis testbed to integrate simulation tools, such as ACES, with aviation environmental...

  16. Simulation of a Lunar Surface Base Power Distribution Network for the Constellation Lunar Surface Systems

    Science.gov (United States)

    Mintz, Toby; Maslowski, Edward A.; Colozza, Anthony; McFarland, Willard; Prokopius, Kevin P.; George, Patrick J.; Hussey, Sam W.

    2010-01-01

    The Lunar Surface Power Distribution Network Study team worked to define, breadboard, build and test an electrical power distribution system consistent with NASA's goal of providing electrical power to sustain life and power equipment used to explore the lunar surface. A testbed was set up to simulate the connection of different power sources and loads together to form a mini-grid and gain an understanding of how the power systems would interact. Within the power distribution scheme, each power source contributes to the grid in an independent manner without communication among the power sources and without a master-slave scenario. The grid consisted of four separate power sources and the accompanying power conditioning equipment. Overall system design and testing was performed. The tests were performed to observe the output and interaction of the different power sources as some sources are added and others are removed from the grid connection. The loads on the system were also varied from no load to maximum load to observe the power source interactions.

  17. Wireless Sensor Networks for Environmental Monitoring

    Science.gov (United States)

    Liang, X.; Liang, Y.; Navarro, M.; Zhong, X.; Villalba, G.; Li, Y.; Davis, T.; Erratt, N.

    2015-12-01

    Wireless sensor networks (WSNs) have gained an increasing interest in a broad range of new scientific research and applications. WSN technologies can provide high resolution for spatial and temporal data which has not been possible before, opening up new opportunities. On the other hand, WSNs, particularly outdoor WSNs in harsh environments, present great challenges for scientists and engineers in terms of the network design, deployment, operation, management, and maintenance. Since 2010, we have been working on the deployment of an outdoor multi-hop WSN testbed for hydrological/environmental monitoring in a forested hill-sloped region at the Audubon Society of Western Pennsylvania (ASWP), Pennsylvania, USA. The ASWP WSN testbed has continuously evolved and now has more than 80 nodes. To our knowledge, the ASWP WSN testbed represents one of the first known long-term multi-hop WSN deployments in an outdoor environment. As simulation and laboratory methods are unable to capture the complexity of outdoor environments (e.g., forests, oceans, mountains, or glaciers), which significantly affect WSN operations and maintenance, experimental deployments are essential to investigate and understand WSN behaviors and performances as well as its maintenance characteristics under these harsh conditions. In this talk, based on our empirical studies with the ASWP WSN testbed, we will present our discoveries and investigations on several important aspects including WSN energy profile, node reprogramming, network management system, and testbed maintenance. We will then provide our insight into these critical aspects of outdoor WSN deployments and operations.

  18. A Testbed to Evaluate the FIWARE-Based IoT Platform in the Domain of Precision Agriculture.

    Science.gov (United States)

    Martínez, Ramón; Pastor, Juan Ángel; Álvarez, Bárbara; Iborra, Andrés

    2016-11-23

    Wireless sensor networks (WSNs) represent one of the most promising technologies for precision farming. Over the next few years, a significant increase in the use of such systems on commercial farms is expected. WSNs present a number of problems, regarding scalability, interoperability, communications, connectivity with databases and data processing. Different Internet of Things middleware is appearing to overcome these challenges. This paper checks whether one of these middleware, FIWARE, is suitable for the development of agricultural applications. To the authors' knowledge, there are no works that show how to use FIWARE in precision agriculture and study its appropriateness, its scalability and its efficiency for this kind of applications. To do this, a testbed has been designed and implemented to simulate different deployments and load conditions. The testbed is a typical FIWARE application, complete, yet simple and comprehensible enough to show the main features and components of FIWARE, as well as the complexity of using this technology. Although the testbed has been deployed in a laboratory environment, its design is based on the analysis of an Internet of Things use case scenario in the domain of precision agriculture.
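
    The paper's testbed details are not reproduced here, but a typical FIWARE interaction is pushing a sensor reading as a context entity. The sketch below is a hypothetical example assuming an Orion Context Broker reachable at localhost:1026 and its NGSIv2 /v2/entities endpoint; the entity id and attribute names are made up.

      # Hypothetical sketch: publish a soil-probe reading to a FIWARE Orion Context Broker
      # (assumes a broker listening on localhost:1026 with the NGSIv2 API).
      import requests

      entity = {
          "id": "SoilProbe:field3:017",          # made-up identifier
          "type": "SoilProbe",
          "temperature": {"value": 21.4, "type": "Number"},
          "moisture": {"value": 0.32, "type": "Number"},
      }

      resp = requests.post("http://localhost:1026/v2/entities",
                           json=entity, headers={"Accept": "application/json"})
      print(resp.status_code)                    # 201 Created on success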

  19. A Testbed to Evaluate the FIWARE-Based IoT Platform in the Domain of Precision Agriculture

    Science.gov (United States)

    Martínez, Ramón; Pastor, Juan Ángel; Álvarez, Bárbara; Iborra, Andrés

    2016-01-01

    Wireless sensor networks (WSNs) represent one of the most promising technologies for precision farming. Over the next few years, a significant increase in the use of such systems on commercial farms is expected. WSNs present a number of problems, regarding scalability, interoperability, communications, connectivity with databases and data processing. Different Internet of Things middleware is appearing to overcome these challenges. This paper checks whether one of these middleware, FIWARE, is suitable for the development of agricultural applications. To the authors’ knowledge, there are no works that show how to use FIWARE in precision agriculture and study its appropriateness, its scalability and its efficiency for this kind of applications. To do this, a testbed has been designed and implemented to simulate different deployments and load conditions. The testbed is a typical FIWARE application, complete, yet simple and comprehensible enough to show the main features and components of FIWARE, as well as the complexity of using this technology. Although the testbed has been deployed in a laboratory environment, its design is based on the analysis of an Internet of Things use case scenario in the domain of precision agriculture. PMID:27886091

  20. Graphical user interface for wireless sensor networks simulator

    Science.gov (United States)

    Paczesny, Tomasz; Paczesny, Daniel; Weremczuk, Jerzy

    2008-01-01

    Wireless Sensor Networks (WSN) are currently a very popular area of development. They can be applied in many areas, from military through environmental monitoring, healthcare, home automation and others. Those networks, when working in a dynamic, ad-hoc model, need effective protocols which must differ from common computer network algorithms. Research on those protocols would be difficult without a simulation tool, because real applications often use many nodes, and tests on such big networks take much effort and cost. The paper presents a Graphical User Interface (GUI) for a simulator dedicated to WSN studies, especially for the evaluation of routing and data link protocols.

  1. A Flexible System for Simulating Aeronautical Telecommunication Network

    Science.gov (United States)

    Maly, Kurt; Overstreet, C. M.; Andey, R.

    1998-01-01

    At Old Dominion University, we have built an Aeronautical Telecommunication Network (ATN) Simulator, with NASA as the funding provider. It provides a means to evaluate the impact of modified router scheduling algorithms on network efficiency, to perform capacity studies on various network topologies, and to monitor and study various aspects of the ATN through a graphical user interface (GUI). In this paper we briefly describe the proposed ATN model and our abstraction of this model. We then describe our simulator architecture, highlighting some of the design specifications, scheduling algorithms and user interface. Finally, we provide the results of performance studies on this simulator.

  2. Parallel discrete-event simulation of FCFS stochastic queueing networks

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Physical systems are inherently parallel. Intuition suggests that simulations of these systems may be amenable to parallel execution. The parallel execution of a discrete-event simulation requires careful synchronization of processes in order to ensure the execution's correctness; this synchronization can degrade performance. Largely negative results were recently reported in a study which used a well-known synchronization method on queueing network simulations. Discussed here is a synchronization method (appointments), which has proven itself to be effective on simulations of FCFS queueing networks. The key concept behind appointments is the provision of lookahead. Lookahead is a prediction of a processor's future behavior, based on an analysis of the processor's simulation state. We show how lookahead can be computed for FCFS queueing network simulations, give performance data that demonstrate the method's effectiveness under moderate to heavy loads, and discuss performance trade-offs between the quality of lookahead and the cost of computing lookahead.
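
    The essence of an appointment is a promise derived from lookahead: for an FCFS station, once the job in service and the queued jobs are known, no job that has not yet arrived can depart before the current backlog is cleared. A simplified sketch of that bound follows (an illustration of the idea, not the paper's full protocol).

      # Lookahead for an FCFS queue: earliest possible departure time of a job
      # that has not yet arrived, given the current backlog.
      def fcfs_lookahead(now, in_service_remaining, queued_service_times, min_future_service=0.0):
          backlog = in_service_remaining + sum(queued_service_times)
          return now + backlog + min_future_service

      # A server with 1.5 time units left on the current job and two queued jobs:
      bound = fcfs_lookahead(now=10.0, in_service_remaining=1.5,
                             queued_service_times=[2.0, 0.7], min_future_service=0.1)
      print(bound)   # 14.3 -- a neighbouring processor can safely simulate up to this time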

  3. A neural network simulation package in CLIPS

    Science.gov (United States)

    Bhatnagar, Himanshu; Krolak, Patrick D.; Mcgee, Brenda J.; Coleman, John

    1990-01-01

    The intrinsic similarity between the firing of a rule and the firing of a neuron has been captured in this research to provide a neural network development system within an existing production system (CLIPS). A very important by-product of this research has been the emergence of an integrated technique for using rule-based systems in conjunction with neural networks to solve complex problems. The system provides a toolkit for the integrated use of the two techniques and is also extendible to accommodate other AI techniques such as semantic networks, connectionist networks, and even Petri nets. This integrated technique can be very useful in solving complex AI problems.

  4. Experimental demonstration of an OpenFlow based software-defined optical network employing packet, fixed and flexible DWDM grid technologies on an international multi-domain testbed.

    Science.gov (United States)

    Channegowda, M; Nejabati, R; Rashidi Fard, M; Peng, S; Amaya, N; Zervas, G; Simeonidou, D; Vilalta, R; Casellas, R; Martínez, R; Muñoz, R; Liu, L; Tsuritani, T; Morita, I; Autenrieth, A; Elbers, J P; Kostecki, P; Kaczmarek, P

    2013-03-11

    Software defined networking (SDN) and flexible grid optical transport technology are two key technologies that allow network operators to customize their infrastructure based on application requirements, thereby minimizing the extra capital and operational costs required for hosting new applications. In this paper, for the first time we report on the design, implementation and demonstration of a novel OpenFlow-based SDN unified control plane allowing seamless operation across heterogeneous state-of-the-art optical and packet transport domains. We verify and experimentally evaluate OpenFlow protocol extensions for flexible DWDM grid transport technology along with its integration with fixed DWDM grid and layer-2 packet switching.

  5. Toward Designing a Quantum Key Distribution Network Simulation Model

    Directory of Open Access Journals (Sweden)

    Miralem Mehic

    2016-01-01

    Full Text Available As research in quantum key distribution network technologies grows larger and more complex, the need for highly accurate and scalable simulation technologies becomes important to assess the practical feasibility and foresee difficulties in the practical implementation of theoretical achievements. In this paper, we describe the design of a simplified simulation environment for a quantum key distribution network with multiple links and nodes. In this simulation environment, we analyze several routing protocols in terms of the number of sent routing packets, goodput and Packet Delivery Ratio of the data traffic flow, using the NS-3 simulator.

  6. Interfacing Network Simulations and Empirical Data

    Science.gov (United States)

    2009-05-01

    appropriate. The quadratic assignment procedure (QAP) (Krackhardt, 1987) could be used to compare the correlation between networks; however, the...

  7. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads, while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim, eliminating the shortcomings of the existing network modeling capabilities. The approach takes a different path to implementing network contention and bandwidth capacity modeling, using a less synchronous yet sufficiently accurate model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overheads.

  8. A GIS Tool for simulating Nitrogen transport along schematic Network

    Science.gov (United States)

    Tavakoly, A. A.; Maidment, D. R.; Yang, Z.; Whiteaker, T.; David, C. H.; Johnson, S.

    2012-12-01

    An automated method called the Arc Hydro Schematic Processor has been developed for water process computation on schematic networks formed from the NHDPlus and similar GIS river networks. The schematic network represents the hydrologic features on the ground and is a network of links and nodes. SchemaNodes represent hydrologic features, such as catchments or stream junctions, and SchemaLinks prescribe the connections between nodes. The Schematic Processor uses the schematic network to pass information through a watershed and move water or pollutants downstream. In addition, the Schematic Processor has the capability to apply additional programming to the passed and/or received values and to manipulate data throughout the network. This paper describes how the Schematic Processor can be used to simulate nitrogen transport and transformation on river networks. For this purpose, nitrogen loads are estimated on the NHDPlus river network using the Schematic Processor coupled with a river routing model for the Texas Gulf Coast Hydrologic Region.
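
    The Schematic Processor's own API is not shown in the record; the following is a minimal, generic sketch of the underlying idea of passing values through a link-node network, here accumulating nitrogen loads downstream with an assumed first-order delivery factor per link. Node names, loads and factors are made up.

      # Pass nitrogen loads downstream through a schematic link-node network
      # (an illustration of the idea, not the Arc Hydro Schematic Processor itself).
      local_load = {"A": 5.0, "B": 3.0, "C": 2.0, "D": 0.0}     # kg/day entering at each node
      downstream = {"A": "C", "B": "C", "C": "D"}               # SchemaLinks: node -> next node
      delivery   = {"A": 0.9, "B": 0.8, "C": 0.95}              # fraction surviving each link

      def accumulate(outlet):
          """Total load arriving at `outlet`, summing attenuated upstream contributions."""
          total = local_load[outlet]
          for node, to in downstream.items():
              if to == outlet:
                  total += delivery[node] * accumulate(node)
          return total

      print(accumulate("D"))   # load delivered to the outlet node D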

  9. WDM Systems and Networks Modeling, Simulation, Design and Engineering

    CERN Document Server

    Ellinas, Georgios; Roudas, Ioannis

    2012-01-01

    WDM Systems and Networks: Modeling, Simulation, Design and Engineering provides readers with the basic skills, concepts, and design techniques used to begin design and engineering of optical communication systems and networks at various layers. The latest semi-analytical system simulation techniques are applied to optical WDM systems and networks, and a review of the various current areas of optical communications is presented. Simulation is mixed with experimental verification and engineering to present the industry as well as state-of-the-art research. This contributed volume is divided into three parts, accommodating different readers interested in various types of networks and applications. The first part of the book presents modeling approaches and simulation tools mainly for the physical layer (including transmission effects, devices, subsystems, and systems), whereas the second part features more engineering/design issues for various types of optical systems including ULH, access, and in-building system...

  10. Experimental Research Testbeds for Large-Scale WSNs: A Survey from the Architectural Perspective

    OpenAIRE

    Hiecheol Kim; Won-Kee Hong; Joonhyuk Yoo; Seong-eun Yoo

    2015-01-01

    Wireless sensor networks (WSNs) have a significant potential in diverse applications. In contrast to WSNs in a small-scale setting, the real-world adoption of large-scale WSNs is quite slow particularly due to the lack of robustness of protocols at all levels. Upon the demanding need for their experimental verification and evaluation, researchers have developed numerous WSN testbeds. While each individual WSN testbed contributes to the progress with its own unique innovation, still a missing ...

  11. Simulated evolution of signal transduction networks.

    Directory of Open Access Journals (Sweden)

    Mohammad Mobashir

    Full Text Available Signal transduction is the process of routing information inside cells when receiving stimuli from their environment that modulate their behavior and function. In such biological processes, the receptors, after receiving the corresponding signals, activate a number of biomolecules which eventually transduce the signal to the nucleus. The main objective of our work is to develop a theoretical approach which will help to better understand the behavior of signal transduction networks due to changes in kinetic parameters and network topology. By using an evolutionary algorithm, we designed a mathematical model which performs basic signaling tasks similar to the signaling process of living cells. We use a simple dynamical model of signaling networks of interacting proteins and their complexes. We study the evolution of signaling networks described by mass-action kinetics. The fitness of the networks is determined by the number of signals detected out of a series of signals with varying strength. The mutations include changes in the reaction rates and network topology. We found that stronger interactions and the addition of new nodes lead to improved evolved responses. The strength of the signal does not play any role in determining the response type. This model will help to understand the dynamic behavior of the proteins involved in signaling pathways. It will also help to understand the robustness of the kinetics of the output response upon changes in the rates of reactions and the topology of the network.
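
    The paper's model is not reproduced in the record; the sketch below is a far simpler stand-in that captures the flavour of the approach: a two-step mass-action cascade is integrated with Euler steps, fitness counts how many input strengths push the output above a detection threshold, and a (1+1) evolutionary loop mutates the rate constants. All equations and numbers are illustrative assumptions.

      # Toy evolution of signalling kinetics: cascade S -> A -> B with mass-action rates.
      import random

      def simulate_output(rates, signal, dt=0.01, steps=2000):
          k1, k2, d1, d2 = rates
          a = b = 0.0
          for _ in range(steps):
              a += dt * (k1 * signal - d1 * a)
              b += dt * (k2 * a - d2 * b)
          return b

      def fitness(rates, signals=(0.2, 0.5, 1.0, 2.0), threshold=1.0):
          return sum(simulate_output(rates, s) > threshold for s in signals)

      rates = [0.5, 0.5, 0.5, 0.5]                       # k1, k2, d1, d2
      best = fitness(rates)
      for _ in range(200):                               # simple (1+1) evolutionary loop
          candidate = [min(10.0, max(1e-3, r * random.uniform(0.8, 1.25))) for r in rates]
          f = fitness(candidate)
          if f >= best:
              rates, best = candidate, f
      print("signals detected:", best, "rates:", [round(r, 3) for r in rates])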

  12. Modified network simulation model with token method of bus access

    Directory of Open Access Journals (Sweden)

    L.V. Stribulevich

    2013-08-01

    Full Text Available Purpose. To study the characteristics of a local area network using the token method of bus access, a modified simulation model of the network was developed. Methodology. The network characteristics are determined with the developed simulation model, which is based on a state diagram of the network station layer with a priority-handling mechanism, covering both the steady state and the execution of control procedures: initiation of the logical ring, and the entrance of a station to, and its exit from, the logical ring. Findings. A simulation model was developed from which one can obtain the dependence of the maximum waiting time of a request in the queue for different access classes, and of the reaction time and usable bandwidth, on the data rate, the number of network stations, the request generation rate, the number of frames transmitted per token holding time, and the frame length. Originality. A network simulation technique was proposed that reflects the network's operation in the steady state and during control procedures, with the priority ranking and handling mechanism. Practical value. The developed simulation model allows network characteristics to be determined for real-time systems in railway transport.

  13. Experimental study of the stress level at the workplace using an smart testbed of wireless sensor networks and ambient intelligence techniques

    OpenAIRE

    Silva, Fábio; Olivares, Teresa; Royo, Fernando; Vergara, M. A.; Analide, César

    2013-01-01

    "Natural and artificial computation in engineering and medical applications : 5th International Work-Conference on the Interplay Between Natural and Artificial Computation, IWINAC 2013, Mallorca, Spain, June 10-14, 2013. Proceedings, Part II", ISBN 978-364238621-3 This paper combines techniques of ambient intelligence and wireless sensor networks with the objective of obtain important conclusions to increase the quality of life of people. In particular, we oriented our study to the stress ...

  14. Neural networks analysis on SSME vibration simulation data

    Science.gov (United States)

    Lo, Ching F.; Wu, Kewei

    1993-01-01

    The neural network method is applied to investigate the feasibility of detecting anomalies in SSME turbopump vibration, to supplement the statistical method utilized in the prototype system. The investigation of neural network analysis is conducted using SSME vibration data from a NASA-developed numerical simulator. The limited application of neural networks to the HPFTP has also shown their effectiveness in diagnosing anomalies in turbopump vibrations.

  15. EVALUATING AUSTRALIAN FOOTBALL LEAGUE PLAYER CONTRIBUTIONS USING INTERACTIVE NETWORK SIMULATION

    Directory of Open Access Journals (Sweden)

    Jonathan Sargent

    2013-03-01

    Full Text Available This paper focuses on the contribution of Australian Football League (AFL) players to their team's on-field network by simulating player interactions within a chosen team list and estimating the net effect on final score margin. A Visual Basic computer program was written, firstly, to isolate the effective interactions between players from a particular team in all 2011 season matches and, secondly, to generate a symmetric interaction matrix for each match. Negative binomial distributions were fitted to each player pairing in the Geelong Football Club for the 2011 season, enabling an interactive match simulation model given the 22 chosen players. Dynamic player ratings were calculated from the simulated network using eigenvector centrality, a method that recognises and rewards interactions with more prominent players in the team network. The centrality ratings were recorded after every network simulation and then applied in final score margin predictions so that each player's match contribution, and hence an optimal team, could be estimated. The paper ultimately demonstrates that the presence of highly rated players, such as Geelong's Jimmy Bartel, provides the most utility within a simulated team network. It is anticipated that these findings will facilitate optimal AFL team selection and player substitutions, which are key areas of interest to coaches. Network simulations are also attractive for use within betting markets, specifically to provide information on the likelihood of a chosen AFL team list "covering the line".
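
    The two core computational steps described above can be sketched compactly (with made-up parameters): draw a symmetric player-interaction matrix from negative binomial distributions, then rate players by eigenvector centrality via power iteration. In the paper a separate distribution would be fitted per player pairing; here one parameterisation is reused for every pairing purely for illustration.

      # Simulate an interaction matrix and compute eigenvector centrality ratings.
      import numpy as np

      rng = np.random.default_rng(1)
      n_players = 22

      counts = rng.negative_binomial(n=3, p=0.5, size=(n_players, n_players))
      interactions = np.triu(counts, k=1)
      interactions = interactions + interactions.T          # symmetric interaction matrix

      def eigenvector_centrality(A, iters=200):
          x = np.ones(len(A))
          for _ in range(iters):                            # power iteration
              x = A @ x
              x /= np.linalg.norm(x)
          return x

      ratings = eigenvector_centrality(interactions)
      print("highest-rated player index:", int(np.argmax(ratings)))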

  16. High-contrast imaging testbed

    Energy Technology Data Exchange (ETDEWEB)

    Baker, K; Silva, D; Poyneer, L; Macintosh, B; Bauman, B; Palmer, D; Remington, T; Delgadillo-Lariz, M

    2008-01-23

    Several high-contrast imaging systems are currently under construction to enable the detection of extra-solar planets. In order for these systems to achieve their objectives, however, there is considerable developmental work and testing which must take place. Given the need to perform these tests, a spatially-filtered Shack-Hartmann adaptive optics system has been assembled to evaluate new algorithms and hardware configurations which will be implemented in these future high-contrast imaging systems. In this article, construction and phase measurements of a membrane 'woofer' mirror are presented. In addition, results from closed-loop operation of the assembled testbed with static phase plates are presented. The testbed is currently being upgraded to enable operation at speeds approaching 500 Hz and to enable studies of the interactions between the woofer and tweeter deformable mirrors.

  17. Evaluating Australian football league player contributions using interactive network simulation.

    Science.gov (United States)

    Sargent, Jonathan; Bedford, Anthony

    2013-01-01

    This paper focuses on the contribution of Australian Football League (AFL) players to their team's on-field network by simulating player interactions within a chosen team list and estimating the net effect on final score margin. A Visual Basic computer program was written, firstly, to isolate the effective interactions between players from a particular team in all 2011 season matches and, secondly, to generate a symmetric interaction matrix for each match. Negative binomial distributions were fitted to each player pairing in the Geelong Football Club for the 2011 season, enabling an interactive match simulation model given the 22 chosen players. Dynamic player ratings were calculated from the simulated network using eigenvector centrality, a method that recognises and rewards interactions with more prominent players in the team network. The centrality ratings were recorded after every network simulation and then applied in final score margin predictions so that each player's match contribution, and hence an optimal team, could be estimated. The paper ultimately demonstrates that the presence of highly rated players, such as Geelong's Jimmy Bartel, provides the most utility within a simulated team network. It is anticipated that these findings will facilitate optimal AFL team selection and player substitutions, which are key areas of interest to coaches. Network simulations are also attractive for use within betting markets, specifically to provide information on the likelihood of a chosen AFL team list "covering the line". Key points: a simulated interaction matrix for Australian Rules football players is proposed; the simulations were carried out by fitting unique negative binomial distributions to each player pairing in a side; eigenvector centrality was calculated for each player in a simulated matrix, then for the team; the team centrality measure adequately predicted the team's winning margin; a player's net effect on margin could hence be estimated by replacing him in

  18. Slow update stochastic simulation algorithms for modeling complex biochemical networks.

    Science.gov (United States)

    Ghosh, Debraj; De, Rajat K

    2017-10-30

    The stochastic simulation algorithm (SSA) based modeling is a well recognized approach to predict the stochastic behavior of biological networks. The stochastic simulation of large complex biochemical networks is a challenge, as it takes a large amount of time due to the high propensity update cost. In order to reduce the propensity update cost, we proposed two algorithms: the slow update exact stochastic simulation algorithm (SUESSA) and the slow update exact sorting stochastic simulation algorithm (SUESSSA). We applied cache-based linear search (CBLS) in these two algorithms to improve the search operation for finding the reactions to be executed. The data structure used for incorporating CBLS is very simple, and the cost of maintaining it during the propensity update operation is very low. Hence, the time taken by propensity updates when simulating strongly coupled networks is very short, which leads to a reduction of the total simulation time. SUESSA and SUESSSA are not restricted to elementary reactions; they support higher order reactions too. We used a linear chain model and a colloidal aggregation model to perform a comparative analysis of the performance of our methods with the existing algorithms. We also compared the performance of our methods with the existing ones for large biochemical networks, including the B cell receptor and FcϵRI signaling networks. Copyright © 2017 Elsevier B.V. All rights reserved.
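
    SUESSA and SUESSSA build on the standard SSA; for orientation, a plain Gillespie direct-method sketch for a toy birth-death network is shown below (the slow-update and cache-based search bookkeeping themselves are not shown, and the reaction network is made up).

      # Plain Gillespie direct-method SSA for a toy two-reaction network.
      import random

      def gillespie(x, rates, stoich, propensity, t_end):
          t, trajectory = 0.0, [(0.0, tuple(x))]
          while t < t_end:
              a = [propensity(j, x, rates) for j in range(len(stoich))]
              a0 = sum(a)
              if a0 == 0:
                  break
              t += random.expovariate(a0)                   # time to next reaction
              r, acc = random.random() * a0, 0.0
              for j, aj in enumerate(a):                    # pick which reaction fires
                  acc += aj
                  if r < acc:
                      x = [xi + s for xi, s in zip(x, stoich[j])]
                      break
              trajectory.append((t, tuple(x)))
          return trajectory

      # Toy network: 0 -> A at rate k1, A -> 0 at rate k2 * A
      stoich = [(+1,), (-1,)]
      def propensity(j, x, k):
          return k[0] if j == 0 else k[1] * x[0]

      traj = gillespie(x=[0], rates=[5.0, 0.1], stoich=stoich, propensity=propensity, t_end=100.0)
      print("final copy number:", traj[-1][1][0])           # fluctuates around k1/k2 = 50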

  19. PyNN: A Common Interface for Neuronal Network Simulators

    Science.gov (United States)

    Davison, Andrew P.; Brüderle, Daniel; Eppler, Jochen; Kremkow, Jens; Muller, Eilif; Pecevski, Dejan; Perrinet, Laurent; Yger, Pierre

    2008-01-01

    Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN. PMID:19194529

  20. PyNN: a common interface for neuronal network simulators

    Directory of Open Access Journals (Sweden)

    Andrew P Davison

    2009-01-01

    Full Text Available Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization, and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN.
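
    A minimal PyNN-style script is sketched below to illustrate the simulator-independent workflow the two records above describe. It assumes PyNN and the NEST backend are installed, and the cell-type, connector and parameter choices are illustrative; exact API details can vary between PyNN versions.

        import pyNN.nest as sim   # swapping this import for pyNN.neuron targets another simulator

        sim.setup(timestep=0.1)

        # two small populations of integrate-and-fire cells driven by Poisson noise
        exc = sim.Population(80, sim.IF_cond_exp())
        inh = sim.Population(20, sim.IF_cond_exp())
        noise = sim.Population(80, sim.SpikeSourcePoisson(rate=20.0))

        sim.Projection(noise, exc, sim.OneToOneConnector(),
                       synapse_type=sim.StaticSynapse(weight=0.01))
        sim.Projection(exc, inh, sim.FixedProbabilityConnector(0.1),
                       synapse_type=sim.StaticSynapse(weight=0.005))

        exc.record("spikes")
        sim.run(1000.0)                       # simulate one second
        data = exc.get_data().segments[0]     # Neo segment with the recorded spike trains
        sim.end()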

  1. Developed hydraulic simulation model for water pipeline networks

    Directory of Open Access Journals (Sweden)

    A. Ayad

    2013-03-01

    Full Text Available A numerical method that uses linear graph theory is presented for both steady-state and extended-period simulation in a pipe network including its hydraulic components (pumps, valves, junctions, etc.). The developed model is based on the Extended Linear Graph Theory (ELGT) technique. This technique is modified to include new network components such as flow control valves and tanks. The technique is also expanded for extended period simulation (EPS). A newly modified method for the calculation of updated flows, improving the convergence rate, is introduced. Both benchmark and actual networks are analyzed to check the reliability of the proposed method. The results confirm the good performance of the proposed method.
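
    As a point of comparison for network hydraulic solvers such as the one above, the sketch below applies the classical Hardy Cross loop-correction method (a textbook technique, not the ELGT method described by the author) to a single two-pipe loop; the resistance coefficients and initial flow split are invented.

        # Hardy Cross iteration for one loop: two parallel pipes carrying a total of 100 L/s.
        # Head loss per pipe is modelled as h = r * Q * |Q|, with invented r values.
        r = [4.0, 9.0]              # resistance coefficients of pipe 1 and pipe 2
        Q = [70.0, -30.0]           # initial guess, signed by an assumed clockwise loop direction

        for _ in range(20):
            num = sum(ri * qi * abs(qi) for ri, qi in zip(r, Q))   # sum of loop head losses
            den = sum(2.0 * ri * abs(qi) for ri, qi in zip(r, Q))  # derivative term
            dq = -num / den                                        # loop flow correction
            Q = [qi + dq for qi in Q]

        print([round(q, 2) for q in Q])   # converges so that r1*Q1^2 = r2*Q2^2 around the loop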

  2. Importance of simulation tools for the planning of optical network

    Science.gov (United States)

    Martins, Indayara B.; Martins, Yara; Rudge, Felipe; Moschimı, Edson

    2015-10-01

    The main proposal of this work is to show the importance of using simulation tools to project optical networks. The simulation method supports the investigation of several system and network parameters, such as bit error rate, blocking probability as well as physical layer issues, such as attenuation, dispersion, and nonlinearities, as these are all important to evaluate and validate the operability of optical networks. The work was divided into two parts: firstly, physical layer preplanning was proposed for the distribution of amplifiers and compensating for the attenuation and dispersion effects in span transmission; in this part, we also analyzed the quality of the transmitted signal. In the second part, an analysis of the transport layer was completed, proposing wavelength distribution planning, according to the total utilization of each link. The main network parameters used to evaluate the transport and physical layer design were delay (latency), blocking probability, and bit error rate (BER). This work was carried out with commercially available simulation tools.

  3. Simulating Social Networks of Online Communities: Simulation as a Method for Sociability Design

    Science.gov (United States)

    Ang, Chee Siang; Zaphiris, Panayiotis

    We propose the use of social simulations to study and support the design of online communities. In this paper, we developed an Agent-Based Model (ABM) to simulate and study the formation of social networks in a Massively Multiplayer Online Role Playing Game (MMORPG) guild community. We first analyzed the activities and the social network (who-interacts-with-whom) of an existing guild community to identify its interaction patterns and characteristics. Then, based on the empirical results, we derived and formalized the interaction rules, which were implemented in our simulation. Using the simulation, we reproduced the observed social network of the guild community as a means of validation. The simulation was then used to examine how various parameters of the community (e.g. the level of activity, the number of neighbors of each agent, etc.) could potentially influence the characteristics of the social networks.
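
    A toy agent-based sketch of who-interacts-with-whom network formation in a guild-like community is shown below. The interaction rule (more active agents initiate contact more often and mostly re-contact existing neighbours) is a generic assumption for illustration, not the rules derived in the study.

        import random
        import networkx as nx

        random.seed(3)
        n_agents, steps = 50, 2000
        activity = [random.random() for _ in range(n_agents)]   # assumed activity levels
        G = nx.Graph()
        G.add_nodes_from(range(n_agents))

        for _ in range(steps):
            # an agent becomes active with probability proportional to its activity level
            a = random.choices(range(n_agents), weights=activity)[0]
            neighbours = list(G.neighbors(a))
            if neighbours and random.random() < 0.7:
                b = random.choice(neighbours)                    # mostly interact with known partners
            else:
                b = random.choice([x for x in range(n_agents) if x != a])
            G.add_edge(a, b)

        print(G.number_of_edges(), nx.density(G))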

  4. Meeting the memory challenges of brain-scale network simulation

    Directory of Open Access Journals (Sweden)

    Susanne eKunkel

    2012-01-01

    Full Text Available The development of high-performance simulation software is crucial for studying the brain connectome. Using connectome data to generate neurocomputational models requires software capable of coping with models on a variety of scales: from the microscale, investigating plasticity and dynamics of circuits in local networks, to the macroscale, investigating the interactions between distinct brain regions. Prior to any serious dynamical investigation, the first task of network simulations is to check the consistency of data integrated in the connectome and constrain ranges for yet unknown parameters. Thanks to distributed computing techniques, it is possible today to routinely simulate local cortical networks of around 10^5 neurons with up to 10^9 synapses on clusters and multi-processor shared-memory machines. However, brain-scale networks are one or two orders of magnitude larger than such local networks, in terms of numbers of neurons and synapses as well as in terms of computational load. Such networks have been studied in individual studies, but the underlying simulation technologies have neither been described in sufficient detail to be reproducible nor made publicly available. Here, we discover that as the network model sizes approach the regime of meso- and macroscale simulations, memory consumption on individual compute nodes becomes a critical bottleneck. This is especially relevant on modern supercomputers such as the Bluegene/P architecture where the available working memory per CPU core is rather limited. We develop a simple linear model to analyze the memory consumption of the constituent components of a neuronal simulator as a function of network size and the number of cores used. This approach has multiple benefits. The model enables identification of key contributing components to memory saturation and prediction of the effects of potential improvements to code before any implementation takes place.
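
    The record above describes a simple linear model of per-core memory consumption as a function of network size and core count. A generic version of such a model is sketched below; all coefficient values are invented for illustration and are not the ones fitted in the study.

        def memory_per_core(n_neurons, n_synapses, n_cores,
                            base=200e6, per_neuron=1.5e3, per_synapse=40.0,
                            per_neuron_overhead=1.0):
            """Illustrative linear memory model (bytes per core); all coefficients are made up.

            base                 fixed simulator footprint on each core
            per_neuron           cost of each locally stored neuron
            per_synapse          cost of each locally stored incoming synapse
            per_neuron_overhead  per-core bookkeeping that scales with the total neuron count
            """
            local_neurons = n_neurons / n_cores
            local_synapses = n_synapses / n_cores
            return (base
                    + per_neuron * local_neurons
                    + per_synapse * local_synapses
                    + per_neuron_overhead * n_neurons)

        # a brain-scale example: 1e9 neurons, 1e13 synapses, 65536 cores; the last term
        # does not shrink with more cores and eventually saturates the per-core memory
        print(memory_per_core(1e9, 1e13, 65536) / 2**30, "GiB per core")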

  5. Computational Aspects of Sensor Network Protocols (Distributed Sensor Network Simulator

    Directory of Open Access Journals (Sweden)

    Vasanth Iyer

    2009-08-01

    Full Text Available In this work, we model the sensor networks as an unsupervised learning and clustering process. We classify nodes according to their static distribution to form known class densities (CCPD). These densities are chosen from specific cross-layer features which maximize the lifetime of power-aware routing algorithms. To circumvent the computational complexities of a power-aware communication stack we introduce path-loss models at the nodes only for high-density deployments. We study the cluster heads and formulate the data handling capacity for an expected deployment and use localized probability models to fuse the data with its side information before transmission. So each cluster head has a unique Pmax but not all cluster heads have the same measured value. In a lossless mode, if there are no faults in the sensor network, then we can show that the highest probability given by Pmax is ambiguous if its frequency is ≤ n/2; otherwise it can be determined by a local function. We further show that the event detection at the cluster heads can be modelled with a pattern 2m and m, the number of bits, can be a correlated pattern of 2 bits, and for a tight lower bound we use 3-bit Huffman codes which have entropy < 1. These local algorithms are further studied to optimize on power and fault detection and to maximize on the distributed routing algorithm used at the higher layers. From these bounds in large networks, it is observed that the power dissipation is network-size invariant. The performance of the routing algorithms is based solely on the success of finding healthy nodes in a large distribution. It is also observed that if the network size is kept constant and the density of the nodes is kept closer, then the local path-loss model affects the performance of the routing algorithms. We also obtain the maximum intensity of transmitting nodes for a given category of routing algorithms under an outage constraint, i.e., the lifetime of the sensor network.

  6. Power Aware Simulation Framework for Wireless Sensor Networks and Nodes

    Directory of Open Access Journals (Sweden)

    Daniel Weber

    2008-07-01

    Full Text Available The constrained resources of sensor nodes limit analytical techniques and cost-time factors limit test beds to study wireless sensor networks (WSNs). Consequently, simulation becomes an essential tool to evaluate such systems. We present the power aware wireless sensors (PAWiS) simulation framework that supports design and simulation of wireless sensor networks and nodes. The framework emphasizes power consumption capturing and hence the identification of inefficiencies in various hardware and software modules of the systems. These modules include all layers of the communication system, the targeted class of application itself, the power supply and energy management, the central processing unit (CPU), and the sensor-actuator interface. The modular design makes it possible to simulate heterogeneous systems. PAWiS is an OMNeT++ based discrete event simulator written in C++. It captures the node internals (modules) as well as the node surroundings (network, environment) and provides specific features critical to WSNs like capturing power consumption at various levels of granularity, support for mobility, and environmental dynamics as well as the simulation of timing effects. A module library with standardized interfaces and a power analysis tool have been developed to support the design and analysis of simulation models. The performance of the PAWiS simulator is comparable with other simulation environments.

  7. A remote integrated testbed for cooperating objects

    CERN Document Server

    Dios, Jose Ramiro Martinez-de; Bernabe, Alberto de San; Ollero, Anibal

    2013-01-01

    Testbeds are gaining increasing relevance in research domains and also in industrial applications. However, very few books devoted to testbeds have been published; to the best of my knowledge, no book had previously been devoted to this topic. This book is particularly interesting for the growing community of testbed developers. I believe the book is also very interesting for researchers in robot-WSN cooperation. This book provides a detailed description of a system that can be considered the first testbed that allows full peer-to-peer interoperability between heterogeneous robots and ubiquitous systems su

  8. Simulating public private networks as evolving systems

    NARCIS (Netherlands)

    Deljoo, A.; Janssen, M.F.W.H.A.; Klievink, A.J.

    2013-01-01

    Public-private service networks (PPSN) consist of social and technology components. Development of PPSN is ill-understood as these are dependent on a complex mix of interactions among stakeholders and their technologies and is influenced by contemporary developments. The aim of this paper is to

  9. Adaptive Importance Sampling Simulation of Queueing Networks

    NARCIS (Netherlands)

    de Boer, Pieter-Tjerk; Nicola, V.F.; Rubinstein, N.; Rubinstein, Reuven Y.

    2000-01-01

    In this paper, a method is presented for the efficient estimation of rare-event (overflow) probabilities in Jackson queueing networks using importance sampling. The method differs in two ways from methods discussed in most earlier literature: the change of measure is state-dependent, i.e., it is a

  10. Queueing networks : Rare events and fast simulations

    NARCIS (Netherlands)

    Miretskiy, D.I.

    2009-01-01

    This monograph focuses on rare events. Even though they are extremely unlikely, they can still occur and then could have significant consequences. We mainly consider rare events in queueing networks. More precisely, we are interested in the probability of collecting some large number of jobs in the

  11. Simulating activation propagation in social networks using the graph theory

    Directory of Open Access Journals (Sweden)

    František Dařena

    2010-01-01

    Full Text Available Social-network formation and analysis is nowadays one of the objects that are in the focus of intensive research. The objective of the paper is to suggest the perspective of representing social networks as graphs, applying graph theory to problems connected with studying network-like structures, and to study the spreading activation algorithm as a means of analyzing these structures. The paper presents the process of modeling multidimensional networks by means of directed graphs with several characteristics. The paper also demonstrates the use of the Spreading Activation algorithm as a good method for analyzing multidimensional networks, with the main focus on recommender systems. The experiments showed that the choice of the algorithm's parameters is crucial, that some kind of constraint should be included, and that the algorithm is able to provide a stable environment for simulations with networks.
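
    A compact spreading activation sketch over a directed weighted graph is given below. The decay and firing-threshold values are arbitrary choices, and the tiny graph is a made-up example rather than a real social network or the parameterisation studied in the paper.

        import networkx as nx

        # a tiny made-up directed network with edge weights in [0, 1]
        G = nx.DiGraph()
        G.add_weighted_edges_from([("ann", "bob", 0.8), ("bob", "cid", 0.5),
                                   ("ann", "cid", 0.3), ("cid", "dee", 0.9)])

        def spreading_activation(G, seeds, decay=0.7, threshold=0.1, iterations=3):
            activation = {n: 0.0 for n in G}
            activation.update(seeds)                       # initial activation of seed nodes
            for _ in range(iterations):
                new = dict(activation)
                for u in G:
                    if activation[u] >= threshold:         # only sufficiently active nodes fire
                        for _, v, w in G.out_edges(u, data="weight"):
                            new[v] += activation[u] * w * decay
                activation = new
            return activation

        print(spreading_activation(G, {"ann": 1.0}))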

  12. SiGNet: A signaling network data simulator to enable signaling network inference.

    Directory of Open Access Journals (Sweden)

    Elizabeth A Coker

    Full Text Available Network models are widely used to describe complex signaling systems. Cellular wiring varies in different cellular contexts and numerous inference techniques have been developed to infer the structure of a network from experimental data of the network's behavior. To objectively identify which inference strategy is best suited to a specific network, a gold standard network and dataset are required. However, suitable datasets for benchmarking are difficult to find. Numerous tools exist that can simulate data for transcriptional networks, but these are of limited use for the study of signaling networks. Here, we describe SiGNet (Signal Generator for Networks): a Cytoscape app that simulates experimental data for a signaling network of known structure. SiGNet has been developed and tested against published experimental data, incorporating information on network architecture, and the directionality and strength of interactions to create biological data in silico. SiGNet is the first tool to simulate biological signaling data, enabling an accurate and systematic assessment of inference strategies. SiGNet can also be used to produce preliminary models of key biological pathways following perturbation.

  13. Integrated Circuit For Simulation Of Neural Network

    Science.gov (United States)

    Thakoor, Anilkumar P.; Moopenn, Alexander W.; Khanna, Satish K.

    1988-01-01

    Ballast resistors deposited on top of circuit structure. Cascadable, programmable binary connection matrix fabricated in VLSI form as basic building block for assembly of like units into content-addressable electronic memory matrices operating somewhat like networks of neurons. Connections formed during storage of data, and data recalled from memory by prompting matrix with approximate or partly erroneous signals. Redundancy in pattern of connections causes matrix to respond with correct stored data.

  14. Software for Brain Network Simulations: A Comparative Study

    Science.gov (United States)

    Tikidji-Hamburyan, Ruben A.; Narayana, Vikram; Bozkus, Zeki; El-Ghazawi, Tarek A.

    2017-01-01

    Numerical simulations of brain networks are a critical part of our efforts in understanding brain functions under pathological and normal conditions. For several decades, the community has developed many software packages and simulators to accelerate research in computational neuroscience. In this article, we select the three most popular simulators, as determined by the number of models in the ModelDB database, namely NEURON, GENESIS, and BRIAN, and perform an independent evaluation of these simulators. In addition, we study NEST, one of the lead simulators of the Human Brain Project. First, we study them based on one of the most important characteristics, the range of supported models. Our investigation reveals that brain network simulators may be biased toward supporting a specific set of models. However, all simulators tend to expand the supported range of models by providing a universal environment for the computational study of individual neurons and brain networks. Next, our investigations on the characteristics of computational architecture and efficiency indicate that all simulators compile the most computationally intensive procedures into binary code, with the aim of maximizing their computational performance. However, not all simulators provide the simplest method for module development and/or guarantee efficient binary code. Third, a study of their amenability for high-performance computing reveals that NEST can almost transparently map an existing model on a cluster or multicore computer, while NEURON requires code modification if the model developed for a single computer has to be mapped on a computational cluster. Interestingly, parallelization is the weakest characteristic of BRIAN, which provides no support for cluster computations and limited support for multicore computers. Fourth, we identify the level of user support and frequency of usage for all simulators. Finally, we carry out an evaluation using two case studies: a large network with

  15. Prototyping and Simulation of Robot Group Intelligence using Kohonen Networks.

    Science.gov (United States)

    Wang, Zhijun; Mirdamadi, Reza; Wang, Qing

    2016-01-01

    Intelligent agents such as robots can form ad hoc networks and replace human beings in many dangerous scenarios such as a complicated disaster relief site. This project prototypes and builds a computer simulator to simulate robot kinetics, unsupervised learning using Kohonen networks, as well as group intelligence when an ad hoc network is formed. Each robot is modeled using an object with a simple set of attributes and methods that define its internal states and possible actions it may take under certain circumstances. As a result, simple, reliable, and affordable robots can be deployed to form the network. The simulator simulates a group of robots as an unsupervised learning unit and tests the learning results under scenarios with different complexities. The simulation results show that a group of robots could demonstrate highly collaborative behavior on a complex terrain. This study could potentially provide a software simulation platform for testing individual and group capability of robots before the design process and manufacturing of robots. Therefore, results of the project have the potential to reduce the cost and improve the efficiency of robot design and building.
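
    The unsupervised learning step mentioned above can be illustrated with a minimal Kohonen self-organising map update loop, shown below on random 2-D points standing in for sensor readings; the grid size, learning-rate schedule and neighbourhood width are arbitrary choices, not the settings used in the project.

        import numpy as np

        rng = np.random.default_rng(4)
        data = rng.uniform(0, 1, size=(500, 2))          # stand-in for robot sensor readings
        grid = rng.uniform(0, 1, size=(5, 5, 2))         # 5x5 map of weight vectors
        gy, gx = np.mgrid[0:5, 0:5]

        for t, x in enumerate(data):
            lr = 0.5 * (1 - t / len(data))               # decaying learning rate
            sigma = 2.0 * (1 - t / len(data)) + 0.5      # decaying neighbourhood width
            dist = np.linalg.norm(grid - x, axis=2)
            by, bx = np.unravel_index(np.argmin(dist), dist.shape)   # best matching unit
            h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
            grid += lr * h[..., None] * (x - grid)       # pull the neighbourhood towards the sample

        print(grid.reshape(-1, 2).round(2))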

  16. NASA's telemedicine testbeds: Commercial benefit

    Science.gov (United States)

    Doarn, Charles R.; Whitten, Raymond

    1998-01-01

    The National Aeronautics and Space Administration (NASA) has been developing and applying telemedicine to support space flight since the Agency's beginning. Telemetry of physiological parameters from spacecraft to ground controllers is critical to assess the health status of humans in extreme and remote environments. Requisite systems to support medical care and maintain readiness will evolve as mission duration and complexity increase. Developing appropriate protocols and procedures to support multinational, multicultural missions is a key objective of this activity. NASA has created an Agency-wide strategic plan that focuses on the development and integration of technology into the health care delivery systems for space flight to meet these challenges. In order to evaluate technology and systems that can enhance inflight medical care and medical education, NASA has established and conducted several testbeds. Additionally, in June of 1997, NASA established a Commercial Space Center (CSC) for Medical Informatics and Technology Applications at Yale University School of Medicine. These testbeds and the CSC foster the leveraging of technology and resources between government, academia and industry to enhance health care. This commercial endeavor will influence both the delivery of health care in space and on the ground. To date, NASA's activities in telemedicine have provided new ideas in the application of telecommunications and information systems to health care. NASA's Spacebridge to Russia, an Internet-based telemedicine testbed, is one example of how telemedicine and medical education can be conducted using the Internet and its associated tools. Other NASA activities, including the development of a portable telemedicine workstation, which has been demonstrated on the Crow Indian Reservation and in the Texas Prison System, show promise in serving as significant adjuncts to the delivery of health care. As NASA continues to meet the challenges of space flight, the

  17. HSimulator: Hybrid Stochastic/Deterministic Simulation of Biochemical Reaction Networks

    Directory of Open Access Journals (Sweden)

    Luca Marchetti

    2017-01-01

    Full Text Available HSimulator is a multithread simulator for mass-action biochemical reaction systems placed in a well-mixed environment. HSimulator provides optimized implementation of a set of widespread state-of-the-art stochastic, deterministic, and hybrid simulation strategies including the first publicly available implementation of the Hybrid Rejection-based Stochastic Simulation Algorithm (HRSSA. HRSSA, the fastest hybrid algorithm to date, allows for an efficient simulation of the models while ensuring the exact simulation of a subset of the reaction network modeling slow reactions. Benchmarks show that HSimulator is often considerably faster than the other considered simulators. The software, running on Java v6.0 or higher, offers a simulation GUI for modeling and visually exploring biological processes and a Javadoc-documented Java library to support the development of custom applications. HSimulator is released under the COSBI Shared Source license agreement (COSBI-SSLA.

  18. System Identification, Prediction, Simulation and Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    a Gauss-Newton search direction is applied. 3) Amongst numerous model types, often met in control applications, only the Non-linear ARMAX (NARMAX) model, representing input/output description, is examined. A simulated example confirms that a neural network has the potential to perform excellent System...... Identification, Prediction, Simulation and Control of a dynamic, non-linear and noisy process. Further, the difficulties to control a practical non-linear laboratory process in a satisfactory way by using a traditional controller are overcomed by using a trained neural network to perform non-linear System......The intention of this paper is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...

  19. Network bursts in cortical neuronal cultures: 'noise - versus pacemaker'- driven neural network simulations

    NARCIS (Netherlands)

    Gritsun, T.; Stegenga, J.; le Feber, Jakob; Rutten, Wim

    2009-01-01

    In this paper we address the issue of spontaneous bursting activity in cortical neuronal cultures and explain what might cause this collective behavior using computer simulations of two different neural network models. While the common approach to activate a passive network is done by introducing

  20. High Fidelity Simulations of Large-Scale Wireless Networks

    Energy Technology Data Exchange (ETDEWEB)

    Onunkwo, Uzoma [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Benz, Zachary [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    The worldwide proliferation of wireless connected devices continues to accelerate. There are 10s of billions of wireless links across the planet with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies do not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulations (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround time. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES and fail to scale (e.g., OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating communication overhead associated with synchronizations. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia’s simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia’s current highly-regarded capabilities in large-scale emulations have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.

  1. Stochastic Simulation of Biomolecular Networks in Dynamic Environments.

    Science.gov (United States)

    Voliotis, Margaritis; Thomas, Philipp; Grima, Ramon; Bowsher, Clive G

    2016-06-01

    Simulation of biomolecular networks is now indispensable for studying biological systems, from small reaction networks to large ensembles of cells. Here we present a novel approach for stochastic simulation of networks embedded in the dynamic environment of the cell and its surroundings. We thus sample trajectories of the stochastic process described by the chemical master equation with time-varying propensities. A comparative analysis shows that existing approaches can either fail dramatically, or else can impose impractical computational burdens due to numerical integration of reaction propensities, especially when cell ensembles are studied. Here we introduce the Extrande method which, given a simulated time course of dynamic network inputs, provides a conditionally exact and several orders-of-magnitude faster simulation solution. The new approach makes it feasible to demonstrate, using decision-making by a large population of quorum sensing bacteria, that robustness to fluctuations from upstream signaling places strong constraints on the design of networks determining cell fate. Our approach has the potential to significantly advance both understanding of molecular systems biology and design of synthetic circuits.
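
    The thinning idea behind Extrande can be sketched as follows: over a look-ahead window, bound the total propensity from above, draw candidate event times from the bound, and accept each candidate with probability proportional to the true propensity, treating rejections as "extra" null events. The code below is a schematic of that idea for a single birth-death species driven by a time-varying input; it is not the published implementation, and its propensity bound is deliberately simple rather than rigorous.

        import math
        import random

        random.seed(5)

        def input_signal(t):                 # assumed upstream dynamic input
            return 2.0 + math.sin(t)

        def propensities(x, t):
            return [5.0 * input_signal(t),   # birth, modulated by the input
                    0.3 * x]                 # death

        x, t, t_end = 0, 0.0, 50.0
        while t < t_end:
            L = 1.0                                          # look-ahead horizon
            # generous (not strict) bound; a rigorous implementation must guarantee
            # B >= total propensity everywhere in [t, t + L]
            B = 5.0 * 3.0 + 0.3 * (x + 20)
            tau = random.expovariate(B)
            if tau > L:                                      # no candidate within the horizon
                t += L
                continue
            t += tau
            a = propensities(x, t)
            u = random.uniform(0.0, B)
            if u < a[0]:
                x += 1                                       # birth fires
            elif u < a[0] + a[1]:
                x -= 1                                       # death fires
            # else: "extra" (thinned) event, state unchanged
        print(t, x)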

  2. Stochastic Simulation of Biomolecular Networks in Dynamic Environments.

    Directory of Open Access Journals (Sweden)

    Margaritis Voliotis

    2016-06-01

    Full Text Available Simulation of biomolecular networks is now indispensable for studying biological systems, from small reaction networks to large ensembles of cells. Here we present a novel approach for stochastic simulation of networks embedded in the dynamic environment of the cell and its surroundings. We thus sample trajectories of the stochastic process described by the chemical master equation with time-varying propensities. A comparative analysis shows that existing approaches can either fail dramatically, or else can impose impractical computational burdens due to numerical integration of reaction propensities, especially when cell ensembles are studied. Here we introduce the Extrande method which, given a simulated time course of dynamic network inputs, provides a conditionally exact and several orders-of-magnitude faster simulation solution. The new approach makes it feasible to demonstrate, using decision-making by a large population of quorum sensing bacteria, that robustness to fluctuations from upstream signaling places strong constraints on the design of networks determining cell fate. Our approach has the potential to significantly advance both understanding of molecular systems biology and design of synthetic circuits.

  3. SELANSI: a toolbox for Simulation of Stochastic Gene Regulatory Networks.

    Science.gov (United States)

    Pájaro, Manuel; Otero-Muras, Irene; Vázquez, Carlos; Alonso, Antonio A

    2017-10-11

    Gene regulation is inherently stochastic. In many applications concerning Systems and Synthetic Biology such as the reverse engineering and the de novo design of genetic circuits, stochastic effects (yet potentially crucial) are often neglected due to the high computational cost of stochastic simulations. With advances in these fields there is an increasing need of tools providing accurate approximations of the stochastic dynamics of gene regulatory networks (GRNs) with reduced computational effort. This work presents SELANSI (SEmi-LAgrangian SImulation of GRNs), a software toolbox for the simulation of stochastic multidimensional gene regulatory networks. SELANSI exploits intrinsic structural properties of gene regulatory networks to accurately approximate the corresponding chemical master equation (CME) with a partial integral differential equation (PIDE) that is solved by a semi-lagrangian method with high efficiency. Networks under consideration might involve multiple genes with self and cross regulations, in which genes can be regulated by different transcription factors. Moreover, the validity of the method is not restricted to a particular type of kinetics. The tool offers total flexibility regarding network topology, kinetics and parameterization, as well as simulation options. SELANSI runs under the MATLAB environment, and is available under GPLv3 license at https://sites.google.com/view/selansi. antonio@iim.csic.es.

  4. Simulated, Emulated, and Physical Investigative Analysis (SEPIA) of networked systems.

    Energy Technology Data Exchange (ETDEWEB)

    Burton, David P.; Van Leeuwen, Brian P.; McDonald, Michael James; Onunkwo, Uzoma A.; Tarman, Thomas David; Urias, Vincent E.

    2009-09-01

    This report describes recent progress made in developing and utilizing hybrid Simulated, Emulated, and Physical Investigative Analysis (SEPIA) environments. Many organizations require advanced tools to analyze their information system's security, reliability, and resilience against cyber attack. Today's security analyses utilize real systems such as computers, network routers and other network equipment, computer emulations (e.g., virtual machines) and simulation models separately to analyze the interplay between threats and safeguards. In contrast, this work developed new methods to combine these three approaches to provide integrated hybrid SEPIA environments. Our SEPIA environments enable an analyst to rapidly configure hybrid environments to pass network traffic and perform, from the outside, like real networks. This provides higher fidelity representations of key network nodes while still leveraging the scalability and cost advantages of simulation tools. The result is to rapidly produce large yet relatively low-cost multi-fidelity SEPIA networks of computers and routers that let analysts quickly investigate threats and test protection approaches.

  5. Synthesis of recurrent neural networks for dynamical system simulation.

    Science.gov (United States)

    Trischler, Adam P; D'Eleuterio, Gabriele M T

    2016-08-01

    We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation, then recast it as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time. Copyright © 2016 Elsevier Ltd. All rights reserved.
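
    The approach above (train a feedforward network on a system's vector field, then run it recurrently) can be mimicked with a much simpler stand-in: below, random tanh features plus a least-squares readout approximate the vector field of a damped oscillator, and the fitted map is then iterated in closed loop with an Euler step. This is an illustrative substitute under those assumptions, not the authors' backpropagation-based algorithm.

        import numpy as np

        rng = np.random.default_rng(6)

        def f(x):                                   # damped harmonic oscillator vector field
            return np.array([x[1], -x[0] - 0.2 * x[1]])

        # training data: vector-field samples on a region of state space
        X = rng.uniform(-2, 2, size=(2000, 2))
        Y = np.array([f(x) for x in X])

        # random-feature "feedforward network": fixed hidden layer, least-squares output weights
        W_in = rng.normal(size=(2, 100))
        b = rng.normal(size=100)
        H = np.tanh(X @ W_in + b)
        W_out, *_ = np.linalg.lstsq(H, Y, rcond=None)

        def net(x):                                 # learned approximation of the vector field
            return np.tanh(x @ W_in + b) @ W_out

        # run the trained map recurrently (Euler integration) and compare with the true system
        dt, x_net, x_true = 0.01, np.array([1.0, 0.0]), np.array([1.0, 0.0])
        for _ in range(1000):
            x_net = x_net + dt * net(x_net)
            x_true = x_true + dt * f(x_true)
        print(x_net.round(3), x_true.round(3))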

  6. Social Network Mixing Patterns In Mergers & Acquisitions - A Simulation Experiment

    Directory of Open Access Journals (Sweden)

    Robert Fabac

    2011-01-01

    Full Text Available In the contemporary world of global business and continuously growing competition, organizations tend to use mergers and acquisitions to enforce their position on the market. The future organization’s design is a critical success factor in such undertakings. The field of social network analysis can enhance our understanding of these processes as it lets us reason about the development of networks, regardless of their origin. The analysis of mixing patterns is particularly useful as it provides an insight into how nodes in a network connect with each other. We hypothesize that organizational networks with compatible mixing patterns will be integrated more successfully. After conducting a simulation experiment, we suggest an integration model based on the analysis of network assortativity. The model can be a guideline for organizational integration, such as occurs in mergers and acquisitions.
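
    Degree assortativity, the mixing-pattern statistic at the heart of the experiment above, can be computed directly with networkx. The two example graphs below (a preferential-attachment network and a random network) are stand-ins for the merging organisations' networks, not the data used in the paper.

        import networkx as nx

        # stand-ins for two organisational networks being merged
        org_a = nx.barabasi_albert_graph(200, 2, seed=7)       # hub-dominated, typically disassortative
        org_b = nx.erdos_renyi_graph(200, 0.02, seed=7)        # random mixing, assortativity near zero

        for name, g in [("org_a", org_a), ("org_b", org_b)]:
            print(name, round(nx.degree_assortativity_coefficient(g), 3))

        # a crude "merger": join the two networks with a handful of cross-organisation ties
        merged = nx.disjoint_union(org_a, org_b)
        merged.add_edges_from((i, 200 + i) for i in range(10))
        print("merged", round(nx.degree_assortativity_coefficient(merged), 3))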

  7. In silico Biochemical Reaction Network Analysis (IBRENA): a package for simulation and analysis of reaction networks.

    Science.gov (United States)

    Liu, Gang; Neelamegham, Sriram

    2008-04-15

    We present In silico Biochemical Reaction Network Analysis (IBRENA), a software package which facilitates multiple functions including cellular reaction network simulation and sensitivity analysis (both forward and adjoint methods), coupled with principal component analysis, singular-value decomposition and model reduction. The software features a graphical user interface that aids simulation and plotting of in silico results. While the primary focus is to aid formulation, testing and reduction of theoretical biochemical reaction networks, the program can also be used for analysis of high-throughput genomic and proteomic data. The software package, manual and examples are available at http://www.eng.buffalo.edu/~neel/ibrena

  8. Aggregated Representation of Distribution Networks for Large-Scale Transmission Network Simulations

    DEFF Research Database (Denmark)

    Göksu, Ömer; Altin, Müfit; Sørensen, Poul Ejnar

    2014-01-01

    As a common practice of large-scale transmission network analysis, the distribution networks have been represented as aggregated loads. However, with the increasing share of distributed generation, especially wind and solar power, in the distribution networks, it became necessary to include the distributed generation within those analyses. In this paper a practical methodology to obtain the aggregated behaviour of the distributed generation is proposed. The methodology, which is based on the use of the IEC standard wind turbine models, is applied on a benchmark distribution network via simulations.

  9. Artificial Neural Network Metamodels of Stochastic Computer Simulations

    Science.gov (United States)

    1994-08-10


  10. ATLAS and CMS applications on the WorldGrid testbed

    CERN Document Server

    Ciaschini, V; Fanzago, F; Verlato, M; Vaccarossa, L; Donno, F; Garbellotto, V

    2003-01-01

    WorldGrid is an intercontinental testbed spanning Europe and the US integrating architecturally different Grid implementations based on the Globus toolkit. It has been developed in the context of the DataTAG and iVDGL projects, and successfully demonstrated during the WorldGrid demos at IST2002 (Copenhagen) and SC2002 (Baltimore). Two HEP experiments, ATLAS and CMS, successfully exploited the WorldGrid testbed for executing jobs simulating the response of their detectors to physics events produced by real collisions expected at the LHC accelerator starting from 2007. This data intensive activity has been run for many years on local dedicated computing farms consisting of hundreds of nodes and Terabytes of disk and tape storage. Within the WorldGrid testbed, for the first time HEP simulation jobs were submitted and run indifferently on US and European resources, despite their different underlying Grid implementations, and produced data which could be retrieved and further analysed on the submitting machine...

  11. FDIR Validation Test-Bed Development and Results

    Science.gov (United States)

    Karlsson, Alexander; Sakthivel, Anandhavel; Aberg, Martin; Andersson, Jan; Habinc, Sandi; Dellandrea, Brice; Nodet, Jean-Christian; Guettache, Farid; Furano, Gianluca

    2015-09-01

    This paper describes work being performed by Cobham Gaisler and Thales Alenia Space France for the European Space Agency to develop an extension of the existing avionics system testbed facility in ESTEC's Avionics Lab. The work is funded by the European Space Agency under contract 4000109928/13/NL/AK. The resulting FDIR (Fault Detection, Isolation and Recovery) testbed will allow testing of concepts, strategy mechanisms and tools related to FDIR. The resulting facility will have the capabilities to support nominal and off-nominal test cases and to support tools for post-testing and post-simulation analysis. Ultimately the purpose of the output of this activity is to provide a tool for assessment and validation at laboratory level. This paper describes an on-going development; at the time of writing the activity is in the validation phase.

  12. Development of a FDIR Validation Test-Bed

    Science.gov (United States)

    Andersson, Jan; Cederman, Daniel; Habinc, Sandi; Dellandrea, Brice; Nodet, Jean-Christian; Guettache, Farid; Furano, Gianluca

    2014-08-01

    This paper describes work being performed by Aeroflex Gaisler and Thales Alenia Space France for the European Space Agency to develop an extension of the existing avionics system testbed facility in ESTEC's Avionics Lab. The work is funded by the European Space Agency under contract 4000109928/13/NL/AK. The resulting FDIR (Fault Detection, Isolation and Recovery) testbed will allow testing of concepts, strategy mechanisms and tools related to FDIR. The resulting facility will have the capabilities to support nominal and off-nominal test cases and to support tools for post-testing and post-simulation analysis. Ultimately the purpose of the output of this activity is to provide a tool for assessment and validation at laboratory level. This paper describes an on-going development; at the time of writing the activity is in the preliminary design phase.

  13. Amplitude variations on the Extreme Adaptive Optics testbed

    Energy Technology Data Exchange (ETDEWEB)

    Evans, J; Thomas, S; Dillon, D; Gavel, D; Phillion, D; Macintosh, B

    2007-08-14

    High-contrast adaptive optics systems, such as those needed to image extrasolar planets, are known to require excellent wavefront control and diffraction suppression. At the Laboratory for Adaptive Optics on the Extreme Adaptive Optics testbed, we have already demonstrated wavefront control of better than 1 nm rms within controllable spatial frequencies. Corresponding contrast measurements, however, are limited by amplitude variations, including those introduced by the micro-electrical-mechanical-systems (MEMS) deformable mirror. Results from experimental measurements and wave optic simulations of amplitude variations on the ExAO testbed are presented. We find systematic intensity variations of about 2% rms, and intensity variations with the MEMS to be 6%. Some errors are introduced by phase and amplitude mixing because the MEMS is not conjugate to the pupil, but independent measurements of MEMS reflectivity suggest that some error is introduced by small non-uniformities in the reflectivity.

  14. Transmission network expansion planning with simulation optimization

    Energy Technology Data Exchange (ETDEWEB)

    Bent, Russell W [Los Alamos National Laboratory; Berscheid, Alan [Los Alamos National Laboratory; Toole, G. Loren [Los Alamos National Laboratory

    2010-01-01

    Within the electric power literature the transmission expansion planning problem (TNEP) refers to the problem of how to upgrade an electric power network to meet future demands. As this problem is a complex, non-linear, and non-convex optimization problem, researchers have traditionally focused on approximate models. Often, their approaches are tightly coupled to the approximation choice. Until recently, these approximations have produced results that are straight-forward to adapt to the more complex (real) problem. However, the power grid is evolving towards a state where the adaptations are no longer easy (i.e. large amounts of limited control, renewable generation), which necessitates new optimization techniques. In this paper, we propose a generalization of the powerful Limited Discrepancy Search (LDS) that encapsulates the complexity in a black box that may be queried for information about the quality of a proposed expansion. This allows the development of a new optimization algorithm that is independent of the underlying power model.

  15. Improving a Computer Networks Course Using the Partov Simulation Engine

    Science.gov (United States)

    Momeni, B.; Kharrazi, M.

    2012-01-01

    Computer networks courses are hard to teach as there are many details in the protocols and techniques involved that are difficult to grasp. Employing programming assignments as part of the course helps students to obtain a better understanding and gain further insight into the theoretical lectures. In this paper, the Partov simulation engine and…

  16. A Neural Network Model for Dynamics Simulation | Bholoa ...

    African Journals Online (AJOL)

    University of Mauritius Research Journal, Vol 15, No 1 (2009). A Neural Network Model for Dynamics Simulation. Ajeevsing ...

  17. Fracture Network Modeling and GoldSim Simulation Support

    OpenAIRE

    杉田 健一郎; Dershowiz, W.

    2003-01-01

    During Heisei-14, Golder Associates provided support for JNC Tokai through data analysis and simulation of the MIU Underground Rock Laboratory, participation in Task 6 of the Aspo Task Force on Modelling of Groundwater Flow and Transport, and analysis of repository safety assessment technologies including cell networks for evaluation of the disturbed rock zone (DRZ) and total systems performance assessment (TSPA).

  18. Distributed Sensor Network Software Development Testing through Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Brennan, Sean M. [Univ. of New Mexico, Albuquerque, NM (United States)

    2003-12-01

    The distributed sensor network (DSN) presents a novel and highly complex computing platform with difficulties and opportunities that are just beginning to be explored. The potential of sensor networks extends from monitoring for threat reduction, to conducting instant and remote inventories, to ecological surveys. Developing and testing for robust and scalable applications is currently practiced almost exclusively in hardware. The Distributed Sensors Simulator (DSS) is an infrastructure that allows the user to debug and test software for DSNs independent of hardware constraints. The flexibility of DSS allows developers and researchers to investigate topological, phenomenological, networking, robustness and scaling issues, to explore arbitrary algorithms for distributed sensors, and to defeat those algorithms through simulated failure. The user specifies the topology, the environment, the application, and any number of arbitrary failures; DSS provides the virtual environmental embedding.

  19. Simulation of Attacks for Security in Wireless Sensor Network.

    Science.gov (United States)

    Diaz, Alvaro; Sanchez, Pablo

    2016-11-18

    The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node's software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work.

  20. Simulation of Two High Pressure Distribution Network Operation in one-Network Connection

    Directory of Open Access Journals (Sweden)

    Perju Sorin

    2014-09-01

    Full Text Available The programs developed by water supply system operators to meter service branches and to reduce potable water losses from distribution network pipes lead to a reassessment of the performance of these networks. As a result, the energy consumption of the pumping stations should meet the accepted limits. An essential role in evaluating the operational parameters of network performance is played by hydraulic modeling, by means of which the network performance can be simulated in different scenarios. The present article describes the concept of coupling two high-pressure networks. These networks are supplied by two repumping stations in which the water flows were drastically reduced due to the present situation.

  1. Brian: a simulator for spiking neural networks in Python

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2008-11-01

    Full Text Available Brian is a new simulator for spiking neural networks, written in Python (http://brian.di.ens.fr. It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience.

  2. Mounted Smartphones as Measurement and Control Platforms for Motor-Based Laboratory Test-Beds

    Directory of Open Access Journals (Sweden)

    Jared A. Frank

    2016-08-01

    Full Text Available Laboratory education in science and engineering often entails the use of test-beds equipped with costly peripherals for sensing, acquisition, storage, processing, and control of physical behavior. However, costly peripherals are no longer necessary to obtain precise measurements and achieve stable feedback control of test-beds. With smartphones performing diverse sensing and processing tasks, this study examines the feasibility of mounting smartphones directly to test-beds to exploit their embedded hardware and software in the measurement and control of the test-beds. This approach is a first step towards replacing laboratory-grade peripherals with more compact and affordable smartphone-based platforms, whose interactive user interfaces can engender wider participation and engagement from learners. Demonstrative cases are presented in which the sensing, computation, control, and user interaction with three motor-based test-beds are handled by a mounted smartphone. Results of experiments and simulations are used to validate the feasibility of mounted smartphones as measurement and feedback control platforms for motor-based laboratory test-beds, report the measurement precision and closed-loop performance achieved with such platforms, and address challenges in the development of platforms to maintain system stability.

  3. Mounted Smartphones as Measurement and Control Platforms for Motor-Based Laboratory Test-Beds

    Science.gov (United States)

    Frank, Jared A.; Brill, Anthony; Kapila, Vikram

    2016-01-01

    Laboratory education in science and engineering often entails the use of test-beds equipped with costly peripherals for sensing, acquisition, storage, processing, and control of physical behavior. However, costly peripherals are no longer necessary to obtain precise measurements and achieve stable feedback control of test-beds. With smartphones performing diverse sensing and processing tasks, this study examines the feasibility of mounting smartphones directly to test-beds to exploit their embedded hardware and software in the measurement and control of the test-beds. This approach is a first step towards replacing laboratory-grade peripherals with more compact and affordable smartphone-based platforms, whose interactive user interfaces can engender wider participation and engagement from learners. Demonstrative cases are presented in which the sensing, computation, control, and user interaction with three motor-based test-beds are handled by a mounted smartphone. Results of experiments and simulations are used to validate the feasibility of mounted smartphones as measurement and feedback control platforms for motor-based laboratory test-beds, report the measurement precision and closed-loop performance achieved with such platforms, and address challenges in the development of platforms to maintain system stability. PMID:27556464

  4. Socialising Health Burden Through Different Network Topologies: A Simulation Study.

    Science.gov (United States)

    Peacock, Adrian; Cheung, Anthony; Kim, Peter; Poon, Simon K

    2017-01-01

    An aging population and the expectation of premium-quality health services, combined with the increasing economic burden of the healthcare system, require a paradigm shift toward patient-oriented healthcare. The guardian angel theory described by Szolovits [1] explores the notion of enlisting patients as primary providers of information and motivation to patients with similar clinical history through social connections. In this study, an agent-based model was developed to explore how individuals are affected through their levels of intrinsic positivity. Ring, point-to-point (paired buddy), and random networks were modelled, with individuals able to send messages to each other given their levels of the variables positivity and motivation. Of the 3 modelled networks, it is apparent that the ring network provides the most equal, collective improvement in positivity and motivation for all users. Further study into other network topologies should be undertaken in the future.

  5. Molecular Simulations of Actomyosin Network Self-Assembly and Remodeling

    Science.gov (United States)

    Komianos, James; Popov, Konstantin; Papoian, Garegin; Papoian Lab Team

    Actomyosin networks are an integral part of the cytoskeleton of eukaryotic cells and play an essential role in determining cellular shape and movement. Actomyosin network growth and remodeling in vivo is based on a large number of chemical and mechanical processes, which are mutually coupled and spatially and temporally resolved. To investigate the fundamental principles behind the self-organization of these networks, we have developed a detailed mechanochemical, stochastic model of actin filament growth dynamics, at a single-molecule resolution, where the nonlinear mechanical rigidity of filaments and their corresponding deformations under internally and externally generated forces are taken into account. Our work sheds light on the interplay between the chemical and mechanical processes governing the cytoskeletal dynamics, and also highlights the importance of diffusional and active transport phenomena. Our simulations reveal how different actomyosin micro-architectures emerge in response to varying the network composition. Support from NSF Grant CHE-1363081.

  6. Multiscale Quantum Mechanics/Molecular Mechanics Simulations with Neural Networks.

    Science.gov (United States)

    Shen, Lin; Wu, Jingheng; Yang, Weitao

    2016-10-11

    Molecular dynamics simulation with multiscale quantum mechanics/molecular mechanics (QM/MM) methods is a very powerful tool for understanding the mechanisms of chemical and biological processes in solution or in enzymes. However, its computational cost can be too high for many biochemical systems because of the large number of ab initio QM calculations. Semiempirical QM/MM simulations have much higher efficiency, and their accuracy can be improved with a correction to reach the ab initio QM/MM level. The computational cost of the ab initio calculations needed for the correction determines the overall efficiency. In this paper we developed a neural network method for QM/MM calculations as an extension of the neural-network representation reported by Behler and Parrinello. With this approach, the potential energy of any configuration along the reaction path for a given QM/MM system can be predicted at the ab initio QM/MM level based on the semiempirical QM/MM simulations. We further applied this method to three reactions in water to calculate the free energy changes. The free-energy profile obtained from the semiempirical QM/MM simulation is corrected to the ab initio QM/MM level with the potential energies predicted with the constructed neural network. The results are in excellent accordance with the reference data that are obtained from the ab initio QM/MM molecular dynamics simulation or corrected with direct ab initio QM/MM potential energies. Compared with the correction using direct ab initio QM/MM potential energies, our method shows a speed-up of 1 or 2 orders of magnitude. It demonstrates that the neural network method combined with semiempirical QM/MM calculations can be an efficient and reliable strategy for chemical reaction simulations.
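    The delta-learning idea behind this correction can be sketched in a few lines: fit a regressor to the difference between ab initio and semiempirical energies on a limited set of configurations, then add the predicted difference to every frame of the cheap simulation. The descriptors, data and the use of scikit-learn's MLPRegressor below are illustrative assumptions; the paper itself uses a Behler-Parrinello-style network on real QM/MM energies.

        # Sketch of a delta-learning correction: E_corrected = E_semiempirical + NN(descriptors),
        # where the NN is trained on E_ab_initio - E_semiempirical for reference configurations.
        # Descriptors and energies are synthetic placeholders.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X_ref = rng.uniform(-1, 1, size=(200, 8))       # structural descriptors (placeholder)
        dE_ref = np.sin(X_ref).sum(axis=1)              # E_ai - E_semi on reference frames (placeholder)

        model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
        model.fit(X_ref, dE_ref)

        X_traj = rng.uniform(-1, 1, size=(1000, 8))     # configurations from the semiempirical run
        E_semi = rng.normal(size=1000)                  # semiempirical energies (placeholder)
        E_corrected = E_semi + model.predict(X_traj)    # lifted toward the ab initio level
        print(E_corrected[:5])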

  7. SIMULATION OF WIRELESS SENSOR NETWORK WITH HYBRID TOPOLOGY

    Directory of Open Access Journals (Sweden)

    J. Jaslin Deva Gifty

    2016-03-01

    Full Text Available The design of the low-rate Wireless Personal Area Network (WPAN) by the IEEE 802.15.4 standard has been developed to support lower data rates and low-power-consuming applications. Zigbee Wireless Sensor Networks (WSNs) work at the network and application layers on top of IEEE 802.15.4. A Zigbee network can be configured in a star, tree or mesh topology. The performance varies from topology to topology. Performance parameters such as network lifetime, energy consumption, throughput, delay in data delivery and sensor field coverage area vary depending on the network topology. In this paper, the design of hybrid topologies using two possible combinations, star-tree and star-mesh, is simulated to verify communication reliability. This approach combines the benefits of the two network models. Parameters such as jitter, delay and throughput are measured for these scenarios. Further, the impact of MAC parameters such as beacon order (BO) and superframe order (SO) on low power consumption and high channel utilization has been analysed for the star, tree and mesh topologies in beacon-disabled and beacon-enabled modes by varying CBR traffic loads.

  8. Hybrid neural network bushing model for vehicle dynamics simulation

    Energy Technology Data Exchange (ETDEWEB)

    Sohn, Jeong Hyun [Pukyong National University, Busan (Korea, Republic of); Lee, Seung Kyu [Hyosung Corporation, Changwon (Korea, Republic of); Yoo, Wan Suk [Pusan National University, Busan (Korea, Republic of)

    2008-12-15

    Although the linear model has been widely used to represent bushings in vehicle suspension systems, it cannot express the nonlinear characteristics of a bushing with respect to amplitude and frequency. An artificial neural network model was suggested to consider the hysteretic responses of bushings. This model, however, often diverges due to the uncertainties of the neural network under unexpected excitation inputs. In this paper, a hybrid neural network bushing model combining a linear model and a neural network is suggested. The linear model represents the linear stiffness and damping effects, and the artificial neural network algorithm is adopted to take the hysteretic responses into account. A rubber test was performed to capture bushing characteristics, where sine excitation with different frequencies and amplitudes was applied. Random test results were used to update the weighting factors of the neural network model. It is shown that the proposed model has more robust characteristics than a simple neural network model under step excitation input. A full-car simulation was carried out to verify the proposed bushing models. It was shown that the hybrid model results are almost identical to the linear model under several maneuvers.
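    The combination the abstract describes, a linear spring-damper term plus a neural-network residual for the hysteresis, can be outlined as below. The network here is a tiny untrained placeholder with random weights, and the stiffness, damping and excitation values are invented; in practice the residual network would be fitted to measured force-displacement loops.

        # Hybrid bushing sketch: F(x, v) = k*x + c*v + NN_residual(x, v).
        # All parameter values and the placeholder network are illustrative assumptions.
        import numpy as np

        k, c = 1.0e5, 2.0e2                      # linear stiffness [N/m] and damping [N*s/m]

        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)
        W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

        def nn_residual(x, v):
            h = np.tanh(np.array([x, v]) @ W1 + b1)
            return (h @ W2 + b2).item()          # hysteretic correction (untrained here)

        def bushing_force(x, v):
            return k * x + c * v + nn_residual(x, v)

        t = np.linspace(0.0, 1.0, 200)
        x = 1e-3 * np.sin(2 * np.pi * 5 * t)     # 5 Hz, 1 mm amplitude displacement input
        v = np.gradient(x, t)
        forces = [bushing_force(xi, vi) for xi, vi in zip(x, v)]
        print(round(max(forces), 1), round(min(forces), 1))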

  9. Modeling and simulation of the USAVRE network and radiology operations

    Science.gov (United States)

    Martinez, Ralph; Bradford, Daniel Q.; Hatch, Jay; Sochan, John; Chimiak, William J.

    1998-07-01

    The U.S. Army Medical Command, led by the Brooke Army Medical Center, has embarked on a visionary project. The U.S. Army Virtual Radiology Environment (USAVRE) is a CONUS-based network that connects all the Army's major medical centers and Regional Medical Commands (RMC). The purpose of the USAVRE is to improve the quality, access, and cost of radiology services in the Army via the use of state-of-the-art medical imaging, computer, and networking technologies. The USAVRE contains multimedia viewing workstations; the database archive systems are based on a distributed computing environment using Common Object Request Broker Architecture (CORBA) middleware protocols. The underlying telecommunications network is an ATM-based backbone network that connects the RMC regional networks and PACS networks at medical centers and RMC clinics. This project is a collaborative effort between Army, university, and industry centers with expertise in teleradiology and Global PACS applications. This paper describes a model and simulation of the USAVRE for performance evaluation purposes. As a first step, we present the results of a Technology Assessment and Requirements Analysis (TARA), an analysis of the workload in Army radiology departments, their equipment and their staffing. Using the TARA data and other workload information, we have developed a very detailed analysis of the workload and workflow patterns of our Medical Treatment Facilities. We are embarking on modeling and simulation strategies, which will form the foundation for the VRE network. The workload analysis is performed for each radiology modality in an RMC site. The workload consists of the number of examinations per modality, the type of images per exam, the number of images per exam, and the size of images. The frequencies of store-and-forward cases, second readings, and interactive consultation cases are also determined. These parameters are translated into the model described below. The model for the USAVRE is hierarchical in nature

  10. Agent-Based Simulation Analysis for Network Formation

    OpenAIRE

    神原, 李佳; 林田, 智弘; 西﨑, 一郎; 片桐, 英樹

    2009-01-01

    In the mathematical models for network formation by Bala and Goyal (2000), it is shown that a star network is the strict Nash equilibrium. However, the results of laboratory experiments with human subjects by Berninghaus et al. (2007), based on the model of Bala and Goyal, indicate that players reach a strict Nash equilibrium and then deviate from it. In this paper, an agent-based simulation model in which artificial adaptive agents have mechanisms of decision making and learning based on neural...

  11. Correlated EEG Signals Simulation Based on Artificial Neural Networks.

    Science.gov (United States)

    Tomasevic, Nikola M; Neskovic, Aleksandar M; Neskovic, Natasa J

    2017-08-01

    In recent years, simulation of human electroencephalogram (EEG) data has found an important role in the medical domain and neuropsychology. In this paper, a novel approach to the simulation of two cross-correlated EEG signals is proposed. The proposed method is based on the principles of artificial neural networks (ANN). In contrast to existing EEG data simulators, the ANN-based approach relies solely on experimentally acquired EEG data. More precisely, measured EEG data were utilized to optimize the simulator, which consisted of two ANN models (each model responsible for the generation of one EEG sequence). In order to acquire the EEG recordings, a measurement campaign was carried out on a healthy awake adult having no cognitive, physical or mental load. For the evaluation of the proposed approach, a comprehensive quantitative and qualitative statistical analysis was performed considering the probability distribution, correlation properties and spectral characteristics of the generated EEG processes. The obtained results clearly indicated satisfactory agreement with the measurement data.

  12. Code generation: a strategy for neural network simulators.

    Science.gov (United States)

    Goodman, Dan F M

    2010-10-01

    We demonstrate a technique for the design of neural network simulation software, runtime code generation. This technique can be used to give the user complete flexibility in specifying the mathematical model for their simulation in a high-level way, along with the speed of code written in a low-level language such as C++. It can also be used to write code only once but target different hardware platforms, including inexpensive high-performance graphics processing units (GPUs). Code generation can be naturally combined with computer algebra systems to provide further simplification and optimisation of the generated code. The technique is quite general and could be applied to any simulation package. We demonstrate it with the 'Brian' simulator ( http://www.briansimulator.org ).
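    The principle of runtime code generation can be illustrated with a toy pure-Python version: a model equation supplied as a string is wrapped in a function definition, compiled, and then used as the simulation kernel. Brian itself generates and compiles C/C++ or GPU code; the equation, names and numbers below are only illustrative.

        # Toy runtime code generation: turn a user-supplied update equation into a
        # compiled Python function and drive a vectorized simulation with it.
        import numpy as np

        equation = "v += dt * (-v + I) / tau"      # user-level model description (assumed form)

        src = (
            "def step(v, I, dt, tau):\n"
            f"    {equation}\n"
            "    return v\n"
        )
        namespace = {}
        exec(compile(src, "<generated>", "exec"), namespace)   # generate and compile at runtime
        step = namespace["step"]

        v = np.zeros(1000)
        I = np.full(1000, 1.2)
        for _ in range(100):
            v = step(v, I, dt=0.1, tau=10.0)
        print(v[:3])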

  13. Versatile Electric Propulsion Aircraft Testbed Project

    Data.gov (United States)

    National Aeronautics and Space Administration — An all-electric aircraft testbed is proposed to provide a dedicated development environment for the rigorous study and advancement of electrically powered aircraft....

  14. AMS San Diego Testbed - Calibration Data

    Data.gov (United States)

    Department of Transportation — The data in this repository were collected from the San Diego, California testbed, namely, I-15 from the interchange with SR-78 in the north to the interchange with...

  15. Habitat Testbed (HaT) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Goals of the DSH Testbed include: function as a habitat systems integrator and technology pull across many domains; develop and integrate software-based models of...

  16. Design and Simulation Analysis for Integrated Vehicle Chassis-Network Control System Based on CAN Network

    Directory of Open Access Journals (Sweden)

    Wei Yu

    2016-01-01

    Full Text Available Due to the different functions of the systems used in vehicle chassis control, the hierarchical control strategy leads to many kinds of network topology structures. According to the hierarchical control principle, this research puts forward an integrated control strategy for the chassis based on a supervision mechanism. The purpose is to consider how the integrated control architecture affects the control performance of the system after the intervention of the CAN network. Based on the principles of hierarchical control and fuzzy control, a fuzzy controller is designed, which is used to monitor and coordinate the ESP, AFS, and ARS. The IVC system is constructed with the upper supervisory controller and three subcontrol systems on the Simulink platform. The network topology structure of the IVC is proposed, and the IVC communication matrix based on CAN network communication is designed. With the common sensors and the subcontrollers as independent CAN network nodes, the effects of network-induced delay and packet loss rate on the system control performance are studied by simulation. The results show that the simulation method can be used for designing the communication network of the vehicle.
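    A very reduced version of the kind of study described, injecting network-induced delay and packet loss into a closed loop, is sketched below. The first-order plant, proportional gain, two-sample delay and 5% drop rate are all invented assumptions and have nothing to do with the paper's CAN bus model; the sketch only shows where the network effects enter the loop.

        # Closed loop with a delayed, lossy measurement channel (illustrative values only).
        import random

        random.seed(0)
        dt, a, b = 0.01, -1.0, 1.0          # plant: dx/dt = a*x + b*u
        Kp, delay, drop = 4.0, 2, 0.05      # controller gain, delay in samples, packet-loss rate

        x, ref, u = 0.0, 1.0, 0.0
        buffer = [0.0] * delay              # samples "in flight" on the network
        for _ in range(500):
            buffer.append(x)                # sensor puts the current state on the bus
            y = buffer.pop(0)               # controller sees it 'delay' samples later
            if random.random() > drop:      # packet arrived: recompute the control action
                u = Kp * (ref - y)          # on a drop, hold the previous actuation
            x += dt * (a * x + b * u)       # integrate the plant one step
        print("final state:", round(x, 3))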

  17. Implementation of standard testbeds for numerical relativity

    Energy Technology Data Exchange (ETDEWEB)

    Babiuc, M C [Department of Physics and Physical Science, Marshall University, Huntington, WV 25755 (United States); Husa, S [Friedrich Schiller University Jena, Max-Wien-Platz 1, 07743 Jena (Germany); Alic, D [Department of Physics, University of the Balearic Islands, Cra Valldemossa km 7.5, 07122 Palma de Mallorca (Spain); Hinder, I [Center for Gravitational Wave Physics, Pennsylvania State University, University Park, PA 16802 (United States); Lechner, C [Weierstrass Institute for Applied Analysis and Stochastics (WIAS), Mohrenstrasse 39, 10117 Berlin (Germany); Schnetter, E [Center for Computation and Technology, 216 Johnston Hall, Louisiana State University, Baton Rouge, LA 70803 (United States); Szilagyi, B; Dorband, N; Pollney, D; Winicour, J [Max-Planck-Institut fuer Gravitationsphysik (Albert-Einstein-Institut), Am Muehlenberg 1, 14076 Golm (Germany); Zlochower, Y [Center for Computational Relativity and Gravitation, School of Mathematical Sciences, Rochester Institute of Technology, 78 Lomb Memorial Drive, Rochester, New York 14623 (United States)

    2008-06-21

    We discuss results that have been obtained from the implementation of the initial round of testbeds for numerical relativity which was proposed in the first paper of the Apples with Apples Alliance. We present benchmark results for various codes which provide templates for analyzing the testbeds and to draw conclusions about various features of the codes. This allows us to sharpen the initial test specifications, design a new test and add theoretical insight.

  18. Artificial neural network based approach to EEG signal simulation.

    Science.gov (United States)

    Tomasevic, Nikola M; Neskovic, Aleksandar M; Neskovic, Natasa J

    2012-06-01

    In this paper, a new approach to electroencephalogram (EEG) signal simulation based on artificial neural networks (ANN) is proposed. The aim was to simulate spontaneous human EEG background activity based solely on experimentally acquired EEG data. Therefore, an EEG measurement campaign was conducted on a healthy awake adult in order to obtain an adequate ANN training data set. As a demonstration of the performance of the ANN-based approach, comparisons were made against an autoregressive moving average (ARMA) filtering-based method. A comprehensive quantitative and qualitative statistical analysis showed clearly that the EEG process obtained by the proposed method was in satisfactory agreement with the one obtained by measurements.

  19. Frequency and motivational state: evolutionary simulations suggest an adaptive function for network oscillations

    NARCIS (Netherlands)

    Heerebout, B.T.; Phaf, R.H.; Taatgen, N.A.; van Rijn, H.

    2009-01-01

    Evolutionary simulations of foraging agents, controlled by artificial neural networks, unexpectedly yielded oscillating node activations in the networks. The agents had to navigate a virtual environment to collect food while avoiding predation. Between generations their neural networks were

  20. A spectral phase encoded time spreading optical code division multiple access testbed

    Science.gov (United States)

    Cong, Wei

    Spectral phase-encoded time-spreading (SPECTS) is a promising technique for optical code-division multiple-access (O-CDMA) networks. In SPECTS O-CDMA, users are multiplexed and distinguished with unique phase codes impressed on the spectrum of ultra-short optical signal pulses. The encoding and decoding processes are implemented with grating/lens zero-dispersion pulse-shapers using spatial light phase modulators (SLPMs) in the Fourier plane. The SLPMs add the encoding or decoding O-CDMA phase codes onto the spectrum, which is spatially spread and then recombined by the grating(s) and lens(es). In this dissertation, we demonstrate a SPECTS O-CDMA network testbed with large throughput and high performance. Two types of fiber-pigtailed bulk-optic pulseshapers, 1-D transparent-mode and 2-D reflective-mode, are constructed to be used as SPECTS O-CDMA encoders and decoders in the testbed. In order to support a large number of users, the testbed includes up to eight SPECTS O-CDMA encoding channels and one decoding channel. With the combination of time-slotting and polarization-multiplexing operations, the maximum throughput achieved on the testbed is 32 users (8 O-CDMA channels x 2 time slots x 2 polarizations) at 10 Gb/s/user. Two nonlinear detecting techniques (thresholding and gating) for coherent SPECTS O-CDMA are discussed in the dissertation. A nonlinear thresholder and a nonlinear optical loop mirror (NOLM) time gate are built in the testbed; each uses a 500-m highly nonlinear fiber (HNLF). An "O-CDMA receiver" including the NOLM and the thresholder followed by an optical-electrical (O-E) converter can distinguish the intended user by demultiplexing the time-slotted signal and then suppressing the multi-user interference (MUI). In the bit-error-rate (BER) statistics experiments, the "O-CDMA receiver" can detect the intended user's signal with error-free or low BER performance in various testbed structures. Network-layer application experiments including video

  1. A simulated annealing approach for redesigning a warehouse network problem

    Science.gov (United States)

    Khairuddin, Rozieana; Marlizawati Zainuddin, Zaitul; Jiun, Gan Jia

    2017-09-01

    Nowadays, several companies consider downsizing their distribution networks in ways that involve consolidation or phase-out of some of their current warehousing facilities due to increasing competition, mounting cost pressure and the wish to take advantage of economies of scale. Consequently, changes in the economic situation after a certain period of time require an adjustment of the network model in order to obtain the optimal cost under the current economic conditions. This paper aims to develop a mixed-integer linear programming model for a two-echelon warehouse network redesign problem with a capacitated plant and uncapacitated warehouses. The main contribution of this study is considering a capacity constraint for existing warehouses. A Simulated Annealing algorithm is proposed to tackle the proposed model. The numerical solution showed that the proposed model and solution method were practical.
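    The abstract names the metaheuristic but not its mechanics, so a bare-bones simulated annealing loop for a toy open/close decision over candidate warehouses is sketched below. The cost function, penalty and cooling schedule are invented; the paper's actual MILP has a two-echelon structure and capacity constraints that are not modelled here.

        # Minimal simulated annealing over binary "warehouse open?" decisions.
        # Costs, penalty and schedule are illustrative assumptions.
        import math, random

        random.seed(0)
        n_wh = 10
        fixed = [random.uniform(50, 150) for _ in range(n_wh)]     # fixed opening costs
        service = [random.uniform(100, 300) for _ in range(n_wh)]  # demand covered if open

        def cost(x):
            open_cost = sum(f for f, xi in zip(fixed, x) if xi)
            covered = sum(s for s, xi in zip(service, x) if xi)
            return open_cost + 5.0 * max(0.0, 1000.0 - covered)    # penalise unmet demand

        x = [random.random() < 0.5 for _ in range(n_wh)]
        best, best_cost, T = x[:], cost(x), 100.0
        while T > 0.1:
            y = x[:]
            y[random.randrange(n_wh)] ^= True                      # flip one warehouse
            d = cost(y) - cost(x)
            if d < 0 or random.random() < math.exp(-d / T):        # Metropolis acceptance
                x = y
                if cost(x) < best_cost:
                    best, best_cost = x[:], cost(x)
            T *= 0.995                                             # geometric cooling
        print(best, round(best_cost, 1))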

  2. Computer simulation of randomly cross-linked polymer networks

    CERN Document Server

    Williams, T P

    2002-01-01

    In this work, Monte Carlo and Stochastic Dynamics computer simulations of mesoscale model randomly cross-linked networks were undertaken. Task-parallel implementations of the lattice Monte Carlo Bond Fluctuation model and the Kremer-Grest Stochastic Dynamics bead-spring continuum model were designed and used for this purpose. Lattice and continuum precursor melt systems were prepared and then cross-linked to varying degrees. The resultant networks were used to study structural changes during deformation and relaxation dynamics. The effects of a random network topology featuring a polydisperse distribution of strand lengths and an abundance of pendant chain ends were qualitatively compared to recent published work. A preliminary investigation into the effects of temperature on the structural and dynamical properties was also undertaken. Structural changes during isotropic swelling and uniaxial deformation revealed a pronounced non-affine deformation dependent on the degree of cross-linking. Fractal heterogeneiti...

  3. NCC Simulation Model: Simulating the operations of the network control center, phase 2

    Science.gov (United States)

    Benjamin, Norman M.; Paul, Arthur S.; Gill, Tepper L.

    1992-12-01

    The simulation of the network control center (NCC) is in the second phase of development. This phase seeks to further develop the work performed in phase one. Phase one concentrated on the computer systems and interconnecting network. The focus of phase two will be the implementation of the network message dialogues and the resources controlled by the NCC. These resources are requested, initiated, monitored and analyzed via network messages. In the NCC, network messages are presented in the form of packets that are routed across the network. These packets are generated, encoded, decoded and processed by the network host processors that generate and service the message traffic on the network that connects these hosts. As a result, the message traffic is used to characterize the work done by the NCC and the connected network. Phase one of the model development represented the NCC as a network of bi-directional single-server queues and message-generating sources. The generators represented the external segment processors. The server-based queues represented the host processors. The NCC model consists of the internal and external processors which generate message traffic on the network that links these hosts. To fully realize the objective of phase two, it is necessary to identify and model the processes in each internal processor. These processes live in the operating system of the internal host computers and handle tasks such as high-speed message exchanging, ISN and NFE interfacing, event monitoring, network monitoring, and message logging. Interprocess communication is achieved through the operating system facilities. The overall performance of the host is determined by its ability to service messages generated by both internal and external processors.
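    The phase-one view of a host as a single-server queue fed by message generators can be reproduced with a few lines of simulation; the arrival and service rates below are placeholders, and the real NCC model is a network of such queues rather than a single one.

        # Single-server (M/M/1-style) queue standing in for one NCC host processor.
        # Rates are illustrative; theory gives mean time in system 1/(mu - lambda).
        import random

        random.seed(0)
        lam, mu, n_msgs = 0.8, 1.0, 100_000      # arrival and service rates [msgs/s]

        t_arrival, server_free, total_delay = 0.0, 0.0, 0.0
        for _ in range(n_msgs):
            t_arrival += random.expovariate(lam)         # next message reaches the host
            start = max(t_arrival, server_free)          # wait if the processor is busy
            server_free = start + random.expovariate(mu)
            total_delay += server_free - t_arrival       # time in system for this message

        print("simulated mean delay:", round(total_delay / n_msgs, 3))
        print("M/M/1 theory 1/(mu-lam):", round(1 / (mu - lam), 3))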

  4. Analyzing, Modeling, and Simulation for Human Dynamics in Social Network

    Directory of Open Access Journals (Sweden)

    Yunpeng Xiao

    2012-01-01

    Full Text Available This paper studies human behavior in the top-one social network system in China (the Sina Microblog system). By analyzing real-life data at a large scale, we find that the message releasing interval (intermessage time) obeys a power-law distribution both at the individual level and at the group level. Statistical analysis also reveals that human behavior in social networks is mainly driven by four basic elements: social pressure, social identity, social participation, and social relation between individuals. Empirical results present the four elements' impact on human behavior and the relation between these elements. To further understand the mechanism of such dynamic phenomena, a hybrid human dynamic model which combines the “interest” of individuals and the “interaction” among people is introduced, incorporating the four elements simultaneously. To provide a solid evaluation, we simulate both two-agent and multiagent interactions with real-life social network topology. We achieve consistent results between the empirical studies and the simulations. The model can provide a good understanding of human dynamics in social networks.
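    The power-law finding can be mimicked with a short inverse-transform sampler for inter-message times, p(t) ~ t^(-alpha) for t >= t_min; the exponent and cutoff below are placeholders rather than values estimated from the Sina Microblog data.

        # Draw power-law inter-message times by inverse transform sampling and check the tail.
        import random

        random.seed(0)
        alpha, t_min, n = 2.5, 1.0, 100_000

        def powerlaw_sample():
            u = random.random()
            return t_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

        times = [powerlaw_sample() for _ in range(n)]
        # Tail check: P(T > 10*t_min) should be close to 10**(1 - alpha).
        frac = sum(t > 10 * t_min for t in times) / n
        print("empirical tail:", round(frac, 4), " expected:", round(10 ** (1 - alpha), 4))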

  5. Neural network stochastic simulation applied for quantifying uncertainties

    Directory of Open Access Journals (Sweden)

    N Foudil-Bey

    2016-09-01

    Full Text Available Generally, geostatistical simulation methods are used to generate several realizations of physical properties in the sub-surface; these methods are based on variogram analysis and are limited to measuring the correlation between variables at two locations only. In this paper, we propose a simulation of properties based on supervised neural network training on the existing drilling data set. The major advantage is that this method does not require a preliminary geostatistical study and takes several points into account. As a result, the geological information and the diverse geophysical data can be combined easily. To do this, we used a neural network with a feed-forward multi-layer perceptron architecture, and then the back-propagation algorithm with the conjugate gradient technique to minimize the error of the network output. The learning process can create links between different variables; this relationship can be used for interpolation of the properties on the one hand, or to generate several possible distributions of the physical properties on the other hand, by changing each time a random value of the input neurons that was kept constant during the learning period. This method was tested on real data to simulate multiple realizations of the density and the magnetic susceptibility in three dimensions at the mining camp of Val d'Or, Québec (Canada).

  6. [Simulation of lung motions using an artificial neural network].

    Science.gov (United States)

    Laurent, R; Henriet, J; Salomon, M; Sauget, M; Nguyen, F; Gschwind, R; Makovicka, L

    2011-04-01

    A way to improve the accuracy of lung radiotherapy for a patient is to get a better understanding of the patient's lung motion. Indeed, thanks to this knowledge it becomes possible to follow the displacements of the clinical target volume (CTV) induced by breathing. This paper presents a feasibility study of an original method to simulate the positions of points in a patient's lung at all breathing phases. This method, based on an artificial neural network, allowed the lung motion to be learned on real cases and then simulated for new patients for whom only the beginning and end breathing data are known. The neural network learning set is made up of more than 600 points. These points, distributed over three patients and gathered in a specific lung area, were plotted by an MD. The first results are promising: an average accuracy of 1 mm is obtained for a spatial resolution of 1 × 1 × 2.5 mm(3). We have demonstrated that it is possible to simulate lung motion with accuracy using an artificial neural network. As future work we plan to improve the accuracy of our method with the addition of new patient data and coverage of the whole lungs. Copyright © 2010 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.

  7. Modelling Altitude Information in Two-Dimensional Traffic Networks for Electric Mobility Simulation

    OpenAIRE

    Diogo Santos; José Pinto; Rossetti, Rosaldo J. F.; Eugénio Oliveira

    2016-01-01

    Elevation data is important for electric vehicle simulation. However, traffic simulators are often two-dimensional and do not offer the capability of modelling urban networks taking elevation into account. Specifically, SUMO - Simulation of Urban Mobility, a popular microscopic traffic simulator, relies on networks previously modelled with elevation data to provide this information during simulations. This work tackles the problem of adding elevation data to urban network models - particul...

  8. Ekofisk chalk: core measurements, stochastic reconstruction, network modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Talukdar, Saifullah

    2002-07-01

    This dissertation deals with (1) experimental measurements on petrophysical, reservoir engineering and morphological properties of Ekofisk chalk, (2) numerical simulation of core flood experiments to analyze and improve relative permeability data, (3) stochastic reconstruction of chalk samples from limited morphological information, (4) extraction of pore space parameters from the reconstructed samples, development of network model using pore space information, and computation of petrophysical and reservoir engineering properties from network model, and (5) development of 2D and 3D idealized fractured reservoir models and verification of the applicability of several widely used conventional up scaling techniques in fractured reservoir simulation. Experiments have been conducted on eight Ekofisk chalk samples and porosity, absolute permeability, formation factor, and oil-water relative permeability, capillary pressure and resistivity index are measured at laboratory conditions. Mercury porosimetry data and backscatter scanning electron microscope images have also been acquired for the samples. A numerical simulation technique involving history matching of the production profiles is employed to improve the relative permeability curves and to analyze hysteresis of the Ekofisk chalk samples. The technique was found to be a powerful tool to supplement the uncertainties in experimental measurements. Porosity and correlation statistics obtained from backscatter scanning electron microscope images are used to reconstruct microstructures of chalk and particulate media. The reconstruction technique involves a simulated annealing algorithm, which can be constrained by an arbitrary number of morphological parameters. This flexibility of the algorithm is exploited to successfully reconstruct particulate media and chalk samples using more than one correlation functions. A technique based on conditional simulated annealing has been introduced for exact reproduction of vuggy

  9. The CMS Integration Grid Testbed

    CERN Document Server

    Graham, G E; Aziz, Shafqat; Bauerdick, L.A.T.; Ernst, Michael; Kaiser, Joseph; Ratnikova, Natalia; Wenzel, Hans; Wu, Yu-jun; Aslakson, Erik; Bunn, Julian; Iqbal, Saima; Legrand, Iosif; Newman, Harvey; Singh, Suresh; Steenberg, Conrad; Branson, James; Fisk, Ian; Letts, James; Arbree, Adam; Avery, Paul; Bourilkov, Dimitri; Cavanaugh, Richard; Rodriguez, Jorge Luis; Kategari, Suchindra; Couvares, Peter; DeSmet, Alan; Livny, Miron; Roy, Alain; Tannenbaum, Todd; Graham, Gregory E.; Aziz, Shafqat; Ernst, Michael; Kaiser, Joseph; Ratnikova, Natalia; Wenzel, Hans; Wu, Yujun; Aslakson, Erik; Bunn, Julian; Iqbal, Saima; Legrand, Iosif; Newman, Harvey; Singh, Suresh; Steenberg, Conrad; Branson, James; Fisk, Ian; Letts, James; Arbree, Adam; Avery, Paul; Bourilkov, Dimitri; Cavanaugh, Richard; Rodriguez, Jorge; Kategari, Suchindra; Couvares, Peter; Smet, Alan De; Livny, Miron; Roy, Alain; Tannenbaum, Todd

    2003-01-01

    The CMS Integration Grid Testbed (IGT) comprises USCMS Tier-1 and Tier-2 hardware at the following sites: the California Institute of Technology, Fermi National Accelerator Laboratory, the University of California at San Diego, and the University of Florida at Gainesville. The IGT runs jobs using the Globus Toolkit with a DAGMan and Condor-G front end. The virtual organization (VO) is managed using VO management scripts from the European Data Grid (EDG). Gridwide monitoring is accomplished using local tools such as Ganglia interfaced into the Globus Metadata Directory Service (MDS) and the agent based Mona Lisa. Domain specific software is packaged and installed using the Distribution After Release (DAR) tool of CMS, while middleware under the auspices of the Virtual Data Toolkit (VDT) is distributed using Pacman. During a continuous two month span in Fall of 2002, over 1 million official CMS GEANT based Monte Carlo events were generated and returned to CERN for analysis while being demonstrated at SC2002. ...

  10. The CMS integration grid testbed

    Energy Technology Data Exchange (ETDEWEB)

    Graham, Gregory E.

    2004-08-26

    The CMS Integration Grid Testbed (IGT) comprises USCMS Tier-1 and Tier-2 hardware at the following sites: the California Institute of Technology, Fermi National Accelerator Laboratory, the University of California at San Diego, and the University of Florida at Gainesville. The IGT runs jobs using the Globus Toolkit with a DAGMan and Condor-G front end. The virtual organization (VO) is managed using VO management scripts from the European Data Grid (EDG). Gridwide monitoring is accomplished using local tools such as Ganglia interfaced into the Globus Metadata Directory Service (MDS) and the agent based Mona Lisa. Domain specific software is packaged and installed using the Distribution After Release (DAR) tool of CMS, while middleware under the auspices of the Virtual Data Toolkit (VDT) is distributed using Pacman. During a continuous two month span in Fall of 2002, over 1 million official CMS GEANT based Monte Carlo events were generated and returned to CERN for analysis while being demonstrated at SC2002. In this paper, we describe the process that led to one of the world's first continuously available, functioning grids.

  11. Network Flow Simulation of Fluid Transients in Rocket Propulsion Systems

    Science.gov (United States)

    Bandyopadhyay, Alak; Hamill, Brian; Ramachandran, Narayanan; Majumdar, Alok

    2011-01-01

    Fluid transients, also known as water hammer, can have a significant impact on the design and operation of both spacecraft and launch vehicle propulsion systems. These transients often occur at system activation and shutdown. The pressure rise due to sudden opening and closing of valves of propulsion feed lines can cause serious damage during activation and shutdown of propulsion systems. During activation (valve opening) and shutdown (valve closing), pressure surges must be predicted accurately to ensure structural integrity of the propulsion system fluid network. In the current work, a network flow simulation software (Generalized Fluid System Simulation Program) based on the Finite Volume Method has been used to predict the pressure surges in the feed line due to both valve closing and valve opening using two separate geometrical configurations. The valve-opening pressure surge results are compared with experimental data available in the literature, and the numerical results compared very well within reasonable accuracy. Simulation results are also compared with the results of the Method of Characteristics. Most rocket engines experience a longitudinal acceleration, known as "pogo", during the later stage of engine burn. In the shutdown example problem, an accumulator has been used in the feed system to demonstrate the "pogo" mitigation effects in the feed system of propellant. The simulation results using GFSSP compared very well with the results of the Method of Characteristics.
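    A quick order-of-magnitude companion to such simulations is the classical Joukowsky relation for an instantaneous valve closure, delta_p = rho * a * delta_v. The fluid properties and velocity change below are illustrative numbers, not values from the GFSSP cases in the paper.

        # Back-of-envelope Joukowsky surge estimate for a sudden valve closure.
        rho = 1000.0      # liquid density [kg/m^3] (assumed)
        a = 1200.0        # pressure-wave speed in the line [m/s] (assumed)
        delta_v = 3.0     # flow velocity destroyed by the closure [m/s] (assumed)

        delta_p = rho * a * delta_v
        print(f"Joukowsky surge: {delta_p / 1e5:.1f} bar")   # 36.0 bar for these numbers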

  12. Efficiently passing messages in distributed spiking neural network simulation.

    Science.gov (United States)

    Thibeault, Corey M; Minkovich, Kirill; O'Brien, Michael J; Harris, Frederick C; Srinivasa, Narayan

    2013-01-01

    Efficiently passing spiking messages in a neural model is an important aspect of high-performance simulation. As the scale of networks has increased so has the size of the computing systems required to simulate them. In addition, the information exchange of these resources has become more of an impediment to performance. In this paper we explore spike message passing using different mechanisms provided by the Message Passing Interface (MPI). A specific implementation, MVAPICH, designed for high-performance clusters with Infiniband hardware is employed. The focus is on providing information about these mechanisms for users of commodity high-performance spiking simulators. In addition, a novel hybrid method for spike exchange was implemented and benchmarked.
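    Assuming mpi4py is available (run under e.g. "mpiexec -n 4 python spikes.py"), one common collective pattern for spike exchange, an allgather of local spike lists, looks roughly like the sketch below; it is not necessarily one of the paper's benchmarked variants, and the neuron counts and firing probability are arbitrary.

        # One spike-exchange step with an MPI allgather (mpi4py assumed installed).
        from mpi4py import MPI
        import random

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        random.seed(rank)

        local_neurons = range(rank * 100, (rank + 1) * 100)
        spikes = [n for n in local_neurons if random.random() < 0.05]  # ids that fired this step

        all_spikes = comm.allgather(spikes)       # every rank receives every rank's spike list
        flat = [s for sub in all_spikes for s in sub]
        if rank == 0:
            print("global spikes this step:", len(flat))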

  13. An artificial neural network for detection of simulated dental caries

    Energy Technology Data Exchange (ETDEWEB)

    Kositbowornchai, S. [Khon Kaen Univ. (Thailand). Dept. of Oral Diagnosis; Siriteptawee, S.; Plermkamon, S.; Bureerat, S. [Khon Kaen Univ. (Thailand). Dept. of Mechanical Engineering; Chetchotsak, D. [Khon Kaen Univ. (Thailand). Dept. of Industrial Engineering

    2006-08-15

    Objectives: A neural network was developed to diagnose artificial dental caries using images from a charge-coupled device (CCD) camera and intra-oral digital radiography. The diagnostic performance of this neural network was evaluated against a gold standard. Materials and methods: The neural network design was the Learning Vector Quantization (LVQ) used to classify a tooth surface as sound or as having dental caries. The depth of the dental caries was indicated on a graphic user interface (GUI) screen developed by Matlab programming. Forty-nine images of both sound and simulated dental caries, derived from a CCD camera and by digital radiography, were used to 'train' an artificial neural network. After the 'training' process, a separate test set comprising 322 unseen images was evaluated. Tooth sections and microscopic examinations were used to confirm the actual dental caries status. The performance of the neural network was evaluated using diagnostic tests. Results: The sensitivity (95%CI)/specificity (95%CI) of dental caries detection by the CCD camera and digital radiography were 0.77(0.68-0.85)/0.85(0.75-0.92) and 0.81(0.72-0.88)/0.93(0.84-0.97), respectively. The accuracy of caries depth detection by the CCD camera and digital radiography was 58 and 40%, respectively. Conclusions: The model neural network used in this study could be a prototype for caries detection but should be improved for classifying caries depth. Our study suggests an artificial neural network can be trained to make the correct interpretations of dental caries. (orig.)

  14. Biochemical Network Stochastic Simulator (BioNetS): software for stochastic modeling of biochemical networks

    Directory of Open Access Journals (Sweden)

    Elston Timothy C

    2004-03-01

    Full Text Available Abstract Background Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. Results We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS also can be run as a stand-alone package. All the required files are accessible from http://x.amath.unc.edu/BioNetS. Conclusions We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
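    The discrete-variable path BioNetS takes, Gillespie's stochastic simulation algorithm, fits in a dozen lines for a single birth-death species; the rate constants and end time below are placeholders.

        # Minimal Gillespie SSA for production/degradation of one species.
        import random

        random.seed(0)
        k_prod, k_deg = 10.0, 0.1        # production and degradation rate constants (assumed)
        x, t, t_end = 0, 0.0, 100.0

        while t < t_end:
            a1, a2 = k_prod, k_deg * x   # propensities of the two reactions
            a0 = a1 + a2
            t += random.expovariate(a0)  # exponential waiting time to the next event
            if random.random() < a1 / a0:
                x += 1                   # production fired
            else:
                x -= 1                   # degradation fired

        print("copy number at t_end:", x, "; steady-state mean k_prod/k_deg =", k_prod / k_deg)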

  15. Stochastic Characterization of Communication Network Latency for Wide Area Grid Control Applications.

    Energy Technology Data Exchange (ETDEWEB)

    Ameme, Dan Selorm Kwami [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guttromson, Ross [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-10-01

    This report characterizes communications network latency under various network topologies and qualities of service (QoS). The characterizations are probabilistic in nature, allowing deeper analysis of stability for Internet Protocol (IP) based feedback control systems used in grid applications. The work involves the use of Raspberry Pi computers as a proxy for a controlled resource, and an ns-3 network simulator on a Linux server to create an experimental platform (testbed) that can be used to model wide-area grid control network communications in smart grid. Modbus protocol is used for information transport, and Routing Information Protocol is used for dynamic route selection within the simulated network.

  16. Neural Networks in R Using the Stuttgart Neural Network Simulator: RSNNS

    Directory of Open Access Journals (Sweden)

    Christopher Bergmeir

    2012-01-01

    Full Text Available Neural networks are important standard machine learning procedures for classification and regression. We describe the R package RSNNS that provides a convenient interface to the popular Stuttgart Neural Network Simulator SNNS. The main features are (a) encapsulation of the relevant SNNS parts in a C++ class, for sequential and parallel usage of different networks, (b) accessibility of all of the SNNS algorithmic functionality from R using a low-level interface, and (c) a high-level interface for convenient, R-style usage of many standard neural network procedures. The package also includes functions for visualization and analysis of the models and the training procedures, as well as functions for data input/output from/to the original SNNS file formats.

  17. Analysis of 2D Torus and Hub Topologies of 100Mb/s Ethernet for the Whitney Commodity Computing Testbed

    Science.gov (United States)

    Pedretti, Kevin T.; Fineberg, Samuel A.; Kutler, Paul (Technical Monitor)

    1997-01-01

    A variety of different network technologies and topologies are currently being evaluated as part of the Whitney Project. This paper reports on the implementation and performance of a Fast Ethernet network configured in a 4x4 2D torus topology in a testbed cluster of 'commodity' Pentium Pro PCs. Several benchmarks were used for performance evaluation: an MPI point-to-point message passing benchmark, an MPI collective communication benchmark, and the NAS Parallel Benchmarks version 2.2 (NPB2). Our results show that for point-to-point communication on an unloaded network, the hub and 1-hop routes on the torus have about the same bandwidth and latency. However, the bandwidth decreases and the latency increases on the torus for each additional route hop. Collective communication benchmarks show that the torus provides roughly four times more aggregate bandwidth and eight times faster MPI barrier synchronizations than a hub-based network for 16-processor systems. Finally, the SOAPBOX benchmarks, which simulate real-world CFD applications, generally demonstrated substantially better performance on the torus than on the hub. In the few cases where the hub was faster, the difference was negligible. In total, our experimental results lead to the conclusion that for Fast Ethernet networks, the torus topology has better performance and scales better than a hub-based network.
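    As a purely arithmetic aside, the wrap-around Manhattan distance is what makes extra hops matter on the torus: the average inter-node hop count on a 4x4 torus is just over two, whereas a hub reaches every node in a single (shared) hop. The helper below is illustrative only.

        # Average hop count between distinct nodes of a 4x4 2D torus.
        from itertools import product

        def torus_hops(a, b, n=4):
            dx = min(abs(a[0] - b[0]), n - abs(a[0] - b[0]))   # wrap-around distance in x
            dy = min(abs(a[1] - b[1]), n - abs(a[1] - b[1]))   # wrap-around distance in y
            return dx + dy

        nodes = list(product(range(4), repeat=2))
        pairs = [(a, b) for a in nodes for b in nodes if a != b]
        avg = sum(torus_hops(a, b) for a, b in pairs) / len(pairs)
        print("average torus hop count over 16 nodes:", round(avg, 2))   # about 2.13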

  18. Real-world experimentation of distributed DSA network algorithms

    DEFF Research Database (Denmark)

    Tonelli, Oscar; Berardinelli, Gilberto; Tavares, Fernando Menezes Leitão

    2013-01-01

    The problem of spectrum scarcity in uncoordinated and/or heterogeneous wireless networks is the key aspect driving the research in the field of flexible management of frequency resources. In particular, distributed dynamic spectrum access (DSA) algorithms enable an efficient sharing of the available spectrum by nodes in a network, without centralized coordination. While proof-of-concept and statistical validation of such algorithms is typically achieved by using system level simulations, experimental activities are valuable contributions for the investigation of particular aspects such as a dynamic propagation environment, human presence impact and terminals mobility. This chapter focuses on the practical aspects related to the real-world experimentation with distributed DSA network algorithms over a testbed network. Challenges and solutions are extensively discussed, from the testbed design...

  19. Computer Simulations of Bottlebrush Melts and Soft Networks

    Science.gov (United States)

    Cao, Zhen; Carrillo, Jan-Michael; Sheiko, Sergei; Dobrynin, Andrey

    We have studied dense bottlebrush systems in the melt and network states using a combination of molecular dynamics simulations and analytical calculations. Our simulations show that the bottlebrush macromolecules in a melt behave as ideal chains with an effective Kuhn length bK. The bottlebrush-induced bending rigidity is due to redistribution of the side chains upon backbone bending. The Kuhn length of the bottlebrushes increases with increasing side-chain degree of polymerization nsc as bK ~ nsc^0.46. This model of bottlebrush macromolecules is extended to describe the mechanical properties of bottlebrush networks in the linear and nonlinear deformation regimes. In the linear deformation regime, the network shear modulus scales with the degree of polymerization of the side chains as G0 ~ (nsc + 1)^-1 as long as the ratio of the Kuhn length to the size of the fully extended bottlebrush backbone between crosslinks, Rmax, is smaller than unity, bK/Rmax < 1. NSF DMR-1409710, DMR-1436201.

  20. Quantum versus simulated annealing in wireless interference network optimization.

    Science.gov (United States)

    Wang, Chi; Chen, Huo; Jonckheere, Edmond

    2016-05-16

    Quantum annealing (QA) serves as a specialized optimizer that is able to solve many NP-hard problems and that is believed to have a theoretical advantage over simulated annealing (SA) via quantum tunneling. With the introduction of the D-Wave programmable quantum annealer, a considerable amount of effort has been devoted to detecting and quantifying quantum speedup. While the debate over speedup remains inconclusive as of now, instead of attempting to show general quantum advantage, here we focus on a novel real-world application of D-Wave in wireless networking: more specifically, the scheduling of the activation of the air-links for maximum throughput subject to interference avoidance near network nodes. In addition, the D-Wave implementation is made error insensitive by a novel Hamiltonian extra penalty weight adjustment that enlarges the gap and substantially reduces the occurrence of interference violations resulting from inevitable spin bias and coupling errors. The major result of this paper is that quantum annealing benefits more than simulated annealing from this gap expansion process, both in terms of ST99 speedup and network queue occupancy. It is hoped that this could become a real-world application niche where the potential benefits of quantum annealing can be objectively assessed.
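    The scheduling problem described is essentially an independent-set-style Hamiltonian: reward activated links, penalise interfering pairs, and tune the penalty weight to widen the gap. The toy version below builds such an energy and minimises it with classical simulated annealing; the conflict graph, penalty weight and schedule are invented, and nothing here touches D-Wave hardware or the paper's specific embedding.

        # Toy link-activation QUBO, H(x) = -sum_i x_i + P * sum_{(i,j) in conflicts} x_i*x_j,
        # minimised with plain simulated annealing. All values are illustrative assumptions.
        import math, random

        random.seed(0)
        n_links = 12
        conflicts = [(i, j) for i in range(n_links) for j in range(i + 1, n_links)
                     if random.random() < 0.2]               # interfering link pairs
        P = 2.0                                              # penalty weight (the "gap" knob)

        def energy(x):
            return -sum(x) + P * sum(x[i] * x[j] for i, j in conflicts)

        x = [random.randint(0, 1) for _ in range(n_links)]
        T = 5.0
        while T > 0.01:
            i = random.randrange(n_links)
            y = x[:]
            y[i] = 1 - y[i]
            d = energy(y) - energy(x)
            if d < 0 or random.random() < math.exp(-d / T):
                x = y
            T *= 0.995

        print("active links:", sum(x),
              "violations:", sum(x[i] * x[j] for i, j in conflicts))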

  1. Leader neurons in leaky integrate and fire neural network simulations.

    Science.gov (United States)

    Zbinden, Cyrille

    2011-10-01

    In this paper, we highlight the topological properties of leader neurons, whose existence is an experimental fact. Several experimental studies show the existence of leader neurons in population bursts of activity in 2D living neural networks (Eytan and Marom, J Neurosci 26(33):8465-8476, 2006; Eckmann et al., New J Phys 10(015011), 2008). A leader neuron is defined as a neuron which fires at the beginning of a burst (respectively, network spike) more often than expected by chance considering its mean firing rate. This means that leader neurons have some burst-triggering power beyond a chance-level statistical effect. In this study, we characterize these leader neuron properties. This naturally leads us to simulate 2D neural networks. To build our simulations, we choose the leaky integrate and fire (lIF) neuron model (Gerstner and Kistler 2002; Cessac, J Math Biol 56(3):311-345, 2008), which allows fast simulations (Izhikevich, IEEE Trans Neural Netw 15(5):1063-1070, 2004; Gerstner and Naud, Science 326:379-380, 2009). The dynamics of our lIF model has stable leader neurons in the population bursts that we simulate. These leader neurons are excitatory neurons and have a low membrane potential firing threshold. Apart from these first two properties, the conditions required for a neuron to be a leader neuron are difficult to identify and seem to depend on several parameters involved in the simulations themselves. However, a detailed linear analysis shows a trend in the properties required for a neuron to be a leader neuron. Our main finding is: a leader neuron sends signals to many excitatory neurons as well as to few inhibitory neurons, and a leader neuron receives signals from only a few other excitatory neurons. Our linear analysis exhibits five essential properties of leader neurons, each with different relative importance. This means that considering a given neural network with a fixed mean number of connections per neuron, our analysis gives us a way of
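    A compact leaky integrate-and-fire network of the general kind used in the study can be written in a handful of numpy lines; the network size, sparse excitatory coupling, threshold and drive below are invented and are not tuned to reproduce population bursts or leader statistics.

        # Minimal LIF network sketch: leaky membranes, threshold/reset, sparse excitation.
        import numpy as np

        rng = np.random.default_rng(0)
        N, steps, dt = 200, 2000, 0.5            # neurons, time steps, step size [ms]
        tau, v_th, v_reset = 20.0, 1.0, 0.0
        W = (rng.random((N, N)) < 0.05) * 0.12   # sparse excitatory weights (assumed)
        I_ext = 0.06                             # constant external drive (assumed)

        v = rng.random(N) * v_th
        fired_counts = np.zeros(N, dtype=int)
        for _ in range(steps):
            fired = v >= v_th                    # who crossed threshold this step
            fired_counts += fired
            v[fired] = v_reset                   # reset the neurons that spiked
            syn = W @ fired.astype(float)        # input from neurons that just fired
            v += dt * (-v / tau + I_ext) + syn   # leaky integration plus synaptic kicks
        print("mean firing rate [spikes/step]:", round(fired_counts.mean() / steps, 4))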

  2. Analysis of sensor network observations during some simulated landslide experiments

    Science.gov (United States)

    Scaioni, M.; Lu, P.; Feng, T.; Chen, W.; Wu, H.; Qiao, G.; Liu, C.; Tong, X.; Li, R.

    2012-12-01

    A multi-sensor network was tested during some experiments on a landslide simulation platform established at Tongji University (Shanghai, P.R. China). Here landslides were triggered by means of artificial rainfall (see Figure 1). The sensor network currently incorporates contact sensors and two imaging systems. This represents a novel solution, because the spatial sensor network incorporates both contact sensors and remote sensors (video cameras). In future, these sensors will be installed on two real ground slopes in Sichuan province (South-West China), where the Wenchuan earthquake occurred in 2008. This earthquake caused the immediate activation of several landslides, while other areas became unstable and still are a menace to people and property. The platform incorporates the reconstructed scale slope, sensor network, communication system, database and visualization system. Some landslide simulation experiments allowed ascertaining which sensors could be more suitable to be deployed in the Wenchuan area. The poster will focus on the analysis of results coming from down-scale simulations. Here the different steps of the landslide evolution can be followed on the basis of sensor observations. These include underground sensors to detect the water table level and the pressure in the ground, a set of accelerometers and two inclinometers. In the first part of the analysis, the full data series are investigated to look for correlations and common patterns, as well as to link them to the physical processes. In the second, four subsets of sensors located in neighboring positions are analyzed. The analysis of low- and high-speed image sequences allowed a dense field of displacement on the slope surface to be tracked. These outcomes have been compared to the ones obtained from accelerometers for cross-validation. Images were also used for the photogrammetric reconstruction of the slope topography during the experiment. Consequently, volume computation and mass movements could be evaluated on

  3. Artificial neural network simulator for SOFC performance prediction

    Science.gov (United States)

    Arriagada, Jaime; Olausson, Pernilla; Selimovic, Azra

    This paper describes the development of a novel modelling tool for evaluation of solid oxide fuel cell (SOFC) performance. An artificial neural network (ANN) is trained with a reduced amount of data generated by a validated cell model, and it is then capable of learning the generic functional relationship between inputs and outputs of the system. Once the network is trained, the ANN-driven simulator can predict different operational parameters of the SOFC (i.e. gas flows, operational voltages, current density, etc.) avoiding the detailed description of the fuel cell processes. The highly parallel connectivity within the ANN further reduces the computational time. In a real case, the necessary data for training the ANN simulator would be extracted from experiments. This simulator could be suitable for different applications in the fuel cell field, such as, the construction of performance maps and operating point optimisation and analysis. All this is performed with minimum time demand and good accuracy. This intelligent model together with the operational conditions may provide useful insight into SOFC operating characteristics and improved means of selecting operating conditions, reducing costs and the need for extensive experiments.

  4. COEL: A Cloud-based Reaction Network Simulator

    Directory of Open Access Journals (Sweden)

    Peter eBanda

    2016-04-01

    Full Text Available Chemical Reaction Networks (CRNs) are a formalism to describe the macroscopic behavior of chemical systems. We introduce COEL, a web- and cloud-based CRN simulation framework that does not require a local installation, runs simulations on a large computational grid, provides reliable database storage, and offers a visually pleasing and intuitive user interface. We present an overview of the underlying software, the technologies, and the main architectural approaches employed. Some of COEL's key features include ODE-based simulations of CRNs and multicompartment reaction networks with rich interaction options, a built-in plotting engine, automatic DNA-strand displacement transformation and visualization, SBML/Octave/Matlab export, and a built-in genetic-algorithm-based optimization toolbox for rate constants. COEL is an open-source project hosted on GitHub (http://dx.doi.org/10.5281/zenodo.46544), which allows interested research groups to deploy it on their own server. Regular users can simply use the web instance at no cost at http://coel-sim.org. The framework is ideally suited for collaborative use in both research and education.

  5. A Network Scheduling Model for Distributed Control Simulation

    Science.gov (United States)

    Culley, Dennis; Thomas, George; Aretskin-Hariton, Eliot

    2016-01-01

    Distributed engine control is a hardware technology that radically alters the architecture of aircraft engine control systems. Of its own accord, it does not change the function of control; rather, it seeks to address the implementation issues for weight-constrained vehicles that can limit overall system performance and increase life-cycle cost. However, an inherent feature of this technology, digital communication networks, alters the flow of information between critical elements of the closed-loop control. Whereas control information has been available continuously in conventional centralized control architectures by virtue of analog signaling, moving forward it will be transmitted digitally in serial fashion over the network(s) in distributed control architectures. An underlying effect is that all of the control information arrives asynchronously and may not be available every loop interval of the controller; therefore it must be scheduled. This paper proposes a methodology for modeling the nominal data flow over these networks and examines the resulting impact for an aero turbine engine system simulation.

  6. Virtual Pipeline System Testbed to Optimize the U.S. Natural Gas Transmission Pipeline System

    Energy Technology Data Exchange (ETDEWEB)

    Kirby S. Chapman; Prakash Krishniswami; Virg Wallentine; Mohammed Abbaspour; Revathi Ranganathan; Ravi Addanki; Jeet Sengupta; Liubo Chen

    2005-06-01

    The goal of this project is to develop a Virtual Pipeline System Testbed (VPST) for natural gas transmission. This study uses a fully implicit finite difference method to analyze transient, nonisothermal compressible gas flow through a gas pipeline system. The inertia term of the momentum equation is included in the analysis. The testbed simulates compressor stations, the pipe that connects these compressor stations, the supply sources, and the end-user demand markets. The compressor station is described by identifying the make, model, and number of engines, gas turbines, and compressors. System operators and engineers can analyze the impact of system changes on the dynamic deliverability of gas and on the environment.

  7. The Development of a Smart Distribution Grid Testbed for Integrated Information Management Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Ning; Du, Pengwei; Paulson, Patrick R.; Greitzer, Frank L.; Guo, Xinxin; Hadley, Mark D.

    2011-07-28

    This paper presents a smart distribution grid testbed to test or compare designs of integrated information management systems (I2MSs). An I2MS extracts and synthesizes information from a wide range of data sources to detect abnormal system behaviors, identify possible causes, assess the system status, and provide grid operators with response suggestions. The objective of the testbed is to provide a modeling environment with sufficient data sources for the I2MS design. The testbed includes five information layers and a physical layer; it generates multi-layer chronological data based on actual measurement playbacks or simulated data sets produced by the physical layer. The testbed models random hardware failures, human errors, extreme weather events, and deliberate tampering attempts to allow users to evaluate the performance of different I2MS designs. Initial results of I2MS performance tests showed that the testbed created a close-to-real-world environment that allowed key performance metrics of the I2MS to be evaluated.

  8. The Development of a Smart Distribution Grid Testbed for Integrated Information Management Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Ning; Du, Pengwei; Paulson, Patrick R.; Greitzer, Frank L.; Guo, Xinxin; Hadley, Mark D.

    2011-01-31

    This paper presents a smart distribution grid testbed to test or compare designs of integrated information management systems (I2MSs). An I2MS extracts and synthesizes information from a wide range of data sources to detect abnormal system behaviors, identify possible causes, assess the system status, and provide grid operators with response suggestions. The objective of the testbed is to provide a modeling environment with sufficient data sources for the I2MS design. The testbed includes five information layers and a physical layer; it generates multi-layer chronological data based on actual measurement playbacks or simulated data sets produced by the physical layer. The testbed models random hardware failures, human errors, extreme weather events, and deliberate tampering attempts to allow users to evaluate the performance of different I2MS designs. Initial results of I2MS performance tests showed that the testbed created a close-to-real-world environment that allowed key performance metrics of the I2MS to be evaluated

  9. Simulation of heart rate variability model in a network

    Science.gov (United States)

    Cascaval, Radu C.; D'Apice, Ciro; D'Arienzo, Maria Pia

    2017-07-01

    We consider a 1-D model for the simulation of the blood flow in the cardiovascular system. As the inflow condition we consider a model for the aortic valve. The opening and closing of the valve is dynamically determined by the pressure difference between the left ventricular and aortic pressures. At the outflow we impose a peripheral resistance model. To approximate the solution we use a numerical scheme based on the discontinuous Galerkin method. We also consider a variation in heart rate and terminal reflection coefficient due to monitoring of the pressure in the network.

  10. DC Collection Network Simulation for Offshore Wind Farms

    DEFF Research Database (Denmark)

    Vogel, Stephan; Rasmussen, Tonny Wederberg; El-Khatib, Walid Ziad

    2015-01-01

    The possibility to connect offshore wind turbines with a collection network based on Direct Current (DC), instead of Alternating Current (AC), gained attention in the scientific and industrial environment. There are many promising properties of DC components that could be beneficial such as......: smaller dimensions, less weight, fewer conductors, no reactive power considerations, and less overall losses due to the absence of proximity and skin effects. This work describes a study about the simulation of a Medium Voltage DC (MVDC) grid in an offshore wind farm. Suitable converter concepts...

  11. Exploration Systems Health Management Facilities and Testbed Workshop

    Science.gov (United States)

    Wilson, Scott; Waterman, Robert; McCleskey, Carey

    2004-01-01

    Presentation Agenda: (1) Technology Maturation Pipeline (The Plan) (2) Cryogenic testbed (and other KSC Labs) (2a) Component / Subsystem technologies (3) Advanced Technology Development Center (ATDC) (3a) System / Vehicle technologies (4) ELV Flight Experiments (Flight Testbeds).

  12. Development of a Tethered Formation Flight Testbed for ISS Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The development of a testbed for the development and demonstration of technologies needed by tethered formation flying satellites is proposed. Such a testbed would...

  13. The Planets Testbed: Science for Digital Preservation

    Directory of Open Access Journals (Sweden)

    Seamus Ross

    2008-06-01

    Full Text Available The preservation of digital objects requires specific software tools or services. These can be characterisation tools that abstract the essential characteristics of a digital object from a file, migration tools that convert digital objects to different formats, or emulation tools that render digital objects in their original context on a new infrastructure. Until recently digital preservation has been characterised by practices and processes that could best be described as more art and craft than science. The Planets Testbed provides a controlled environment where preservation tools can be tested and evaluated, and where experiment results can be empirically compared. This paper presents an overview of the Testbed application, an analysis of the experiment methodology and a description of the Testbed's web service approach.

  14. Dynamic virtual GMPLS-controlled WSON using a Resource Broker with a VNT Manager on the ADRENALINE testbed.

    Science.gov (United States)

    Vilalta, Ricard; Muñoz, Raul; Casellas, Ramon; Martinez, Ricardo

    2012-12-31

    We present a Resource Broker with a Virtual Network Topology Manager (VNTM) which dynamically deploys virtual GMPLS-controlled WSON networks. Virtual Optical links are constructed by grouping established optical connections which are managed by the VNTM. We evaluate the performance of the Resource Broker in the ADRENALINE testbed.

  15. Coarse-graining stochastic biochemical networks: adiabaticity and fast simulations

    Energy Technology Data Exchange (ETDEWEB)

    Nemenman, Ilya [Los Alamos National Laboratory; Sinitsyn, Nikolai [Los Alamos National Laboratory; Hengartner, Nick [Los Alamos National Laboratory

    2008-01-01

    We propose a universal approach for analysis and fast simulations of stiff stochastic biochemical kinetics networks, which rests on elimination of fast chemical species without a loss of information about mesoscopic, non-Poissonian fluctuations of the slow ones. Our approach, which is similar to the Born-Oppenheimer approximation in quantum mechanics, follows from the stochastic path integral representation of the cumulant generating function of reaction events. In applications with a small number of chemical reactions, it produces analytical expressions for cumulants of chemical fluxes between the slow variables. This allows for a low-dimensional, interpretable representation and can be used for coarse-grained numerical simulation schemes with a small computational complexity and yet high accuracy. As an example, we derive the coarse-grained description for a chain of biochemical reactions, and show that the coarse-grained and the microscopic simulations are in agreement, but the coarse-grained simulations are three orders of magnitude faster.

  16. (DURIP) MIMO Radar Testbed for Waveform Adaptive Sensing Research

    Science.gov (United States)

    2015-06-17

    ... (SDR) testbed. The testbed consists of 14 micro SDR platforms with two transmit and one receive antennas and a standalone SDR with 4 transmit and 4 receive channels multiplexed to a 32 x 32 antenna array through a switching matrix. These SDR platforms can adaptively modify both transmit waveforms...

  17. The Benchmark Extensible Tractable Testbed Engineering Resource (BETTER)

    Energy Technology Data Exchange (ETDEWEB)

    Siranosian, Antranik Antonio [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Schembri, Philip Edward [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Miller, Nathan Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-02

    The Benchmark Extensible Tractable Testbed Engineering Resource (BETTER) is proposed as a family of modular test bodies that are intended to support engineering capability development by helping to identify weaknesses and needs. Weapon systems, subassemblies, and components are often complex and difficult to test and analyze, resulting in low confidence and high uncertainties in experimental and simulated results. The complexities make it difficult to distinguish between inherent uncertainties and errors due to insufficient capabilities. BETTER test bodies will first use simplified geometries and materials such that testing, data collection, modeling and simulation can be accomplished with high confidence and low uncertainty. Modifications and combinations of simple and well-characterized BETTER test bodies can then be used to increase complexity in order to reproduce relevant mechanics and identify weaknesses. BETTER can provide both immediate and long-term improvements in testing and simulation capabilities. This document presents the motivation, concept, benefits and examples for BETTER.

  18. Towards a testbed for malicious code detection

    Energy Technology Data Exchange (ETDEWEB)

    Lo, R.; Kerchen, P.; Crawford, R.; Ho, W.; Crossley, J.; Fink, G.; Levitt, K.; Olsson, R.; Archer, M. (California Univ., Davis, CA (USA). Div. of Computer Science)

    1991-01-01

    This paper proposes an environment for detecting many types of malicious code, including computer viruses, Trojan horses, and time/logic bombs. This malicious code testbed (MCT) is based upon both static and dynamic analysis tools developed at the University of California, Davis, which have been shown to be effective against certain types of malicious code. The testbed extends the usefulness of these tools by using them in a complementary fashion to detect more general cases of malicious code. Perhaps more importantly, the MCT allows administrators and security analysts to check a program before installation, thereby avoiding any damage a malicious program might inflict. 5 refs., 2 figs., 2 tabs.

  19. Wireless Power Transfer Protocols in Sensor Networks: Experiments and Simulations

    Directory of Open Access Journals (Sweden)

    Sotiris Nikoletseas

    2017-04-01

    Full Text Available Rapid technological advances in the domain of Wireless Power Transfer pave the way for novel methods for power management in systems of wireless devices, and recent research works have already started considering algorithmic solutions for tackling emerging problems. In this paper, we investigate the problem of efficient and balanced Wireless Power Transfer in Wireless Sensor Networks. We employ wireless chargers that replenish the energy of network nodes. We propose two protocols that configure the activity of the chargers. One protocol performs wireless charging focused on the charging efficiency, while the other aims at proper balance of the chargers’ residual energy. We conduct detailed experiments using real devices and we validate the experimental results via larger scale simulations. We observe that, in both the experimental evaluation and the evaluation through detailed simulations, both protocols achieve their main goals. The Charging Oriented protocol achieves good charging efficiency throughout the experiment, while the Energy Balancing protocol achieves a uniform distribution of energy within the chargers.

  20. Quantum versus simulated annealing in wireless interference network optimization

    Science.gov (United States)

    Wang, Chi; Chen, Huo; Jonckheere, Edmond

    2016-05-01

    Quantum annealing (QA) serves as a specialized optimizer that is able to solve many NP-hard problems and that is believed to have a theoretical advantage over simulated annealing (SA) via quantum tunneling. With the introduction of the D-Wave programmable quantum annealer, a considerable amount of effort has been devoted to detecting and quantifying quantum speedup. While the debate over speedup remains inconclusive as of now, instead of attempting to show general quantum advantage, here we focus on a novel real-world application of D-Wave in wireless networking: more specifically, the scheduling of the activation of the air-links for maximum throughput subject to interference avoidance near network nodes. In addition, the D-Wave implementation is made error-insensitive by a novel Hamiltonian extra penalty weight adjustment that enlarges the gap and substantially reduces the occurrence of interference violations resulting from inevitable spin bias and coupling errors. The major result of this paper is that quantum annealing benefits more than simulated annealing from this gap expansion process, both in terms of ST99 speedup and network queue occupancy. It is hoped that this could become a real-world application niche where the potential benefits of quantum annealing can be objectively assessed.
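
    For readers unfamiliar with how such scheduling problems are posed for an annealer, a generic penalized (QUBO/Ising-style) formulation is sketched below; this is an illustrative form only, and the paper's exact Hamiltonian and penalty-weight adjustment may differ.

```latex
% x_i = 1 if air-link i is activated, 0 otherwise.
% w_i      : throughput contribution of link i (assumed given)
% E        : set of link pairs that would interfere near a common node
% \lambda  : penalty weight; enlarging it widens the gap between feasible
%            (interference-free) and infeasible activation patterns.
H(x) \;=\; -\sum_{i} w_i\, x_i \;+\; \lambda \sum_{(i,j)\in E} x_i\, x_j,
\qquad x_i \in \{0,1\}.
```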

  1. Human Exploration Spacecraft Testbed for Integration and Advancement (HESTIA)

    Science.gov (United States)

    Banker, Brian F.; Robinson, Travis

    2016-01-01

    The proposed paper will cover the ongoing effort named HESTIA (Human Exploration Spacecraft Testbed for Integration and Advancement), led at the National Aeronautics and Space Administration (NASA) Johnson Space Center (JSC) to promote a cross-subsystem approach to developing Mars-enabling technologies with the ultimate goal of integrated system optimization. HESTIA also aims to develop the infrastructure required to rapidly test these highly integrated systems at a low cost. The initial focus is on the common fluids architecture required to enable human exploration of Mars, specifically between life support and in-situ resource utilization (ISRU) subsystems. An overview of the advancements in integrated technologies, infrastructure, simulation, and modeling capabilities will be presented, as well as the results and findings of integrated testing. Due to the enormous mass gear-ratio required for human exploration beyond low-Earth orbit (for every 1 kg of payload landed on Mars, 226 kg will be required on Earth), minimization of surface hardware and commodities is paramount. Hardware requirements can be minimized by reducing equipment that performs similar functions for different subsystems. If hardware could be developed which meets the requirements of both life support and ISRU, it could result in the reduction of primary hardware and/or a reduction in spares. Minimization of commodities delivered to the surface of Mars can be achieved through the creation of higher-efficiency systems producing little to no undesired waste, such as a closed-loop life support subsystem. Where complete efficiency is impossible or impractical, makeup commodities could be manufactured via ISRU. Although utilization of ISRU products (oxygen and water) for crew consumption holds great promise for reducing demands on life support hardware, there exist concerns as to the purity and transportation of commodities. To date, ISRU has been focused on production rates and purities for

  2. Validating module network learning algorithms using simulated data.

    Science.gov (United States)

    Michoel, Tom; Maere, Steven; Bonnet, Eric; Joshi, Anagha; Saeys, Yvan; Van den Bulcke, Tim; Van Leemput, Koenraad; van Remortel, Piet; Kuiper, Martin; Marchal, Kathleen; Van de Peer, Yves

    2007-05-03

    In recent years, several authors have used probabilistic graphical models to learn expression modules and their regulatory programs from gene expression data. Despite the demonstrated success of such algorithms in uncovering biologically relevant regulatory relations, further developments in the area are hampered by a lack of tools to compare the performance of alternative module network learning strategies. Here, we demonstrate the use of the synthetic data generator SynTReN for the purpose of testing and comparing module network learning algorithms. We introduce a software package for learning module networks, called LeMoNe, which incorporates a novel strategy for learning regulatory programs. Novelties include the use of a bottom-up Bayesian hierarchical clustering to construct the regulatory programs, and the use of a conditional entropy measure to assign regulators to the regulation program nodes. Using SynTReN data, we test the performance of LeMoNe in a completely controlled situation and assess the effect of the methodological changes we made with respect to an existing software package, namely Genomica. Additionally, we assess the effect of various parameters, such as the size of the data set and the amount of noise, on the inference performance. Overall, application of Genomica and LeMoNe to simulated data sets gave comparable results. However, LeMoNe offers some advantages, one of them being that the learning process is considerably faster for larger data sets. Additionally, we show that the location of the regulators in the LeMoNe regulation programs and their conditional entropy may be used to prioritize regulators for functional validation, and that the combination of the bottom-up clustering strategy with the conditional entropy-based assignment of regulators improves the handling of missing or hidden regulators. We show that data simulators such as SynTReN are very well suited for the purpose of developing, testing and improving module network

  3. An Experimental Study of Advanced Receivers in a Practical Dense Small Cells Network

    DEFF Research Database (Denmark)

    Assefa, Dereje; Berardinelli, Gilberto; Tavares, Fernando Menezes Leitão

    2016-01-01

    ... leads to significant limitations on the network throughput in such deployments. In addition, network densification introduces difficulty in network deployment. This paper presents a study on the benefits of advanced receivers in a practical uncoordinated dense small cells deployment. Our aim is to show that advanced receivers can alleviate the need for detailed cell planning. To this end, we adopt a hybrid simulation evaluation approach in which propagation data are obtained from experimental analysis, and we analyse how the MIMO constellation and network size affect this aim. The experimental data have been obtained using a software defined radio (SDR) testbed network with 12 testbed nodes, configured as either access point or user equipment. Each node features a 4 x 4 or a 2 x 2 MIMO configuration. The results demonstrate that advanced receivers with a larger MIMO antenna configuration...

  4. Distributed sensor networks

    Science.gov (United States)

    Lacoss, R. T.

    1985-09-01

    The Distributed Sensor Networks (DSN) program is aimed at developing and extending target surveillance and tracking technology in systems that employ multiple spatially distributed sensors and processing resources. Such a system would be made up of sensors, data bases, and processors distributed throughout an area and interconnected by an appropriate digital data communication system. The detection, tracking, and classification of low flying aircraft has been selected to develop and evaluate DSN concepts in the light of a specific system problem. A DSN test bed has been developed and is being used to test and demonstrate DSN techniques and technology. The overall concept calls for a mix of sensor types. The initial test-bed sensors are small arrays of microphones at each node augmented by TV sensors at some nodes. This Semiannual Technical Summary (SATS) reports results for the period 1 October 1984 through 31 March 1985. Progress in the development of distributed tracking algorithms and their implementation in the DSN test-bed system is reviewed in Section II. Test-bed versions of distributed acoustic tracking algorithms now have been implemented and tested using simulated acoustic data. This required developing a solution to a basic distributed tracking problem: the information feedback problem. Target tracks received by one node from another node often implicitly include information that originally was obtained from the receiving node.

  5. Network condition simulator for benchmarking sewer deterioration models.

    Science.gov (United States)

    Scheidegger, A; Hug, T; Rieckermann, J; Maurer, M

    2011-10-15

    An accurate description of aging and deterioration of urban drainage systems is necessary for optimal investment and rehabilitation planning. Due to a general lack of suitable datasets, network condition models are rarely validated, and if so with varying levels of success. We therefore propose a novel network condition simulator (NetCoS) that produces a synthetic population of sewer sections with a given condition-class distribution. NetCoS can be used to benchmark deterioration models and guide utilities in the selection of appropriate models and data management strategies. The underlying probabilistic model considers three main processes: a) deterioration, b) replacement policy, and c) expansions of the sewer network. The deterioration model features a semi-Markov chain that uses transition probabilities based on user-defined survival functions. The replacement policy is approximated with a condition-class dependent probability of replacing a sewer pipe. The model then simulates the course of the sewer sections from the installation of the first line to the present, adding new pipes based on the defined replacement and expansion program. We demonstrate the usefulness of NetCoS in two examples where we quantify the influence of incomplete data and inspection frequency on the parameter estimation of a cohort survival model and a Markov deterioration model. Our results show that typical available sewer inventory data with discarded historical data overestimate the average life expectancy by up to 200 years. Although NetCoS cannot prove the validity of a particular deterioration model, it is useful to reveal its possible limitations and shortcomings and quantifies the effects of missing or uncertain data. Future developments should include additional processes, for example to investigate the long-term effect of pipe rehabilitation measures, such as inliners. Copyright © 2011 Elsevier Ltd. All rights reserved.
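
    To make the modelled processes concrete, here is a minimal Python sketch of a condition-class deterioration and replacement process of the kind NetCoS simulates; the class counts, transition probabilities and replacement rates below are invented for illustration and are not NetCoS parameters.

```python
import random

# Hypothetical parameters (illustration only, not NetCoS values):
# yearly probability of degrading to the next condition class, by class 1..4
P_DEGRADE = {1: 0.04, 2: 0.06, 3: 0.09, 4: 0.12}
# yearly probability that the utility replaces a pipe, by condition class
P_REPLACE = {1: 0.00, 2: 0.01, 3: 0.05, 4: 0.25, 5: 0.60}
WORST = 5

def simulate_pipe(years, rng=random):
    """Simulate one sewer section; return its condition class in each year."""
    condition, history = 1, []
    for _ in range(years):
        if rng.random() < P_REPLACE[condition]:
            condition = 1                       # replaced with a new pipe
        elif condition < WORST and rng.random() < P_DEGRADE[condition]:
            condition += 1                      # deteriorate by one class
        history.append(condition)
    return history

# Synthetic "network population": distribution of classes after 80 years
population = [simulate_pipe(80)[-1] for _ in range(10000)]
print({c: population.count(c) for c in range(1, WORST + 1)})
```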

  6. SCaN Testbed Software Development and Lessons Learned

    Science.gov (United States)

    Kacpura, Thomas J.; Varga, Denise M.

    2012-01-01

    National Aeronautics and Space Administration (NASA) has developed an on-orbit, adaptable, Software Defined Radio (SDR) Space Telecommunications Radio System (STRS)-based testbed facility to conduct a suite of experiments to advance technologies, reduce risk, and enable future mission capabilities on the International Space Station (ISS). The SCaN Testbed Project will provide NASA, industry, other Government agencies, and academic partners the opportunity to develop and field communications, navigation, and networking technologies in the laboratory and space environment based on reconfigurable SDR platforms and the STRS Architecture. The SDRs are a new technology for NASA, and the support infrastructure they require is different from legacy, fixed-function radios. SDRs offer the ability to reconfigure on-orbit communications by changing software for new waveforms and operating systems to enable new capabilities or fix anomalies, which was not previously an option. They are not standalone devices, but require a new approach to effectively control them and move data. This requires extensive software to be developed to utilize the full potential of these reconfigurable platforms. The paper focuses on development, integration and testing as related to the avionics processor system, and the software required to command, control, monitor, and interact with the SDRs, as well as the other communication payload elements. An extensive effort was required to develop the flight software and meet the NASA requirements for software quality and safety. The flight avionics must be radiation tolerant, and these processors have limited capability in comparison to terrestrial counterparts. A big challenge was that there are three SDRs onboard, and interfacing with multiple SDRs simultaneously complicates the effort. The effort also includes ground software, which is a key element for both commanding the payload and displaying data created by the payload. The verification of

  7. Climate and change: simulating flooding impacts on urban transport network

    Science.gov (United States)

    Pregnolato, Maria; Ford, Alistair; Dawson, Richard

    2015-04-01

    National-scale climate projections indicate that in the future there will be hotter and drier summers, warmer and wetter winters, together with rising sea levels. The frequency of extreme weather events is expected to increase, causing severe damage to the built environment and disruption of infrastructures (Dawson, 2007), whilst population growth and changed demographics are placing new demands on urban infrastructure. It is therefore essential to ensure infrastructure networks are robust to these changes. This research addresses these challenges by focussing on the development of probabilistic tools for managing risk by modelling urban transport networks within the context of extreme weather events. This paper presents a methodology to investigate the impacts of extreme weather events on the urban environment, in particular infrastructure networks, through a combination of climate simulations and spatial representations. By overlaying spatial data on hazard thresholds from a flood model and a flood safety function, mitigated by potential adaptation strategies, different levels of disruption to commuting journeys on road networks are evaluated. The method follows the Catastrophe Modelling approach and consists of a spatial model combining deterministic loss models and probabilistic risk assessment techniques. It can be applied to present conditions as well as future uncertain scenarios, allowing the examination of the impacts alongside socio-economic and climate changes. The hazard is determined by simulating free surface water flooding with the software CityCAT (Glenis et al., 2013). The outputs are overlaid on the spatial locations of a simple network model in GIS, which uses journey-to-work (JTW) observations, supplemented with speed and capacity information. To calculate the disruptive effect of flooding on transport networks, a function relating water depth to safe driving car speed has been developed by combining data from experimental reports (Morris et

  8. Neural network simulation of the industrial producer price index dynamical series

    OpenAIRE

    Soshnikov, L. E.

    2013-01-01

    This paper is devoted to the simulation and forecasting of dynamical series of economic indicators. Multilayer perceptron and radial basis function neural networks have been used. The neural network results are compared with econometric models.

  9. Building a Desktop Search Test-Bed

    NARCIS (Netherlands)

    Chernov, Sergey; Serdyukov, Pavel; Chirita, Paul-Alexandru; Demartini, Gianluca; Nejdl, Wolfgang

    In recent years, several top-quality papers have utilized temporary Desktop data and/or browsing activity logs for experimental evaluation. Building a common testbed for the Personal Information Management community is thus becoming an indispensable task. In this paper we present a possible dataset

  10. A Laboratory Testbed for Embedded Fuzzy Control

    Science.gov (United States)

    Srivastava, S.; Sukumar, V.; Bhasin, P. S.; Arun Kumar, D.

    2011-01-01

    This paper presents a novel scheme called "Laboratory Testbed for Embedded Fuzzy Control of a Real Time Nonlinear System." The idea is based upon the fact that project-based learning motivates students to learn actively and to use their engineering skills acquired in their previous years of study. It also fosters initiative and focuses…

  11. TopoGen: A Network Topology Generation Architecture with application to automating simulations of Software Defined Networks

    CERN Document Server

    Laurito, Andres; The ATLAS collaboration

    2017-01-01

    Simulation is an important tool to validate the performance impact of control decisions in Software Defined Networks (SDN). Yet, the manual modeling of complex topologies that may change often during a design process can be a tedious error-prone task. We present TopoGen, a general purpose architecture and tool for systematic translation and generation of network topologies. TopoGen can be used to generate network simulation models automatically by querying information available at diverse sources, notably SDN controllers. The DEVS modeling and simulation framework facilitates a systematic translation of structured knowledge about a network topology into a formal modular and hierarchical coupling of preexisting or new models of network entities (physical or logical). TopoGen can be flexibly extended with new parsers and generators to grow its scope of applicability. This permits to design arbitrary workflows of topology transformations. We tested TopoGen in a network engineering project for the ATLAS detector ...

  12. TopoGen: A Network Topology Generation Architecture with application to automating simulations of Software Defined Networks

    CERN Document Server

    Laurito, Andres; The ATLAS collaboration

    2018-01-01

    Simulation is an important tool to validate the performance impact of control decisions in Software Defined Networks (SDN). Yet, the manual modeling of complex topologies that may change often during a design process can be a tedious error-prone task. We present TopoGen, a general purpose architecture and tool for systematic translation and generation of network topologies. TopoGen can be used to generate network simulation models automatically by querying information available at diverse sources, notably SDN controllers. The DEVS modeling and simulation framework facilitates a systematic translation of structured knowledge about a network topology into a formal modular and hierarchical coupling of preexisting or new models of network entities (physical or logical). TopoGen can be flexibly extended with new parsers and generators to grow its scope of applicability. This permits to design arbitrary workflows of topology transformations. We tested TopoGen in a network engineering project for the ATLAS detector ...

  13. A Business-to-Business Interoperability Testbed: An Overview

    Energy Technology Data Exchange (ETDEWEB)

    Kulvatunyou, Boonserm [ORNL; Ivezic, Nenad [ORNL; Monica, Martin [Sun Microsystems, Inc.; Jones, Albert [National Institute of Standards and Technology (NIST)

    2003-10-01

    In this paper, we describe a business-to-business (B2B) testbed co-sponsored by the Open Applications Group, Inc. (OAGI) and the National Institute of Standards and Technology (NIST) to advance enterprise e-commerce standards. We describe the business and technical objectives and initial activities within the B2B Testbed. We summarize our initial lessons learned to form the requirements that drive the next generation testbed development. We also give an overview of a promising testing framework architecture in which to drive the testbed developments. We outline the future plans for the testbed development.

  14. Experimental Demonstration of a Cognitive Optical Network for Reduction of Restoration Time

    DEFF Research Database (Denmark)

    Kachris, Christoforos; Klonidis, Dimitris; Francescon, Antonio

    2014-01-01

    This paper presents the implementation and performance evaluation of a cognitive heterogeneous optical network testbed. The testbed integrates the CMP, the data plane and the cognitive system and reduces by 48% the link restoration time.

  15. USE OF NEURAL NETWORK SIMULATION TO MONITOR PATIENTS UNDERGOING RADICAL PROSTATECTOMY

    National Research Council Canada - National Science Library

    I. V. Lukyanov; N. A. Demchenko

    2014-01-01

    .... Based on neural network simulation, the Department of Urology, Russian Medical Academy of Postgraduate Education, has developed an accounting prognostic system to monitor the postoperative course...

  16. A simulation study of TaMAC protocol using network simulator 2.

    Science.gov (United States)

    Ullah, Sana; Kwak, Kyung Sup

    2012-10-01

    A Wireless Body Area Network (WBAN) is expected to play a significant role in future healthcare systems. It interconnects low-cost and intelligent sensor nodes in, on, or around a human body to serve a variety of medical applications. It can be used to diagnose and treat patients with chronic diseases such as hypertension, diabetes, and cardiovascular disease. The lightweight sensor nodes integrated in a WBAN require low-power operation, which can be achieved using different optimization techniques. We introduce a Traffic-adaptive MAC protocol (TaMAC) for WBAN that supports dual wakeup mechanisms for normal, emergency, and on-demand traffic. In this letter, the TaMAC protocol is simulated using the well-known Network Simulator 2 (NS-2). The problem of multiple emergency nodes is solved using both wakeup radio and the CSMA/CA protocol. The power consumption, delay, and throughput performance are closely compared with the beacon-enabled IEEE 802.15.4 MAC protocol using extensive simulations.

  17. Design of the Dual Conjugate Adaptive Optics Test-bed

    Science.gov (United States)

    Sharf, Inna; Bell, K.; Crampton, D.; Fitzsimmons, J.; Herriot, Glen; Jolissaint, Laurent; Lee, B.; Richardson, H.; van der Kamp, D.; Veran, Jean-Pierre

    In this paper, we describe the Multi-Conjugate Adaptive Optics laboratory test-bed presently under construction at the University of Victoria, Canada. The test-bench will be used to support research in the performance of multi-conjugate adaptive optics, turbulence simulators, laser guide stars and miniaturizing adaptive optics. The main components of the test-bed include two micro-machined deformable mirrors, a tip-tilt mirror, four wavefront sensors, a source simulator, a dual-layer turbulence simulator, as well as computational and control hardware. The paper will describe in detail the opto-mechanical design of the adaptive optics module, the design of the hot-air turbulence generator and the configuration chosen for the source simulator. Below, we present a summary of these aspects of the bench. The optical and mechanical design of the test-bed has been largely driven by the particular choice of the deformable mirrors. These are continuous micro-machined mirrors manufactured by Boston Micromachines Corporation. They have a clear aperture of 3.3 mm and are deformed with 140 actuators arranged in a square grid. Although the mirrors have an open-loop bandwidth of 6.6 KHz, their shape can be updated at a sampling rate of 100 Hz. In our optical design, the mirrors are conjugated at 0km and 10 km in the atmosphere. A planar optical layout was achieved by using four off-axis paraboloids and several folding mirrors. These optics will be mounted on two solid blocks which can be aligned with respect to each other. The wavefront path design accommodates 3 monochromatic guide stars that can be placed at either 90 km or at infinity. The design relies on the natural separation of the beam into 3 parts because of differences in locations of the guide stars in the field of view. In total four wavefront sensors will be procured from Adaptive Optics Associates (AOA) or built in-house: three for the guide stars and the fourth to collect data from the science source output in

  18. Model and simulation of Krause model in dynamic open network

    Science.gov (United States)

    Zhu, Meixia; Xie, Guangqiang

    2017-08-01

    Modeling the evolution of opinions is an effective way to reveal how group consensus forms. This study is based on the modeling paradigm of the HK (Hegselmann-Krause) model. This paper analyzes the evolution of multi-agent opinions in dynamic open networks with member mobility. The simulation results show that when the number of agents is constant, the interval over which initial opinions are distributed affects the number of final opinions: the wider the distribution of opinions, the more opinion clusters are eventually formed. The trust threshold has a decisive effect on the number of opinions, with a negative correlation between the trust threshold and the number of opinion clusters. The higher the connectivity of the initial group, the more easily opinions converge rapidly during evolution. The more open the network, the more conducive it is to unity of opinion; increasing or reducing the number of agents does not affect the consistency of the group, but it is not conducive to stability.
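
    For reference, the classical bounded-confidence update underlying the HK model can be sketched in a few lines of Python; the churn parameter below is only a rough stand-in for the "dynamic open network with member mobility" studied in the paper, whose exact mechanism is not reproduced here.

```python
import random

def hk_step(opinions, eps):
    """One synchronous Hegselmann-Krause update: each agent moves to the
    average opinion of all agents within its confidence bound eps."""
    new = []
    for x in opinions:
        neighbours = [y for y in opinions if abs(y - x) <= eps]
        new.append(sum(neighbours) / len(neighbours))
    return new

def simulate(n=100, eps=0.15, steps=50, churn=0.0, rng=random):
    """Bounded-confidence dynamics with optional 'open network' churn:
    at each step a fraction of agents leaves and is replaced by agents
    with fresh random opinions (churn=0 recovers the classical HK model)."""
    opinions = [rng.random() for _ in range(n)]
    for _ in range(steps):
        opinions = hk_step(opinions, eps)
        for _ in range(int(churn * n)):
            opinions[rng.randrange(n)] = rng.random()
    return opinions

final = simulate(eps=0.15, churn=0.02)
clusters = sorted({round(x, 2) for x in final})
print(len(clusters), clusters)
```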

  19. Single and Multiple UAV Cyber-Attack Simulation and Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Ahmad Y. Javaid

    2015-02-01

    Full Text Available Usage of ground, air and underwater unmanned vehicles (UGV, UAV and UUV) has increased exponentially in the recent past, with industries producing thousands of these unmanned vehicles every year. With the ongoing discussion of the integration of UAVs into the US National Airspace, the need for a cost-effective way to verify the security and resilience of a group of communicating UAVs under attack has become very important. The answer to this need is a simulation testbed that can be used to simulate the UAV Network (UAVNet). One such attempt is UAVSim (Unmanned Aerial Vehicle Simulation testbed), developed at the University of Toledo. It has the capability of simulating large UAV networks as well as small UAV networks with a large number of attack nodes. In this paper, we analyse the performance of the simulation testbed for two attacks, targeting single and multiple UAVs. Traditional, generic computing resources available in a regular computer laboratory were used. Various evaluation results are presented and analysed, which suggest the suitability of UAVSim for UAVNet attack and swarm simulation applications.

  20. A Network Traffic Generator Model for Fast Network-on-Chip Simulation

    DEFF Research Database (Denmark)

    Mahadevan, Shankar; Angiolini, Frederico; Storgaard, Michael

    2005-01-01

    For Systems-on-Chip (SoCs) development, a predominant part of the design time is the simulation time. Performance evaluation and design space exploration of such systems in bit- and cycle-true fashion is becoming prohibitive. We propose a traffic generation (TG) model that provides a fast and effective Network-on-Chip (NoC) development and debugging environment. By capturing the type and the timestamp of communication events at the boundary of an IP core in a reference environment, the TG can subsequently emulate the core's communication behavior in different environments. Access patterns...

  1. Network Traffic Generator Model for Fast Network-on-Chip Simulation

    DEFF Research Database (Denmark)

    Mahadevan, Shankar; Ang, Frederico; Olsen, Rasmus G.

    2008-01-01

    For Systems-on-Chip (SoCs) development, a predominant part of the design time is the simulation time. Performance evaluation and design space exploration of such systems in bit- and cycle-true fashion is becoming prohibitive. We propose a traffic generation (TG) model that provides a fast and effective Network-on-Chip (NoC) development and debugging environment. By capturing the type and the timestamp of communication events at the boundary of an IP core in a reference environment, the TG can subsequently emulate the core's communication behavior in different environments. Access patterns...
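
    The capture-and-replay idea behind such a traffic generator can be illustrated with a small Python sketch; the class below is a generic stand-in, not the cited TG model or its simulation environment, and the event names and sizes are invented.

```python
import heapq

class TrafficGenerator:
    """Illustrative trace-capture-and-replay traffic generator.  Events
    observed at an IP core's boundary are recorded as
    (timestamp, kind, payload_size) and later replayed against a different
    interconnect model, preserving the captured inter-event gaps."""
    def __init__(self):
        self.trace = []

    def capture(self, timestamp, kind, payload_size):
        heapq.heappush(self.trace, (timestamp, kind, payload_size))

    def replay(self, issue):
        """issue(kind, size, delay) models injecting the transaction into
        the NoC under test after the core's idle time 'delay'."""
        last_t = 0.0
        while self.trace:
            t, kind, size = heapq.heappop(self.trace)
            issue(kind, size, delay=t - last_t)
            last_t = t

tg = TrafficGenerator()
tg.capture(0.0, "read", 64)
tg.capture(2.5, "write", 128)
tg.replay(lambda kind, size, delay: print(f"+{delay:.1f} cycles: {kind} {size}B"))
```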

  2. Simulation technologies in networking and communications selecting the best tool for the test

    CERN Document Server

    Pathan, Al-Sakib Khan; Khan, Shafiullah

    2014-01-01

    Simulation is a widely used mechanism for validating the theoretical models of networking and communication systems. Although the claims made based on simulations are considered to be reliable, how reliable they really are is best determined with real-world implementation trials.Simulation Technologies in Networking and Communications: Selecting the Best Tool for the Test addresses the spectrum of issues regarding the different mechanisms related to simulation technologies in networking and communications fields. Focusing on the practice of simulation testing instead of the theory, it presents

  3. Using elements of game engine architecture to simulate sensor networks for eldercare.

    Science.gov (United States)

    Godsey, Chad; Skubic, Marjorie

    2009-01-01

    When dealing with a real-time sensor network, building test data with a known ground truth is a tedious and cumbersome task. In order to quickly build test data for such a network, a simulation solution is a viable option. Simulation environments have a close relationship with computer game environments, and therefore there is much to be learned from game engine design. In this paper, we present our vision for a simulated in-home sensor network and describe ongoing work on using elements of game engines for building the simulator. Validation results are included to show agreement between the simulated motion sensors and the physical environment.

  4. statnet: Software Tools for the Representation, Visualization, Analysis and Simulation of Network Data

    Directory of Open Access Journals (Sweden)

    Mark S. Handcock

    2007-12-01

    Full Text Available statnet is a suite of software packages for statistical network analysis. The packages implement recent advances in network modeling based on exponential-family random graph models (ERGM. The components of the package provide a comprehensive framework for ERGM-based network modeling, including tools for model estimation, model evaluation, model-based network simulation, and network visualization. This broad functionality is powered by a central Markov chain Monte Carlo (MCMC algorithm. The coding is optimized for speed and robustness.

  5. Designing laboratory wind simulations using artificial neural networks

    Science.gov (United States)

    Križan, Josip; Gašparac, Goran; Kozmar, Hrvoje; Antonić, Oleg; Grisogono, Branko

    2015-05-01

    While experiments in boundary layer wind tunnels remain to be a major research tool in wind engineering and environmental aerodynamics, designing the modeling hardware required for a proper atmospheric boundary layer (ABL) simulation can be costly and time consuming. Hence, possibilities are sought to speed-up this process and make it more time-efficient. In this study, two artificial neural networks (ANNs) are developed to determine an optimal design of the Counihan hardware, i.e., castellated barrier wall, vortex generators, and surface roughness, in order to simulate the ABL flow developing above urban, suburban, and rural terrains, as previous ANN models were created for one terrain type only. A standard procedure is used in developing those two ANNs in order to further enhance best-practice possibilities rather than to improve existing ANN designing methodology. In total, experimental results obtained using 23 different hardware setups are used when creating ANNs. In those tests, basic barrier height, barrier castellation height, spacing density, and height of surface roughness elements are the parameters that were varied to create satisfactory ABL simulations. The first ANN was used for the estimation of mean wind velocity, turbulent Reynolds stress, turbulence intensity, and length scales, while the second one was used for the estimation of the power spectral density of velocity fluctuations. This extensive set of studied flow and turbulence parameters is unmatched in comparison to the previous relevant studies, as it includes here turbulence intensity and power spectral density of velocity fluctuations in all three directions, as well as the Reynolds stress profiles and turbulence length scales. Modeling results agree well with experiments for all terrain types, particularly in the lower ABL within the height range of the most engineering structures, while exhibiting sensitivity to abrupt changes and data scattering in profiles of wind-tunnel results. The

  6. The Virtual Brain: a simulator of primate brain network dynamics.

    Science.gov (United States)

    Sanz Leon, Paula; Knock, Stuart A; Woodman, M Marmaduke; Domide, Lia; Mersmann, Jochen; McIntosh, Anthony R; Jirsa, Viktor

    2013-01-01

    We present The Virtual Brain (TVB), a neuroinformatics platform for full brain network simulations using biologically realistic connectivity. This simulation environment enables the model-based inference of neurophysiological mechanisms across different brain scales that underlie the generation of macroscopic neuroimaging signals including functional MRI (fMRI), EEG and MEG. Researchers from different backgrounds can benefit from an integrative software platform including a supporting framework for data management (generation, organization, storage, integration and sharing) and a simulation core written in Python. TVB allows the reproduction and evaluation of personalized configurations of the brain by using individual subject data. This personalization facilitates an exploration of the consequences of pathological changes in the system, permitting to investigate potential ways to counteract such unfavorable processes. The architecture of TVB supports interaction with MATLAB packages, for example, the well known Brain Connectivity Toolbox. TVB can be used in a client-server configuration, such that it can be remotely accessed through the Internet thanks to its web-based HTML5, JS, and WebGL graphical user interface. TVB is also accessible as a standalone cross-platform Python library and application, and users can interact with the scientific core through the scripting interface IDLE, enabling easy modeling, development and debugging of the scientific kernel. This second interface makes TVB extensible by combining it with other libraries and modules developed by the Python scientific community. In this article, we describe the theoretical background and foundations that led to the development of TVB, the architecture and features of its major software components as well as potential neuroscience applications.

  7. Evaluation Study of a Wireless Multimedia Traffic-Oriented Network Model

    Science.gov (United States)

    Vasiliadis, D. C.; Rizos, G. E.; Vassilakis, C.

    2008-11-01

    In this paper, a wireless multimedia traffic-oriented network scheme over a fourth-generation (4G) system is presented and analyzed. We conducted an extensive evaluation study for various mobility configurations in order to incorporate the behavior of the IEEE 802.11b standard over a test-bed wireless multimedia network model. In this context, the Quality of Service (QoS) over this network is vital for providing a reliable high-bandwidth platform for data-intensive sources like video streaming. Therefore, the main QoS metrics of concern were the bandwidth of both dropped and lost packets and their mean packet delay under various traffic conditions. Finally, we used a generic distance-vector routing protocol based on an implementation of the distributed Bellman-Ford algorithm. The performance of the test-bed network model has been evaluated using the NS-2 simulation environment.
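
    For context, the distributed Bellman-Ford computation that underlies distance-vector routing can be sketched as follows; this is a generic illustration with an invented four-node topology, not the protocol implementation evaluated in the paper.

```python
# Minimal sketch of distance-vector routing via distributed Bellman-Ford.
INF = float("inf")

def distance_vector(nodes, links, max_rounds=100):
    """links: dict mapping (u, v) -> cost for bidirectional links.
    Each node keeps a distance table to every destination and repeatedly
    relaxes it via its neighbours' tables until nothing changes."""
    neigh = {n: {} for n in nodes}
    for (u, v), c in links.items():
        neigh[u][v] = c
        neigh[v][u] = c
    dist = {n: {d: (0 if d == n else INF) for d in nodes} for n in nodes}
    for _ in range(max_rounds):
        changed = False
        for u in nodes:
            for v, cost_uv in neigh[u].items():
                for d in nodes:
                    cand = cost_uv + dist[v][d]   # route to d via neighbour v
                    if cand < dist[u][d]:
                        dist[u][d] = cand
                        changed = True
        if not changed:
            break
    return dist

tables = distance_vector(["A", "B", "C", "D"],
                         {("A", "B"): 1, ("B", "C"): 2, ("C", "D"): 1, ("A", "D"): 5})
print(tables["A"])   # shortest costs from A to every node
```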

  8. Data Distribution Service-Based Interoperability Framework for Smart Grid Testbed Infrastructure

    Directory of Open Access Journals (Sweden)

    Tarek A. Youssef

    2016-03-01

    Full Text Available This paper presents the design and implementation of a communication and control infrastructure for smart grid operation. The proposed infrastructure enhances the reliability of the measurements and control network. The advantages of utilizing the data-centric over message-centric communication approach are discussed in the context of smart grid applications. The data distribution service (DDS) is used to implement a data-centric common data bus for the smart grid. This common data bus improves the communication reliability, enabling distributed control and smart load management. These enhancements are achieved by avoiding a single point of failure while enabling peer-to-peer communication and an automatic discovery feature for dynamic participating nodes. The infrastructure and ideas presented in this paper were implemented and tested on the smart grid testbed. A toolbox and application programming interface for the testbed infrastructure are developed in order to facilitate interoperability and remote access to the testbed. This interface allows experiments to be controlled, monitored, and performed remotely. Furthermore, it could be used to integrate multidisciplinary testbeds to study complex cyber-physical systems (CPS).

  9. Simulation Neurotechnologies for Advancing Brain Research: Parallelizing Large Networks in NEURON.

    Science.gov (United States)

    Lytton, William W; Seidenstein, Alexandra H; Dura-Bernal, Salvador; McDougal, Robert A; Schürmann, Felix; Hines, Michael L

    2016-10-01

    Large multiscale neuronal network simulations are of increasing value as more big data are gathered about brain wiring and organization under the auspices of a current major research initiative, such as Brain Research through Advancing Innovative Neurotechnologies. The development of these models requires new simulation technologies. We describe here the current use of the NEURON simulator with message passing interface (MPI) for simulation in the domain of moderately large networks on commonly available high-performance computers (HPCs). We discuss the basic layout of such simulations, including the methods of simulation setup, the run-time spike-passing paradigm, and postsimulation data storage and data management approaches. Using the Neuroscience Gateway, a portal for computational neuroscience that provides access to large HPCs, we benchmark simulations of neuronal networks of different sizes (500-100,000 cells), and using different numbers of nodes (1-256). We compare three types of networks, composed of either Izhikevich integrate-and-fire neurons (I&F), single-compartment Hodgkin-Huxley (HH) cells, or a hybrid network with half of each. Results show simulation run time increased approximately linearly with network size and decreased almost linearly with the number of nodes. Networks with I&F neurons were faster than HH networks, although differences were small since all tested cells were point neurons with a single compartment.
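
    The run-time setup described above follows NEURON's standard ParallelContext pattern; a minimal sketch of round-robin cell distribution and parallel spike recording is shown below, with an invented cell count, no synaptic wiring and no stimulus, so it illustrates the setup style rather than the benchmarked networks.

```python
from neuron import h   # run under MPI, e.g. mpiexec -n 4 python this_script.py
h.load_file("stdrun.hoc")

pc = h.ParallelContext()
rank, nhost = int(pc.id()), int(pc.nhost())
NCELL = 1000                               # invented size, for illustration only

cells = []
for gid in range(rank, NCELL, nhost):      # round-robin distribution of gids
    sec = h.Section(name=f"cell_{gid}")
    sec.L = sec.diam = 20
    sec.insert("hh")                       # single-compartment Hodgkin-Huxley cell
    pc.set_gid2node(gid, rank)             # this rank owns this gid
    nc = h.NetCon(sec(0.5)._ref_v, None, sec=sec)
    nc.threshold = 10
    pc.cell(gid, nc)                       # register the cell's spike source
    cells.append(sec)

# record all spikes generated on this rank
tvec, idvec = h.Vector(), h.Vector()
pc.spike_record(-1, tvec, idvec)

pc.set_maxstep(10)                         # maximum MPI spike-exchange interval (ms)
h.finitialize(-65)
pc.psolve(100)                             # integrate 100 ms, exchanging spikes via MPI

print(f"rank {rank}/{nhost}: {len(cells)} cells, {int(tvec.size())} spikes")
pc.barrier()
pc.done()
```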

  10. Discrimination of Cylinders with Different Wall Thicknesses using Neural Networks and Simulated Dolphin Sonar Signals

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Au, Whitlow; Larsen, Jan

    1999-01-01

    This paper describes a method integrating neural networks into a system for recognizing underwater objects. The system is based on a combination of simulated dolphin sonar signals, simulated auditory filters and artificial neural networks. The system is tested on a cylinder wall thickness...

  11. Variable Dynamic Testbed Vehicle: Dynamics Analysis

    Science.gov (United States)

    Lee, A. Y.; Le, N. T.; Marriott, A. T.

    1997-01-01

    The Variable Dynamic Testbed Vehicle (VDTV) concept has been proposed as a tool to evaluate collision avoidance systems and to perform driving-related human factors research. The goal of this study is to analytically investigate to what extent a VDTV with adjustable front and rear anti-roll bar stiffnesses, programmable damping rates, and four-wheel-steering can emulate the lateral dynamics of a broad range of passenger vehicles.

  12. Unified Approach to Modeling and Simulation of Space Communication Networks and Systems

    Science.gov (United States)

    Barritt, Brian; Bhasin, Kul; Eddy, Wesley; Matthews, Seth

    2010-01-01

    Network simulator software tools are often used to model the behaviors and interactions of applications, protocols, packets, and data links in terrestrial communication networks. Other software tools that model the physics, orbital dynamics, and RF characteristics of space systems have matured to allow for rapid, detailed analysis of space communication links. However, the absence of a unified toolset that integrates the two modeling approaches has encumbered the systems engineers tasked with the design, architecture, and analysis of complex space communication networks and systems. This paper presents the unified approach and describes the motivation, challenges, and our solution - the customization of the network simulator to integrate with astronautical analysis software tools for high-fidelity end-to-end simulation. Keywords: space; communication; systems; networking; simulation; modeling; QualNet; STK; integration; space networks.

  13. Understanding the Dynamics of MOOC Discussion Forums with Simulation Investigation for Empirical Network Analysis (SIENA)

    Science.gov (United States)

    Zhang, Jingjing; Skryabin, Maxim; Song, Xiongwei

    2016-01-01

    This study attempts to make inferences about the mechanisms that drive network change over time. It adopts simulation investigation for empirical network analysis to examine the patterns and evolution of relationships formed in the context of a massive open online course (MOOC) discussion forum. Four network effects--"homophily,"…

  14. Linking Simulation with Formal Verification and Modeling of Wireless Sensor Network in TLA+

    Science.gov (United States)

    Martyna, Jerzy

    In this paper, we present the results of the simulation of a wireless sensor network based on the flooding technique and SPIN protocols. The wireless sensor network was specified and verified by means of the TLA+ specification language [1]. For a model of the wireless sensor network built this way, simulation was carried out with the help of specially constructed software tools. The obtained results allow us to predict the behaviour of the wireless sensor network in various topologies and spatial densities. Visualization of the output data enables precise examination of some phenomena in wireless sensor networks, such as the hidden terminal problem.
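
    The flooding technique itself is simple enough to sketch outside TLA+; the Python fragment below is an illustrative model of duplicate-suppressed rebroadcasting on an arbitrary topology, not the verified specification from the paper.

```python
from collections import deque

def flood(adjacency, source):
    """Illustrative flooding: every node rebroadcasts a message the first
    time it receives it; a 'seen' set suppresses duplicate rebroadcasts.
    adjacency: dict node -> list of neighbour nodes."""
    seen = {source}
    transmissions = 0
    queue = deque([source])
    while queue:
        node = queue.popleft()
        transmissions += 1                 # node broadcasts once
        for nbr in adjacency[node]:
            if nbr not in seen:            # first reception -> will rebroadcast
                seen.add(nbr)
                queue.append(nbr)
    return seen, transmissions

net = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
reached, tx = flood(net, source=0)
print(f"{len(reached)} nodes reached with {tx} broadcasts")
```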

  15. Modelling Altitude Information in Two-Dimensional Traffic Networks for Electric Mobility Simulation

    Directory of Open Access Journals (Sweden)

    Diogo Santos

    2016-06-01

    Full Text Available Elevation data is important for electric vehicle simulation. However, traffic simulators are often two-dimensional and do not offer the capability of modelling urban networks taking elevation into account. Specifically, SUMO - Simulation of Urban Mobility, a popular microscopic traffic simulator, relies on networks previously modelled with elevation data as to provide this information during simulations. This work tackles the problem of adding elevation data to urban network models - particularly for the case of the Porto urban network, in Portugal. With this goal in mind, a comparison between different altitude information retrieval approaches is made and a simple tool to annotate network models with altitude data is proposed. The work starts by describing the methodological approach followed during research and development, then describing and analysing its main findings. This description includes an in-depth explanation of the proposed tool. Lastly, this work reviews some related work to the subject.
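
    A minimal sketch of the kind of annotation step the paper proposes is shown below, assuming a SUMO plain-XML node file and a hypothetical lookup_elevation function standing in for whatever elevation source (DEM raster, web service, survey data) is actually used; file names are placeholders.

```python
# Illustrative sketch (not the tool proposed in the paper): annotate a SUMO
# plain-XML node file with z coordinates taken from an elevation lookup.
import xml.etree.ElementTree as ET

def lookup_elevation(x, y):
    # Hypothetical placeholder: a constant grade along x instead of real data.
    return 0.02 * x

def annotate_nodes(infile, outfile):
    tree = ET.parse(infile)                    # e.g. "porto.nod.xml"
    for node in tree.getroot().iter("node"):
        x = float(node.get("x"))
        y = float(node.get("y"))
        node.set("z", f"{lookup_elevation(x, y):.2f}")
    tree.write(outfile)

# annotate_nodes("porto.nod.xml", "porto_with_z.nod.xml")
```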

  16. Molecular Dynamics Simulations of Polymer Networks Undergoing Sequential Cross-Linking and Scission Reactions

    DEFF Research Database (Denmark)

    Rottach, Dana R.; Curro, John G.; Budzien, Joanne

    2007-01-01

    The effects of sequential cross-linking and scission of polymer networks formed in two states of strain are investigated using molecular dynamics simulations. Two-stage networks are studied in which a network formed in the unstrained state (stage 1) undergoes additional cross-linking in a uniaxia......, a fraction (quantified by the stress transfer function) of the second-stage cross-links contribute to the effective first-stage cross-link density. The stress transfer functions extracted from the MD simulations of the reacting networks are found to be in very...

  17. Test-bed Assessment of Communication Technologies for a Power-Balancing Controller

    DEFF Research Database (Denmark)

    Findrik, Mislav; Pedersen, Rasmus; Hasenleithner, Eduard

    2016-01-01

    Due to the growing need for sustainable energy, an increasing number of different renewable energy resources are being connected to distribution grids. In order to efficiently manage decentralized power generation units, the smart grid will rely on communication networks for information exchange and control. In this paper, we present a Smart Grid test-bed that integrates various communication technologies and deploys a power balancing controller for LV grids. Control performance of the introduced power balancing controller is subsequently investigated and its robustness to communication network cross-traffic is evaluated. Various scenarios are demonstrated, assessing the impact of communication network performance on the quality of control.

  18. Hybrid Multilevel Monte Carlo Simulation of Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro

    2015-01-07

    Stochastic reaction networks (SRNs) are a class of continuous-time Markov chains intended to describe, from the kinetic point of view, the time-evolution of chemical systems in which molecules of different chemical species undergo a finite set of reaction channels. This talk is based on articles [4, 5, 6], where we are interested in the following problem: given an SRN, X, defined through its set of reaction channels and its initial state, x0, estimate E(g(X(T))); that is, the expected value of a scalar observable, g, of the process, X, at a fixed time, T. This problem leads us to define a series of Monte Carlo estimators, M, that with high probability can produce values close to the quantity of interest, E(g(X(T))). More specifically, given a user-selected tolerance, TOL, and a small confidence level, η, find an estimator, M, based on approximate sampled paths of X, such that P(|E(g(X(T))) − M| ≤ TOL) ≥ 1 − η; even more, we want to achieve this objective with near-optimal computational work. We first introduce a hybrid path-simulation scheme based on the well-known stochastic simulation algorithm (SSA) [3] and the tau-leap method [2]. Then, we introduce a Multilevel Monte Carlo strategy that allows us to achieve a computational complexity of order O(TOL^−2); this is the same computational complexity as in an exact method but with a smaller constant. We provide numerical examples to show our results.
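
    As a point of reference for the hybrid scheme described above, the exact stochastic simulation algorithm (SSA) that it builds on can be sketched in a few lines of Python. The reaction set, rate constants, and function names below are illustrative placeholders, not taken from the cited articles; the paper's contribution lies in combining SSA with tau-leaping inside a multilevel Monte Carlo estimator, which is not reproduced here.

        import random

        def ssa(x0, stoich, rates, t_max):
            """Exact SSA: x0 = initial copy numbers, stoich = state-change vectors,
            rates(x) = propensity of each reaction channel in state x."""
            x, t = list(x0), 0.0
            while True:
                a = rates(x)
                a0 = sum(a)
                if a0 == 0.0:                      # no reaction can fire any more
                    break
                tau = random.expovariate(a0)       # waiting time to the next reaction
                if t + tau > t_max:
                    break
                t += tau
                r, j, acc = random.uniform(0.0, a0), 0, a[0]
                while acc < r:                     # pick channel j with probability a_j / a0
                    j += 1
                    acc += a[j]
                x = [xi + d for xi, d in zip(x, stoich[j])]
            return x

        # Toy birth-death network: 0 -> S at rate k1, S -> 0 at rate k2 * #S
        k1, k2 = 1.0, 0.1
        print(ssa([0], stoich=[[+1], [-1]], rates=lambda x: [k1, k2 * x[0]], t_max=100.0))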

  19. CMS Test of the European DataGrid Testbed

    CERN Document Server

    Biasotto, Massimo; Capiluppi, Paolo; Charlot, Claude; Colling, David; MacEvoy, Barry C; Tallini, Hugh; Corvo, Marco; Fanzago, Federica; Verlato, Marco; Fanfani, Alessandra; Fantinel, Sergio; Gaillac, Anne-Marie; Grandi, Claudio; Augustin, I; Lefébure, Véronique; Stockinger, Heinz; Maroney, Owen; Nebrensky, H; Semeniouk, Igor; Blaising, J J; Burke, Samuel; Chierici, A; Cavalli, A; Ciaschini, V; Field, L; Groep, D; Hernández, F; Italiano, A; Kunszt, Peter Z; Lajili, N; Laure, Erwin; Leonardi, Emanuele; Loomis, C; Prelz, F; Reale, M; Schulz, M; Sciabà, Andrea; Sgaravatto, Massimo; Templon, J A; Tortone, G

    2003-01-01

    Starting in the middle of November 2002, the CMS experiment undertook an evaluation of the European DataGrid Project (EDG) middleware using its event simulation programs. A joint CMS-EDG task force performed a "stress test" by submitting a large number of jobs to many distributed sites. The EDG testbed was complemented with additional CMS-dedicated resources. A total of ~10000 jobs consisting of two different computational types were submitted from four different locations in Europe over a period of about one month. Nine sites were active, providing integrated resources of more than 500 CPUs and about 5 TB of disk space (with the additional use of two Mass Storage Systems). Detailed descriptions of the adopted procedures, the problems encountered and the corresponding solutions are reported. Results and evaluations of the test, both from the CMS and the EDG perspectives, are described (Version 2).

  20. Modular, Rapid Propellant Loading System/Cryogenic Testbed

    Science.gov (United States)

    Hatfield, Walter, Sr.; Jumper, Kevin

    2012-01-01

    The Cryogenic Test Laboratory (CTL) at Kennedy Space Center (KSC) has designed, fabricated, and installed a modular, rapid propellant-loading system to simulate rapid loading of a launch-vehicle composite or standard cryogenic tank. The system will also function as a cryogenic testbed for testing and validating cryogenic innovations and ground support equipment (GSE) components. The modular skid-mounted system is capable of flow rates of liquid nitrogen from 1 to 900 gpm (approx equals 3.8 to 3,400 L/min), of pressures from ambient to 225 psig (approx equals 1.5 MPa), and of temperatures to -320 F (approx equals -195 C). The system can be easily validated to flow liquid oxygen at a different location, and could be easily scaled to any particular vehicle interface requirements

  1. Queueing Models and Stability of Message Flows in Distributed Simulators of Open Queueing Networks

    OpenAIRE

    Gupta, Manish; Kumar, Anurag; Shorey, Rajeev

    1996-01-01

    In this paper we study message flow processes in distributed simulators of open queueing networks. We develop and study queueing models for distributed simulators with maximum lookahead sequencing. We characterize the external arrival process, and the message feedback process in the simulator of a simple queueing network with feedback. We show that a certain natural modelling construct for the arrival process is exactly correct, whereas an obvious model for the feedback process is wrong; we t...

  2. Testbed for remote telepresence research

    Science.gov (United States)

    Adnan, Sarmad; Cheatham, John B.

    1992-01-01

    The paper describes the design and implementation of a telepresence system, as well as its control hardware and software. The mobile omnidirectional robot has three independent degrees of freedom that permit independent control of translation and rotation, thereby simulating a free-flying robot in a plane. The kinematically redundant robot arm has eight degrees of freedom that assist in obstacle and singularity avoidance. The on-board control computers permit control of the robot from the dual hand controllers via a radio modem system. A head-mounted display system provides the user with a stereo view from a pair of cameras attached to the mobile robotics system. The head tracking camera system moves stereo cameras mounted on a three-DOF platform to coordinate with the operator's head movements. This telepresence system provides a framework for research in remote telepresence, and teleoperations for space.

  3. Imagining the future: The core episodic simulation network dissociates as a function of timecourse and the amount of simulated information.

    Science.gov (United States)

    Thakral, Preston P; Benoit, Roland G; Schacter, Daniel L

    2017-05-01

    Neuroimaging data indicate that episodic memory (i.e., remembering specific past experiences) and episodic simulation (i.e., imagining specific future experiences) are associated with enhanced activity in a common set of neural regions, often referred to as the core network. This network comprises the hippocampus, parahippocampal cortex, lateral and medial parietal cortex, lateral temporal cortex, and medial prefrontal cortex. Evidence for a core network has been taken as support for the idea that episodic memory and episodic simulation are supported by common processes. Much remains to be learned about how specific core network regions contribute to specific aspects of episodic simulation. Prior neuroimaging studies of episodic memory indicate that certain regions within the core network are differentially sensitive to the amount of information recollected (e.g., the left lateral parietal cortex). In addition, certain core network regions dissociate as a function of their timecourse of engagement during episodic memory (e.g., transient activity in the posterior hippocampus and sustained activity in the left lateral parietal cortex). In the current study, we assessed whether similar dissociations could be observed during episodic simulation. We found that the left lateral parietal cortex modulates as a function of the amount of simulated details. Of particular interest, while the hippocampus was insensitive to the amount of simulated details, we observed a temporal dissociation within the hippocampus: transient activity occurred in relatively posterior portions of the hippocampus and sustained activity occurred in anterior portions. Because the posterior hippocampal and lateral parietal findings parallel those observed during episodic memory, the present results add to the evidence that episodic memory and episodic simulation are supported by common processes. Critically, the present study also provides evidence that regions within the core network support

  4. A configurable simulation environment for the efficient simulation of large-scale spiking neural networks on graphics processors.

    Science.gov (United States)

    Nageswaran, Jayram Moorkanikara; Dutt, Nikil; Krichmar, Jeffrey L; Nicolau, Alex; Veidenbaum, Alexander V

    2009-01-01

    Neural network simulators that take into account the spiking behavior of neurons are useful for studying brain mechanisms and for various neural engineering applications. Spiking Neural Network (SNN) simulators have been traditionally simulated on large-scale clusters, super-computers, or on dedicated hardware architectures. Alternatively, Compute Unified Device Architecture (CUDA) Graphics Processing Units (GPUs) can provide a low-cost, programmable, and high-performance computing platform for simulation of SNNs. In this paper we demonstrate an efficient, biologically realistic, large-scale SNN simulator that runs on a single GPU. The SNN model includes Izhikevich spiking neurons, detailed models of synaptic plasticity and variable axonal delay. We allow user-defined configuration of the GPU-SNN model by means of a high-level programming interface written in C++ but similar to the PyNN programming interface specification. PyNN is a common programming interface developed by the neuronal simulation community to allow a single script to run on various simulators. The GPU implementation (on NVIDIA GTX-280 with 1 GB of memory) is up to 26 times faster than a CPU version for the simulation of 100K neurons with 50 Million synaptic connections, firing at an average rate of 7 Hz. For simulation of 10 Million synaptic connections and 100K neurons, the GPU SNN model is only 1.5 times slower than real-time. Further, we present a collection of new techniques related to parallelism extraction, mapping of irregular communication, and network representation for effective simulation of SNNs on GPUs. The fidelity of the simulation results was validated on CPU simulations using firing rate, synaptic weight distribution, and inter-spike interval analysis. Our simulator is publicly available to the modeling community so that researchers will have easy access to large-scale SNN simulations.
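
    For orientation, the per-neuron state update that such simulators execute in parallel on the GPU can be sketched on the CPU with NumPy as below. The regular-spiking parameter values, the 1 ms Euler step, and the noisy input current are standard textbook choices used here for illustration, not settings from the cited simulator.

        import numpy as np

        def izhikevich_step(v, u, I, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
            """One Euler step of the Izhikevich model for a vector of neurons.
            v: membrane potential (mV), u: recovery variable, I: input current."""
            fired = v >= 30.0                      # spike threshold reached in the last step
            v = np.where(fired, c, v)              # reset membrane potential after a spike
            u = np.where(fired, u + d, u)
            dv = 0.04 * v**2 + 5.0 * v + 140.0 - u + I
            du = a * (b * v - u)
            return v + dt * dv, u + dt * du, fired

        # 100k neurons driven by noisy input for one second of model time
        n = 100_000
        v = np.full(n, -65.0)
        u = 0.2 * v
        for _ in range(1000):
            v, u, fired = izhikevich_step(v, u, I=5.0 * np.random.randn(n))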

  5. Modeling and Simulation of Handover Scheme in Integrated EPON-WiMAX Networks

    DEFF Research Database (Denmark)

    Yan, Ying; Dittmann, Lars

    2011-01-01

    In this paper, we tackle the seamless handover problem in integrated optical-wireless networks. Our model applies to the convergence network of EPON and WiMAX, and a mobility-aware signaling protocol is proposed. The proposed handover scheme, the Integrated Mobility Management Scheme (IMMS), is assisted by enhancing the traditional MPCP signaling protocol, which cooperatively collects mobility information from the front-end wireless network and makes centralized bandwidth allocation decisions in the backhaul optical network. The integrated network architecture and the joint handover scheme are simulated using the OPNET modeler. Results show validation of the protocol, i.e., the integrated handover scheme gains better network performance.

  6. Stochastic simulation of HIV population dynamics through complex network modelling

    NARCIS (Netherlands)

    Sloot, P. M. A.; Ivanov, S. V.; Boukhanovsky, A. V.; van de Vijver, D. A. M. C.; Boucher, C. A. B.

    We propose a new way to model HIV infection spreading through the use of dynamic complex networks. The heterogeneous population of HIV exposure groups is described through a unique network degree probability distribution. The time evolution of the network nodes is modelled by a Markov process and

  7. Stochastic simulation of HIV population dynamics through complex network modelling

    NARCIS (Netherlands)

    Sloot, P.M.A.; Ivanov, S.V.; Boukhanovsky, A.V.; van de Vijver, D.A.M.C.; Boucher, C.A.B.

    2008-01-01

    We propose a new way to model HIV infection spreading through the use of dynamic complex networks. The heterogeneous population of HIV exposure groups is described through a unique network degree probability distribution. The time evolution of the network nodes is modelled by a Markov process and

  8. Validation of Mobility Simulations via Measurement Drive Tests in an Operational Network

    DEFF Research Database (Denmark)

    Gimenez, Lucas Chavarria; Barbera, Simone; Polignano, Michele

    2015-01-01

    Simulations play a key role in validating new concepts in cellular networks, since most of the features proposed and introduced into the standards are typically first studied by means of simulations. In order to increase the trustworthiness of the simulation results, proper models and settings must … to reality. The presented study is based on drive test measurements and explicit simulations of an operator network in the city of Aalborg (Denmark) – modelling a real 3D environment and using a commonly accepted dynamic system level simulation methodology. In short, the presented results show…

  9. Application of the Semi-Empirical Force-Limiting Approach for the CoNNeCT SCAN Testbed

    Science.gov (United States)

    Staab, Lucas D.; McNelis, Mark E.; Akers, James C.; Suarez, Vicente J.; Jones, Trevor M.

    2012-01-01

    The semi-empirical force-limiting vibration method was developed and implemented for payload testing to limit the structural impedance mismatch (high force) that occurs during shaker vibration testing. The method has since been extended for use in analytical models. The Space Communications and Navigation Testbed (SCAN Testbed) project, known at NASA as the Communications, Navigation, and Networking re-Configurable Testbed (CoNNeCT), utilized force-limiting testing and analysis following the semi-empirical approach. This paper presents the steps in performing a force-limiting analysis and then compares the results to test data recovered during the CoNNeCT force-limiting random vibration qualification test that took place at NASA Glenn Research Center (GRC) in the Structural Dynamics Laboratory (SDL) December 19, 2010 to January 7, 2011. A compilation of lessons learned and considerations for future force-limiting tests is also included.
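
    For readers unfamiliar with the approach, the semi-empirical force limit is commonly written in the following generic form (a sketch of the method only; the constant C, the break frequency f_0, and the roll-off exponent used for the CoNNeCT test are not given in this abstract and are not implied here):

        S_{FF}(f) = C^2 M_0^2 \, S_{AA}(f), \qquad f \le f_0
        S_{FF}(f) = C^2 M_0^2 \, S_{AA}(f) \left(\tfrac{f_0}{f}\right)^{2n}, \qquad f > f_0

    Here S_AA is the input acceleration specification, S_FF the resulting interface force specification, M_0 the total mass of the test article, f_0 its fundamental resonance in the test axis, C a dimensionless constant chosen from past flight and test data, and n a positive roll-off exponent; conventions for the roll-off above f_0 vary between programs.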

  10. How Crime Spreads Through Imitation in Social Networks: A Simulation Model

    Science.gov (United States)

    Punzo, Valentina

    In this chapter an agent-based model for investigating how crime spreads through social networks is presented. Some theoretical issues related to the sociological explanation of crime are tested through simulation. The agent-based simulation allows us to investigate the relative impact of some mechanisms of social influence on crime, within a set of controlled simulated experiments.

  11. Experimental Evaluation of Simulation Abstractions for Wireless Sensor Network MAC Protocols

    NARCIS (Netherlands)

    Halkes, G.P.; Langendoen, K.G.

    2010-01-01

    The evaluation of MAC protocols for Wireless Sensor Networks (WSNs) is often performed through simulation. These simulations necessarily abstract away from reality in many ways. However, the impact of these abstractions on the results of the simulations has received only limited attention. Moreover,

  12. Hybrid Network Simulation for the ATLAS Trigger and Data Acquisition (TDAQ) System

    CERN Document Server

    Bonaventura, Matias Alejandro; The ATLAS collaboration; Castro, Rodrigo Daniel; Foguelman, Daniel Jacob

    2015-01-01

    The poster shows the ongoing research in the ATLAS TDAQ group in collaboration with the University of Buenos Aires in the area of hybrid data network simulations. The Data Network and Processing Cluster filters data in real-time, achieving a rejection factor in the order of 40000x, and has real-time latency constraints. The dataflow between the processing units (TPUs) and the Readout System (ROS) presents a "TCP Incast"-type network pathology which TCP cannot handle efficiently. A credit system is in place which limits the rate of queries and reduces latency. This large computer network and its complex dataflow have been modelled and simulated using PowerDEVS, a DEVS-based simulator. The simulation has been validated and used to produce what-if scenarios in the real network. Network Simulation with Hybrid Flows: speedups and accuracy combined. • For intensive network traffic, discrete event simulation models (packet-level granularity) soon become prohibitive: too high computing demands. • Fluid flow simul…

  13. An Aircraft Electric Power Testbed for Validating Automatically Synthesized Reactive Control Protocols

    Science.gov (United States)

    2013-01-01

    We describe our recently developed simulation models and a hardware testbed for validating reactive controllers synthesized using TuLiP [1], a … temporal logic planning toolbox, in order to investigate the validity of the assumptions made in controller synthesis. TuLiP is a collection of Python … TuLiP can be used to synthesize logic so that the satisfaction of certain safety requirements is guaranteed. The synthesized logic enables the contac…

  14. Designing and Building SDN Testbeds for Energy-Aware Traffic Engineering Services

    OpenAIRE

    Dias De Assuncao, Marcos; Carpa, Radu; Lefèvre, Laurent; Glück, Olivier; Borylo, Piotr; Lason, Artur; Szymanski, Andrzej; Rzepka, Michal

    2017-01-01

    As experimenting with energy-aware techniques on large-scale production infrastructure is prohibitive, a large number of proposed traffic-engineering strategies have been evaluated only using discrete-event simulations. The present work discusses (i) challenges towards building testbeds that allow researchers and practitioners to validate and evaluate the performance and quality of energy-aware traffic-engineering strategies, (ii) requirements to fulfill when porting...

  15. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator.

    Science.gov (United States)

    Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus

    2017-01-01

    Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.

  16. The layered sensing operations center: a modeling and simulation approach to developing complex ISR networks

    Science.gov (United States)

    Curtis, Christopher; Lenzo, Matthew; McClure, Matthew; Preiss, Bruce

    2010-04-01

    In order to anticipate the constantly changing landscape of global warfare, the United States Air Force must acquire new capabilities in the field of Intelligence, Surveillance, and Reconnaissance (ISR). To meet this challenge, the Air Force Research Laboratory (AFRL) is developing a unifying construct of "Layered Sensing" which will provide military decision-makers at all levels with the timely, actionable, and trusted information necessary for complete battlespace awareness. Layered Sensing is characterized by the appropriate combination of sensors and platforms (including those for persistent sensing), infrastructure, and exploitation capabilities to enable this synergistic awareness. To achieve the Layered Sensing vision, AFRL is pursuing a Modeling & Simulation (M&S) strategy through the Layered Sensing Operations Center (LSOC). An experimental ISR system-of-systems test-bed, the LSOC integrates DoD standard simulation tools with commercial, off-the-shelf video game technology for rapid scenario development and visualization. These tools will help facilitate sensor management performance characterization, system development, and operator behavioral analysis. Flexible and cost-effective, the LSOC will implement a non-proprietary, open-architecture framework with well-defined interfaces. This framework will incentivize the transition of current ISR performance models to service-oriented software design for maximum re-use and consistency. This paper will present the LSOC's development and implementation thus far as well as a summary of lessons learned and future plans for the LSOC.

  17. On the Simulation-Based Reliability of Complex Emergency Logistics Networks in Post-Accident Rescues.

    Science.gov (United States)

    Wang, Wei; Huang, Li; Liang, Xuedong

    2018-01-06

    This paper investigates the reliability of complex emergency logistics networks, as reliability is crucial to reducing environmental and public health losses in post-accident emergency rescues. Such networks' statistical characteristics are analyzed first. After the connected reliability and evaluation indices for complex emergency logistics networks are effectively defined, simulation analyses of network reliability are conducted under two different attack modes using a particular emergency logistics network as an example. The simulation analyses obtain the varying trends in emergency supply times and the ratio of effective nodes and validate the effects of network characteristics and different types of attacks on network reliability. The results demonstrate that this emergency logistics network is both a small-world and a scale-free network. When facing random attacks, the emergency logistics network steadily changes, whereas it is very fragile when facing selective attacks. Therefore, special attention should be paid to the protection of supply nodes and nodes with high connectivity. The simulation method provides a new tool for studying emergency logistics networks and a reference for similar studies.

  18. The role of simulation in the design of a neural network chip

    Science.gov (United States)

    Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.

    1993-01-01

    An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.

  19. Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods

    Science.gov (United States)

    Chamis, Christos C.; Coroneos, Rula M.

    2007-01-01

    Simulation of divot weight in the insulating foam associated with the external tank of the U.S. space shuttle has been evaluated using least squares and neural network concepts. The simulation required models based on fundamental considerations that can be used to predict under what conditions voids form, the size of the voids, and subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both the linear least squares method and the nonlinear neural network predicted identical results.

  20. MRMS Experimental Testbed for Operational Products (METOP)

    Science.gov (United States)

    Zhang, J.

    2016-12-01

    Accurate high-resolution quantitative precipitation estimation (QPE) at the continental scale is of critical importance to the nation's weather, water and climate services. To address this need, a Multi-Radar Multi-Sensor (MRMS) system was developed at the National Severe Storms Lab of the National Oceanic and Atmospheric Administration that integrates radar, gauge, model and satellite data and provides a suite of QPE products at 1-km and 2-min resolution. The MRMS system consists of three components: 1) an operational system; 2) a real-time research system; and 3) an archive testbed. The operational system currently provides instantaneous precipitation rate, type and 1- to 72-hr accumulations for the conterminous United States and southern Canada. The research system has a similar hardware infrastructure and data environment to the operational system, but runs newer and more advanced algorithms. The newer algorithms are tested on the research system for robustness and computational efficiency in a pseudo-operational environment before they are transitioned into operations. The archive testbed, also called the MRMS Experimental Testbed for Operational Products (METOP), consists of a large database that encompasses a wide range of hydroclimatological and geographical regimes. METOP is for the testing and refinement of the most advanced radar QPE techniques, which are often developed on specific data from limited times and locations. The archive data includes quality-controlled in-situ observations for the validation of the new radar QPE across all seasons and geographic regions. A number of operational QPE products derived from different sensors/models are also included in METOP for the fusion of multiple sources of complementary precipitation information. This paper is an introduction to the METOP system.

  1. Demo III: Department of Defense testbed for unmanned ground mobility

    Science.gov (United States)

    Shoemaker, Chuck M.; Bornstein, Jonathan A.; Myers, Scott D.; Brendle, Bruce E., Jr.

    1999-07-01

    Robotics has been identified by numerous recent Department of Defense (DOD) studies as a key enabling technology for future military operational concepts. The Demo III Program is a multiyear effort encompassing technology development and demonstration on testbed platforms, together with modeling, simulation, and experimentation directed toward optimization of operational concepts to employ this technology. The primary program focus is the advancement of capabilities for autonomous mobility through unstructured environments, concentrating on both perception and intelligent control technology. The scout mission will provide the military operational context for demonstration of this technology, although a significant emphasis is being placed upon both hardware and software modularity to permit rapid extension to other military missions. The Experimental Unmanned Vehicle (XUV) is a small (approximately 1150 kg, V-22 transportable) technology testbed vehicle designed for experimentation with multiple military operational concepts. Currently under development, the XUV is scheduled for roll-out in Summer 1999, with an initial troop experimentation to be conducted in September 1999. Though small and relatively lightweight, the chassis has been shown by modeling to be capable of automotive mobility comparable to the current Army lightweight high-mobility, multipurpose, wheeled vehicle (HMMWV). The XUV design couples multisensor perception with intelligent control to permit autonomous cross-country navigation at speeds of up to 32 kph during daylight and 16 kph during hours of darkness. A small, lightweight, highly capable user interface will permit intuitive control of the XUV by troops from current-generation tactical vehicles. When it concludes in 2002, Demo III will provide the military with both the technology and the initial experience required to develop and field the first generation of semi-autonomous tactical ground vehicles for combat, combat support, and logistics applications.

  2. Application developer's tutorial for the CSM testbed architecture

    Science.gov (United States)

    Underwood, Phillip; Felippa, Carlos A.

    1988-01-01

    This tutorial serves as an illustration of the use of the programmer interface on the CSM Testbed Architecture (NICE). It presents a complete, but simple, introduction to using both the GAL-DBM (Global Access Library-Database Manager) and CLIP (Command Language Interface Program) to write a NICE processor. Familiarity with the CSM Testbed architecture is required.

  3. Credibility and validation of simulation models for tactical IP networks

    NARCIS (Netherlands)

    Boltjes, B.; Thiele, F.; Diaz, I.F.

    2007-01-01

    The task of TNO is to provide predictions of the scalability and performance of the new all-IP tactical networks of the Royal Netherlands Army (RNLA) that are likely to be fielded. The inherent properties of fielded tactical networks, such as low bandwidth and Quality of Service (QoS) policies

  4. Evaluation and Simulation of Common Video Conference Traffics in Communication Networks

    Directory of Open Access Journals (Sweden)

    Farhad faghani

    2014-01-01

    Multimedia traffic is the basic traffic in data communication networks, and video conferencing in particular is among the most demanding traffic in large networks (wired, wireless, …). Traffic modeling can help us to evaluate real networks, so QoS is very important for providing good services in data communication networks that carry multimedia traffic. In this research we designed and simulated an exact traffic model to overcome QoS challenges. We also predict bandwidth with a Kalman filter in Ethernet networks.
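
    As an illustration of the bandwidth-prediction step mentioned above, a scalar Kalman filter with a random-walk state model can be written in a few lines of Python. The noise variances and the sample series below are placeholders and are not taken from the cited study:

        def kalman_1d(measurements, q=1e-2, r=4.0, x0=0.0, p0=1.0):
            """Scalar Kalman filter: q = process-noise variance, r = measurement-noise variance."""
            x, p, estimates = x0, p0, []
            for z in measurements:
                p = p + q                 # predict: state assumed constant, uncertainty grows
                k = p / (p + r)           # Kalman gain
                x = x + k * (z - x)       # correct with the new bandwidth sample
                p = (1.0 - k) * p
                estimates.append(x)
            return estimates

        # e.g. smoothed bandwidth estimates (Mbit/s) from noisy per-second samples
        print(kalman_1d([9.8, 10.4, 9.1, 12.0, 11.3, 10.7]))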

  5. Permanent Set of Cross-Linking Networks: Comparison of Theory with Molecular Dynamics Simulations

    DEFF Research Database (Denmark)

    Rottach, Dana R.; Curro, John G.; Budzien, Joanne

    2006-01-01

    The permanent set of cross-linking networks is studied by molecular dynamics. The uniaxial stress for a bead-spring polymer network is investigated as a function of strain and cross-link density history, where cross-links are introduced in unstrained and strained networks. The permanent set is found from the strain of the network after it returns to the state-of-ease where the stress is zero. The permanent set simulations are compared with theory using the independent network hypothesis, together with the various theoretical rubber elasticity theories: affine, phantom, constrained junction…

  6. Impact of Loss Synchronization on Reliable High Speed Networks: A Model Based Simulation

    Directory of Open Access Journals (Sweden)

    Suman Kumar

    2014-01-01

    The contemporary nature of network evolution demands simulation models that are flexible, scalable, and easily implementable. In this paper, we propose a fluid-based model for performance analysis of reliable high-speed networks. In particular, this paper aims to study the dynamic relationship between congestion control algorithms and queue management schemes, in order to develop a better understanding of the causal linkages between the two. We propose a loss synchronization module which is user configurable. We validate our model through simulations under controlled settings. Also, we present a performance analysis to provide insights into two important issues concerning 10 Gbps high-speed networks: (i) the impact of bottleneck buffer size on the performance of a 10 Gbps high-speed network and (ii) the impact of the level of loss synchronization on link utilization-fairness tradeoffs. The practical impact of the proposed work is to provide design guidelines along with a powerful simulation tool to protocol designers and network developers.

  7. Simulation of mixed switched-capacitor/digital networks with signal-driven switches

    Science.gov (United States)

    Suyama, Ken; Tsividis, Yannis P.; Fang, San-Chin

    1990-12-01

    The simulation of mixed switched-capacitor/digital (SC/D) networks containing capacitors, independent and linear-dependent voltage sources, switches controlled either by periodic or nonperiodic Boolean signals, latched comparators, and logic gates is considered. A unified linear switched-capacitor network (SCN) and mixed SC/D network simulator, SWITCAP2, and its applications to several widely used and novel nonlinear SCNs are discussed. The switches may be controlled by periodic waveforms and by nonperiodic waveforms from the outputs of comparators and logic gates. The signal-dependent modification of network topology through the comparators, logic gates, and signal-driven switches makes the modeling of various nonlinear switched-capacitor circuits possible. Simulation results for a pulse-code modulation (PCM) voice encoder, a sigma-delta modulator, a neural network, and a phase-locked loop (PLL) are presented to demonstrate the flexibility of the approach.

  8. Enterprise Networks for Competences Exchange: A Simulation Model

    Science.gov (United States)

    Remondino, Marco; Pironti, Marco; Pisano, Paola

    A business process is a set of logically related tasks performed to achieve a defined business outcome and is related to improving organizational processes. Process innovation can happen at various levels: incremental improvement, redesign of existing processes, or entirely new processes. The knowledge behind process innovation can be shared, acquired, changed and increased by the enterprises inside a network. An enterprise can decide to exploit innovative processes it owns, thus potentially gaining competitive advantage, but risking, in turn, that other players could reach the same technological levels. Or it could decide to share them, in exchange for other competencies or money. These activities could be the basis for a network formation and/or impact the topology of an existing network. In this work an agent-based model (E3) is introduced, aiming to explore how a process innovation can facilitate network formation, affect its topology, induce new players to enter the market and spread onto the network by being shared or developed by new players.

  9. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks

    Science.gov (United States)

    Vestergaard, Christian L.; Génois, Mathieu

    2015-01-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling. PMID:26517860
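
    A much-simplified Python sketch of the idea for a Susceptible-Infected-Susceptible process on a discrete-time contact sequence is given below: a unit-rate exponential waiting time is consumed step by step against the instantaneous total rate, and an event is drawn once the waiting time is exhausted. Events are resolved only at time-step resolution here, and the contact-list format and parameters are illustrative; the authors' published pseudocode and C++ implementation should be consulted for the exact algorithm.

        import random

        def temporal_gillespie_sis(contacts, beta, mu, dt, t_max, seed_node=0):
            """contacts: dict mapping time-step index -> list of (i, j) contacts active then.
            beta: infection rate per S-I contact, mu: recovery rate per infected node."""
            infected = {seed_node}
            step = 0
            tau = random.expovariate(1.0)              # normalised waiting time
            while step * dt < t_max and infected:
                si = [(i, j) for (i, j) in contacts.get(step, [])
                      if (i in infected) != (j in infected)]
                total = beta * len(si) + mu * len(infected)
                if total * dt < tau:                   # no event fires during this step
                    tau -= total * dt
                    step += 1
                    continue
                r = random.uniform(0.0, total)         # choose which event fires
                if r < beta * len(si):
                    i, j = random.choice(si)
                    infected.add(i if i not in infected else j)
                else:
                    infected.discard(random.choice(sorted(infected)))
                tau = random.expovariate(1.0)          # waiting time to the next event
            return infected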

  10. Nuclear Instrumentation and Control Cyber Testbed Considerations – Lessons Learned

    Energy Technology Data Exchange (ETDEWEB)

    Jonathan Gray; Robert Anderson; Julio G. Rodriguez; Cheol-Kwon Lee

    2014-08-01

    Identifying and understanding digital instrumentation and control (I&C) cyber vulnerabilities within nuclear power plants and other nuclear facilities is critical if nation states desire to operate nuclear facilities safely, reliably, and securely. In order to demonstrate objective evidence that cyber vulnerabilities have been adequately identified and mitigated, a facility’s critical nuclear equipment must be replicated in a testbed. Idaho National Laboratory (INL) has built and operated similar testbeds for common critical infrastructure I&C for over ten years. This experience developing, operating, and maintaining an I&C testbed in support of research identifying cyber vulnerabilities has led the Korea Atomic Energy Research Institute (KAERI) of the Republic of Korea to solicit the experiences of INL to help mitigate problems early in the design, development, operation, and maintenance of a similar testbed. The following discusses I&C testbed lessons learned and the impact of these experiences for KAERI.

  11. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors

    National Research Council Canada - National Science Library

    Cheung, Kit; Schultz, Simon R; Luk, Wayne

    2015-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs...

  12. Accelerated Gillespie Algorithm for Gas–Grain Reaction Network Simulations Using Quasi-steady-state Assumption

    Science.gov (United States)

    Chang, Qiang; Lu, Yang; Quan, Donghui

    2017-12-01

    Although the Gillespie algorithm is accurate in simulating gas–grain reaction networks, its computational cost has so far been too high for it to be used to simulate chemical reaction networks that include molecular hydrogen accretion or the chemical evolution of protoplanetary disks. We present an accelerated Gillespie algorithm that is based on a quasi-steady-state assumption with the further approximation that the population distribution of transient species depends only on the accretion and desorption processes. The new algorithm is tested against a few reaction networks that are simulated by the regular Gillespie algorithm. We found that the less likely it is that transient species are formed and destroyed on grain surfaces, the more accurate the new method is. We also apply the new method to simulate reaction networks that include molecular hydrogen accretion. The results show that surface chemical reactions involving molecular hydrogen are not important for the production of surface species under the standard physical conditions of dense molecular clouds.

  13. ABCDecision: A Simulation Platform for Access Selection Algorithms in Heterogeneous Wireless Networks

    Directory of Open Access Journals (Sweden)

    Guy Pujolle

    2010-01-01

    We present a simulation platform for access selection algorithms in heterogeneous wireless networks, called “ABCDecision”. The simulator implements the different parts of an Always Best Connected (ABC) system, including the Access Technology Selector (ATS), Radio Access Networks (RANs), and users. After describing the architecture of the simulator, we give an overview of the existing decision algorithms for access selection. Then we propose a new selection algorithm for heterogeneous networks and we run a set of simulations to evaluate the performance of the proposed algorithm in comparison with the existing ones. The performance results, in terms of the occupancy rate, show that our algorithm achieves a load balancing distribution between networks by taking into consideration the capacities of the available cells.

  14. An Extended N-Player Network Game and Simulation of Four Investment Strategies on a Complex Innovation Network.

    Science.gov (United States)

    Zhou, Wen; Koptyug, Nikita; Ye, Shutao; Jia, Yifan; Lu, Xiaolong

    2016-01-01

    As computer science and complex network theory develop, non-cooperative games and their formation and application on complex networks have been important research topics. In the inter-firm innovation network, it is a typical game behavior for firms to invest in their alliance partners. Accounting for the possibility that firms can be resource constrained, this paper analyzes a coordination game using the Nash bargaining solution as allocation rules between firms in an inter-firm innovation network. We build an extended inter-firm n-player game based on nonidealized conditions, describe four investment strategies and simulate the strategies on an inter-firm innovation network in order to compare their performance. By analyzing the results of our experiments, we find that our proposed greedy strategy is the best-performing in most situations. We hope this study provides a theoretical insight into how firms make investment decisions.

  15. An Extended N-Player Network Game and Simulation of Four Investment Strategies on a Complex Innovation Network.

    Directory of Open Access Journals (Sweden)

    Wen Zhou

    As computer science and complex network theory develop, non-cooperative games and their formation and application on complex networks have been important research topics. In the inter-firm innovation network, it is a typical game behavior for firms to invest in their alliance partners. Accounting for the possibility that firms can be resource constrained, this paper analyzes a coordination game using the Nash bargaining solution as allocation rules between firms in an inter-firm innovation network. We build an extended inter-firm n-player game based on nonidealized conditions, describe four investment strategies and simulate the strategies on an inter-firm innovation network in order to compare their performance. By analyzing the results of our experiments, we find that our proposed greedy strategy is the best-performing in most situations. We hope this study provides a theoretical insight into how firms make investment decisions.

  16. An introduction to network modeling and simulation for the practicing engineer

    CERN Document Server

    Burbank, Jack; Ward, Jon

    2011-01-01

    This book provides the practicing engineer with a concise listing of commercial and open-source modeling and simulation tools currently available including examples of implementing those tools for solving specific Modeling and Simulation examples. Instead of focusing on the underlying theory of Modeling and Simulation and fundamental building blocks for custom simulations, this book compares platforms used in practice, and gives rules enabling the practicing engineer to utilize available Modeling and Simulation tools. This book will contain insights regarding common pitfalls in network Modeling and Simulation and practical methods for working engineers.

  17. Comparison of Neural Network Error Measures for Simulation of Slender Marine Structures

    DEFF Research Database (Denmark)

    Christiansen, Niels H.; Voie, Per Erlend Torbergsen; Winther, Ole

    2014-01-01

    Training of an artificial neural network (ANN) adjusts the internal weights of the network in order to minimize a predefined error measure. This error measure is given by an error function. Several different error functions are suggested in the literature. However, by far the most common measure for regression is the mean square error. This paper looks into the possibility of improving the performance of neural networks by selecting or defining error functions that are tailor-made for a specific objective. A neural network trained to simulate tension forces in an anchor chain on a floating offshore platform is designed and tested. The purpose of setting up the network is to reduce calculation time in a fatigue life analysis. Therefore, the networks trained on different error functions are compared with respect to accuracy of rain flow counts of stress cycles over a number of time series simulations.
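
    To make the idea concrete, the sketch below contrasts the mean-square error with a tailor-made, peak-weighted error measure in NumPy. The weighting scheme and the synthetic tension series are purely illustrative and are not the error functions or data used in the paper:

        import numpy as np

        def mse(y_true, y_pred):
            """Standard mean-square error."""
            return np.mean((y_true - y_pred) ** 2)

        def peak_weighted_error(y_true, y_pred, alpha=2.0):
            """Illustrative tailor-made measure: errors on large tension peaks,
            which dominate fatigue damage, are weighted more heavily."""
            w = 1.0 + alpha * np.abs(y_true) / (np.max(np.abs(y_true)) + 1e-12)
            return np.mean(w * (y_true - y_pred) ** 2)

        # compare the two measures on a synthetic tension time series
        t = np.linspace(0.0, 100.0, 2000)
        tension = 500.0 + 80.0 * np.sin(0.3 * t) + 20.0 * np.random.randn(t.size)
        prediction = 500.0 + 78.0 * np.sin(0.3 * t)
        print(mse(tension, prediction), peak_weighted_error(tension, prediction))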

  18. Application of Neural Network and Simulation Modeling to Evaluate Russian Banks’ Performance

    OpenAIRE

    Sharma, Satish; Shebalkov, Mikhail

    2013-01-01

    This paper presents an application of neural network and simulation modeling to analyze and predict the performance of 883 Russian banks over the period 2000-2010. Correlation analysis was performed to obtain key financial indicators which reflect the leverage, liquidity, profitability and size of the banks. A neural network was trained over the entire dataset, and then simulation modeling was performed, generating values which are distributed with Largest Extreme Value and Loglogistic distributions...

  19. FNCS: A Framework for Power System and Communication Networks Co-Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ciraci, Selim; Daily, Jeffrey A.; Fuller, Jason C.; Fisher, Andrew R.; Marinovici, Laurentiu D.; Agarwal, Khushbu

    2014-04-13

    This paper describes the Fenix framework, which uses a federated approach for integrating power grid and communication network simulators. Compared to existing approaches, Fenix allows co-simulation of both transmission and distribution level power grid simulators with the communication network simulator. To reduce the performance overhead of time synchronization, Fenix utilizes optimistic synchronization strategies that make speculative decisions about when the simulators are going to exchange messages. GridLAB-D (a distribution simulator), PowerFlow (a transmission simulator), and ns-3 (a telecommunication simulator) are integrated with the framework and are used to illustrate the enhanced performance provided by speculative multi-threading on a smart grid application. Our speculative multi-threading approach achieved on average a 20% improvement over the existing synchronization methods.

  20. Less Developed Countries Energy System Network Simulator, LDC-ESNS: a brief description

    Energy Technology Data Exchange (ETDEWEB)

    Reisman, A; Malone, R

    1978-04-01

    Prepared for the Brookhaven National Laboratory Developing Countries Energy Program, this report describes the Less Developed Countries Energy System Network Simulator (LDC-ESNS), a tool which provides a quantitative representation of the energy system of an LDC. The network structure of the energy supply and demand system, the model inputs and outputs, and the possible uses of the model for analysis are described.

  1. Reliability assessment of restructured power systems using reliability network equivalent and pseudo-sequential simulation techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Yi; Wang, Peng; Goel, Lalit [Nanyang Technological University, School of Electrical and Electronics Engineering, Block S1, Nanyang Avenue, Singapore 639798 (Singapore); Billinton, Roy; Karki, Rajesh [Department of Electrical Engineering, University of Saskatchewan, Saskatoon (Canada)

    2007-10-15

    This paper presents a technique to evaluate reliability of a restructured power system with a bilateral market. The proposed technique is based on the combination of the reliability network equivalent and pseudo-sequential simulation approaches. The reliability network equivalent techniques have been implemented in the Monte Carlo simulation procedure to reduce the computational burden of the analysis. Pseudo-sequential simulation has been used to increase the computational efficiency of the non-sequential simulation method and to model the chronological aspects of market trading and system operation. Multi-state Markov models for generation and transmission systems are proposed and implemented in the simulation. A new load shedding scheme is proposed during generation inadequacy and network congestion to minimize the load curtailment. The IEEE reliability test system (RTS) is used to illustrate the technique. (author)

  2. A unified framework for spiking and gap-junction interactions in distributed neuronal network simulations

    Science.gov (United States)

    Hahne, Jan; Helias, Moritz; Kunkel, Susanne; Igarashi, Jun; Bolten, Matthias; Frommer, Andreas; Diesmann, Markus

    2015-01-01

    Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy in the presence of gap junctions, we present benchmarks for workstations, clusters, and supercomputers. Finally, we discuss limitations of the novel technology. PMID:26441628
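
    The waveform-relaxation idea can be illustrated with two passive neurons coupled by a gap junction: each neuron is integrated over the communication interval against the other's membrane-potential waveform from the previous iteration, and the exchange repeats until the waveforms stop changing. The toy Python sketch below uses made-up parameters and a simple Euler scheme; it is not NEST code:

        import numpy as np

        def waveform_relaxation(v1_0=-70.0, v2_0=-50.0, g=0.3, tau=10.0,
                                dt=0.1, t_span=20.0, max_iters=20, tol=1e-6):
            """Jacobi waveform relaxation for dV_i/dt = -(V_i - E_rest)/tau + g*(V_j - V_i)."""
            n = int(t_span / dt) + 1
            e_rest = -65.0
            v1, v2 = np.full(n, v1_0), np.full(n, v2_0)     # initial guesses for the waveforms
            for _ in range(max_iters):
                v1_new, v2_new = np.empty(n), np.empty(n)
                v1_new[0], v2_new[0] = v1_0, v2_0
                for k in range(n - 1):                      # integrate against the OLD waveform of the partner
                    v1_new[k+1] = v1_new[k] + dt * (-(v1_new[k] - e_rest) / tau + g * (v2[k] - v1_new[k]))
                    v2_new[k+1] = v2_new[k] + dt * (-(v2_new[k] - e_rest) / tau + g * (v1[k] - v2_new[k]))
                converged = max(np.max(np.abs(v1_new - v1)), np.max(np.abs(v2_new - v2))) < tol
                v1, v2 = v1_new, v2_new
                if converged:                               # waveforms agree between iterations
                    break
            return v1, v2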

  3. Human metabolic network: reconstruction, simulation, and applications in systems biology.

    Science.gov (United States)

    Wu, Ming; Chan, Christina

    2012-03-02

    Metabolism is crucial to cell growth and proliferation. Deficiency or alterations in metabolic functions are known to be involved in many human diseases. Therefore, understanding the human metabolic system is important for the study and treatment of complex diseases. Current reconstructions of the global human metabolic network provide a computational platform to integrate genome-scale information on metabolism. The platform enables a systematic study of the regulation and is applicable to a wide variety of cases, wherein one could rely on in silico perturbations to predict novel targets, interpret systemic effects, and identify alterations in the metabolic states to better understand the genotype-phenotype relationships. In this review, we describe the reconstruction of the human metabolic network, introduce the constraint based modeling approach to analyze metabolic networks, and discuss systems biology applications to study human physiology and pathology. We highlight the challenges and opportunities in network reconstruction and systems modeling of the human metabolic system.

  4. Quantitative evaluation of simulated functional brain networks in graph theoretical analysis.

    Science.gov (United States)

    Lee, Won Hee; Bullmore, Ed; Frangou, Sophia

    2017-02-01

    There is increasing interest in the potential of whole-brain computational models to provide mechanistic insights into resting-state brain networks. It is therefore important to determine the degree to which computational models reproduce the topological features of empirical functional brain networks. We used empirical connectivity data derived from diffusion spectrum and resting-state functional magnetic resonance imaging data from healthy individuals. Empirical and simulated functional networks, constrained by structural connectivity, were defined based on 66 brain anatomical regions (nodes). Simulated functional data were generated using the Kuramoto model in which each anatomical region acts as a phase oscillator. Network topology was studied using graph theory in the empirical and simulated data. The difference (relative error) between graph theory measures derived from empirical and simulated data was then estimated. We found that simulated data can be used with confidence to model graph measures of global network organization at different dynamic states and highlight the sensitive dependence of the solutions obtained in simulated data on the specified connection densities. This study provides a method for the quantitative evaluation and external validation of graph theory metrics derived from simulated data that can be used to inform future study designs. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
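
    For illustration, the Kuramoto step used to generate such surrogate functional data can be sketched in NumPy as follows; a random symmetric matrix stands in for the 66-region structural connectome, and the coupling strength, frequencies, and time step are placeholders rather than the values used in the study:

        import numpy as np

        def simulate_kuramoto(C, k=5.0, f=60.0, dt=1e-3, steps=20000, seed=0):
            """Euler integration of dtheta_i/dt = omega_i + k * sum_j C_ij sin(theta_j - theta_i);
            returns a surrogate functional-connectivity matrix (correlation of sin(theta))."""
            rng = np.random.default_rng(seed)
            n = C.shape[0]
            omega = 2 * np.pi * (f + rng.normal(0.0, 1.0, n))   # intrinsic frequencies (rad/s)
            theta = rng.uniform(0.0, 2 * np.pi, n)
            signal = np.empty((steps, n))
            for t in range(steps):
                phase_diff = theta[None, :] - theta[:, None]    # element (i, j) = theta_j - theta_i
                theta = theta + dt * (omega + k * np.sum(C * np.sin(phase_diff), axis=1))
                signal[t] = np.sin(theta)
            return np.corrcoef(signal.T)                        # simulated FC between regions

        # toy 66-node structural matrix in place of the empirical connectome
        rng = np.random.default_rng(1)
        C = rng.random((66, 66))
        C = (C + C.T) / 2.0
        np.fill_diagonal(C, 0.0)
        fc = simulate_kuramoto(C / C.sum(axis=1, keepdims=True))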

  5. How Network Properties Affect One's Ability to Obtain Benefits: A Network Simulation

    Science.gov (United States)

    Trefalt, Špela

    2014-01-01

    Networks and the social capital that they carry enable people to get things done, to prosper in their careers, and to feel supported. To develop an effective network, one needs to know more than how to make connections with strangers at a reception; understanding the consequences of network properties on one's ability to obtain benefits is…

  6. Modeling a secular trend by Monte Carlo simulation of height biased migration in a spatial network.

    Science.gov (United States)

    Groth, Detlef

    2017-04-01

    Background: In a recent Monte Carlo simulation, the clustering of body height of Swiss military conscripts within a spatial network with characteristic features of the natural Swiss geography was investigated. In this study I examined the effect of migration of tall individuals into network hubs on the dynamics of body height within the whole spatial network. The aim of this study was to simulate height trends. Material and methods: Three networks were used for modeling: a regular rectangular fishing-net-like network, a real-world example based on the geographic map of Switzerland, and a random network. All networks contained between 144 and 148 districts and between 265 and 307 road connections. Around 100,000 agents were initially released with an average height of 170 cm and a height standard deviation of 6.5 cm. The simulation was started with the a priori assumption that height variation within a district is limited and also depends on the height of neighboring districts (community effect on height). In addition to a neighborhood influence factor, which simulates a community effect, body-height-dependent migration of conscripts between adjacent districts in each Monte Carlo simulation was used to re-calculate next-generation body heights. In order to determine the direction of migration for taller individuals, various centrality measures for the evaluation of district importance within the spatial network were applied. Taller individuals were favored to migrate more into network hubs; backward migration using the same number of individuals was random, not biased towards body height. Network hubs were defined by the importance of a district within the spatial network. The importance of a district was evaluated by various centrality measures. In the null model there were no road connections, so height information could not be delivered between the districts. Results: Due to the favored migration of tall individuals into network hubs, average body height of the hubs, and later…
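
    Only the migration step of the procedure is sketched below, in plain Python on a toy adjacency structure; the community (neighborhood-influence) effect and the specific centrality measures of the study are omitted, and hub importance is approximated here simply by node degree. All names and parameter values are illustrative:

        import random

        def simulate_height_migration(adjacency, generations=60, agents=500,
                                      mean=170.0, sd=6.5, fraction=0.02):
            """adjacency: dict district -> list of neighbouring districts (no self-loops).
            Each generation the tallest agents of every district move to its
            best-connected neighbour; the same number of randomly chosen agents
            migrate back, so district populations stay constant."""
            degree = {d: len(nbrs) for d, nbrs in adjacency.items()}
            heights = {d: [random.gauss(mean, sd) for _ in range(agents)] for d in adjacency}
            for _ in range(generations):
                for d, nbrs in adjacency.items():
                    hub = max(nbrs, key=degree.get)            # most central neighbour
                    k = max(1, int(fraction * len(heights[d])))
                    for h in sorted(heights[d])[-k:]:          # biased outward migration of the tallest
                        heights[d].remove(h)
                        heights[hub].append(h)
                    for h in random.sample(heights[hub], k):   # unbiased backward migration
                        heights[hub].remove(h)
                        heights[d].append(h)
            return {d: sum(v) / len(v) for d, v in heights.items()}

        # e.g. a small toy road network
        print(simulate_height_migration({0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}))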

  7. Analysis of critical operating conditions for LV distribution networks with microgrids

    Science.gov (United States)

    Zehir, M. A.; Batman, A.; Sonmez, M. A.; Font, A.; Tsiamitros, D.; Stimoniaris, D.; Kollatou, T.; Bagriyanik, M.; Ozdemir, A.; Dialynas, E.

    2016-11-01

    Increase in the penetration of Distributed Generation (DG) in distribution networks raises the risk of voltage limit violations while contributing to line losses. Especially in low voltage (LV) distribution networks (secondary distribution networks), the impacts of active power flows on bus voltages and on network losses are more dominant. As network operators must meet regulatory limitations, they have to take into account the most critical operating conditions in their systems. This study aims to present the impact of the worst operating cases of LV distribution networks comprising microgrids. Simulation studies are performed on a field-data-based virtual test-bed. The simulations are repeated for several cases consisting of different microgrid points of connection with different network loading and microgrid supply/demand conditions.

  8. D-LiTE: A platform for evaluating DASH performance over a simulated LTE network

    OpenAIRE

    Quinlan, Jason J.; Raca, Darijo; Zahran, Ahmed H.; Khalid, Ahmed; Ramakrishnan, K. K.; Sreenan, Cormac J.

    2015-01-01

    In this demonstration we present a platform that encompasses all of the components required to realistically evaluate the performance of Dynamic Adaptive Streaming over HTTP (DASH) over a real-time NS-3 simulated network. Our platform consists of a network-attached storage server with DASH video clips and a simulated LTE network which utilises the NS-3 LTE module provided by the LENA project. We stream to clients running an open-source player with a choice of adaptation algorithms. By providi...

  9. ergm: A Package to Fit, Simulate and Diagnose Exponential-Family Models for Networks

    Directory of Open Access Journals (Sweden)

    David R. Hunter

    2008-12-01

    Full Text Available We describe some of the capabilities of the ergm package and the statistical theory underlying it. This package contains tools for accomplishing three important, and inter-related, tasks involving exponential-family random graph models (ERGMs): estimation, simulation, and goodness of fit. More precisely, ergm has the capability of approximating a maximum likelihood estimator for an ERGM given a network data set; simulating new network data sets from a fitted ERGM using Markov chain Monte Carlo; and assessing how well a fitted ERGM does at capturing characteristics of a particular network data set.

  10. Double and multiple knockout simulations for genome-scale metabolic network reconstructions.

    Science.gov (United States)

    Goldstein, Yaron Ab; Bockmayr, Alexander

    2015-01-01

    Constraint-based modeling of genome-scale metabolic network reconstructions has become a widely used approach in computational biology. Flux coupling analysis is a constraint-based method that analyses the impact of single reaction knockouts on other reactions in the network. We present an extension of flux coupling analysis for double and multiple gene or reaction knockouts, and develop corresponding algorithms for an in silico simulation. To evaluate our method, we perform a full single and double knockout analysis on a selection of genome-scale metabolic network reconstructions and compare the results. A prototype implementation of double knockout simulation is available at http://hoverboard.io/L4FC.
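
    As a generic illustration of what a reaction knockout means in a constraint-based model (this is plain flux balance analysis on a hypothetical toy network, not the authors' flux-coupling algorithm), the sketch below fixes the flux of each knocked-out reaction to zero and re-optimizes the biomass objective for every reaction pair. Pairs whose re-optimized objective drops to zero are the synthetically lethal double knockouts in this toy setting.

    import itertools
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical toy stoichiometric matrix S (metabolites x reactions) and flux bounds.
    S = np.array([
        [ 1, -1, -1,  0,  0],
        [ 0,  1,  0, -1,  0],
        [ 0,  0,  1,  0, -1],
    ])
    lb = np.zeros(5)
    ub = np.full(5, 10.0)
    biomass = 4                    # index of the reaction treated as the objective

    def fba(knockouts=()):
        l, u = lb.copy(), ub.copy()
        l[list(knockouts)] = u[list(knockouts)] = 0.0       # knocked-out reactions carry no flux
        res = linprog(c=-np.eye(len(l))[biomass],           # maximize biomass flux
                      A_eq=S, b_eq=np.zeros(S.shape[0]),
                      bounds=list(zip(l, u)), method="highs")
        return -res.fun if res.success else 0.0

    wild_type = fba()
    double_kos = {pair: fba(pair) for pair in itertools.combinations(range(S.shape[1]), 2)}
    lethal_pairs = [pair for pair, growth in double_kos.items() if growth < 1e-9]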

  11. Optimizing targeted vaccination across cyber-physical networks: an empirically based mathematical simulation study

    DEFF Research Database (Denmark)

    Mones, Enys; Stopczynski, Arkadiusz; Pentland, Alex 'Sandy'

    2018-01-01

    If interruption of disease transmission is the goal, targeting requires knowledge of underlying person-to-person contact networks. Digital communication networks may reflect not only virtual but also physical interactions that could result in disease transmission, but the precise overlap between these cyber and physical networks has never been empirically explored in real-life settings. Here, we study the digital communication activity of more than 500 individuals along with their person-to-person contacts at a 5-min temporal resolution. We then simulate different disease transmission scenarios on the person-to-person physical contact network to determine whether cyber communication networks can be harnessed to advance the goal of targeted vaccination for a disease spreading on the network of physical proximity. We show that individuals selected on the basis of their closeness centrality within cyber networks (what we…
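
    A generic sketch of the strategy evaluated in this record is given below (hypothetical stand-in networks and parameters, not the study's data): individuals are ranked by closeness centrality in a "cyber" communication network, the top-ranked individuals are vaccinated, and a discrete-time SIR process is then run on a separate "physical" contact network sharing the same node identifiers to measure the resulting outbreak size.

    import random
    import networkx as nx

    def sir_outbreak_size(G, vaccinated, beta=0.05, gamma=0.2, seed_node=None, rng=None):
        rng = rng or random.Random(0)
        if seed_node is None:
            seed_node = rng.choice([n for n in G if n not in vaccinated])
        susceptible = set(G) - set(vaccinated) - {seed_node}
        infected, recovered = {seed_node}, set()
        while infected:
            new_inf, new_rec = set(), set()
            for i in infected:
                for j in G.neighbors(i):
                    if j in susceptible and rng.random() < beta:   # transmission attempt
                        new_inf.add(j)
                if rng.random() < gamma:                           # recovery
                    new_rec.add(i)
            susceptible -= new_inf
            infected = (infected | new_inf) - new_rec
            recovered |= new_rec
        return len(recovered)

    physical = nx.watts_strogatz_graph(500, 10, 0.1, seed=1)       # stand-in contact network
    cyber = nx.barabasi_albert_graph(500, 3, seed=2)               # stand-in communication network
    closeness = nx.closeness_centrality(cyber)
    vaccinated = sorted(closeness, key=closeness.get, reverse=True)[:50]   # top 10% by cyber closeness
    print(sir_outbreak_size(physical, vaccinated, rng=random.Random(3)))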

  12. Towards Interactive Medical Content Delivery Between Simulated Body Sensor Networks and Practical Data Center.

    Science.gov (United States)

    Shi, Xiaobo; Li, Wei; Song, Jeungeun; Hossain, M Shamim; Mizanur Rahman, Sk Md; Alelaiwi, Abdulhameed

    2016-10-01

    With the development of the IoT (Internet of Things), big data analysis and cloud computing, traditional medical information systems are integrating with these new technologies, and the establishment of cloud-based smart healthcare applications is receiving more and more attention. In this paper, semi-physical simulation technology is applied to a cloud-based smart healthcare system. The body sensor network (BSN) of the system has two modes of data collection and transmission: one uses a practical BSN to collect data and transmit it to the data center; the other transmits real medical data to the practical data center through a simulated BSN. In order to transmit real medical data to the practical data center through a simulated BSN in a semi-physical simulation environment, this paper designs an OPNET packet structure, defines a gateway node model between the simulated BSN and the practical data center, and builds a custom protocol stack. Moreover, this paper conducts a large number of simulations of real data transmission through the simulated network connected to the practical network. The simulation results provide a reference for the parameter settings of a fully practical network and reduce the cost of the devices and personnel involved.

  13. Increasing Learner Retention in a Simulated Learning Network using Indirect Social Interaction

    NARCIS (Netherlands)

    Koper, Rob

    2004-01-01

    Please refer to original publication: Koper, E.J.R. (2005). Increasing Learner Retention in a Simulated Learning Network Using Indirect Social Interaction. Journal of Artificial Societies and Social Simulation vol. 8, no. 2. http://jasss.soc.surrey.ac.uk/8/2/5.html Software is only stored to ensure

  14. Digitalization and networking of analog simulators and portal images.

    Science.gov (United States)

    Pesznyák, Csilla; Zaránd, Pál; Mayer, Arpád

    2007-03-01

    Many departments have analog simulators and irradiation facilities (especially cobalt units) without electronic portal imaging. Import of the images into the R&V (Record & Verify) system is required. Simulator images are grabbed, while portal films are scanned using a laser scanner, and both are converted into DICOM RT (Digital Imaging and Communications in Medicine Radiotherapy) images. The image intensifier output of a simulator and portal films are converted to DICOM RT images and used in clinical practice. The simulator software was developed in cooperation at the authors' hospital. The digitalization of analog simulators is a valuable update in clinical use, replacing the screen-film technique. Film scanning and digitalization permit the electronic archiving of films. Conversion into DICOM RT images is a precondition for importing into the R&V system.

  15. Development of a pore network simulation model to study nonaqueous phase liquid dissolution

    Science.gov (United States)

    Dillard, Leslie A.; Blunt, Martin J.

    2000-01-01

    A pore network simulation model was developed to investigate the fundamental physics of nonequilibrium nonaqueous phase liquid (NAPL) dissolution. The network model is a lattice of cubic chambers and rectangular tubes that represent pore bodies and pore throats, respectively. Experimental data obtained by Powers [1992] were used to develop and validate the model. To ensure the network model was representative of a real porous medium, the pore size distribution of the network was calibrated by matching simulated and experimental drainage and imbibition capillary pressure-saturation curves. The predicted network residual styrene blob-size distribution was nearly identical to the observed distribution. The network model reproduced the observed hydraulic conductivity and produced relative permeability curves that were representative of a poorly consolidated sand. Aqueous-phase transport was represented by applying the equation for solute flux to the network tubes and solving for solute concentrations in the network chambers. Complete mixing was found to be an appropriate approximation for calculation of chamber concentrations. Mass transfer from NAPL blobs was represented using a corner diffusion model. Predicted results of solute concentration versus Peclet number and of modified Sherwood number versus Peclet number for the network model compare favorably with experimental data for the case in which NAPL blob dissolution was negligible. Predicted results of normalized effluent concentration versus pore volume for the network were similar to the experimental data for the case in which NAPL blob dissolution occurred with time.

  16. Variable Coding and Modulation Experiment Using NASA's Space Communication and Navigation Testbed

    Science.gov (United States)

    Downey, Joseph A.; Mortensen, Dale J.; Evans, Michael A.; Tollis, Nicholas S.

    2016-01-01

    National Aeronautics and Space Administration (NASA)'s Space Communication and Navigation Testbed on the International Space Station provides a unique opportunity to evaluate advanced communication techniques in an operational system. The experimental nature of the Testbed allows for rapid demonstrations while using flight hardware in a deployed system within NASA's networks. One example is variable coding and modulation, which is a method to increase data-throughput in a communication link. This paper describes recent flight testing with variable coding and modulation over S-band using a direct-to-earth link between the SCaN Testbed and the Glenn Research Center. The testing leverages the established Digital Video Broadcasting Second Generation (DVB-S2) standard to provide various modulation and coding options. The experiment was conducted in a challenging environment due to the multipath and shadowing caused by the International Space Station structure. Performance of the variable coding and modulation system is evaluated and compared to the capacity of the link, as well as standard NASA waveforms.

  17. Numerical simulation of fibrous biomaterials with randomly distributed fiber network structure.

    Science.gov (United States)

    Jin, Tao; Stanciulescu, Ilinca

    2016-08-01

    This paper presents a computational framework to simulate the mechanical behavior of fibrous biomaterials with randomly distributed fiber networks. A random walk algorithm is implemented to generate the synthetic fiber network in 2D used in simulations. The embedded fiber approach is then adopted to model the fibers as embedded truss elements in the ground matrix, which is essentially equivalent to the affine fiber kinematics. The fiber-matrix interaction is partially considered in the sense that the two material components deform together, but no relative movement is considered. A variational approach is carried out to derive the element residual and stiffness matrices for finite element method (FEM), in which material and geometric nonlinearities are both included. Using a data structure proposed to record the network geometric information, the fiber network is directly incorporated into the FEM simulation without significantly increasing the computational cost. A mesh sensitivity analysis is conducted to show the influence of mesh size on various simulation results. The proposed method can be easily combined with Monte Carlo (MC) simulations to include the influence of the stochastic nature of the network and capture the material behavior in an average sense. The computational framework proposed in this work goes midway between homogenizing the fiber network into the surrounding matrix and accounting for the fully coupled fiber-matrix interaction at the segment length scale, and can be used to study the connection between the microscopic structure and the macro-mechanical behavior of fibrous biomaterials with a reasonable computational cost.
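
    The fiber-generation step described above can be illustrated with a short, purely hypothetical random-walk sketch: each synthetic fiber starts at a random point in a unit domain and grows segment by segment, with its direction perturbed by a bounded random turn at every step.

    import numpy as np

    def random_walk_fibers(n_fibers=50, n_segments=20, step=0.05,
                           max_turn=np.pi / 8, domain=1.0, seed=0):
        """Return a list of (n_segments + 1, 2) arrays of fiber node coordinates."""
        rng = np.random.default_rng(seed)
        fibers = []
        for _ in range(n_fibers):
            pts = [rng.uniform(0, domain, size=2)]              # random starting point
            angle = rng.uniform(0, 2 * np.pi)                   # random initial direction
            for _ in range(n_segments):
                angle += rng.uniform(-max_turn, max_turn)       # bounded random turn
                nxt = pts[-1] + step * np.array([np.cos(angle), np.sin(angle)])
                pts.append(np.clip(nxt, 0.0, domain))           # keep fibers inside the domain
            fibers.append(np.array(pts))
        return fibers

    fibers = random_walk_fibers()
    total_length = sum(np.linalg.norm(np.diff(f, axis=0), axis=1).sum() for f in fibers)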

  18. A simulated annealing heuristic for maximum correlation core/periphery partitioning of binary networks.

    Science.gov (United States)

    Brusco, Michael; Stolze, Hannah J; Hoffman, Michaela; Steinley, Douglas

    2017-01-01

    A popular objective criterion for partitioning a set of actors into core and periphery subsets is the maximization of the correlation between an ideal and observed structure associated with intra-core and intra-periphery ties. The resulting optimization problem has commonly been tackled using heuristic procedures such as relocation algorithms, genetic algorithms, and simulated annealing. In this paper, we present a computationally efficient simulated annealing algorithm for maximum correlation core/periphery partitioning of binary networks. The algorithm is evaluated using simulated networks consisting of up to 2000 actors and spanning a variety of densities for the intra-core, intra-periphery, and inter-core-periphery components of the network. Core/periphery analyses of problem solving, trust, and information sharing networks for the frontline employees and managers of a consumer packaged goods manufacturer are provided to illustrate the use of the model.
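
    A bare-bones version of this approach is sketched below (one common choice of ideal core/periphery image and a simple geometric cooling schedule; the authors' exact objective and schedule may differ): the move set flips a single actor between core and periphery, and moves are accepted with the Metropolis criterion on the correlation between the observed adjacency matrix and the ideal image.

    import numpy as np

    def cp_correlation(A, core):
        """Correlation between the observed adjacency and an ideal core/periphery image
        (ideal tie = 1 when at least one endpoint is in the core)."""
        iu = np.triu_indices(A.shape[0], k=1)
        ideal = (core[:, None] | core[None, :]).astype(float)
        return np.corrcoef(A[iu], ideal[iu])[0, 1]

    def anneal_core_periphery(A, t0=1.0, cooling=0.995, steps=20000, seed=0):
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        core = rng.random(n) < 0.5                     # random initial partition
        best = cur = cp_correlation(A, core)
        best_core, t = core.copy(), t0
        for _ in range(steps):
            i = rng.integers(n)
            core[i] = ~core[i]                         # move: flip one actor's membership
            new = cp_correlation(A, core)
            if new >= cur or rng.random() < np.exp((new - cur) / t):
                cur = new                              # accept (Metropolis criterion)
                if new > best:
                    best, best_core = new, core.copy()
            else:
                core[i] = ~core[i]                     # reject: undo the flip
            t *= cooling                               # geometric cooling schedule
        return best_core, best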

  19. Complex Network Simulation of Forest Network Spatial Pattern in Pearl River Delta

    Science.gov (United States)

    Zeng, Y.

    2017-09-01

    Forest network construction uses a method and model with the scale-free features of complex network theory, which is based on random graph theory and on dynamic network nodes that show a power-law distribution. The model is suitable for the consistent recovery of the Pearl River Delta, a large ecological landscape subject to ecological disturbance. The latest forest patches are available through remote sensing and GIS spatial data. A standard scale-free network node distribution model calculates the power-law distribution parameter of the forest network from patch areas; the recently existing forest polygons, defined as nodes, are used to compute the decay exponent of the network's degree distribution. The parameters of the forest network are then picked up and spatially transferred to real-world GIS models, and connections between nearby nodes are automatically generated as least-cost ecological corridors. Based on the scale-free node distribution requirements, a comparatively small number of large aggregation points are selected as the main nodes of the future forest planning network and compared with the existing node sequence. With this theory, forest ecological projects can avoid the fragmented and scattered patterns of the past, and the planting costs required by previous regular forest networks can be reduced by this method. For the ecological restoration of tropical and subtropical areas of south China, it will provide an effective method of guidance and demonstration for forest-entering-city projects, together with other ecological networks (water, climate networks, etc.), for establishing a networking standard and base datum.

  20. Modeling radio link performance in UMTS W-CDMA network simulations

    DEFF Research Database (Denmark)

    Klingenbrunn, Thomas; Mogensen, Preben Elgaard

    2000-01-01

    This article presents a method to model the W-CDMA radio receiver performance, which is usable in network simulation tools for third generation mobile cellular systems. The method represents a technique to combine link level simulations with network level simulations. The method is derived from [1], which defines a stochastic mapping function from a Signal-to-Interference Ratio into a Bit-Error-Rate for a TDMA system. However, in order to work in a W-CDMA based system, the fact that the Multiple-Access Interference in downlink consists of both Gaussian inter-cell interference and orthogonal intra…

  1. A versatile framework for simulating the dynamic mechanical structure of cytoskeletal networks

    CERN Document Server

    Freedman, Simon L; Hocky, Glen M; Dinner, Aaron R

    2016-01-01

    Computer simulations can aid in our understanding of how collective materials properties emerge from interactions between simple constituents. Here, we introduce a coarse-grained model of networks of actin filaments, myosin motors, and crosslinking proteins that enables simulation at biologically relevant time and length scales. We demonstrate that the model, with a consistent parameterization, qualitatively and quantitatively captures a suite of trends observed experimentally, including the statistics of filament fluctuations, mechanical responses to shear, motor motilities, and network rearrangements. The model can thus serve as a platform for interpretation and design of cytoskeletal materials experiments, as well as for further development of simulations incorporating active elements.

  2. STOMP: A Software Architecture for the Design and Simulation UAV-Based Sensor Networks

    Energy Technology Data Exchange (ETDEWEB)

    Jones, E D; Roberts, R S; Hsia, T C S

    2002-10-28

    This paper presents the Simulation, Tactical Operations and Mission Planning (STOMP) software architecture and framework for simulating, controlling and communicating with unmanned air vehicles (UAVs) servicing large distributed sensor networks. STOMP provides hardware-in-the-loop capability, enabling real UAVs and sensors to feed back state information, route data and receive command and control requests while interacting with other real or virtual objects, thereby enhancing support for simulation of dynamic and complex events.

  3. Intelligent Electric Power Systems with Active-Adaptive Electric Networks: Challenges for Simulation Tools

    Directory of Open Access Journals (Sweden)

    Ufa Ruslan A.

    2015-01-01

    Full Text Available The motivation for the presented research is the need to develop new methods and tools for adequate simulation of intelligent electric power systems with active-adaptive electric networks (IES), including Flexible Alternating Current Transmission System (FACTS) devices. The key requirements for the simulation were formed. The presented analysis of simulation results of IES confirms the need to use a hybrid modelling approach.

  4. Simulation, State Estimation and Control of Nonlinear Superheater Attemporator using Neural Networks

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Sørensen, O.

    2000-01-01

    This paper considers the use of neural networks for nonlinear state estimation, system identification and control. As a case study we use data taken from a nonlinear injection valve for a superheater attemporator at a power plant. One neural network is trained as a nonlinear simulation model of the process, then another network is trained to act as a combined state and parameter estimator for the process. The observer network incorporates smoothing of the parameter estimates in the form of regularization. A pole placement controller is designed which takes advantage of the sample-by-sample linearizations and state estimates provided by the observer network. Simulation studies show that the nonlinear observer-based control loop performs better than a similar control loop based on a linear observer.

  5. Simulation, State Estimation and Control of Nonlinear Superheater Attemporator using Neural Networks

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Sørensen, O.

    1999-01-01

    This paper considers the use of neural networks for nonlinear state estimation, system identification and control. As a case study we use data taken from a nonlinear injection valve for a superheater attemporator at a power plant. One neural network is trained as a nonlinear simulation model of the process, then another network is trained to act as a combined state and parameter estimator for the process. The observer network incorporates smoothing of the parameter estimates in the form of regularization. A pole placement controller is designed which takes advantage of the sample-by-sample linearizations and state estimates provided by the observer network. Simulation studies show that the nonlinear observer-based control loop performs better than a similar control loop based on a linear observer.

  6. A Model to Simulate Multimodality in a Mesoscopic Dynamic Network Loading Framework

    Directory of Open Access Journals (Sweden)

    Massimo Di Gangi

    2017-01-01

    Full Text Available A dynamic network loading (DNL) model using a mesoscopic approach is proposed to simulate a multimodal transport network considering en-route change of transport modes. The classic mesoscopic approach, where packets of users belonging to the same mode move following a path, is modified to take into account multiple modes interacting with each other, simultaneously and on the same multimodal network. In particular, to simulate modal change, functional aspects of multimodal arcs have been developed; those arcs are properly located on the network where modal change occurs, and users are packed (or unpacked) into a new modal resource that moves on to the destination or to another multimodal arc. A test on a simple network reproducing a real situation is performed in order to show the model's peculiarities; some indicators used to describe the performance of the considered transport system are shown.

  7. Radial basis function (RBF) neural network control for mechanical systems design, analysis and Matlab simulation

    CERN Document Server

    Liu, Jinkun

    2013-01-01

    Radial Basis Function (RBF) Neural Network Control for Mechanical Systems is motivated by the need for systematic design approaches to stable adaptive control system design using neural network approximation-based techniques. The main objectives of the book are to introduce the concrete design methods and MATLAB simulation of stable adaptive RBF neural control strategies. In this book, a broad range of implementable neural network control design methods for mechanical systems are presented, such as robot manipulators, inverted pendulums, single link flexible joint robots, motors, etc. Advanced neural network controller design methods and their stability analysis are explored. The book provides readers with the fundamentals of neural network control system design.   This book is intended for the researchers in the fields of neural adaptive control, mechanical systems, Matlab simulation, engineering design, robotics and automation. Jinkun Liu is a professor at Beijing University of Aeronautics and Astronauti...
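
    For readers unfamiliar with the building block behind these controllers, the short sketch below (in Python rather than the book's MATLAB, with made-up data) shows the structure of a radial basis function network: Gaussian hidden units centred on a grid followed by a linear output layer. In the adaptive control setting the output weights would be updated online by an adaptation law rather than by batch least squares.

    import numpy as np

    def rbf_features(x, centers, width):
        """Gaussian RBF activations h_j(x) = exp(-||x - c_j||^2 / (2 * width^2))."""
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2 * width ** 2))

    rng = np.random.default_rng(0)
    x = rng.uniform(-3, 3, size=(200, 1))                   # training inputs
    y = np.sin(x[:, 0]) + 0.05 * rng.normal(size=200)       # unknown nonlinearity to approximate
    centers = np.linspace(-3, 3, 15).reshape(-1, 1)         # fixed RBF centers on a grid
    H = rbf_features(x, centers, width=0.5)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)               # output weights by least squares
    y_hat = H @ w                                           # network prediction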

  8. Aerodynamic design of the National Rotor Testbed.

    Energy Technology Data Exchange (ETDEWEB)

    Kelley, Christopher Lee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    A new wind turbine blade has been designed for the National Rotor Testbed (NRT) project and for future experiments at the Scaled Wind Farm Technology (SWiFT) facility with a specific focus on scaled wakes. This report shows the aerodynamic design of new blades that can produce a wake that has similitude to utility scale blades despite the difference in size and location in the atmospheric boundary layer. Dimensionless quantities circulation, induction, thrust coefficient, and tip-speed-ratio were kept equal between rotor scales in region 2 of operation. The new NRT design matched the aerodynamic quantities of the most common wind turbine in the United States, the GE 1.5sle turbine with 37c model blades. The NRT blade design is presented along with its performance subject to the winds at SWiFT. The design requirements determined by the SWiFT experimental test campaign are shown to be met.

  9. Recent Successes and Future Plans for NASA's Space Communications and Navigation Testbed on the International Space Station

    Science.gov (United States)

    Reinhart, Richard C.; Sankovic, John M.; Johnson, Sandra K.; Lux, James P.; Chelmins, David T.

    2014-01-01

    Flexible and extensible space communications architectures and technology are essential to enable future space exploration and science activities. NASA has championed the development of the Space Telecommunications Radio System (STRS) software defined radio (SDR) standard and the application of SDR technology to reduce the costs and risks of using SDRs for space missions, and has developed an on-orbit testbed to validate these capabilities. The Space Communications and Navigation (SCaN) Testbed (previously known as the Communications, Navigation, and Networking reConfigurable Testbed (CoNNeCT)) is advancing SDR, on-board networking, and navigation technologies by conducting space experiments aboard the International Space Station. During its first year(s) on-orbit, the SCaN Testbed has achieved considerable accomplishments to better understand SDRs and their applications. The SDR platforms and software waveforms on each SDR have over 1500 hours of operation and are performing as designed. The Ka-band SDR on the SCaN Testbed is NASA's first space Ka-band transceiver and NASA's first Ka-band mission using the Space Network. This has provided exciting opportunities to operate at Ka-band and assist with on-orbit tests of NASA's newest Tracking and Data Relay Satellites (TDRS). During its first year, SCaN Testbed completed its first on-orbit SDR reconfigurations. SDR reconfigurations occur when implementing new waveforms on an SDR. SDR reconfigurations allow a radio to change minor parameters, such as data rate, or complete functionality. New waveforms which provide new capability and are reusable across different missions provide long-term value for reconfigurable platforms such as SDRs. The STRS Standard provides guidelines for new waveform development by third parties. Waveform development by organizations other than the platform provider offers NASA the ability to develop waveforms itself and reduce its dependence and costs on the platform developer. Each of these

  10. High Fidelity Simulations of Large-Scale Wireless Networks (Plus-Up)

    Energy Technology Data Exchange (ETDEWEB)

    Onunkwo, Uzoma [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    Sandia has built a strong reputation in scalable network simulation and emulation for cyber security studies to protect our nation’s critical information infrastructures. Georgia Tech has preeminent reputation in academia for excellence in scalable discrete event simulations, with strong emphasis on simulating cyber networks. Many of the experts in this field, such as Dr. Richard Fujimoto, Dr. George Riley, and Dr. Chris Carothers, have strong affiliations with Georgia Tech. The collaborative relationship that we intend to immediately pursue is in high fidelity simulations of practical large-scale wireless networks using ns-3 simulator via Dr. George Riley. This project will have mutual benefits in bolstering both institutions’ expertise and reputation in the field of scalable simulation for cyber-security studies. This project promises to address high fidelity simulations of large-scale wireless networks. This proposed collaboration is directly in line with Georgia Tech’s goals for developing and expanding the Communications Systems Center, the Georgia Tech Broadband Institute, and Georgia Tech Information Security Center along with its yearly Emerging Cyber Threats Report. At Sandia, this work benefits the defense systems and assessment area with promise for large-scale assessment of cyber security needs and vulnerabilities of our nation’s critical cyber infrastructures exposed to wireless communications.

  11. Efficient Heuristics for Simulating Population Overflow in Parallel Networks

    NARCIS (Netherlands)

    Zaburnenko, T.S.; Nicola, V.F.

    2006-01-01

    In this paper we propose a state-dependent importance sampling heuristic to estimate the probability of population overflow in networks of parallel queues. This heuristic approximates the “optimal” state-dependent change of measure without the need for costly optimization involved in other

  12. Simulation of traffic capacity of inland waterway network

    NARCIS (Netherlands)

    Chen, L.; Mou, J.; Ligteringen, H.

    2013-01-01

    Inland waterborne transportation is viewed as an economic, safe and environmentally friendly alternative to the congested road network. Traffic capacity is a critical indicator of inland shipping performance. In practice, given the interaction of complicated factors, it is challenging to

  13. Numerical simulation with finite element and artificial neural network ...

    Indian Academy of Sciences (India)

    Further, this database, after the neural network training, is used to analyse measured material properties of different test pieces. The ANN predictions are reconfirmed with contact-type finite element analysis for an arbitrarily selected test sample. The methodology evolved in this work can be extended to predict material ...

  14. Modeling a Million-Node Slim Fly Network Using Parallel Discrete-Event Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wolfe, Noah; Carothers, Christopher; Mubarak, Misbah; Ross, Robert; Carns, Philip

    2016-05-15

    As supercomputers close in on exascale performance, the increased number of processors and processing power translates to an increased demand on the underlying network interconnect. The Slim Fly network topology, a new low-diameter and low-latency interconnection network, is gaining interest as one possible solution for next-generation supercomputing interconnect systems. In this paper, we present a high-fidelity Slim Fly flit-level model leveraging the Rensselaer Optimistic Simulation System (ROSS) and Co-Design of Exascale Storage (CODES) frameworks. We validate our Slim Fly model against the Kathareios et al. Slim Fly model results provided at moderately sized network scales. We further scale the model size up to an unprecedented 1 million compute nodes; and through visualization of network simulation metrics such as link bandwidth, packet latency, and port occupancy, we gain insight into the network behavior at the million-node scale. We also show linear strong scaling of the Slim Fly model on an Intel cluster, achieving a peak event rate of 36 million events per second using 128 MPI tasks to process 7 billion events. Detailed analysis of the underlying discrete-event simulation performance shows that a million-node Slim Fly model simulation can execute in 198 seconds on the Intel cluster.

  15. COMPLEX NETWORK SIMULATION OF FOREST NETWORK SPATIAL PATTERN IN PEARL RIVER DELTA

    Directory of Open Access Journals (Sweden)

    Y. Zeng

    2017-09-01

    Full Text Available Forest network construction uses a method and model with the scale-free features of complex network theory, which is based on random graph theory and on dynamic network nodes that show a power-law distribution. The model is suitable for the consistent recovery of the Pearl River Delta, a large ecological landscape subject to ecological disturbance. The latest forest patches are available through remote sensing and GIS spatial data. A standard scale-free network node distribution model calculates the power-law distribution parameter of the forest network from patch areas; the recently existing forest polygons, defined as nodes, are used to compute the decay exponent of the network’s degree distribution. The parameters of the forest network are then picked up and spatially transferred to real-world GIS models, and connections between nearby nodes are automatically generated as least-cost ecological corridors. Based on the scale-free node distribution requirements, a comparatively small number of large aggregation points are selected as the main nodes of the future forest planning network and compared with the existing node sequence. With this theory, forest ecological projects can avoid the fragmented and scattered patterns of the past, and the planting costs required by previous regular forest networks can be reduced by this method. For the ecological restoration of tropical and subtropical areas of south China, it will provide an effective method of guidance and demonstration for forest-entering-city projects, together with other ecological networks (water, climate networks, etc.), for establishing a networking standard and base datum.

  16. Analysis of 100Mb/s Ethernet for the Whitney Commodity Computing Testbed

    Science.gov (United States)

    Fineberg, Samuel A.; Pedretti, Kevin T.; Kutler, Paul (Technical Monitor)

    1997-01-01

    We evaluate the performance of a Fast Ethernet network configured with a single large switch, a single hub, and a 4x4 2D torus topology in a testbed cluster of "commodity" Pentium Pro PCs. We also evaluated a mixed network composed of Ethernet hubs and switches. An MPI collective communication benchmark and the NAS Parallel Benchmarks version 2.2 (NPB2) show that the torus network performs best for all sizes that we were able to test (up to 16 nodes). For larger networks the Ethernet switch outperforms the hub, though its performance is far less than peak. The hub/switch combination tests indicate that the NAS parallel benchmarks are relatively insensitive to hub densities of less than 7 nodes per hub.

  17. Multiple Linear Regression Model Based on Neural Network and Its Application in the MBR Simulation

    Directory of Open Access Journals (Sweden)

    Chunqing Li

    2012-01-01

    Full Text Available Computer simulation of the membrane bioreactor (MBR) has become a research focus in MBR studies. To compensate for drawbacks such as long test periods, high cost, and sealed, non-observable equipment, this paper combines an in-depth study of the mathematical model of the MBR with neural network theory and proposes a three-dimensional simulation system for MBR wastewater treatment that is fast, efficient, and well visualized. The system is developed with hybrid programming in the VC++ programming language and OpenGL, using a multifactor linear regression model of the factors affecting MBR membrane flux based on a neural network, a modeling method using integers instead of floats, and quad-tree recursion. The experiments show that the three-dimensional simulation system, using the above models and methods, offers inspiration and a reference for future research and application of MBR simulation technology.

  18. A case for spiking neural network simulation based on configurable multiple-FPGA systems.

    Science.gov (United States)

    Yang, Shufan; Wu, Qiang; Li, Renfa

    2011-09-01

    Recent neuropsychological research has begun to reveal that neurons encode information in the timing of spikes. Spiking neural network simulations are a flexible and powerful method for investigating the behaviour of neuronal systems. Software simulation of spiking neural networks cannot rapidly generate output spikes for large-scale neural networks. An alternative approach, hardware implementation of such systems, provides the possibility to generate independent spikes precisely and to simultaneously output spike waves in real time, provided that the spiking neural network can take full advantage of the inherent parallelism of hardware. In this work we introduce a configurable FPGA-oriented hardware platform for spiking neural network simulation. We aim to use this platform to combine the speed of dedicated hardware with the programmability of software, so that it might allow neuroscientists to put together sophisticated computational experiments with their own models. A feed-forward hierarchy network is developed as a case study to describe the operation of biological neural systems (such as orientation selectivity in the visual cortex) and computational models of such systems. This model demonstrates how a feed-forward neural network constructs the circuitry required for orientation selectivity and provides a platform for reaching a deeper understanding of the primate visual system. In the future, larger scale models based on this framework can be used to replicate the actual architecture of the visual cortex, leading to more detailed predictions and insights into visual perception phenomena.

  19. Simulating dynamic plastic continuous neural networks by finite elements.

    Science.gov (United States)

    Joghataie, Abdolreza; Torghabehi, Omid Oliyan

    2014-08-01

    We introduce the dynamic plastic continuous neural network (DPCNN), which is composed of neurons distributed in a nonlinear plastic medium where the wire-like connections of neural networks are replaced with a continuous medium. We use the finite element method to model the dynamic phenomenon of information processing within the DPCNNs. During training, instead of weights, the properties of the continuous material at its different locations and some properties of the neurons are modified. Input and output can be vectors and/or continuous functions over lines and/or areas. Delay and feedback from neurons to themselves and from outputs occur in the DPCNNs. We model a simple form of the DPCNN where the medium is a rectangular plate of bilinear material, and the neurons continuously fire a signal, which is a function of the horizontal displacement.

  20. Runtime Performance and Virtual Network Control Alternatives in VM-Based High-Fidelity Network Simulations

    Science.gov (United States)

    2012-12-01

    … network emulation systems have been proposed, such as V-eM (Apostolopoulos and Hasapis 2006), DieCast (Gupta et al. 2008), and VENICE (Liu, Raju, and …

  1. Representing Dynamic Social Networks in Discrete Event Social Simulation

    Science.gov (United States)

    2010-12-01

    … applying social network change detection methods (SNCD) to model output. … The action choice component of the conceptual model is based on the theory of planned behavior (TPB) (I. Ajzen 1991). The TPB states that an…

  2. Analysis and Simulation of Hybrid Models for Reaction Networks

    OpenAIRE

    Kreim, Michael

    2014-01-01

    The dynamics of biochemical reaction networks can be described by a variety of models, like the Reaction Rate equation (RRE), the Chemical Master equation (CME) or the Fokker-Planck equation (FPE). In this thesis, the behaviour of these different models is analysed. It is shown that the FPE can be motivated as an approximation of the CME and convergence is proven. Furthermore, two hybrid models are constructed by combining different approaches and convergence properties are proven and discussed.

  3. The Living With a Star Space Environment Testbed Payload

    Science.gov (United States)

    Xapsos, Mike

    2015-01-01

    This presentation outlines a brief description of the Living With a Star (LWS) Program missions and detailed information about the Space Environment Testbed (SET) payload consisting of a space weather monitor and carrier containing 4 board experiments.

  4. Prognostics-Enabled Power Supply for ADAPT Testbed Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Ridgetop's role is to develop electronic prognostics for sensing power systems in support of NASA/Ames ADAPT testbed. The prognostic enabled power systems from...

  5. Simulating market dynamics : Interactions between consumer psychology and social networks

    NARCIS (Netherlands)

    Janssen, M.A; Jager, W.

    2003-01-01

    Markets can show different types of dynamics, from quiet markets dominated by one or a few products, to markets with continual penetration of new and reintroduced products. In a previous article we explored the dynamics of markets from a psychological perspective using a multi-agent simulation

  6. Credible Mobile and Ad Hoc Network Simulation-Based Studies

    Science.gov (United States)

    2006-10-26


  7. Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware

    Science.gov (United States)

    Knight, James C.; Tully, Philip J.; Kaplan, Bernhard A.; Lansner, Anders; Furber, Steve B.

    2016-01-01

    SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately 45× more power. This suggests that cheaper, more power efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061

  8. Large-scale simulations of plastic neural networks on neuromorphic hardware

    Directory of Open Access Journals (Sweden)

    James Courtney Knight

    2016-04-01

    Full Text Available SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 20,000 neurons and 51,200,000 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately 45× more power. This suggests that cheaper, more power efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.

  9. A Simulation Study for Emergency/Disaster Management by Applying Complex Networks Theory

    Directory of Open Access Journals (Sweden)

    Li Jin

    2014-04-01

    Full Text Available Earthquakes, hurricanes, flooding and terrorist attacks pose a severe threat to our society. What is more, when such a disaster happens, it can spread over a wide range through the ubiquitous presence of large-scale networked systems. Therefore, emergency/disaster management faces new challenges in that decision-makers have extra difficulty in perceiving the dynamic spreading processes of a disaster in this networked environment. This study uses complex networks theory to tackle this complexity, and the results show that the theory is a promising approach to support disaster/emergency management, focusing on simulation experiments with small-world networks and scale-free networks. The theory can be used to capture and describe the evolution mechanism, evolution discipline and overall behavior of a networked system. In particular, complex networks theory is very strong at analyzing the complexity and dynamical changes of a networked system, which can improve situation awareness after a disaster has occurred and help perceive its dynamic process, which is very important for high-quality decision making. In addition, this study also shows that the use of complex networks theory can build a visualized process to track the dynamic spreading of a disaster in a networked system.

  10. Experimental Evaluation of Simulation Abstractions for Wireless Sensor Network MAC Protocols

    Directory of Open Access Journals (Sweden)

    G. P. Halkes

    2010-01-01

    Full Text Available The evaluation of MAC protocols for Wireless Sensor Networks (WSNs is often performed through simulation. These simulations necessarily abstract away from reality in many ways. However, the impact of these abstractions on the results of the simulations has received only limited attention. Moreover, many studies on the accuracy of simulation have studied either the physical layer and per link effects or routing protocol effects. To the best of our knowledge, no other work has focused on the study of the simulation abstractions with respect to MAC protocol performance. In this paper, we present the results of an experimental study of two often used abstractions in the simulation of WSN MAC protocols. We show that a simple SNR-based reception model can provide quite accurate results for metrics commonly used to evaluate MAC protocols. Furthermore, we provide an analysis of what the main sources of deviation are and thereby how the simulations can be improved to provide even better results.
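
    The kind of simple SNR-based reception abstraction examined in this record can be sketched as follows (hypothetical parameter values): received power follows a log-distance path-loss model with log-normal shadowing, and a packet is accepted whenever its SNR exceeds a fixed threshold.

    import math
    import random

    def received_power_dbm(tx_power_dbm, distance_m, pl_d0_db=55.0, d0=1.0,
                           path_loss_exp=3.0, shadowing_sigma_db=4.0, rng=random):
        """Log-distance path loss with log-normal shadowing."""
        path_loss = pl_d0_db + 10 * path_loss_exp * math.log10(max(distance_m, d0) / d0)
        return tx_power_dbm - path_loss + rng.gauss(0.0, shadowing_sigma_db)

    def packet_received(distance_m, tx_power_dbm=0.0, noise_floor_dbm=-105.0,
                        snr_threshold_db=10.0, rng=random):
        snr_db = received_power_dbm(tx_power_dbm, distance_m, rng=rng) - noise_floor_dbm
        return snr_db >= snr_threshold_db

    # Empirical packet reception ratio (PRR) vs. distance under this abstraction
    rng = random.Random(7)
    for d in (5, 20, 40, 80):
        prr = sum(packet_received(d, rng=rng) for _ in range(1000)) / 1000
        print(f"{d:3d} m: PRR = {prr:.2f}")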

  11. Performance Evaluation of IPTV Services on a WiMAX Testbed Network Based on the IEEE 802.16-2004 Standard

    Directory of Open Access Journals (Sweden)

    Prasetiyono Hari Mukti

    2015-09-01

    Full Text Available In this paper, a performance evaluation of IPTV services over a WiMAX testbed based on the IEEE 802.16-2004 standard is described. The performance of the proposed system is evaluated in terms of delay, jitter, throughput and packet loss. Service performance evaluations are conducted on a point-to-point network topology with varying background traffic and different scheduling types. Background traffic is injected into the system to represent variable traffic load on the proposed system. The scheduling types used in this paper are Best Effort (BE), Non-Real-Time Polling Service (nrtPS), Real-Time Polling Service (rtPS) and Unsolicited Grant Service (UGS). The experimental results for IPTV service performance over the testbed network show that the maximum averages of delay, jitter, throughput and packet loss are 16.581 ms, 58.515 ms, 0.67 Mbps and 10.96%, respectively.

  12. Designing an autonomous helicopter testbed: From conception through implementation

    Science.gov (United States)

    Garcia, Richard D.

    Miniature Unmanned Aerial Vehicles (UAVs) are currently being researched for a wide range of tasks, including search and rescue, surveillance, reconnaissance, traffic monitoring, fire detection, pipe and electrical line inspection, and border patrol to name only a few of the application domains. Although small/miniature UAVs, including both Vertical Takeoff and Landing (VTOL) vehicles and small helicopters, have shown great potential in both civilian and military domains, including research and development, integration, prototyping, and field testing, these unmanned systems/vehicles are limited to only a handful of university labs. For VTOL type aircraft the number is less than fifteen worldwide! This lack of development is due to both the extensive time and cost required to design, integrate and test a fully operational prototype as well as the shortcomings of published materials to fully describe how to design and build a "complete" and "operational" prototype system. This dissertation overcomes existing barriers and limitations by describing and presenting in great detail every technical aspect of designing and integrating a small UAV helicopter including the on-board navigation controller, capable of fully autonomous takeoff, waypoint navigation, and landing. The presented research goes beyond previous works by designing the system as a testbed vehicle. This design aims to provide a general framework that will not only allow researchers the ability to supplement the system with new technologies but will also allow researchers to add innovation to the vehicle itself. Examples include modification or replacement of controllers, updated filtering and fusion techniques, addition or replacement of sensors, vision algorithms, Operating Systems (OS) changes or replacements, and platform modification or replacement. This is supported by the testbed's design to not only adhere to the technology it currently utilizes but to be general enough to adhere to a multitude of

  13. Event metadata records as a testbed for scalable data mining

    Science.gov (United States)

    van Gemmeren, P.; Malon, D.

    2010-04-01

    At a data rate of 200 hertz, event metadata records ("TAGs," in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise "data mining," but our interest is different. Advanced statistical methods and tools such as classification, association rule mining, and cluster analysis are common outside the high energy physics community. These tools can prove useful, not for discovery physics, but for learning about our data, our detector, and our software. A fixed and relatively simple schema makes TAG export to other storage technologies such as HDF5 straightforward. This simplifies the task of exploiting very-large-scale parallel platforms such as Argonne National Laboratory's BlueGene/P, currently the largest supercomputer in the world for open science, in the development of scalable tools for data mining. Using a domain-neutral scientific data format may also enable us to take advantage of existing data mining components from other communities. There is, further, a substantial literature on the topic of one-pass algorithms and stream mining techniques, and such tools may be inserted naturally at various points in the event data processing and distribution chain. This paper describes early experience with event metadata records from ATLAS simulation and commissioning as a testbed for scalable data mining tool development and evaluation.
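
    The export step mentioned above is straightforward because the schema is fixed; the sketch below (hypothetical column names, not the actual ATLAS TAG schema) writes fixed-schema event metadata records to HDF5 with h5py and reads them back for a simple selection.

    import numpy as np
    import h5py

    tag_dtype = np.dtype([
        ("run_number", np.uint32),
        ("event_number", np.uint64),
        ("n_tracks", np.uint16),
        ("missing_et", np.float32),
    ])

    rng = np.random.default_rng(0)
    records = np.zeros(100000, dtype=tag_dtype)          # stand-in for exported TAG rows
    records["run_number"] = 167776
    records["event_number"] = np.arange(records.size)
    records["n_tracks"] = rng.integers(0, 200, records.size)
    records["missing_et"] = rng.exponential(25.0, records.size).astype(np.float32)

    with h5py.File("event_tags.h5", "w") as f:
        dset = f.create_dataset("tags", data=records, compression="gzip", chunks=True)
        dset.attrs["schema_version"] = 1

    with h5py.File("event_tags.h5", "r") as f:
        tags = f["tags"][...]
        selected = tags[tags["missing_et"] > 50.0]       # simple selection over the exported table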

  14. Event metadata records as a testbed for scalable data mining

    Energy Technology Data Exchange (ETDEWEB)

    Gemmeren, P van; Malon, D, E-mail: gemmeren@anl.go [Argonne National Laboratory, Argonne, Illinois 60439 (United States)

    2010-04-01

    At a data rate of 200 hertz, event metadata records ('TAGs,' in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise 'data mining,' but our interest is different. Advanced statistical methods and tools such as classification, association rule mining, and cluster analysis are common outside the high energy physics community. These tools can prove useful, not for discovery physics, but for learning about our data, our detector, and our software. A fixed and relatively simple schema makes TAG export to other storage technologies such as HDF5 straightforward. This simplifies the task of exploiting very-large-scale parallel platforms such as Argonne National Laboratory's BlueGene/P, currently the largest supercomputer in the world for open science, in the development of scalable tools for data mining. Using a domain-neutral scientific data format may also enable us to take advantage of existing data mining components from other communities. There is, further, a substantial literature on the topic of one-pass algorithms and stream mining techniques, and such tools may be inserted naturally at various points in the event data processing and distribution chain. This paper describes early experience with event metadata records from ATLAS simulation and commissioning as a testbed for scalable data mining tool development and evaluation.

  15. Discrete Event Modeling and Simulation-Driven Engineering for the ATLAS Data Acquisition Network

    CERN Document Server

    Bonaventura, Matias Alejandro; The ATLAS collaboration; Castro, Rodrigo Daniel

    2016-01-01

    We present an iterative and incremental development methodology for simulation models in network engineering projects. Driven by the DEVS (Discrete Event Systems Specification) formal framework for modeling and simulation we assist network design, test, analysis and optimization processes. A practical application of the methodology is presented for a case study in the ATLAS particle physics detector, the largest scientific experiment built by man where scientists around the globe search for answers about the origins of the universe. The ATLAS data network convey real-time information produced by physics detectors as beams of particles collide. The produced sub-atomic evidences must be filtered and recorded for further offline scrutiny. Due to the criticality of the transported data, networks and applications undergo careful engineering processes with stringent quality of service requirements. A tight project schedule imposes time pressure on design decisions, while rapid technology evolution widens the palett...

  16. Brownian dynamics simulation of insulin microsphere formation from break-up of a fractal network.

    Science.gov (United States)

    Li, Wei; Gunton, J D; Khan, Siddique J; Schoelz, J K; Chakrabarti, A

    2011-01-14

    Motivated by a recent experiment on insulin microsphere formation where polyethylene glycol (PEG) is used as the precipitating agent, we have developed a simple theoretical model that can predict the formation of a fractal network of insulin monomers and the subsequent break-up of the fractal network into microsphere aggregates. In our approach the effect of PEG on insulin is modeled via a standard depletion attraction mechanism using the Asakura-Oosawa model. We show that even in the context of this simple model, it is possible to mimic important aspects of the insulin experiment in a Brownian dynamics simulation. We simulate the effect of changing temperature in our model by changing the well depth of the Asakura-Oosawa potential. A fractal network is observed in a "deep quench" of the system, followed by a "heating" that results in a break-up of the network and subsequent formation of microspheres.
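
    For reference, the depletion attraction used in such a model is the standard Asakura-Oosawa pair potential in its overlap-volume form, sketched below with hypothetical parameter values; the depth of the "quench" is controlled by the depletant density (and hence the well depth), which is how the simulation described above mimics changes in temperature.

    import numpy as np

    def asakura_oosawa(r, sigma_c=1.0, sigma_p=0.1, rho_p=2.0, kT=1.0):
        """U_AO(r) = -rho_p * kT * V_overlap(r) for sigma_c <= r <= sigma_c + sigma_p,
        where V_overlap is the lens-shaped overlap of the two depletion zones of radius
        R = (sigma_c + sigma_p) / 2; zero beyond the range, hard-core below contact."""
        r = np.asarray(r, dtype=float)
        R = 0.5 * (sigma_c + sigma_p)
        overlap = (np.pi / 12.0) * (4.0 * R + r) * (2.0 * R - r) ** 2
        u = np.where(r < 2.0 * R, -rho_p * kT * overlap, 0.0)
        return np.where(r < sigma_c, np.inf, u)            # hard-core repulsion at contact

    r = np.linspace(0.99, 1.2, 5)
    print(asakura_oosawa(r))                                # well depth grows with rho_p ("deeper quench")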

  17. Application of artificial neural networks to identify equilibration in computer simulations

    Science.gov (United States)

    Leibowitz, Mitchell H.; Miller, Evan D.; Henry, Michael M.; Jankowski, Eric

    2017-11-01

    Determining which microstates generated by a thermodynamic simulation are representative of the ensemble for which sampling is desired is a ubiquitous, underspecified problem. Artificial neural networks are one type of machine learning algorithm that can provide a reproducible way to apply pattern recognition heuristics to underspecified problems. Here we use the open-source TensorFlow machine learning library and apply it to the problem of identifying which hypothetical observation sequences from a computer simulation are “equilibrated” and which are not. We generate training populations and test populations of observation sequences with embedded linear and exponential correlations. We train a two-neuron artificial network to distinguish the correlated and uncorrelated sequences. We find that this simple network is good enough for > 98% accuracy in identifying exponentially-decaying energy trajectories from molecular simulations.
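
    A rough re-creation of the idea in TensorFlow/Keras: a network with a single two-neuron hidden layer is trained to separate flat, uncorrelated observation sequences from ones carrying an exponential transient. The synthetic data generator and layer details are guesses for illustration, not the authors' exact setup.

        # Toy re-creation, assuming TensorFlow/Keras: classify "equilibrated" (flat)
        # vs. transient (exponentially decaying) observation sequences.
        import numpy as np
        import tensorflow as tf

        rng = np.random.default_rng(0)
        n_seq, seq_len = 2000, 64
        t = np.arange(seq_len)

        flat = rng.normal(0.0, 1.0, size=(n_seq // 2, seq_len))                   # equilibrated
        decay = np.exp(-t / 10.0) * 5.0 + rng.normal(0.0, 1.0, size=(n_seq // 2, seq_len))
        X = np.vstack([flat, decay]).astype("float32")
        y = np.concatenate([np.ones(n_seq // 2), np.zeros(n_seq // 2)]).astype("float32")

        model = tf.keras.Sequential([
            tf.keras.Input(shape=(seq_len,)),
            tf.keras.layers.Dense(2, activation="tanh"),      # the "two neurons"
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        model.fit(X, y, epochs=20, batch_size=32, validation_split=0.2, verbose=0)
        print(model.evaluate(X, y, verbose=0))    # [loss, accuracy]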

  18. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach

    Science.gov (United States)

    Chiadamrong, N.; Piyathanavong, V.

    2017-04-01

    Models that aim to optimize the design of supply chain networks have gained more interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such an optimization problem. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach iterates until the difference between subsequent solutions satisfies the pre-determined termination criteria. The effectiveness of the proposed approach is illustrated by an example, which yields near-optimal results with much shorter solving times than a conventional simulation-based optimization model. The efficacy of this proposed hybrid approach is promising and can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.
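
    The iterative structure can be illustrated with a deliberately tiny Python toy (not the authors' model): an analytical step sizes capacity, a stochastic simulation step measures how much of that capacity is actually usable, and the resulting correction is fed back until successive solutions agree within a tolerance.

        # Toy illustration of the iterative hybrid idea; every formula and number
        # here is invented purely to show the analytic/simulation feedback loop.
        import random

        random.seed(42)
        DEMAND_MEAN = 100.0                        # assumed mean demand per period
        UNIT_COST, SHORTAGE_COST = 1.0, 4.0

        def analytical_capacity(correction):
            # Newsvendor-style sizing with a simulation-derived usability correction.
            service_level = SHORTAGE_COST / (SHORTAGE_COST + UNIT_COST)
            return DEMAND_MEAN * (1.0 + 0.5 * (service_level - 0.5)) / correction

        def simulate_usable_fraction(capacity, periods=5000):
            # Discrete-event stand-in: random downtime erodes nominal capacity,
            # growing mildly with how hard the facility is driven.
            stress = DEMAND_MEAN / capacity
            usable = 0.0
            for _ in range(periods):
                usable += 1.0 - random.betavariate(2, 18) * stress
            return usable / periods

        correction, tol = 1.0, 1e-3
        for it in range(20):
            cap = analytical_capacity(correction)
            new_correction = simulate_usable_fraction(cap)
            if abs(new_correction - correction) < tol:   # termination criterion
                break
            correction = new_correction
        print(f"stopped after {it + 1} iterations: capacity={cap:.1f}, usable fraction={correction:.3f}")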

  19. Simulation and Modeling of a New Medium Access Control Scheme for Multi-Beam Directional Networking

    Science.gov (United States)

    2017-03-03

    implement our protocol in both simulation and a new Extendable Mobile Ad-hoc Network Emulator (EMANE) model that allows for real-time, high fidelity...issues, where the amount of data passed between the servers is too high, and 2) computation issues, where calculating the interference on the packets...developed a custom discrete event simulator in C++, and a new Extendable Mobile Ad-hoc Network Emulator (EMANE) [10] model. These tools are used to both

  20. Double and multiple knockout simulations for genome-scale metabolic network reconstructions

    OpenAIRE

    Goldstein, Yaron AB; Bockmayr, Alexander

    2015-01-01

    Background Constraint-based modeling of genome-scale metabolic network reconstructions has become a widely used approach in computational biology. Flux coupling analysis is a constraint-based method that analyses the impact of single reaction knockouts on other reactions in the network. Results We present an extension of flux coupling analysis for double and multiple gene or reaction knockouts, and develop corresponding algorithms for an in silico simulation. To evaluate our method, we perfor...

  1. Simulated Annealing in Optimization of Energy Production in a Water Supply Network

    OpenAIRE

    Almeida Samora, Irene; Franca, Mário J.; Schleiss, Anton; Ramos, Helena M.

    2016-01-01

    In water supply systems, the potential exists for micro-hydropower that uses the pressure excess in the networks to produce electricity. However, because urban drinking water networks are complex systems in which flows and pressure vary constantly, identification of the ideal locations for turbines is not straightforward, and assessment implies the need for simulation. In this paper, an optimization algorithm is proposed to provide a selection of optimal locations for the installation of a gi...

  2. 802.11s based multi-radio multi-channel mesh networking for fractionated spacecraft

    Science.gov (United States)

    Michel, Tony; Thapa, Bishal; Taylor, Steve

    802.11s is a new IEEE standard for mesh networking. It defines the protocols needed to build mobile ad hoc networks that operate over 802.11a, b, g and n waveforms running on inexpensive, high-performance commercial WiFi stations. We have developed a new capability to add to the 802.11s that uses multiple directional radio links that can operate simultaneously within a single mesh node. This is the basis of our multi-channel multi-radio mesh network used in the DARPA F6 program called F6Net. We have developed an analysis and emulation facility that lets us model the F6Net and evaluate the performance in a real world experimentation setup. This paper presents an “Over-the-Air” experimentation testbed that uses standard Commercial Off-The-Shelf (COTS) 2.4GHz WiFi dongles in an indoor environment, and a shared-code simulation testbed that uses hardware simulated drivers within NS3's channel simulation facility to test the 802.11s network. To the best of our knowledge, this is the first work that provides a comprehensive evaluation platform with a full-fledged COTS hardware/software prototype to evaluate an 802.11s network. Furthermore, we explain the design and development of a multi-radio mesh extension for 802.11s that yields a robust and scalable mesh network suitable for clusters of LEO satellites.

  3. Open-Source Based Testbed for Multioperator 4G/5G Infrastructure Sharing in Virtual Environments

    Directory of Open Access Journals (Sweden)

    Ricardo Marco Alaez

    2017-01-01

    Full Text Available Fourth-Generation (4G) mobile networks are based on Long-Term Evolution (LTE) technologies and are being deployed worldwide, while research on further evolution towards the Fifth Generation (5G) has been recently initiated. 5G will be featured with advanced network infrastructure sharing capabilities among different operators. Therefore, an open-source implementation of 4G/5G networks with this capability is crucial to enable early research in this area. The main contribution of this paper is the design and implementation of such a 4G/5G open-source testbed to investigate multioperator infrastructure sharing capabilities executed in virtual architectures. The proposed design and implementation enable the virtualization and sharing of some of the components of the LTE architecture. A testbed has been implemented and validated with intensive empirical experiments conducted to validate the suitability of virtualizing LTE components in virtual infrastructures (i.e., infrastructures with multitenancy sharing capabilities). The impact of the proposed technologies can lead to significant saving of both capital and operational costs for mobile telecommunication operators.

  4. Attaining Realistic Simulations of Mobile Ad-hoc Networks

    Science.gov (United States)

    2010-06-01

    Lastly, every MANET faces higher security risks, either through malicious or poorly configured nodes. The fact that MANET traffic is dependent on...are being developed and advertised as secure and reliable, but the simulation models are unable to provide an accurate depiction of how the new...use of the Institute of Telematics techniques that alter propagation models within NS-2 and generate the resulting model in LaTeX [20]. These models

  5. Large-scale lattice-Boltzmann simulations over lambda networks

    Science.gov (United States)

    Saksena, R.; Coveney, P. V.; Pinning, R.; Booth, S.

    Amphiphilic molecules are of immense industrial importance, mainly due to their tendency to align at interfaces in a solution of immiscible species, e.g., oil and water, thereby reducing surface tension. Depending on the concentration of amphiphiles in the solution, they may assemble into a variety of morphologies, such as lamellae, micelles, sponge and cubic bicontinuous structures exhibiting non-trivial rheological properties. The main objective of this work is to study the rheological properties of very large, defect-containing gyroidal systems (of up to 1024³ lattice sites) using the lattice-Boltzmann method. Memory requirements for the simulation of such large lattices exceed that available to us on most supercomputers and so we use MPICH-G2/MPIg to investigate geographically distributed domain decomposition simulations across HPCx in the UK and TeraGrid in the US. Use of MPICH-G2/MPIg requires the port-forwarder to work with the grid middleware on HPCx. Data from the simulations is streamed to a high performance visualisation resource at UCL (London) for rendering and visualisation. (Lighting the Blue Touchpaper for UK e-Science - Closing Conference of ESLEA Project, March 26-28 2007, The George Hotel, Edinburgh, UK)
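
    The distributed domain decomposition rests on halo (ghost-layer) exchange between neighbouring subdomains. The sketch below shows that pattern in generic mpi4py for a 1-D slab decomposition; it is not the MPICH-G2/MPIg setup of the record, and the lattice sizes are placeholders.

        # Hedged sketch of 1-D domain decomposition with halo exchange: each MPI rank
        # owns a slab of the lattice and swaps one-cell-thick halo layers with its
        # neighbours before every update step. Run with e.g.: mpirun -n 4 python halo.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        nx_local, ny = 64, 256                             # slab owned by this rank (assumed)
        field = np.full((nx_local + 2, ny), float(rank))   # +2 rows of halo

        up = (rank - 1) % size                             # periodic neighbours
        down = (rank + 1) % size

        # Send my last real row down, receive my upper halo from 'up' (and vice versa).
        comm.Sendrecv(sendbuf=field[-2, :].copy(), dest=down,
                      recvbuf=field[0, :], source=up)
        comm.Sendrecv(sendbuf=field[1, :].copy(), dest=up,
                      recvbuf=field[-1, :], source=down)

        if rank == 0:
            print("halo rows now hold neighbour data:", field[0, 0], field[-1, 0])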

  6. Compensatory plasticity in the action observation network: virtual lesions of STS enhance anticipatory simulation of seen actions

    National Research Council Canada - National Science Library

    Avenanti, Alessio; Annella, Laura; Candidi, Matteo; Urgesi, Cosimo; Aglioti, Salvatore M

    2013-01-01

    .... Such motor facilitation indexes the anticipatory simulation of observed (implied) actions and likely reflects computations occurring in the parietofrontal nodes of a cortical network subserving action perception...

  7. Multiscale methodology for bone remodelling simulation using coupled finite element and neural network computation.

    Science.gov (United States)

    Hambli, Ridha; Katerchi, Houda; Benhamou, Claude-Laurent

    2011-02-01

    The aim of this paper is to develop a multiscale hierarchical hybrid model based on finite element analysis and neural network computation to link mesoscopic scale (trabecular network level) and macroscopic (whole bone level) to simulate the process of bone remodelling. As whole bone simulation, including the 3D reconstruction of trabecular level bone, is time consuming, finite element calculation is only performed at the macroscopic level, whilst trained neural networks are employed as numerical substitutes for the finite element code needed for the mesoscale prediction. The bone mechanical properties are updated at the macroscopic scale depending on the morphological and mechanical adaptation at the mesoscopic scale computed by the trained neural network. The digital image-based modelling technique using μ-CT and voxel finite element analysis is used to capture volume elements representative of 2 mm³ at the mesoscale level of the femoral head. The input data for the artificial neural network are a set of bone material parameters, boundary conditions and the applied stress. The output data are the updated bone properties and some trabecular bone factors. The current approach is the first model, to our knowledge, that incorporates both finite element analysis and neural network computation to rapidly simulate multilevel bone adaptation.

  8. The Framework for Simulation of Bioinspired Security Mechanisms against Network Infrastructure Attacks

    Directory of Open Access Journals (Sweden)

    Andrey Shorov

    2014-01-01

    Full Text Available The paper outlines a bioinspired approach named “network nervous system” and methods of simulation of infrastructure attacks and protection mechanisms based on this approach. The protection mechanisms consist of distributed procedures of information collection and processing, which coordinate the activities of the main devices of a computer network, identify attacks, and determine necessary countermeasures. Attacks and protection mechanisms are specified as structural models using a set-theoretic approach. An environment for simulation of protection mechanisms based on the biological metaphor is considered; the experiments demonstrating the effectiveness of the protection mechanisms are described.

  9. The simulation system for developing and testing protection methods against DDoS-attacks with the ability to connect the real nodes

    Directory of Open Access Journals (Sweden)

    K. Borisenko

    2015-12-01

    Full Text Available Abstract. This paper is devoted to a simulation system for security process modeling in computer networks. The created system makes it significantly easier to construct topologies and scenarios of clients’ behavior for experiments, compared with testbed solutions. In addition, the system allows embedding known or novel architecture-dependent defense methods on any of the network’s nodes and on a real server to improve the accuracy of experiments.

  10. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors

    Directory of Open Access Journals (Sweden)

    Kit Cheung

    2016-01-01

    Full Text Available NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimised performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to approximately 600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
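
    Since NeuroFlow is configured through PyNN, a network description looks like ordinary PyNN code. The sketch below targets the standard NEST software backend purely for illustration (the NeuroFlow backend module name is not given in the record), and the population sizes, rates and weights are arbitrary.

        # Hedged PyNN sketch: a small Izhikevich network driven by Poisson noise.
        # Backend choice (pyNN.nest) and all parameter values are illustrative only.
        import pyNN.nest as sim

        sim.setup(timestep=0.1)                                   # ms

        exc = sim.Population(1000, sim.Izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0))
        noise = sim.Population(1000, sim.SpikeSourcePoisson(rate=20.0))

        sim.Projection(noise, exc,
                       sim.OneToOneConnector(),
                       sim.StaticSynapse(weight=0.5, delay=1.0))
        sim.Projection(exc, exc,
                       sim.FixedProbabilityConnector(p_connect=0.02),
                       sim.StaticSynapse(weight=0.05, delay=1.0))

        exc.record("spikes")
        sim.run(1000.0)                                           # ms
        data = exc.get_data().segments[0]
        print("recorded spike trains:", len(data.spiketrains))
        sim.end()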

  11. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors.

    Science.gov (United States)

    Cheung, Kit; Schultz, Simon R; Luk, Wayne

    2015-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.

  12. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors

    Science.gov (United States)

    Cheung, Kit; Schultz, Simon R.; Luk, Wayne

    2016-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation. PMID:26834542

  13. PAX: A mixed hardware/software simulation platform for spiking neural networks.

    Science.gov (United States)

    Renaud, S; Tomas, J; Lewis, N; Bornat, Y; Daouzli, A; Rudolph, M; Destexhe, A; Saïghi, S

    2010-09-01

    Many hardware-based solutions now exist for the simulation of bio-like neural networks. Less conventional than software-based systems, these types of simulators generally combine digital and analog forms of computation. In this paper we present a mixed hardware-software platform, specifically designed for the simulation of spiking neural networks, using conductance-based models of neurons and synaptic connections with dynamic adaptation rules (Spike-Timing-Dependent Plasticity). The neurons and networks are configurable, and are computed in 'biological real time' by which we mean that the difference between simulated time and simulation time is guaranteed lower than 50 μs. After presenting the issues and context involved in the design and use of hardware-based spiking neural networks, we describe the analog neuromimetic integrated circuits which form the core of the platform. We then explain the organization and computation principles of the modules within the platform, and present experimental results which validate the system. Designed as a tool for computational neuroscience, the platform is exploited in collaborative research projects together with neurobiology and computer science partners. Copyright 2010 Elsevier Ltd. All rights reserved.

  14. A SIMULATION OF THE PENICILLIN G PRODUCTION BIOPROCESS APPLYING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    A.J.G. da Cruz

    1997-12-01

    Full Text Available The production of penicillin G by Penicillium chrysogenum IFO 8644 was simulated employing a feedforward neural network with three layers. The neural network training procedure used an algorithm combining two procedures: random search and backpropagation. The results of this approach were very promising, and it was observed that the neural network was able to accurately describe the nonlinear behavior of the process. Besides, the results showed that this technique can be successfully applied to process control algorithms due to its low processing time and its flexibility in the incorporation of new data.

  15. Customer social network affects marketing strategy: A simulation analysis based on competitive diffusion model

    Science.gov (United States)

    Hou, Rui; Wu, Jiawen; Du, Helen S.

    2017-03-01

    To explain the competition phenomenon and results between QQ and MSN (China) in the Chinese instant messaging software market, this paper developed a new population competition model based on the customer social network. The simulation results show that the firm whose product has a greater network externality effect will gain more market share than its rival when the same marketing strategy is used. The firm with the advantage of time, derived from the initial scale effect, will become more competitive than its rival when facing a group of common penguin customers within a social network, verifying the winner-take-all phenomenon in this case.
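
    A generic competitive-diffusion sketch in the spirit of the model described above: two products draw adopters from one finite pool, and each product's network externality enters as its imitation coefficient. This is a stand-in formulation with invented numbers, not the authors' population competition model.

        # Stand-in competitive diffusion: two products share one adopter pool; the
        # "network externality" of each enters as its imitation coefficient q.
        from scipy.integrate import solve_ivp

        def competitive_bass(t, x, p1, q1, p2, q2, M=1.0):
            x1, x2 = x
            remaining = max(M - x1 - x2, 0.0)        # untapped market
            dx1 = (p1 + q1 * x1) * remaining         # innovation + imitation adoption
            dx2 = (p2 + q2 * x2) * remaining
            return [dx1, dx2]

        # Same marketing push (p), different network externality (q).
        sol = solve_ivp(competitive_bass, (0.0, 40.0), [0.0, 0.0],
                        args=(0.01, 0.8, 0.01, 0.3), max_step=0.1)
        x1, x2 = sol.y[:, -1]
        print(f"final market shares: {x1:.2f} vs {x2:.2f}")   # the higher-q product dominates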

  16. Building a Community of Practice for Researchers: The International Network for Simulation-Based Pediatric Innovation, Research and Education.

    Science.gov (United States)

    Cheng, Adam; Auerbach, Marc; Calhoun, Aaron; Mackinnon, Ralph; Chang, Todd P; Nadkarni, Vinay; Hunt, Elizabeth A; Duval-Arnould, Jordan; Peiris, Nicola; Kessler, David

    2017-11-08

    The scope and breadth of simulation-based research is growing rapidly; however, few mechanisms exist for conducting multicenter, collaborative research. Failure to foster collaborative research efforts is a critical gap that lies in the path of advancing healthcare simulation. The 2017 Research Summit hosted by the Society for Simulation in Healthcare highlighted how simulation-based research networks can produce studies that positively impact the delivery of healthcare. In 2011, the International Network for Simulation-based Pediatric Innovation, Research and Education (INSPIRE) was formed to facilitate multicenter, collaborative simulation-based research with the aim of developing a community of practice for simulation researchers. Since its formation, the network has successfully completed and published numerous collaborative research projects. In this article, we describe INSPIRE's history, structure, and internal processes with the goal of highlighting the community of practice model for other groups seeking to form a simulation-based research network.

  17. A simulation model for aligning smart home networks and deploying smart objects

    DEFF Research Database (Denmark)

    Lynggaard, Per

    Smart homes use sensor-based networks to capture activities and offer learned services to the user. These smart home networks are challenging because they mainly use wireless communication at frequencies that are shared with other services and equipment. One of the major challenges... is the interference produced by WiFi access points in smart home networks, which is expensive to overcome in terms of battery energy. Currently, different methods exist to handle this. However, they use complex mechanisms such as sharing frequencies, sharing time slots, and spatial reuse of frequencies. This paper... introduces a unique concept which saves battery energy and lowers the interference level by simulating the network alignment, assigning the necessary amount of transmit power to each individual network node and, finally, deploying the smart objects. The needed transmit powers are calculated by the presented...

  18. Statistics of interacting networks with extreme preferred degrees: Simulation results and theoretical approaches

    Science.gov (United States)

    Liu, Wenjia; Schmittmann, Beate; Zia, R. K. P.

    2012-02-01

    Network studies have played a central role in understanding many systems in nature - e.g., physical, biological, and social. So far, much of the focus has been on the statistics of networks in isolation. Yet, many networks in the world are coupled to each other. Recently, we considered this issue, in the context of two interacting social networks. In particular, we studied networks with two different preferred degrees, modeling, say, introverts vs. extroverts, with a variety of "rules for engagement." As a first step towards an analytically accessible theory, we restrict our attention to an "extreme scenario": The introverts prefer zero contacts while the extroverts like to befriend everyone in the society. In this "maximally frustrated" system, the degree distributions, as well as the statistics of cross-links (between the two groups), can depend sensitively on how a node (individual) creates/breaks its connections. The simulation results can be reasonably well understood in terms of an approximate theory.
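
    The "extreme scenario" can be sketched as a simple Monte Carlo in Python: introverts (preferred degree zero) always cut a random link, extroverts (preferred degree infinity) always add one, and the cross-link count is measured afterwards. Update order, group sizes and run length are guesses for illustration, not the authors' exact rules of engagement.

        # Monte Carlo sketch of the maximally frustrated two-group model; all
        # implementation details are assumptions made for illustration.
        import random

        random.seed(0)
        N_I, N_E = 50, 50
        nodes = list(range(N_I + N_E))
        adj = {v: set() for v in nodes}

        def is_extrovert(v):
            return v >= N_I

        def update(v):
            if is_extrovert(v):
                candidates = [u for u in nodes if u != v and u not in adj[v]]
                if candidates:                       # befriend someone new
                    u = random.choice(candidates)
                    adj[v].add(u); adj[u].add(v)
            elif adj[v]:                             # introvert: drop a random contact
                u = random.choice(tuple(adj[v]))
                adj[v].discard(u); adj[u].discard(v)

        for _ in range(100000):
            update(random.choice(nodes))

        cross = sum(1 for v in range(N_I) for u in adj[v] if is_extrovert(u))
        deg_I = sum(len(adj[v]) for v in range(N_I)) / N_I
        deg_E = sum(len(adj[v]) for v in range(N_I, N_I + N_E)) / N_E
        print(f"cross-links: {cross}, mean degree introverts {deg_I:.1f}, extroverts {deg_E:.1f}")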

  19. Agent-based simulations of emotion spreading in online social networks

    CERN Document Server

    Šuvakov, Milovan; Schweitzer, Frank; Tadić, Bosiljka

    2012-01-01

    Quantitative analysis of empirical data from online social networks reveals group dynamics in which emotions are involved (Šuvakov et al.). Full understanding of the underlying mechanisms, however, remains a challenging task. Using agent-based computer simulations, in this paper we study the dynamics of emotional communications in online social networks. The rules that guide how the agents interact are motivated, and the realistic network structure and some important parameters are inferred from the empirical dataset of the MySpace social network. An agent's emotional state is characterized by two variables representing psychological arousal (reactivity to stimuli) and valence (attractiveness or aversiveness), by which common emotions can be defined. An agent's action is triggered by increased arousal. High-resolution dynamics is implemented where each message carrying an agent's emotion along the network link is identified and its effect on the recipient agent is considered as continuously aging in time. Our res...

  20. Method of construction of rational corporate network using the simulation model

    Directory of Open Access Journals (Sweden)

    V.N. Pakhomova

    2013-06-01

    Full Text Available Purpose. To search for new options for the transition from Ethernet technology. Methodology. Physical structuring of the Fast Ethernet network based on hubs and logical structuring of the Fast Ethernet network using switches. Organization of VLANs based on port grouping and in accordance with the IEEE 802.1Q standard. Findings. Options for improving the Ethernet network are proposed, based on the Fast Ethernet and VLAN technologies, using simulation models in the NetCracker and Cisco Packet Tracer packages respectively. Originality. A technique for designing a local area network using VLAN technology is proposed. Practical value. Each of the options for improving the "Dniprozaliznychproekt" network has its advantages. The transition from Ethernet to Fast Ethernet technology is simple and economical, requiring only one switch, whereas the VLAN organization requires at least two. VLAN technology, however, has the following advantages: reducing the load on the network, isolating broadcast traffic, changing the logical network structure without changing its physical structure, and improving network security. The transition from Ethernet to VLAN technology allows the physical topology to be separated from the logical one, and the IEEE 802.1Q standard frame format simplifies the implementation of virtual networks in enterprises.

  1. Pareto Optimal Solutions for Network Defense Strategy Selection Simulator in Multi-Objective Reinforcement Learning

    Directory of Open Access Journals (Sweden)

    Yang Sun

    2018-01-01

    Full Text Available Using Pareto optimization in Multi-Objective Reinforcement Learning (MORL) leads to better learning results for network defense games. This is particularly useful for network security agents, who must often balance several goals when choosing what action to take in defense of a network. If the defender knows his preferred reward distribution, the advantages of Pareto optimization can be retained by using a scalarization algorithm prior to the implementation of the MORL. In this paper, we simulate a network defense scenario by creating a multi-objective zero-sum game and using Pareto optimization and MORL to determine optimal solutions and compare those solutions to different scalarization approaches. We build a Pareto Defense Strategy Selection Simulator (PDSSS) system for assisting network administrators on decision-making, specifically, on defense strategy selection, and the experiment results show that the Satisficing Trade-Off Method (STOM) scalarization approach performs better than linear scalarization or GUESS method. The results of this paper can aid network security agents attempting to find an optimal defense policy for network security games.
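
    The scalarization step that turns a vector-valued defense reward into a single number can be sketched as below, comparing linear weighting with a Chebyshev-style achievement function; the exact STOM formulation used in the paper is not reproduced here, and the objectives, weights and reward values are invented.

        # Sketch of two scalarizations of a vector-valued defense reward; values,
        # weights and objective names are invented for illustration.
        import numpy as np

        def linear_scalarize(rewards, weights):
            return float(np.dot(weights, rewards))

        def chebyshev_scalarize(rewards, weights, reference):
            # Weighted distance to an aspiration point; the worst objective dominates.
            return -float(np.max(weights * (reference - rewards)))

        # Three objectives: service availability, data confidentiality, defense cost (negated).
        candidate_actions = {
            "block_subnet": np.array([0.6, 0.9, -0.4]),
            "rate_limit":   np.array([0.8, 0.6, -0.2]),
            "do_nothing":   np.array([0.9, 0.2,  0.0]),
        }
        w = np.array([0.4, 0.4, 0.2])
        ref = np.array([1.0, 1.0, 0.0])          # aspiration point

        for name, r in candidate_actions.items():
            print(name, round(linear_scalarize(r, w), 3), round(chebyshev_scalarize(r, w, ref), 3))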

  2. Versatile Networks of Simulated Spiking Neurons Displaying Winner-Take-All Behavior

    Directory of Open Access Journals (Sweden)

    Yanqing Chen

    2013-03-01

    Full Text Available We describe simulations of large-scale networks of excitatory and inhibitory spiking neurons that can generate dynamically stable winner-take-all (WTA) behavior. The network connectivity is a variant of center-surround architecture that we call center-annular-surround (CAS). In this architecture each neuron is excited by nearby neighbors and inhibited by more distant neighbors in an annular-surround region. The neural units of these networks simulate conductance-based spiking neurons that interact via mechanisms susceptible to both short-term synaptic plasticity and STDP. We show that such CAS networks display robust WTA behavior unlike the center-surround networks and other control architectures that we have studied. We find that a large-scale network of spiking neurons with separate populations of excitatory and inhibitory neurons can give rise to smooth maps of sensory input. In addition, we show that a humanoid Brain-Based-Device (BBD) under the control of a spiking WTA neural network can learn to reach to target positions in its visual field, thus demonstrating the acquisition of sensorimotor coordination.

  3. High power fuel cell simulator based on artificial neural network

    Energy Technology Data Exchange (ETDEWEB)

    Chavez-Ramirez, Abraham U.; Munoz-Guerrero, Roberto [Departamento de Ingenieria Electrica, CINVESTAV-IPN. Av. Instituto Politecnico Nacional No. 2508, D.F. CP 07360 (Mexico); Duron-Torres, S.M. [Unidad Academica de Ciencias Quimicas, Universidad Autonoma de Zacatecas, Campus Siglo XXI, Edif. 6 (Mexico); Ferraro, M.; Brunaccini, G.; Sergi, F.; Antonucci, V. [CNR-ITAE, Via Salita S. Lucia sopra Contesse 5-98126 Messina (Italy); Arriaga, L.G. [Centro de Investigacion y Desarrollo Tecnologico en Electroquimica S.C., Parque Tecnologico Queretaro, Sanfandila, Pedro Escobedo, Queretaro (Mexico)

    2010-11-15

    Artificial Neural Networks (ANNs) have become a powerful modeling tool for predicting the performance of complex systems with no well-known variable relationships, owing to their inherent properties. A commercial Polymeric Electrolyte Membrane fuel cell (PEMFC) stack (5 kW) was modeled successfully using this tool, extending the number of tests across the 7-input, 2-output space in the shortest time while acquiring only a small amount of experimental data. Some parameters could not be measured easily on the real system in experimental tests; however, by receiving data from the PEMFC, the ANN could be trained to learn the internal relationships that govern this system and predict its behavior without any physical equations. Confident accuracy was achieved in this work, making it possible to apply this tool to complex systems and applications. (author)
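
    A hedged sketch with the same shape as the record describes, seven operating inputs mapped to two outputs by a small feed-forward network, here using scikit-learn rather than whatever toolkit the authors used. The synthetic data generator stands in for the experimental stack measurements, which are of course not available.

        # Surrogate-model sketch: 7 inputs -> 2 outputs with a small MLP.
        # The data generator below is a placeholder for the real stack measurements.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(7)
        X = rng.uniform(0.0, 1.0, size=(500, 7))          # e.g. current, flows, temperatures...
        # Stand-in targets, e.g. stack voltage and outlet temperature.
        y = np.column_stack([
            60.0 - 25.0 * X[:, 0] + 5.0 * X[:, 1] * X[:, 2] + rng.normal(0, 0.5, 500),
            40.0 + 20.0 * X[:, 0] - 3.0 * X[:, 3] + rng.normal(0, 0.5, 500),
        ])

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0))
        model.fit(X_tr, y_tr)
        print("R^2 on held-out operating points:", round(model.score(X_te, y_te), 3))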

  4. A knowledge based software engineering environment testbed

    Science.gov (United States)

    Gill, C.; Reedy, A.; Baker, L.

    1985-01-01

    The Carnegie Group Incorporated and Boeing Computer Services Company are developing a testbed which will provide a framework for integrating conventional software engineering tools with Artificial Intelligence (AI) tools to promote automation and productivity. The emphasis is on the transfer of AI technology to the software development process. Experiments relate to AI issues such as scaling up, inference, and knowledge representation. In its first year, the project has created a model of software development by representing software activities; developed a module representation formalism to specify the behavior and structure of software objects; integrated the model with the formalism to identify shared representation and inheritance mechanisms; demonstrated object programming by writing procedures and applying them to software objects; used data-directed and goal-directed reasoning to, respectively, infer the cause of bugs and evaluate the appropriateness of a configuration; and demonstrated knowledge-based graphics. Future plans include introduction of knowledge-based systems for rapid prototyping or rescheduling; natural language interfaces; blackboard architecture; and distributed processing.

  5. Optical testbed for the LISA phasemeter

    Science.gov (United States)

    Schwarze, T. S.; Fernández Barranco, G.; Penkert, D.; Gerberding, O.; Heinzel, G.; Danzmann, K.

    2016-05-01

    The planned spaceborne gravitational wave detector LISA will allow the detection of gravitational waves at frequencies between 0.1 mHz and 1 Hz. A breadboard model for the metrology system, a.k.a. the phasemeter, was developed in the scope of an ESA technology development project by a collaboration between the Albert Einstein Institute, the Technical University of Denmark and the Danish industry partner Axcon Aps. In particular, it provides the electronic readout of the main interferometer phases besides auxiliary functions. These include clock noise transfer, ADC pilot tone correction, inter-satellite ranging and data transfer. Besides LISA, the phasemeter can also be applied in future satellite geodesy missions. Here we show the planning and advances in the implementation of an optical testbed for the full metrology chain. It is based on an ultra-stable hexagonal optical bench. This bench allows the generation of three unequal heterodyne beatnotes with a zero phase combination, thus providing the possibility to probe the phase readout for non-linearities in an optical three-signal test. Additionally, the utilization of three independent phasemeters will allow the testing of the auxiliary functions. Once working, components can individually be replaced with flight-qualified hardware in this setup.

  6. Further progress in watermark evaluation testbed (WET)

    Science.gov (United States)

    Kim, Hyung C.; Lin, Eugene T.; Guitart, Oriol; Delp, Edward J., III

    2005-03-01

    While Digital Watermarking has received much attention in recent years, it is still a relatively young technology. There are few accepted tools/metrics that can be used to evaluate the suitability of a watermarking technique for a specific application. This lack of a universally adopted set of metrics/methods has motivated us to develop a web-based digital watermark evaluation system called the Watermark Evaluation Testbed or WET. Several improvements have been made over the first version of WET. We implemented a batch mode with a queue that allows for user-submitted jobs. In addition to StirMark 3.1 as an attack module, we added attack modules based on StirMark 4.0. As a new image fidelity measure, we evaluate conditional entropy for different watermarking algorithms and different attacks. Also, we show the results of curve fitting the Receiver Operating Characteristic (ROC) analysis data using Parzen window density estimation. The curve fits the data closely while having only two parameters to estimate.

  7. Enhancing the NS-2 Network Simulator for Near Real-Time Control Feedback and Distributed Simulation

    Science.gov (United States)

    2009-03-21

    visualization of the simulation as it executes. ... III. Command Feedback Design and Implementation: User-simulator interaction is the key driver of this...TCL script. As the TCL script is parsed, a SocketListener object is created within C++. The SocketListener object creates a new Boost::Thread [4] and

  8. Some issues related to simulation of the tracking and communications computer network

    Science.gov (United States)

    Lacovara, Robert C.

    1989-01-01

    The Communications Performance and Integration branch of the Tracking and Communications Division has an ongoing involvement in the simulation of its flight hardware for Space Station Freedom. Specifically, the communication process between central processor(s) and orbital replaceable units (ORU's) is simulated with varying degrees of fidelity. The results of investigations into three aspects of this simulation effort are given. The most general area involves the use of computer assisted software engineering (CASE) tools for this particular simulation. The second area of interest is simulation methods for systems of mixed hardware and software. The final area investigated is the application of simulation methods to one of the proposed computer network protocols for space station, specifically IEEE 802.4.

  9. The Virtual Brain: a simulator of primate brain network dynamics

    Directory of Open Access Journals (Sweden)

    Paula Sanz Leon

    2013-06-01

    Full Text Available We present TheVirtualBrain (TVB), a neuroinformatics platform for full brain network simulations using biologically realistic connectivity. This simulation environment enables the model-based inference of neurophysiological mechanisms across different brain scales that underlie the generation of macroscopic neuroimaging signals including functional MRI (fMRI), EEG and MEG. Researchers from different backgrounds can benefit from an integrative software platform including a supporting framework for data management (generation, organization, storage, integration and sharing) and a simulation core written in Python. TVB allows the reproduction and evaluation of personalized configurations of the brain by using individual subject data. This personalization facilitates an exploration of the consequences of pathological changes in the system, permitting to investigate potential ways to counteract such unfavorable processes. The architecture of TVB supports interaction with MATLAB packages, for example, the well known Brain Connectivity Toolbox. TVB can be used in a client-server configuration, such that it can be remotely accessed through the Internet thanks to its web-based HTML5, JS and WebGL graphical user interface. TVB is also accessible as a standalone cross-platform Python library and application, and users can interact with the scientific core through the scripting interface IDLE, enabling easy modeling, development and debugging of the scientific kernel. This second interface makes TVB extensible by combining it with other libraries and modules developed by the Python scientific community. In this article, we describe the theoretical background and foundations that led to the development of TVB, the architecture and features of its major software components as well as potential neuroscience applications.

  10. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    Science.gov (United States)

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling. PMID:28701946

  11. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code.

    Science.gov (United States)

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.

  12. ezBioNet: A modeling and simulation system for analyzing biological reaction networks

    Science.gov (United States)

    Yu, Seok Jong; Tung, Thai Quang; Park, Junho; Lim, Jongtae; Yoo, Jaesoo

    2012-10-01

    To achieve robustness against living environments, a living organism is composed of complicated regulatory mechanisms ranging from gene regulation to signal transduction. If such life phenomena are to be understood, an integrated analysis tool with modeling and simulation functions for biological reactions, as well as new experimental methods for measuring biological phenomena, is fundamentally required. We have designed and implemented modeling and simulation software (ezBioNet) for analyzing biological reaction networks. The software can simultaneously perform an integrated modeling of various responses occurring in cells, ranging from gene expression to signaling processes. To support massive analysis of biological networks, we have constructed a server-side simulation system (VCellSim) that can perform ordinary differential equation (ODE) analysis, sensitivity analysis, and parameter estimation. ezBioNet integrates the BioModel database by connecting to the European Bioinformatics Institute (EBI) servers through Web services APIs and supports the handling of systems biology markup language (SBML) files. In addition, we employed Eclipse RCP (rich client platform), which is a powerful modularity framework allowing various functional expansions. ezBioNet is intended to be an easy-to-use modeling tool, as well as a simulation system, to understand the control mechanism by monitoring the change of each component in a biological network. A researcher may perform the kinetic modeling and execute the simulation. The simulation result can be managed and visualized on ezBioNet, which is freely available at http://ezbionet.cbnu.ac.kr.
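
    The core ODE analysis such a tool performs can be illustrated with SciPy on a toy mass-action network (enzyme-catalysed conversion of a substrate into a product), followed by a crude finite-difference sensitivity with respect to one rate constant. The network and rate constants are invented, not an ezBioNet model.

        # Toy mass-action network: E + S <-> ES -> E + P, integrated with SciPy.
        # Species, rates and the sensitivity probe are illustrative assumptions.
        from scipy.integrate import solve_ivp

        def reaction_network(t, y, k1, k1r, k2):
            E, S, ES, P = y
            v_bind = k1 * E * S - k1r * ES          # E + S <-> ES
            v_cat = k2 * ES                         # ES -> E + P
            return [-v_bind + v_cat, -v_bind, v_bind - v_cat, v_cat]

        y0 = [1.0, 10.0, 0.0, 0.0]                  # initial concentrations (a.u.)
        sol = solve_ivp(reaction_network, (0.0, 10.0), y0, args=(1.0, 0.5, 0.4), max_step=0.1)
        print("product at t=10:", round(sol.y[3, -1], 3))

        # Crude local sensitivity: perturb k2 by 1% and compare the product yield.
        sol_p = solve_ivp(reaction_network, (0.0, 10.0), y0, args=(1.0, 0.5, 0.4 * 1.01), max_step=0.1)
        print("relative change in P per 1% change in k2:",
              round((sol_p.y[3, -1] - sol.y[3, -1]) / sol.y[3, -1], 4))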

  13. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    Directory of Open Access Journals (Sweden)

    Susanne Kunkel

    2017-06-01

    Full Text Available NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.

  14. Inference, simulation, modeling, and analysis of complex networks, with special emphasis on complex networks in systems biology

    Science.gov (United States)

    Christensen, Claire Petra

    Across diverse fields ranging from physics to biology, sociology, and economics, the technological advances of the past decade have engendered an unprecedented explosion of data on highly complex systems with thousands, if not millions of interacting components. These systems exist at many scales of size and complexity, and it is becoming ever-more apparent that they are, in fact, universal, arising in every field of study. Moreover, they share fundamental properties---chief among these, that the individual interactions of their constituent parts may be well-understood, but the characteristic behaviour produced by the confluence of these interactions---by these complex networks---is unpredictable; in a nutshell, the whole is more than the sum of its parts. There is, perhaps, no better illustration of this concept than the discoveries being made regarding complex networks in the biological sciences. In particular, though the sequencing of the human genome in 2003 was a remarkable feat, scientists understand that the "cellular-level blueprints" for the human being are cellular-level parts lists, but they say nothing (explicitly) about cellular-level processes. The challenge of modern molecular biology is to understand these processes in terms of the networks of parts---in terms of the interactions among proteins, enzymes, genes, and metabolites---as it is these processes that ultimately differentiate animate from inanimate, giving rise to life! It is the goal of systems biology---an umbrella field encapsulating everything from molecular biology to epidemiology in social systems---to understand processes in terms of fundamental networks of core biological parts, be they proteins or people. By virtue of the fact that there are literally countless complex systems, not to mention tools and techniques used to infer, simulate, analyze, and model these systems, it is impossible to give a truly comprehensive account of the history and study of complex systems. The author

  15. Advanced turboprop testbed systems study. Volume 1: Testbed program objectives and priorities, drive system and aircraft design studies, evaluation and recommendations and wind tunnel test plans

    Science.gov (United States)

    Bradley, E. S.; Little, B. H.; Warnock, W.; Jenness, C. M.; Wilson, J. M.; Powell, C. W.; Shoaf, L.

    1982-01-01

    The establishment of propfan technology readiness was determined and candidate drive systems for propfan application were identified. Candidate testbed aircraft were investigated for suitability, and four aircraft were selected as possible propfan testbed vehicles. An evaluation of the four candidates was performed, and the Boeing KC-135A and the Gulfstream American Gulfstream II were recommended as the most suitable aircraft for test application. Conceptual designs of the two recommended aircraft were produced, and cost and schedule data for the entire testbed program were generated. The total program cost was estimated, and a wind tunnel program cost and schedule were generated in support of the testbed program.

  16. Design and Study of Cognitive Network Physical Layer Simulation Platform

    Directory of Open Access Journals (Sweden)

    Yongli An

    2014-01-01

    Full Text Available Cognitive radio technology has received wide attention for its ability to sense and use idle frequencies. IEEE 802.22 WRAN, the first standard based on cognitive radio technology, features spectrum sensing and wireless data transmission. As far as wireless transmission is concerned, the availability and implementation of a mature and robust physical layer algorithm are essential to high performance. For the physical layer of WRAN using OFDMA technology, this paper proposes a synchronization algorithm and at the same time provides a public platform for the improvement and verification of that new algorithm. The simulation results show that the performance of the platform is very close to the theoretical value.

  17. Interfacing Space Communications and Navigation Network Simulation with Distributed System Integration Laboratories (DSIL)

    Science.gov (United States)

    Jennings, Esther H.; Nguyen, Sam P.; Wang, Shin-Ywan; Woo, Simon S.

    2008-01-01

    NASA's planned Lunar missions will involve multiple NASA centers where each participating center has a specific role and specialization. In this vision, the Constellation program (CxP)'s Distributed System Integration Laboratories (DSIL) architecture consists of multiple System Integration Labs (SILs), with simulators, emulators, testlabs and control centers interacting with each other over a broadband network to perform test and verification for mission scenarios. To support the end-to-end simulation and emulation effort of NASA's exploration initiatives, different NASA centers are interconnected to participate in distributed simulations. Currently, DSIL has interconnections among the following NASA centers: Johnson Space Center (JSC), Kennedy Space Center (KSC), Marshall Space Flight Center (MSFC) and Jet Propulsion Laboratory (JPL). Through interconnections and interactions among different NASA centers, critical resources and data can be shared, while independent simulations can be performed simultaneously at different NASA locations, to effectively utilize the simulation and emulation capabilities at each center. Furthermore, the development of DSIL can maximally leverage the existing project simulation and testing plans. In this work, we describe the specific role and development activities at JPL for the Space Communications and Navigation Network (SCaN) simulator using the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) tool to simulate communications effects among mission assets. Using MACHETE, different space network configurations among spacecraft and ground systems of various parameter sets can be simulated. Data that is necessary for tracking, navigation, and guidance of spacecraft such as the Crew Exploration Vehicle (CEV), Crew Launch Vehicle (CLV), and Lunar Relay Satellite (LRS) and orbit calculation data are disseminated to different NASA centers and updated periodically using the High Level Architecture (HLA). In

  18. Modeling a Large Data Acquisition Network in a Simulation Framework

    CERN Document Server

    Colombo, Tommaso; The ATLAS collaboration

    2015-01-01

    The ATLAS detector at CERN records particle collision “events” delivered by the Large Hadron Collider. Its data-acquisition system is a distributed software system that identifies, selects, and stores interesting events in near real-time, with an aggregate throughput of several tens of GB/s. It is executed on a farm of roughly 2000 commodity worker nodes communicating via TCP/IP on an Ethernet network. Event data fragments are received from the many detector readout channels and are buffered, collected together, analyzed and either stored permanently or discarded. This system, and data-acquisition systems in general, are sensitive to the latency of the data transfer from the readout buffers to the worker nodes. Challenges affecting this transfer include the many-to-one communication pattern and the inherently bursty nature of the traffic. In this paper we introduce the main performance issues brought about by this workload, focusing in particular on the so-called TCP incast pathol...

  19. Simulation and Noise Analysis of Multimedia Transmission in Optical CDMA Computer Networks

    Directory of Open Access Journals (Sweden)

    Nasaruddin Nasaruddin

    2013-09-01

    Full Text Available This paper simulates and analyzes noise of multimedia transmission in a flexible optical code division multiple access (OCDMA) computer network with different quality of service (QoS) requirements. To achieve multimedia transmission in OCDMA, we have proposed strict variable-weight optical orthogonal codes (VW-OOCs), which can guarantee the smallest correlation value of one by the optimal design. In developing multimedia transmission for computer networks, a simulation tool is essential in analyzing the effectiveness of various transmissions of services. In this paper, implementation models are proposed to analyze the multimedia transmission in the representative of OCDMA computer networks by using MATLAB Simulink tools. Simulation results of the models are discussed including spectrum outputs of transmitted signals, superimposed signals, received signals, and eye diagrams with and without noise. Using the proposed models, a multimedia OCDMA computer network using the strict VW-OOC is practically evaluated. Furthermore, system performance is also evaluated by considering avalanche photodiode (APD) noise and thermal noise. The results show that the system performance depends on code weight, received laser power, APD noise, and thermal noise, which should be considered as important parameters to design and implement multimedia transmission in OCDMA computer networks.

  20. An enhanced simulated annealing routing algorithm for semi-diagonal torus network

    Science.gov (United States)

    Adzhar, Noraziah; Salleh, Shaharuddin

    2017-09-01

    Multiprocessing is another great technology that helps in advancing human civilization, driven by high demands for solving complex problems. A multiprocessing system can have many replicated processor-memory pairs (henceforth regarded as nets), also called processing nodes. Each of these nodes is connected to the others through interconnection networks and passes messages using a standard message passing mechanism. In this paper, we present a routing algorithm based on an enhanced simulated annealing technique to provide the connection between nodes in a semi-diagonal torus (SD-Torus) network. This network is both symmetric and regular, which makes it very beneficial in the implementation process. The main objective is to maximize the number of established connections between nodes in this SD-Torus network. In order to achieve this objective, each node must be connected by the shortest way possible. We start our algorithm by designing a shortest path algorithm based on Dijkstra’s method. While this algorithm is guaranteed to find the shortest path for each single net, if it exists, each routed net forms an obstacle for later paths. This increases the complexity of routing later nets and makes routing longer than optimal, or sometimes impossible to complete. The solution is further refined by re-routing all nets in different orders using the simulated annealing method. Through a simulation program, our proposed algorithm succeeded in performing complete routing of up to 81 nodes with 40 nets in a 9×9 SD-Torus network.
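
    The first stage of the algorithm, shortest-path routing of a single net, can be sketched with a standard heap-based Dijkstra search; already-routed nets would then be passed in as blocked nodes, and the simulated annealing stage would re-run this routine with the nets taken in different orders. The grid graph below is a stand-in for the SD-Torus topology.

        # Heap-based Dijkstra sketch for routing one net while avoiding blocked nodes.
        # The 4x4 grid graph is a placeholder, not the SD-Torus topology itself.
        import heapq

        def dijkstra(adj, src, dst, blocked=frozenset()):
            """adj: {node: [(neighbour, cost), ...]}. Returns (cost, path) or None."""
            dist, prev = {src: 0.0}, {}
            pq = [(0.0, src)]
            while pq:
                d, v = heapq.heappop(pq)
                if v == dst:
                    path = [dst]
                    while path[-1] != src:
                        path.append(prev[path[-1]])
                    return d, path[::-1]
                if d > dist.get(v, float("inf")):
                    continue
                for u, w in adj[v]:
                    if u in blocked:
                        continue
                    nd = d + w
                    if nd < dist.get(u, float("inf")):
                        dist[u], prev[u] = nd, v
                        heapq.heappush(pq, (nd, u))
            return None

        n = 4
        adj = {(i, j): [] for i in range(n) for j in range(n)}
        for i in range(n):
            for j in range(n):
                for di, dj in ((1, 0), (0, 1)):
                    if i + di < n and j + dj < n:
                        a, b = (i, j), (i + di, j + dj)
                        adj[a].append((b, 1.0))
                        adj[b].append((a, 1.0))

        # Nodes used by previously routed nets are treated as obstacles.
        print(dijkstra(adj, (0, 0), (3, 3), blocked={(1, 1), (2, 2)}))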

  1. Unified-theory-of-reinforcement neural networks do not simulate the blocking effect.

    Science.gov (United States)

    Calvin, Nicholas T; J McDowell, J

    2015-11-01

    For the last 20 years the unified theory of reinforcement (Donahoe et al., 1993) has been used to develop computer simulations to evaluate its plausibility as an account for behavior. The unified theory of reinforcement states that operant and respondent learning occurs via the same neural mechanisms. As part of a larger project to evaluate the operant behavior predicted by the theory, this project was the first replication of neural network models based on the unified theory of reinforcement. In the process of replicating these neural network models it became apparent that a previously published finding, namely, that the networks simulate the blocking phenomenon (Donahoe et al., 1993), was a misinterpretation of the data. We show that the apparent blocking produced by these networks is an artifact of the inability of these networks to generate the same conditioned response to multiple stimuli. The piecemeal approach to evaluate the unified theory of reinforcement via simulation is critiqued and alternatives are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Simulation and Noise Analysis of Multimedia Transmission in Optical CDMA Computer Networks

    Directory of Open Access Journals (Sweden)

    Nasaruddin

    2009-11-01

    Full Text Available This paper simulates and analyzes the noise of multimedia transmission in a flexible optical code division multiple access (OCDMA) computer network with different quality of service (QoS) requirements. To achieve multimedia transmission in OCDMA, we have proposed strict variable-weight optical orthogonal codes (VW-OOCs), which can guarantee the smallest correlation value of one by optimal design. In developing multimedia transmission for computer networks, a simulation tool is essential for analyzing the effectiveness of the various transmitted services. In this paper, implementation models are proposed to analyze multimedia transmission in representative OCDMA computer networks using MATLAB Simulink tools. Simulation results of the models are discussed, including output spectra of transmitted signals, superimposed signals, and received signals, as well as eye diagrams with and without noise. Using the proposed models, a multimedia OCDMA computer network using the strict VW-OOCs is practically evaluated. Furthermore, system performance is also evaluated by considering avalanche photodiode (APD) noise and thermal noise. The results show that system performance depends on the code weight, received laser power, APD noise, and thermal noise, which should be considered as important parameters when designing and implementing multimedia transmission in OCDMA computer networks.

  3. Methodology for Simulation and Analysis of Complex Adaptive Supply Network Structure and Dynamics Using Information Theory

    Directory of Open Access Journals (Sweden)

    Joshua Rodewald

    2016-10-01

    Full Text Available Supply networks existing today in many industries can behave as complex adaptive systems, making them more difficult to analyze and assess. Being able to fully understand both the complex static and dynamic structures of a complex adaptive supply network (CASN) is key to making more informed management decisions and prioritizing resources and production throughout the network. Previous efforts to model and analyze CASNs have been impeded by the complex, dynamic nature of the systems. However, drawing from other complex adaptive systems sciences, information theory provides a model-free methodology that removes many of those barriers, especially concerning complex network structure and dynamics. With minimal information about the network nodes, transfer entropy can be used to reverse engineer the network structure, while local transfer entropy can be used to analyze the network structure's dynamics. Both simulated and real-world networks were analyzed using this methodology. Applying the methodology to CASNs allows the practitioner to capitalize on observations from the highly multidisciplinary field of information theory, which provides insights into a CASN's self-organization, emergence, stability/instability, and distributed computation. This not only provides managers with a more thorough understanding of a system's structure and dynamics for management purposes, but also opens up research opportunities into eventual strategies to monitor and manage emergence and adaptation within the environment.
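
    As a concrete illustration of the core quantity, the following Python sketch gives a plug-in (histogram-based) estimate of transfer entropy between two time series with a history length of one; it is a minimal teaching example under stated assumptions (binning, history length), not the authors' estimator.

    import numpy as np
    from collections import Counter

    def transfer_entropy(source, target, bins=4):
        """Plug-in estimate of transfer entropy T(source -> target), in bits,
        with history length 1 on binned time series."""
        edges_x = np.histogram_bin_edges(source, bins)
        edges_y = np.histogram_bin_edges(target, bins)
        x = np.digitize(source, edges_x[1:-1])       # discrete symbols 0..bins-1
        y = np.digitize(target, edges_y[1:-1])
        yn, yp, xp = y[1:], y[:-1], x[:-1]           # y_{t+1}, y_t, x_t
        n = len(yn)
        p_nyx = Counter(zip(yn, yp, xp))             # joint counts of (y_{t+1}, y_t, x_t)
        p_yx = Counter(zip(yp, xp))                  # (y_t, x_t)
        p_ny = Counter(zip(yn, yp))                  # (y_{t+1}, y_t)
        p_y = Counter(yp)                            # (y_t)
        te = 0.0
        for (a, b, c), cnt in p_nyx.items():
            p1 = cnt / n
            te += p1 * np.log2(p1 * (p_y[b] / n) / ((p_yx[(b, c)] / n) * (p_ny[(a, b)] / n)))
        return te

    # usage sketch: y is a noisy, delayed copy of x, so T(x -> y) should exceed T(y -> x)
    rng = np.random.default_rng(0)
    x = rng.standard_normal(5000)
    y = 0.8 * np.roll(x, 1) + 0.2 * rng.standard_normal(5000)
    print(transfer_entropy(x, y), transfer_entropy(y, x))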

  4. Lattice Boltzmann simulation of fluid flow in fracture networks with rough, self-affine surfaces.

    Science.gov (United States)

    Madadi, Mahyar; Sahimi, Muhammad

    2003-02-01

    Using the lattice Boltzmann method, we study fluid flow in a two-dimensional (2D) model of fracture network of rock. Each fracture in a square network is represented by a 2D channel with rough, self-affine internal surfaces. Various parameters of the model, such as the connectivity and the apertures of the fractures, the roughness profile of their surface, as well as the Reynolds number for flow of the fluid, are systematically varied in order to assess their effect on the effective permeability of the fracture network. The distribution of the fractures' apertures is approximated well by a log-normal distribution, which is consistent with experimental data. Due to the roughness of the fractures' surfaces, and the finite size of the networks that can be used in the simulations, the fracture network is anisotropic. The anisotropy increases as the connectivity of the network decreases and approaches the percolation threshold. The effective permeability K of the network follows the power law K ∝ ⟨δ⟩^β, where ⟨δ⟩ is the average aperture of the fractures in the network and the exponent β may depend on the roughness exponent. A crossover from the linear to the nonlinear flow regime is obtained at a Reynolds number Re ≈ O(1), but the precise numerical value of the crossover Re depends on the roughness of the fractures' surfaces.

  5. Simulation of Code Spectrum and Code Flow of Cultured Neuronal Networks.

    Science.gov (United States)

    Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime

    2016-01-01

    It has been shown that, in cultured neuronal networks on a multielectrode array, pseudorandom-like sequences (codes) are detected, and they flow with some spatial decay constant. Each cultured neuronal network is characterized by a specific spectrum curve. That is, we may consider the spectrum curve as a "signature" of its associated neuronal network that is dependent on the characteristics of the neurons and the network configuration, including the weight distribution. In the present study, we used an integrate-and-fire model of neurons with intrinsic and instantaneous fluctuations of characteristics to simulate a code spectrum from multielectrodes on a 2D mesh neural network. We showed that it is possible to estimate characteristics of the neurons such as the distribution of the number of neurons around each electrode and their refractory periods. Although this is an inverse problem and, theoretically, the solutions are not sufficiently guaranteed, the parameters seem to be consistent with those of the neurons. That is, the proposed neural network model may adequately reflect the behavior of a cultured neuronal network. Furthermore, the prospect is discussed that code analysis will provide a basis for communication within a neural network, which will also create a basis for natural intelligence.
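
    A minimal sketch of the kind of integrate-and-fire mesh simulation described above is shown below; the nearest-neighbour coupling, parameter values, and wrap-around (torus) boundary are simplifying assumptions, not the authors' model.

    import numpy as np

    def simulate_lif(n_neurons=100, steps=1000, dt=1.0, tau=20.0, v_th=1.0,
                     v_reset=0.0, refractory=5, noise=0.05, seed=0):
        """Leaky integrate-and-fire neurons on a 2D mesh with nearest-neighbour coupling
        and per-step noise ("instantaneous fluctuations"). Returns a spike raster."""
        rng = np.random.default_rng(seed)
        side = int(np.sqrt(n_neurons))               # assumes n_neurons is a perfect square
        v = rng.uniform(0, v_th, size=(side, side))
        refr = np.zeros((side, side), dtype=int)     # remaining refractory steps
        w = 0.3                                      # coupling weight to the 4 neighbours
        spikes = np.zeros((steps, side, side), dtype=bool)
        for t in range(steps):
            # synaptic input: spikes of the 4 mesh neighbours from the previous step
            prev = spikes[t - 1] if t > 0 else np.zeros((side, side), dtype=bool)
            inp = w * (np.roll(prev, 1, 0) + np.roll(prev, -1, 0) +
                       np.roll(prev, 1, 1) + np.roll(prev, -1, 1))
            active = refr == 0                       # neurons outside their refractory period
            v[active] += dt / tau * (-v[active]) + inp[active] \
                         + noise * rng.standard_normal(active.sum())
            fired = (v >= v_th) & active
            spikes[t] = fired
            v[fired] = v_reset
            refr[fired] = refractory
            refr[refr > 0] -= 1
        return spikes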

  6. Simulation and evaluation of urban rail transit network based on multi-agent approach

    Directory of Open Access Journals (Sweden)

    Xiangming Yao

    2013-03-01

    Full Text Available Purpose: Urban rail transit is a complex and dynamic system that is difficult to describe with a global mathematical model because of its scale and interactions. In order to analyze the spatial and temporal characteristics of passenger flow distribution and to evaluate the effectiveness of transportation strategies, a new and comprehensive method for depicting such a dynamic system is needed. This study therefore aims at using a simulation approach to solve this problem for a subway network. Design/methodology/approach: A simulation model based on a multi-agent approach, which is a well-suited method for designing complex systems, is proposed. The model includes the specifics of passengers' travelling behaviors and takes into account the interactions between travelers and trains. Findings: We developed an urban rail transit simulation tool to verify the validity and accuracy of this model, using real passenger flow data from the Beijing subway network as a case study; the results show that our simulation tool can be used to analyze the characteristics of passenger flow distribution and to evaluate operation strategies. Practical implications: The main implications of this work are to provide decision support for traffic management, the making of train operation plans, and dispatching measures in emergencies. Originality/value: A new and comprehensive method to analyze and evaluate a subway network is presented; the accuracy and computational efficiency of the model have been confirmed and meet the actual needs of a large-scale network.

  7. Distributed Particle Swarm Optimization and Simulated Annealing for Energy-efficient Coverage in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Dao-Wei Bi

    2007-05-01

    Full Text Available The limited energy supply of wireless sensor networks poses a great challenge for the deployment of wireless sensor nodes. In this paper, we focus on energy-efficient coverage with distributed particle swarm optimization and simulated annealing. First, the energy-efficient coverage problem is formulated with sensing coverage and energy consumption models. We consider the network composed of stationary and mobile nodes. Second, coverage and energy metrics are presented to evaluate the coverage rate and energy consumption of a wireless sensor network, where a grid exclusion algorithm extracts the coverage state and Dijkstra’s algorithm calculates the lowest cost path for communication. Then, a hybrid algorithm optimizes the energy consumption, in which particle swarm optimization and simulated annealing are combined to find the optimal deployment solution in a distributed manner. Simulated annealing is performed on multiple wireless sensor nodes, results of which are employed to correct the local and global best solution of particle swarm optimization. Simulations of wireless sensor node deployment verify that coverage performance can be guaranteed, energy consumption of communication is conserved after deployment optimization and the optimization performance is boosted by the distributed algorithm. Moreover, it is demonstrated that energy efficiency of wireless sensor networks is enhanced by the proposed optimization algorithm in target tracking applications.
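
    The sketch below shows, in compact form, the kind of hybrid PSO/SA deployment optimizer described above, using a binary-disc coverage metric on a square area; the coverage model, all parameter values, and the way SA refines the global best are illustrative assumptions rather than the authors' exact formulation (which also includes energy and path-cost terms computed with a grid-exclusion algorithm and Dijkstra's algorithm).

    import numpy as np

    def coverage(positions, area=100.0, r_sense=12.0, grid=40):
        """Fraction of grid points covered by at least one sensor (binary disc sensing model)."""
        xs = np.linspace(0, area, grid)
        gx, gy = np.meshgrid(xs, xs)
        pts = np.stack([gx.ravel(), gy.ravel()], axis=1)
        d = np.linalg.norm(pts[:, None, :] - positions[None, :, :], axis=2)
        return float((d <= r_sense).any(axis=1).mean())

    def pso_sa_deploy(n_nodes=20, n_particles=15, iters=60, area=100.0, seed=1):
        """Hybrid deployment optimizer: standard PSO over node positions, with a short
        simulated-annealing refinement of the global best in every iteration."""
        rng = np.random.default_rng(seed)
        X = rng.uniform(0, area, (n_particles, n_nodes, 2))      # candidate deployments
        V = np.zeros_like(X)
        pbest, pbest_f = X.copy(), np.array([coverage(x) for x in X])
        g = int(pbest_f.argmax())
        gbest, gbest_f = pbest[g].copy(), float(pbest_f[g])
        w, c1, c2, T = 0.7, 1.5, 1.5, 0.05
        for _ in range(iters):
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
            X = np.clip(X + V, 0, area)
            f = np.array([coverage(x) for x in X])
            better = f > pbest_f
            pbest[better], pbest_f[better] = X[better], f[better]
            if f.max() > gbest_f:
                gbest, gbest_f = X[f.argmax()].copy(), float(f.max())
            # simulated annealing on a working copy of the global best (Metropolis rule);
            # gbest itself is only replaced when the annealed copy improves on it
            work, work_f = gbest.copy(), gbest_f
            for _ in range(5):
                cand = np.clip(work + rng.normal(0, 2.0, work.shape), 0, area)
                cf = coverage(cand)
                if cf >= work_f or rng.random() < np.exp((cf - work_f) / T):
                    work, work_f = cand, cf
                if work_f > gbest_f:
                    gbest, gbest_f = work.copy(), work_f
            T *= 0.95
        return gbest, gbest_f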

  8. Visual NNet: An Educational ANN's Simulation Environment Reusing Matlab Neural Networks Toolbox

    Science.gov (United States)

    Garcia-Roselló, Emilio; González-Dacosta, Jacinto; Lado, Maria J.; Méndez, Arturo J.; Garcia Pérez-Schofield, Baltasar; Ferrer, Fátima

    2011-01-01

    Artificial Neural Networks (ANN's) are nowadays a common subject in different curricula of graduate and postgraduate studies. Due to the complex algorithms involved and the dynamic nature of ANN's, simulation software has been commonly used to teach this subject. This software has usually been developed specifically for learning purposes, because…

  9. Simulation-Based Dynamic Passenger Flow Assignment Modelling for a Schedule-Based Transit Network

    Directory of Open Access Journals (Sweden)

    Xiangming Yao

    2017-01-01

    Full Text Available The online operation management and the offline policy evaluation in complex transit networks require an effective dynamic traffic assignment (DTA) method that can capture the temporal-spatial nature of traffic flows. The objective of this work is to propose a simulation-based dynamic passenger assignment framework and models for such applications in the context of schedule-based rail transit systems. In the simulation framework, travellers are regarded as individual agents who are able to obtain complete information on the current traffic conditions. A combined route selection model integrating pre-trip route selection and en-trip route switching is established for achieving the dynamic network flow equilibrium status. The train agent is operated strictly according to the timetable and its capacity limitation is considered. A continuous time-driven simulator based on the proposed framework and models is developed, and its performance is illustrated on the large-scale network of the Beijing subway. The results indicate that more than 0.8 million individual passengers and thousands of trains can be simulated simultaneously at a speed ten times faster than real time. This study provides an efficient approach to analyze the dynamic demand-supply relationship for large schedule-based transit networks.

  10. Simulation of emotions of agents in virtual environments using neural networks

    NARCIS (Netherlands)

    van Kesteren, A.-J.; van Kesteren, A.J.; op den Akker, Hendrikus J.A.; Poel, Mannes; Jokinen, K.; Heylen, Dirk K.J.; Nijholt, Antinus

    2000-01-01

    A distributed architecture for a system simulating the emotional state of an agent acting in a virtual environment is presented. The system is an implementation of an event appraisal model of emotional behaviour and uses neural networks to learn how the emotional state should be influenced by the

  11. Neural networks simulation of a discrete model of continious effects of irrelevant stimuli

    NARCIS (Netherlands)

    Molenaar, P.C.M.

    1990-01-01

    Presents a general simulation method based on minimal neural network representations of nonmathematical, structural models of information processes. The time-dependent behavior of each component in a given structural model is represented by a simple, noncommittal equation that does not affect the

  12. Use of a neural network to extract a missile flight model for simulation purposes

    Science.gov (United States)

    Pascale, Danny; Volckaert, Guy

    1996-03-01

    A neural network is used to extract the flight model of guided, short to medium range, tripod and shoulder-fired missile systems which is then integrated into a training simulator. The simulator uses injected video to replace the optical sight and is fitted with a multi-axis positioning system which senses the gunner's movement. The movement creates an image shift and affects the input data to the missile control algorithm. Accurate flight dynamics are a key to efficient training, particularly in the case of closed loop guided systems. However, flight model data is not always available, either because it is proprietary, or because it is too complex to embed in a real time simulator. A solution is to reverse engineer the flight model by analyzing the missile's response when submitted to typical input conditions. Training data can be extracted from either recorded video or from a combination of weapon and missile positioning data. The video camera can be located either on the weapon or attached to a through-sight adapter. No knowledge of the missile flight transfer function is used in the process. The data is fed to a three-layer back-propagation type neural network. The network is configured within a standard spreadsheet application and is optimized with the built-in solver functions. The structure of the network, the selected inputs and outputs, as well as training data, output data after training, and output data when embedded in the simulator are presented.
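
    The abstract describes a three-layer back-propagation network configured in a spreadsheet and optimized with its built-in solver. Purely as a code-level illustration of that kind of network, the following numpy sketch trains a small three-layer regression MLP by plain gradient descent; the layer size, learning rate, and synthetic data are assumptions, not the authors' configuration.

    import numpy as np

    def train_mlp(X, Y, hidden=16, lr=0.05, epochs=2000, seed=0):
        """Minimal three-layer (input-hidden-output) back-propagation network trained on
        recorded input/response pairs; returns a prediction function."""
        rng = np.random.default_rng(seed)
        n_in, n_out = X.shape[1], Y.shape[1]
        W1 = rng.normal(0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
        W2 = rng.normal(0, 0.5, (hidden, n_out)); b2 = np.zeros(n_out)
        for _ in range(epochs):
            H = np.tanh(X @ W1 + b1)            # hidden activations
            P = H @ W2 + b2                     # linear output (regression)
            err = P - Y                         # prediction error
            # backpropagate gradients of the mean squared error
            gW2 = H.T @ err / len(X); gb2 = err.mean(0)
            dH = (err @ W2.T) * (1 - H ** 2)
            gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
            W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
        return lambda x: np.tanh(x @ W1 + b1) @ W2 + b2

    # usage sketch (synthetic data standing in for recorded weapon/missile samples)
    X = np.random.default_rng(1).uniform(-1, 1, (200, 3))
    Y = np.sin(X[:, :1]) + 0.1 * X[:, 1:2]
    predict = train_mlp(X, Y)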

  13. A constant-time kinetic Monte Carlo algorithm for simulation of large biochemical reaction networks

    Science.gov (United States)

    Slepoy, Alexander; Thompson, Aidan P.; Plimpton, Steven J.

    2008-05-01

    The time evolution of species concentrations in biochemical reaction networks is often modeled using the stochastic simulation algorithm (SSA) [Gillespie, J. Phys. Chem. 81, 2340 (1977)]. The computational cost of the original SSA scaled linearly with the number of reactions in the network. Gibson and Bruck developed a logarithmic scaling version of the SSA which uses a priority queue or binary tree for more efficient reaction selection [Gibson and Bruck, J. Phys. Chem. A 104, 1876 (2000)]. More generally, this problem is one of dynamic discrete random variate generation which finds many uses in kinetic Monte Carlo and discrete event simulation. We present here a constant-time algorithm, whose cost is independent of the number of reactions, enabled by a slightly more complex underlying data structure. While applicable to kinetic Monte Carlo simulations in general, we describe the algorithm in the context of biochemical simulations and demonstrate its competitive performance on small- and medium-size networks, as well as its superior constant-time performance on very large networks, which are becoming necessary to represent the increasing complexity of biochemical data for pathways that mediate cell function.
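
    For context, the baseline Gillespie direct method that the constant-time algorithm improves upon can be sketched in a few lines of Python; the composition/rejection data structure itself is not reproduced here, and the example reaction system and rate constants are hypothetical.

    import numpy as np

    def gillespie_direct(x0, stoich, rate_fn, t_end, seed=0):
        """Basic Gillespie direct-method SSA: O(M) work per step in the number of reactions M.
        x0      : initial copy numbers (length N)
        stoich  : (M, N) array of state changes per reaction
        rate_fn : function x -> length-M array of propensities
        """
        rng = np.random.default_rng(seed)
        x, t = np.array(x0, dtype=float), 0.0
        times, states = [t], [x.copy()]
        while t < t_end:
            a = rate_fn(x)
            a0 = a.sum()
            if a0 <= 0:
                break                                             # no reaction can fire
            t += rng.exponential(1.0 / a0)                        # time to next reaction
            j = np.searchsorted(np.cumsum(a), rng.random() * a0)  # pick reaction with prob a_j / a0
            x = x + stoich[j]
            times.append(t); states.append(x.copy())
        return np.array(times), np.array(states)

    # usage sketch: A -> B with rate 0.5*A, B -> A with rate 0.3*B (hypothetical system)
    stoich = np.array([[-1, 1], [1, -1]])
    rates = lambda x: np.array([0.5 * x[0], 0.3 * x[1]])
    t, s = gillespie_direct([100, 0], stoich, rates, t_end=10.0)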

  14. A hybrid Genetic and Simulated Annealing Algorithm for Chordal Ring implementation in large-scale networks

    DEFF Research Database (Denmark)

    Riaz, M. Tahir; Gutierrez Lopez, Jose Manuel; Pedersen, Jens Myrup

    2011-01-01

    of the networks. Evolutionary algorithms have often been used to solve problems that are combinatorial in nature and extremely hard to solve by exact approaches. Both genetic and simulated annealing algorithms are similar in using a controlled stochastic method to search the solution...

  15. Largenet2: an object-oriented programming library for simulating large adaptive networks.

    Science.gov (United States)

    Zschaler, Gerd; Gross, Thilo

    2013-01-15

    The largenet2 C++ library provides an infrastructure for the simulation of large dynamic and adaptive networks with discrete node and link states. The library is released as free software. It is available at http://biond.github.com/largenet2. Largenet2 is licensed under the Creative Commons Attribution-NonCommercial 3.0 Unported License. gerd@biond.org

  16. Integrating atomistic molecular dynamics simulations, experiments, and network analysis to study protein dynamics

    DEFF Research Database (Denmark)

    Papaleo, Elena

    2015-01-01

    that we observe and the functional properties of these important cellular machines. To make progress in this direction, we need to improve the physical models used to describe proteins and solvent in molecular dynamics, as well as to strengthen the integration of experiments and simulations to overcome...... simulations with attention to the effects that can be propagated over long distances and are often associated with important biological functions. In this context, approaches inspired by network analysis can make an important contribution to the analysis of molecular dynamics simulations....

  17. An Expert System And Simulation Approach For Sensor Management & Control In A Distributed Surveillance Network

    Science.gov (United States)

    Leon, Barbara D.; Heller, Paul R.

    1987-05-01

    A surveillance network is a group of multiplatform sensors cooperating to improve network performance. Network control is distributed as a measure to decrease vulnerability to enemy threat. The network may contain diverse sensor types such as radar, ESM (Electronic Support Measures), IRST (Infrared Search and Track) and E-O (Electro-Optical). Each platform may contain a single sensor or a suite of sensors. In a surveillance network it is desirable to control sensors to make the overall system more effective. This problem has come to be known as sensor management and control (SM&C). Two major facets of network performance are surveillance and survivability. In a netted environment, surveillance can be enhanced if information from all sensors is combined and sensor operating conditions are controlled to provide a synergistic effect. In contrast, when survivability is the main concern for the network, the best operating status for all sensors would be passive or off. Of course, improving survivability tends to degrade surveillance. Hence, the objective of SM&C is to optimize the surveillance and survivability of the network. The large volume of data in various formats and the required quick response time are two characteristics of this problem that make it an ideal application for Artificial Intelligence. A solution to the SM&C problem, in the form of a computer simulation, is presented in this paper. The simulation is a hybrid production system written in LISP and FORTRAN. It combines the latest conventional computer programming methods with Artificial Intelligence techniques to produce a flexible state-of-the-art tool to evaluate network performance. The event-driven simulation contains environment models coupled with an expert system. These environment models include sensor (track-while-scan and agile beam) and target models, local tracking, and system tracking. These models are used to generate the environment for the sensor management and control expert system. The expert system

  18. Cellular neural network modelling of soft tissue dynamics for surgical simulation.

    Science.gov (United States)

    Zhang, Jinao; Zhong, Yongmin; Smith, Julian; Gu, Chengfan

    2017-07-20

    Currently, the mechanical dynamics of soft tissue deformation is achieved by numerical time integrations such as the explicit or implicit integration; however, the explicit integration is stable only under a small time step, whereas the implicit integration is computationally expensive in spite of the accommodation of a large time step. This paper presents a cellular neural network method for stable simulation of soft tissue deformation dynamics. The non-rigid motion equation is formulated as a cellular neural network with local connectivity of cells, and thus the dynamics of soft tissue deformation is transformed into the neural dynamics of the cellular neural network. Results show that the proposed method can achieve good accuracy at a small time step. It still remains stable at a large time step, while maintaining the computational efficiency of the explicit integration. The proposed method can achieve stable soft tissue deformation with efficiency of explicit integration for surgical simulation.

  19. Simulating Local Area Network Protocols with the General Purpose Simulation System (GPSS)

    Science.gov (United States)

    1990-03-01

    ...because of insufficient buffer capacity), and the mean transmitter and receiver utilisations. Figure 11 shows that the buffer overflow probability is... The following assumptions are made: (1) infinite buffer capacity at each station; (2) a 1 Mbit/s data rate; (3) a network of 50 stations equally spaced over a 2000 m length ring; (4) ...

  20. Fractional Diffusion Emulates a Human Mobility Network during a Simulated Disease Outbreak

    Directory of Open Access Journals (Sweden)

    Kyle B. Gustafson

    2017-04-01

    Full Text Available Mobility networks facilitate the growth of populations, the success of invasive species, and the spread of communicable diseases among social animals, including humans. Disease control and elimination efforts, especially during an outbreak, can be optimized by numerical modeling of disease dynamics on transport networks. This is especially true when incidence data from an emerging epidemic is sparse and unreliable. However, mobility networks can be complex, challenging to characterize, and expensive to simulate with agent-based models. We therefore studied a parsimonious model for spatiotemporal disease dynamics based on a fractional diffusion equation. We implemented new stochastic simulations of a prototypical influenza-like infection spreading through the United States' highly-connected air travel network. We found that the national-averaged infected fraction during an outbreak is accurately reproduced by a space-fractional diffusion equation consistent with the connectivity of airports. Fractional diffusion therefore seems to be a better model of network outbreak dynamics than a diffusive model. Our fractional reaction-diffusion method and the result could be extended to other mobility networks in a variety of applications for population dynamics.
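
    The paper couples a space-fractional diffusion operator to epidemic dynamics on the US air-travel network; the sketch below only illustrates the fractional-diffusion ingredient in one dimension with a periodic spectral method, and every parameter value is an assumption chosen for illustration.

    import numpy as np

    def fractional_diffusion_1d(u0, alpha=1.5, D=1.0, dt=0.01, steps=200, L=100.0):
        """Spectral solution of the 1D space-fractional diffusion equation
            du/dt = -D (-Laplacian)^(alpha/2) u
        on a periodic domain of length L, advanced exactly mode by mode in Fourier space."""
        n = len(u0)
        k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
        symbol = -D * np.abs(k) ** alpha             # Fourier symbol of the fractional Laplacian
        u_hat = np.fft.fft(u0)
        snapshots = [u0.copy()]
        for _ in range(steps):
            u_hat = u_hat * np.exp(symbol * dt)      # exact exponential integrator per mode
            snapshots.append(np.real(np.fft.ifft(u_hat)))
        return np.array(snapshots)

    # usage sketch: a point-like initial outbreak spreads with heavy tails for alpha < 2
    x = np.linspace(0, 100.0, 512, endpoint=False)
    u0 = np.exp(-((x - 50.0) ** 2) / 2.0)
    frames = fractional_diffusion_1d(u0, alpha=1.5)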

  1. Building Model for the University of Mosul Computer Network Using OPNET Simulator

    Directory of Open Access Journals (Sweden)

    Modhar A. Hammoudi

    2013-04-01

    Full Text Available This paper aims at establishing a model in the OPNET (Optimized Network Engineering Tool) simulator for the University of Mosul computer network. The proposed network model was made up of two routers (Cisco 2600), a core switch (Cisco 6509), two servers, an ip32 cloud and 37 VLANs. These VLANs were connected to the core switch using fiber optic cables (1000BaseX). Three applications were added to test the network model. These applications were FTP (File Transfer Protocol), HTTP (Hyper Text Transfer Protocol) and VoIP (Voice over Internet Protocol). The results showed that the proposed model was effective for designing and managing the targeted network and can be used to view the data flow in it. Also, the simulation results showed that the maximum number of VoIP service users could be raised up to 5000 users when working under IP telephony. This means that the ability to utilize the VoIP service in this network can be maintained, and is better, when an IP telephony scheme is used.

  2. Hyper-Spectral Networking Concept of Operations and Future Air Traffic Management Simulations

    Science.gov (United States)

    Davis, Paul; Boisvert, Benjamin

    2017-01-01

    The NASA-sponsored Hyper-Spectral Communications and Networking for Air Traffic Management (ATM) (HSCNA) project is conducting research to improve the operational efficiency of the future National Airspace System (NAS) through diverse and secure multi-band, multi-mode, and millimeter-wave (mmWave) wireless links. Worldwide growth of air transportation and the coming of unmanned aircraft systems (UAS) will increase air traffic density and complexity. Safe coordination of aircraft will require more capable technologies for communications, navigation, and surveillance (CNS). The HSCNA project will provide a foundation for technology and operational concepts to accommodate a significantly greater number of networked aircraft. This paper describes two of the HSCNA project's technical challenges. The first technical challenge is to develop a multi-band networking concept of operations (ConOps) for use in multiple phases of flight and all communication link types. This ConOps will integrate the advanced technologies explored by the HSCNA project and future operational concepts into a harmonized vision of future NAS communications and networking. The second technical challenge discussed is to conduct simulations of future ATM operations using multi-band/multi-mode networking and technologies. Large-scale simulations will assess the impact, compared to today's system, of the new and integrated networks and technologies under future air traffic demand.

  3. Design, Development, and Testing of a UAV Hardware-in-the-Loop Testbed for Aviation and Airspace Prognostics Research

    Science.gov (United States)

    Kulkarni, Chetan; Teubert, Chris; Gorospe, George; Burgett, Drew; Quach, Cuong C.; Hogge, Edward

    2016-01-01

    The airspace is becoming more and more complicated, and will continue to do so in the future with the integration of Unmanned Aerial Vehicles (UAVs), autonomy, spacecraft, and other forms of aviation technology into the airspace. The new technology and complexity increase the importance and difficulty of safety assurance. Additionally, testing new technologies on complex aviation systems & systems of systems can be very difficult, expensive, and sometimes unsafe in real-life scenarios. Prognostic methodology provides an estimate of the health and risks of a component, vehicle, or airspace and knowledge of how that will change over time. That measure is especially useful in safety determination, mission planning, and maintenance scheduling. The developed testbed will be used to validate prediction algorithms for the real-time safety monitoring of the National Airspace System (NAS) and the prediction of unsafe events. The framework injects flight-related anomalies related to ground systems, routing, airport congestion, etc., to test and verify algorithms for NAS safety. In our research work, we develop a live, distributed, hardware-in-the-loop testbed for aviation and airspace prognostics, along with exploring further research possibilities to verify and validate future algorithms for NAS safety. The testbed integrates virtual aircraft using the X-Plane simulator and X-PlaneConnect toolbox, UAVs using onboard sensors and cellular communications, and hardware-in-the-loop components. In addition, the testbed includes an additional research framework to support and simplify future research activities. It enables safe, accurate, and inexpensive experimentation and research into airspace and vehicle prognosis that would not have been possible otherwise. This paper describes the design, development, and testing of this system. Software reliability, safety and latency are some of the critical design considerations in the development of the testbed. Integration of HITL elements in

  4. Design and Development of a 200-kW Turbo-Electric Distributed Propulsion Testbed

    Science.gov (United States)

    Papathakis, Kurt V.; Kloesel, Kurt J.; Lin, Yohan; Clarke, Sean; Ediger, Jacob J.; Ginn, Starr

    2016-01-01

    The National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center (AFRC) (Edwards, California) is developing a Hybrid-Electric Integrated Systems Testbed (HEIST) Testbed as part of the HEIST Project, to study power management and transition complexities, modular architectures, and flight control laws for turbo-electric distributed propulsion technologies using representative hardware and piloted simulations. Capabilities are being developed to assess the flight readiness of hybrid electric and distributed electric vehicle architectures. Additionally, NASA will leverage experience gained and assets developed from HEIST to assist in flight-test proposal development, flight-test vehicle design, and evaluation of hybrid electric and distributed electric concept vehicles for flight safety. The HEIST test equipment will include three trailers supporting a distributed electric propulsion wing, a battery system and turbogenerator, dynamometers, and supporting power and communication infrastructure, all connected to the AFRC Core simulation. Plans call for 18 high performance electric motors that will be powered by batteries and the turbogenerator, and commanded by a piloted simulation. Flight control algorithms will be developed on the turbo-electric distributed propulsion system.

  5. Architecture for an integrated real-time air combat and sensor network simulation

    Science.gov (United States)

    Criswell, Evans A.; Rushing, John; Lin, Hong; Graves, Sara

    2007-04-01

    An architecture for an integrated air combat and sensor network simulation is presented. The architecture integrates two components: a parallel real-time sensor fusion and target tracking simulation, and an air combat simulation. By integrating these two simulations, it becomes possible to experiment with scenarios in which one or both sides in a battle have very large numbers of primitive passive sensors, and to assess the likely effects of those sensors on the outcome of the battle. Modern Air Power is a real-time theater-level air combat simulation that is currently being used as a part of the USAF Air and Space Basic Course (ASBC). The simulation includes a variety of scenarios from the Vietnam war to the present day, and also includes several hypothetical future scenarios. Modern Air Power includes a scenario editor, an order of battle editor, and full AI customization features that make it possible to quickly construct scenarios for any conflict of interest. The scenario editor makes it possible to place a wide variety of sensors including both high fidelity sensors such as radars, and primitive passive sensors that provide only very limited information. The parallel real-time sensor network simulation is capable of handling very large numbers of sensors on a computing cluster of modest size. It can fuse information provided by disparate sensors to detect and track targets, and produce target tracks.

  6. Simulated Annealing Technique for Routing in a Rectangular Mesh Network

    Directory of Open Access Journals (Sweden)

    Noraziah Adzhar

    2014-01-01

    Full Text Available In the process of automatic design for printed circuit boards (PCBs), the phase following cell placement is routing. On the other hand, the routing process is a notoriously difficult problem, and even the simplest routing problem, which consists of a set of two-pin nets, is known to be NP-complete. In this research, our routing region is first tessellated into a uniform Nx×Ny array of square cells. The ultimate goal for a routing problem is to achieve complete automatic routing with minimal need for any manual intervention. Therefore, the shortest path for all connections needs to be established. While the classical Dijkstra's algorithm guarantees finding the shortest path for a single net, each routed net will form obstacles for later paths. This adds complexity to routing later nets and makes their routing longer than the optimal path or sometimes impossible to complete. Today's sequential routing often applies a heuristic method to further refine the solution. Through this process, all nets will be rerouted in a different order to improve the quality of the routing. Because of this, we are motivated to apply simulated annealing, one of the metaheuristic methods, to our routing model to produce better candidate sequences.

  7. Teleradiology system analysis using a discrete event-driven block-oriented network simulator

    Science.gov (United States)

    Stewart, Brent K.; Dwyer, Samuel J., III

    1992-07-01

    Performance evaluation and trade-off analysis are the central issues in the design of communication networks. Simulation plays an important role in the computer-aided design and analysis of communication networks and related systems, allowing testing of numerous architectural configurations and fault scenarios. We are using the Block Oriented Network Simulator (BONeS, Comdisco, Foster City, CA) software package to perform discrete, event-driven Monte Carlo simulations in capacity planning, trade-off analysis and evaluation of alternate architectures for a high-speed, high-resolution teleradiology project. A queuing network model of the teleradiology system has been devised, simulations executed and results analyzed. The wide area network link uses a switched, dial-up N × 56 kbps inverse multiplexer where the number of digital voice-grade lines (N) can vary from one (DS-0) through 24 (DS-1). The proposed goal of such a system is 200 films (2048 × 2048 × 12-bit) transferred between a remote and local site in an eight hour period with a mean delay time of less than five minutes. It is found that: (1) the DS-1 service limit is around 100 films per eight hour period with a mean delay time of 412 +/- 39 seconds, short of the goal stipulated above; (2) compressed video teleconferencing can be run simultaneously with image data transfer over the DS-1 wide area network link without impacting the performance of the described teleradiology system; (3) there is little sense in upgrading to a higher bandwidth WAN link like DS-2 or DS-3 for the current system; and (4) the goal of transmitting 200 films in an eight hour period with a mean delay time of less than five minutes can be achieved simply if the laser printer interface is updated from the current DR-11W interface to a much faster SCSI interface.
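
    A quick back-of-the-envelope check of the link arithmetic behind these numbers (pure transmission time only; the simulated delays above additionally include queueing, protocol overhead, and printing, which this snippet deliberately ignores):

    # Serialization time for one uncompressed 2048 x 2048 x 12-bit film over an
    # N x 56 kbps inverse-multiplexed link.
    bits_per_film = 2048 * 2048 * 12              # about 50.3 Mbit per uncompressed image

    def serialization_seconds(n_channels):
        return bits_per_film / (n_channels * 56_000)

    for n in (1, 4, 12, 24):                      # from a single DS-0 up to the full DS-1 bundle
        print(f"N={n:2d}: {serialization_seconds(n):7.1f} s per film")
    # N=1 gives roughly 15 minutes per film, far above the 5-minute delay goal;
    # N=24 gives roughly 37 s, leaving the rest of the budget for queueing and overhead.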

  8. Simulating microinjection experiments in a novel model of the rat sleep-wake regulatory network.

    Science.gov (United States)

    Diniz Behn, Cecilia G; Booth, Victoria

    2010-04-01

    This study presents a novel mathematical modeling framework that is uniquely suited to investigating the structure and dynamics of the sleep-wake regulatory network in the brain stem and hypothalamus. It is based on a population firing rate model formalism that is modified to explicitly include concentration levels of neurotransmitters released to postsynaptic populations. Using this framework, interactions among primary brain stem and hypothalamic neuronal nuclei involved in rat sleep-wake regulation are modeled. The model network captures realistic rat polyphasic sleep-wake behavior consisting of wake, rapid eye movement (REM) sleep, and non-REM (NREM) sleep states. Network dynamics include a cyclic pattern of NREM sleep, REM sleep, and wake states that is disrupted by simulated variability of neurotransmitter release and external noise to the network. Explicit modeling of neurotransmitter concentrations allows for simulations of microinjections of neurotransmitter agonists and antagonists into a key wake-promoting population, the locus coeruleus (LC). Effects of these simulated microinjections on sleep-wake states are tracked and compared with experimental observations. Agonist/antagonist pairs, which are presumed to have opposing effects on LC activity, do not generally induce opposing effects on sleep-wake patterning because of multiple mechanisms for LC activation in the network. Also, different agents, which are presumed to have parallel effects on LC activity, do not induce parallel effects on sleep-wake patterning because of differences in the state dependence or independence of agonist and antagonist action. These simulation results highlight the utility of formal mathematical modeling for constraining conceptual models of the sleep-wake regulatory network.

  9. An Integrated Testbed for Cooperative Perception with Heterogeneous Mobile and Static Sensors

    Directory of Open Access Journals (Sweden)

    Aníbal Ollero

    2011-12-01

    Full Text Available Cooperation among devices with different sensing, computing and communication capabilities provides interesting possibilities in a growing number of problems and applications including domotics (domestic robotics, environmental monitoring or intelligent cities, among others. Despite the increasing interest in academic and industrial communities, experimental tools for evaluation and comparison of cooperative algorithms for such heterogeneous technologies are still very scarce. This paper presents a remote testbed with mobile robots and Wireless Sensor Networks (WSN equipped with a set of low-cost off-the-shelf sensors, commonly used in cooperative perception research and applications, that present high degree of heterogeneity in their technology, sensed magnitudes, features, output bandwidth, interfaces and power consumption, among others. Its open and modular architecture allows tight integration and interoperability between mobile robots and WSN through a bidirectional protocol that enables full interaction. Moreover, the integration of standard tools and interfaces increases usability, allowing an easy extension to new hardware and software components and the reuse of code. Different levels of decentralization are considered, supporting from totally distributed to centralized approaches. Developed for the EU-funded Cooperating Objects Network of Excellence (CONET and currently available at the School of Engineering of Seville (Spain, the testbed provides full remote control through the Internet. Numerous experiments have been performed, some of which are described in the paper.

  10. An integrated testbed for cooperative perception with heterogeneous mobile and static sensors.

    Science.gov (United States)

    Jiménez-González, Adrián; Martínez-De Dios, José Ramiro; Ollero, Aníbal

    2011-01-01

    Cooperation among devices with different sensing, computing and communication capabilities provides interesting possibilities in a growing number of problems and applications including domotics (domestic robotics), environmental monitoring or intelligent cities, among others. Despite the increasing interest in academic and industrial communities, experimental tools for evaluation and comparison of cooperative algorithms for such heterogeneous technologies are still very scarce. This paper presents a remote testbed with mobile robots and Wireless Sensor Networks (WSN) equipped with a set of low-cost off-the-shelf sensors, commonly used in cooperative perception research and applications, that present high degree of heterogeneity in their technology, sensed magnitudes, features, output bandwidth, interfaces and power consumption, among others. Its open and modular architecture allows tight integration and interoperability between mobile robots and WSN through a bidirectional protocol that enables full interaction. Moreover, the integration of standard tools and interfaces increases usability, allowing an easy extension to new hardware and software components and the reuse of code. Different levels of decentralization are considered, supporting from totally distributed to centralized approaches. Developed for the EU-funded Cooperating Objects Network of Excellence (CONET) and currently available at the School of Engineering of Seville (Spain), the testbed provides full remote control through the Internet. Numerous experiments have been performed, some of which are described in the paper.

  11. An Integrated Testbed for Cooperative Perception with Heterogeneous Mobile and Static Sensors

    Science.gov (United States)

    Jiménez-González, Adrián; Martínez-De Dios, José Ramiro; Ollero, Aníbal

    2011-01-01

    Cooperation among devices with different sensing, computing and communication capabilities provides interesting possibilities in a growing number of problems and applications including domotics (domestic robotics), environmental monitoring or intelligent cities, among others. Despite the increasing interest in academic and industrial communities, experimental tools for evaluation and comparison of cooperative algorithms for such heterogeneous technologies are still very scarce. This paper presents a remote testbed with mobile robots and Wireless Sensor Networks (WSN) equipped with a set of low-cost off-the-shelf sensors, commonly used in cooperative perception research and applications, that present high degree of heterogeneity in their technology, sensed magnitudes, features, output bandwidth, interfaces and power consumption, among others. Its open and modular architecture allows tight integration and interoperability between mobile robots and WSN through a bidirectional protocol that enables full interaction. Moreover, the integration of standard tools and interfaces increases usability, allowing an easy extension to new hardware and software components and the reuse of code. Different levels of decentralization are considered, supporting from totally distributed to centralized approaches. Developed for the EU-funded Cooperating Objects Network of Excellence (CONET) and currently available at the School of Engineering of Seville (Spain), the testbed provides full remote control through the Internet. Numerous experiments have been performed, some of which are described in the paper. PMID:22247679

  12. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method.

    Science.gov (United States)

    Bernal, Javier; Torres-Jimenez, Jose

    2015-01-01

    SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller's scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method, better suited for the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and multiplication of vectors by Hessian matrices that are required by Møller's algorithm; the (re)initialization of weights with simulated annealing required to (re)start Møller's algorithm the first time and each time thereafter that it shows insufficient progress in reaching a possibly local minimum; and the use of simulated annealing when Møller's algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented together with results from running SAGRAD on two examples of training data.

  13. Efficient Constant-Time Complexity Algorithm for Stochastic Simulation of Large Reaction Networks.

    Science.gov (United States)

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2017-01-01

    Exact stochastic simulation is an indispensable tool for the quantitative study of biochemical reaction networks. The simulation realizes the time evolution of the model by randomly choosing a reaction to fire, with a probability proportional to the reaction propensity, and updating the system state accordingly. Two computationally expensive tasks in simulating large biochemical networks are the selection of the next reaction firing and the update of reaction propensities due to state changes. We present in this work a new exact algorithm to optimize both of these simulation bottlenecks. Our algorithm employs composition-rejection on the propensity bounds of reactions to select the next reaction firing. The selection of the next reaction firing is independent of the number of reactions, while the update of propensities is skipped and performed only when necessary. It therefore provides favorable scaling of the computational complexity in simulating large reaction networks. We benchmark our new algorithm against the state-of-the-art algorithms available in the literature to demonstrate its applicability and efficiency.

  14. Adaptive complementary fuzzy self-recurrent wavelet neural network controller for the electric load simulator system

    Directory of Open Access Journals (Sweden)

    Wang Chao

    2016-03-01

    Full Text Available Due to the complexities existing in the electric load simulator, this article develops a high-performance nonlinear adaptive controller to improve the torque tracking performance of the electric load simulator, which mainly consists of an adaptive fuzzy self-recurrent wavelet neural network controller with variable structure (VSFSWC) and a complementary controller. The VSFSWC is clearly and easily used for real-time systems and greatly improves the convergence rate and control precision. The complementary controller is designed to eliminate the effect of the approximation error between the proposed neural network controller and the ideal feedback controller without chattering phenomena. Moreover, adaptive learning laws are derived to guarantee system stability in the sense of Lyapunov theory. Finally, hardware-in-the-loop simulations are carried out to verify the feasibility and effectiveness of the proposed algorithms in different working styles.

  15. Operator Splitting Method for Simulation of Dynamic Flows in Natural Gas Pipeline Networks

    CERN Document Server

    Dyachenko, Sergey A; Chertkov, Michael

    2016-01-01

    We develop an operator splitting method to simulate flows of isothermal compressible natural gas over transmission pipelines. The method solves a system of nonlinear hyperbolic partial differential equations (PDEs) of hydrodynamic type for mass flow and pressure on a metric graph, where turbulent losses of momentum are modeled by phenomenological Darcy-Weisbach friction. Mass flow balance is maintained through the boundary conditions at the network nodes, where natural gas is injected or withdrawn from the system. Gas flow through the network is controlled by compressors boosting pressure at the inlet of the adjoint pipe. Our operator splitting numerical scheme is unconditionally stable and it is second order accurate in space and time. The scheme is explicit, and it is formulated to work with general networks with loops. We test the scheme over a range of regimes and network configurations, also comparing its performance with that of two other state-of-the-art implicit schemes.
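
    As a minimal, generic illustration of the operator-splitting idea (not the authors' second-order scheme for the full gas-dynamics system on a metric graph), the sketch below applies Strang splitting to a scalar toy equation combining advection with quadratic, Darcy-Weisbach-like friction; all symbols and values are assumptions.

    import numpy as np

    def strang_split_step(u, dt, dx, a=1.0, k=0.5):
        """One Strang-splitting step for the toy model
            du/dt + a du/dx = -k u |u|
        (advection plus quadratic friction): half friction step, full advection step,
        half friction step."""
        def friction(u, tau):
            # exact solution of du/dt = -k u |u| over time tau
            return u / (1.0 + k * np.abs(u) * tau)
        def advect(u, tau):
            # first-order upwind transport on a periodic grid (a > 0 assumed)
            return u - a * tau / dx * (u - np.roll(u, 1))
        u = friction(u, dt / 2)
        u = advect(u, dt)
        u = friction(u, dt / 2)
        return u

    # usage sketch: a smooth pulse advected and damped on a periodic domain
    x = np.linspace(0, 10, 200, endpoint=False)
    dx = x[1] - x[0]
    u = np.exp(-((x - 3) ** 2))
    for _ in range(100):
        u = strang_split_step(u, dt=0.4 * dx, dx=dx)   # CFL-limited time step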

  16. Simulation of Missile Autopilot with Two-Rate Hybrid Neural Network System

    Directory of Open Access Journals (Sweden)

    ASTROV, I.

    2007-04-01

    Full Text Available This paper proposes a two-rate hybrid neural network system, which consists of two artificial neural network subsystems. These neural network subsystems are used as the controllers of the dynamic subsystems. This is because such neuromorphic controllers are especially suitable for controlling complex systems. An illustrative example - two-rate neural network hybrid control of a decomposed stochastic model of a rigid guided missile over different operating conditions - was carried out using the proposed two-rate state-space decomposition technique. This example demonstrates that this research technique results in simplified low-order autonomous control subsystems with various speeds of actuation, and shows the quality of the proposed technique. The obtained results show that the control tasks for the autonomous subsystems can be solved with higher quality than for the original system. The simulation and animation results, obtained with the Simulink software package, demonstrate that this research technique would work for real-time stochastic systems.

  17. Operator splitting method for simulation of dynamic flows in natural gas pipeline networks

    Science.gov (United States)

    Dyachenko, Sergey A.; Zlotnik, Anatoly; Korotkevich, Alexander O.; Chertkov, Michael

    2017-12-01

    We develop an operator splitting method to simulate flows of isothermal compressible natural gas over transmission pipelines. The method solves a system of nonlinear hyperbolic partial differential equations (PDEs) of hydrodynamic type for mass flow and pressure on a metric graph, where turbulent losses of momentum are modeled by phenomenological Darcy-Weisbach friction. Mass flow balance is maintained through the boundary conditions at the network nodes, where natural gas is injected or withdrawn from the system. Gas flow through the network is controlled by compressors boosting pressure at the inlet of the adjoint pipe. Our operator splitting numerical scheme is unconditionally stable and it is second order accurate in space and time. The scheme is explicit, and it is formulated to work with general networks with loops. We test the scheme over a range of regimes and network configurations, also comparing its performance with that of two other state-of-the-art implicit schemes.

  18. Simulation of unsteady flow and solute transport in a tidal river network

    Science.gov (United States)

    Zhan, X.

    2003-01-01

    A mathematical model and numerical method for water flow and solute transport in a tidal river network is presented. The tidal river network is defined as a system of open river channels with junctions and cross sections. As an example, the Pearl River in China is represented by a network of 104 channels, 62 nodes, and a total of 330 cross sections, with 11 boundary sections for one of the applications. The simulations are performed on a supercomputer for seven scenarios of water flow and/or solute transport in the Pearl River, China, with different hydrological and weather conditions. Comparisons with available data are shown. The intention of this study is to summarize previous work and to provide a useful tool for water environmental management in a tidal river network, particularly for the Pearl River, China.

  19. Simulation of Operations in the Underwater Warfare Testbed (UWT)

    NARCIS (Netherlands)

    Keus, D.; Benders, F.P.A.; Fitski, H.J.; Grootendorst, H.J.

    2009-01-01

    Surface vessels and submarines must be able to defend themselves against a torpedo attack. Several studies have shown the benefits of multi-platform and multi-static operations. To facilitate torpedo defence system studies and the development of future tactics, TNO Defence, Security and Safety

  20. How the ownership structures cause epidemics in financial markets: A network-based simulation model

    Science.gov (United States)

    Dastkhan, Hossein; Gharneh, Naser Shams

    2018-02-01

    Analysis of systemic risk and contagion has been one of the main challenges for policy makers and researchers in recent years. Network theory has been introduced as a main approach to the modeling and simulation of financial and economic systems. In this paper, a simulation model based on the ownership network is introduced to analyze contagion and systemic risk events. For this purpose, different network structures with different parameter values are considered to investigate the stability of the financial system in the presence of different kinds of idiosyncratic and aggregate shocks. The considered network structures include Erdos-Renyi, core-periphery, segregated and power-law networks. Moreover, the results of the proposed model are also calculated for a real ownership network. The results show that the network structure has a significant effect on the probability and the extent of contagion in financial systems. For each network structure, various values of the parameters result in remarkable differences in the systemic risk measures. The results of the real case show that the proposed model is appropriate for the analysis of systemic risk and contagion in financial markets, the identification of systemically important firms, and the estimation of market loss when initial failures occur. This paper suggests a new direction in the modeling of contagion in financial markets, in particular by clarifying the effects of new kinds of financial exposure. The paper's idea and analytical results may also be useful for financial policy makers, portfolio managers and firms to conduct their investment in the right direction.

  1. Efficient generation of connectivity in neuronal networks from simulator-independent descriptions

    Directory of Open Access Journals (Sweden)

    Mikael Djurfeldt

    2014-04-01

    Full Text Available Simulator-independent descriptions of connectivity in neuronal networks promise greater ease of model sharing, improved reproducibility of simulation results, and reduced programming effort for computational neuroscientists. However, until now, enabling the use of such descriptions in a given simulator in a computationally efficient way has entailed considerable work for simulator developers, which must be repeated for each new connectivity-generating library that is developed. We have developed a generic connection generator interface that provides a standard way to connect a connectivity-generating library to a simulator, such that one library can easily be replaced by another, according to the modeller's needs. We have used the connection generator interface to connect C++ and Python implementations of the connection-set algebra to the NEST simulator. We also demonstrate how the simulator-independent modelling framework PyNN can transparently take advantage of this, passing a connection description through to the simulator layer for rapid processing in C++ where a simulator supports the connection generator interface, and falling back to slower iteration in Python otherwise. A set of benchmarks demonstrates the good performance of the interface.

  2. Flow MRI simulation in complex 3D geometries: Application to the cerebral venous network.

    Science.gov (United States)

    Fortin, Alexandre; Salmon, Stéphanie; Baruthio, Joseph; Delbany, Maya; Durand, Emmanuel

    2018-02-05

    Develop and evaluate a complete tool to include 3D fluid flows in MRI simulation, leveraging existing software. Simulation of MR spin flow motion is of high interest in the study of flow artifacts and angiography. However, at present, only a few simulators include this option and most are restricted to static tissue imaging. An extension of JEMRIS, one of the most advanced high-performance open-source simulation platforms to date, was developed. The implementation of a Lagrangian description of the flow allows simulating any MR experiment, including both static tissues and complex flow data from computational fluid dynamics. Simulations of simple flow models are compared with real experiments on a physical flow phantom. A realistic simulation of 3D flow MRI of the cerebral venous network is also carried out. Simulations and real experiments are in good agreement. The generality of the framework is illustrated in 2D and 3D with some common flow artifacts (misregistration and inflow enhancement) and with the three main angiographic techniques: phase-contrast velocimetry (PC), time-of-flight, and contrast-enhanced MRA. The framework provides a versatile and reusable tool for the simulation of any MRI experiment including physiological fluids and arbitrarily complex flow motion. © 2018 International Society for Magnetic Resonance in Medicine.

  3. Numerical Simulation of Fluid Flow through Fractal-Based Discrete Fractured Network

    Directory of Open Access Journals (Sweden)

    Wendong Wang

    2018-01-01

    In recent years, multi-stage hydraulic fracturing technologies have greatly facilitated the development of unconventional oil and gas resources. However, a quantitative description of the "complexity" of the fracture network created by hydraulic fracturing is confronted with many unsolved challenges. Given the multiple scales and heterogeneity of the fracture system, this study proposes a "bifurcated fractal" model to quantitatively describe the distribution of induced hydraulic fracture networks. Construction theory is employed to generate hierarchical fracture patterns as a scaled numerical model. With the implementation of discrete fractal-fracture network modeling (DFFN), fluid flow characteristics in bifurcated fractal fracture networks are characterized. The effects of bifurcated fracture length, bifurcation tendency, and number of bifurcation stages are examined. A field example of a fractured horizontal well is introduced to calibrate the accuracy of the flow model. The proposed model can provide a more realistic representation of complex fracture networks around a fractured horizontal well, and offers a way to quantify the "complexity" of the fracture network in shale reservoirs. The simulation results indicate that the geometry of the bifurcated fractal fracture network model has a significant impact on production performance in the tight reservoir, and that enhancing the connectivity of each bifurcated fracture is the key to improving stimulation performance. In practice, this work provides a novel and efficient workflow for complex fracture characterization and production prediction in naturally fractured reservoirs developed with multi-stage fractured horizontal wells.
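
    The hierarchical construction can be sketched with a simple recursion (assumed geometry and parameters, not the paper's DFFN code): each fracture of length L spawns two children of length r*L that diverge by a fixed bifurcation angle, repeated for a given number of stages:

```python
# A minimal sketch of generating a hierarchical "bifurcated fractal" fracture
# pattern: every fracture segment spawns two shorter children that diverge by
# +/- theta, for a fixed number of bifurcation stages. Geometry and parameter
# values are assumed for illustration.
import math

def bifurcate(x, y, angle, length, stages, ratio=0.7, theta=math.radians(30)):
    """Return line segments ((x0, y0), (x1, y1)) of a bifurcating fracture tree."""
    x1 = x + length * math.cos(angle)
    y1 = y + length * math.sin(angle)
    segments = [((x, y), (x1, y1))]
    if stages > 0:
        for branch_angle in (angle + theta, angle - theta):
            segments += bifurcate(x1, y1, branch_angle,
                                  length * ratio, stages - 1, ratio, theta)
    return segments

network = bifurcate(0.0, 0.0, angle=0.0, length=100.0, stages=4)
print(len(network), "fracture segments")   # 2**(stages+1) - 1 = 31
```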

  4. Unified pipe network method for simulation of water flow in fractured porous rock

    Science.gov (United States)

    Ren, Feng; Ma, Guowei; Wang, Yang; Li, Tuo; Zhu, Hehua

    2017-04-01

    Rock masses are often conceptualized as dual-permeability media containing fractures or fracture networks with high permeability and a porous matrix that is less permeable. In order to overcome the difficulties in simulating fluid flow in a highly discontinuous dual-permeability medium, an effective unified pipe network method is developed, which discretizes the dual-permeability rock mass into a virtual pipe network system. It includes fracture pipe networks and matrix pipe networks, constructed separately from equivalent flow models in a representative area or volume by taking advantage of the orthogonality of the mesh partition. Numerical examples of fluid flow in 2-D and 3-D domains, including porous media and fractured porous media, are presented to demonstrate the accuracy, robustness, and effectiveness of the proposed unified pipe network method. Results show that the developed method performs well even with highly distorted meshes. Water recharge into a fractured rock mass with a complex fracture network is studied. In this case, the effect of aperture change on the water recharge rate is found to be more significant in the early stage than the effect of fracture density change.
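
    The core of a pipe-network discretization is a linear system assembled from equivalent pipe conductances, analogous to a graph Laplacian with fixed-head boundary nodes. The following toy sketch (assumed conductances and boundary heads, not the authors' formulation) shows the assembly and solution step:

```python
# Toy steady-state pipe-network solve: fractures and matrix are both reduced
# to pipes with equivalent conductances, and the nodal heads follow from a
# linear system assembled like a graph Laplacian with fixed-head nodes.
# Conductances and boundary heads below are assumed for illustration.
import numpy as np

# pipes: (node_i, node_j, conductance); node 0 is the inlet, node 3 the outlet
pipes = [(0, 1, 2.0), (1, 2, 1.5), (0, 2, 0.5), (2, 3, 1.0)]
n = 4
K = np.zeros((n, n))
for i, j, c in pipes:
    K[i, i] += c; K[j, j] += c
    K[i, j] -= c; K[j, i] -= c

fixed = {0: 10.0, 3: 0.0}                  # prescribed hydraulic heads
free = [i for i in range(n) if i not in fixed]
rhs = -K[np.ix_(free, list(fixed))] @ np.array(list(fixed.values()))
h = np.zeros(n)
h[list(fixed)] = list(fixed.values())
h[free] = np.linalg.solve(K[np.ix_(free, free)], rhs)
print("nodal heads:", h)
print("pipe flows :", [(i, j, c * (h[i] - h[j])) for i, j, c in pipes])
```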

  5. USING RANDOM QUANTIZATION IN THE SIMULATION OF NETWORKED CONTROL SYSTEMS

    Directory of Open Access Journals (Sweden)

    V. K. Bitiukov

    2014-01-01

    Networked control systems use a network channel for communication between their elements. This approach has several advantages: lower installation costs, ease of configuration, and ease of diagnostics and maintenance. However, the use of networks in control systems also poses new problems. The network characteristics make the analysis, modeling, and control of networked control systems more complex and challenging. A simulation must take the following factors into account: packet loss, random packet transit times over the network, and the possibility that several data packets reside in the channel simultaneously while being transmitted sequentially. Attempting to account for all of these factors at once leads to a significant increase in the dimension of the mathematical model and, as a consequence, to significant computational challenges. Such models tend to find wide application in research; for engineering calculations, however, mathematical models of small dimension that still provide sufficient accuracy are required. Network channels with random delays and packet loss are considered. Random delays are modeled by an appropriately parameterized Erlang distribution. The probability of packet loss depends on the arrival rate of data packets in the transmission channel and on the parameters of the Erlang distribution. We propose a channel model in the form of a serial connection of discrete elements, each of which performs independent quantization of the input signal. To vary the probability of packet loss, random quantization of the input signal is proposed. A formula is obtained for determining the probability of packet loss during transmission.
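
    The mapping from an Erlang delay distribution to a packet-loss probability can be illustrated with a small Monte Carlo sketch (assumed shape, rate and deadline values, not the authors' parameters), in which a packet is counted as lost when its random delay exceeds a sampling deadline:

```python
# Monte Carlo sketch of a network channel whose per-packet delay is
# Erlang-distributed; a packet counts as lost if its delay exceeds the
# sampling deadline. All parameter values are assumed for illustration.
import random

def simulate_channel(k=3, rate=300.0, deadline=0.02, n_packets=100_000, seed=1):
    """Return the fraction of packets whose Erlang(k, rate) delay misses the deadline."""
    rng = random.Random(seed)
    lost = 0
    for _ in range(n_packets):
        delay = rng.gammavariate(k, 1.0 / rate)   # Erlang = Gamma with integer shape
        if delay > deadline:
            lost += 1
    return lost / n_packets

print("estimated packet-loss probability:", simulate_channel())
```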

  6. Laser Metrology in the Micro-Arcsecond Metrology Testbed

    Science.gov (United States)

    An, Xin; Marx, D.; Goullioud, Renaud; Zhao, Feng

    2004-01-01

    The Space Interferometer Mission (SIM), scheduled for launch in 2009, is a space-borne visible-light stellar interferometer capable of micro-arcsecond-level astrometry. The Micro-Arcsecond Metrology testbed (MAM) is the ground-based testbed that incorporates all the functionalities of SIM minus the telescope, for mission-enabling technology development and verification. MAM employs a laser heterodyne metrology system using the Sub-Aperture Vertex-to-Vertex (SAVV) concept. In this paper, we describe the development and modification of the SAVV metrology launchers and the metrology instrument electronics, precision alignment and pointing control, the location of cyclic error sources in the MAM testbed and methods to mitigate them, as well as performance under the MAM performance metrics.

  7. A vascular image registration method based on network structure and circuit simulation.

    Science.gov (United States)

    Chen, Li; Lian, Yuxi; Guo, Yi; Wang, Yuanyuan; Hatsukami, Thomas S; Pimentel, Kristi; Balu, Niranjan; Yuan, Chun

    2017-05-02

    Image registration is an important research topic in the field of image processing. Applying image registration to vascular images allows multiple images to be strengthened and fused, which has practical value in disease detection, clinical assisted therapy, etc. However, it is hard to register vascular structures with high noise and large differences efficiently and effectively. Unlike common image registration methods based on areas or features, which are sensitive to distortion and uncertainty in the vascular structure, we propose a novel registration method based on network structure and circuit simulation. Vessel images were transformed into graph networks and segmented into branches to reduce the computational complexity. The weighted graph networks were then converted to circuits, in which the node voltages of the circuit, reflecting the vessel structure, were used for node registration. Experiments on two-dimensional and three-dimensional simulated and clinical image sets showed the success of our proposed method in registration. The proposed vascular image registration method based on network structure and circuit simulation is stable, fault tolerant and efficient, and is a useful complement to current mainstream image registration methods.
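
    The graph-to-circuit step can be illustrated with a toy vessel graph (hypothetical edge conductances, not the paper's pipeline): edge weights become conductances, a unit current is injected at one branch point and drained at another, and the resulting node voltages, obtained from the pseudo-inverse of the weighted Laplacian, serve as structural signatures for node matching:

```python
# Hedged sketch of the graph-to-circuit idea on a toy vessel graph: edge
# weights become conductances, a unit current is injected at one node and
# drained at another, and the resulting node voltages V = L^+ I act as
# structural signatures for node matching.
import numpy as np

edges = [(0, 1, 1.0), (1, 2, 0.5), (1, 3, 0.8), (3, 4, 1.2)]  # (i, j, conductance)
n = 5
L = np.zeros((n, n))
for i, j, g in edges:
    L[i, i] += g; L[j, j] += g
    L[i, j] -= g; L[j, i] -= g

current = np.zeros(n)
current[0], current[4] = 1.0, -1.0          # inject at node 0, drain at node 4
voltages = np.linalg.pinv(L) @ current      # defined up to a constant offset
voltages -= voltages.min()                  # normalize for comparison across graphs
print("node-voltage signatures:", np.round(voltages, 3))
```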

  8. A test-bed modeling study for wave resource assessment

    Science.gov (United States)

    Yang, Z.; Neary, V. S.; Wang, T.; Gunawan, B.; Dallman, A.

    2016-02-01

    Hindcasts from phase-averaged wave models are commonly used to estimate the standard statistics used in wave energy resource assessments. However, the research community and wave energy converter (WEC) industry lack a well-documented and consistent modeling approach for conducting these resource assessments at different phases of WEC project development and at different spatial scales, e.g., from small-scale pilot studies to large-scale commercial deployments. It is therefore necessary to evaluate current wave model codes, as well as their limitations and knowledge gaps in predicting sea states, in order to establish best wave modeling practices and to identify future research needs for improving wave prediction for resource assessment. This paper presents the first phase of an ongoing modeling study to address these concerns. The modeling study is being conducted at a test-bed site off the Central Oregon Coast using two of the most widely used third-generation wave models, WaveWatch III and SWAN. A nested-grid modeling approach, with domain dimensions ranging from global to regional scales, was used to provide wave spectral boundary conditions to a local-scale model domain, which has a spatial extent of about 60 km by 60 km and a grid resolution of 250-300 m. Model results simulated by WaveWatch III and SWAN in a structured-grid framework are compared to NOAA wave buoy data for six wave parameters: omnidirectional wave power, significant wave height, energy period, spectral width, direction of maximum directionally resolved wave power, and directionality coefficient. Model performance and computational efficiency are evaluated, and best practices for wave resource assessments are discussed, based on a set of standard error statistics and model run times.
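
    The kind of standard error statistics used to score model output against buoy observations can be computed with a few lines of generic code (illustrative series, not the study's data or exact metric set):

```python
# Generic model-vs-observation error statistics of the kind used to evaluate
# wave hindcasts against buoy data. The series below are illustrative only.
import numpy as np

def error_stats(model, obs):
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = np.mean(model - obs)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    scatter_index = rmse / np.mean(obs)
    corr = np.corrcoef(model, obs)[0, 1]
    return {"bias": bias, "rmse": rmse, "SI": scatter_index, "r": corr}

# illustrative significant-wave-height series (metres), not buoy data
obs = [1.8, 2.1, 2.5, 3.0, 2.7, 2.2]
model = [1.9, 2.0, 2.7, 2.8, 2.9, 2.1]
print(error_stats(model, obs))
```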

  9. Pre-phase Improvement For Distributed Spectrum Sensing in Cognitive Radio Networks

    Directory of Open Access Journals (Sweden)

    Ying Dai

    2014-09-01

    This paper considers a pre-phase of spectrum sensing in cognitive radio networks (CRNs), namely how to choose a channel for spectrum sensing. We take the time dimension, spectrum dimension, and spatial dimension into account and propose a sense-in-order model. In this model, each node maintains four states for each channel, based on the information shared by its neighbors. We construct a state transition diagram for the four states and design an algorithm for every node to calculate the probability of choosing each channel. Extensive simulation results demonstrate the performance of our model. In addition, we conduct experiments on the USRP/GNU Radio testbed to validate the main part of the sense-in-order model with directional antennas. Experimental results show that the average success percentage under the settings of the testbed is above 70%.

  10. Simulating urban growth with emphasis on the connective routes network (case study: Bojnourd city)

    Directory of Open Access Journals (Sweden)

    Mehdi Saadat Novin

    2017-06-01

    The development of urban construction and ever-increasing population growth lead to land-use changes, especially in agricultural lands, which play an important role in providing human food. Accordingly, proper land-use planning is required to protect and preserve valuable agricultural lands and the environment. Predicting urban growth can help in understanding the potential impacts on a region's water resources, economy and people. One of the effective parameters in the development of cities is the connective routes network; its different types and qualities play an important role in decreasing or increasing the growth of the city. The type of connective routes network is also an important factor in the speed and quality of development. In this paper, two different scenarios were used to simulate land-use changes and analyze their results. In the first scenario, modeling is based on the parameters affecting urban growth, without classification of the connective routes network. In the second scenario, the same parameters were considered, but the connective routes were classified into six classes with different weights in order to examine their effect on urban development. The land-use simulation was carried out for 2020-2050. The results clearly show the effect of the connective routes network classification on the output maps: the influence of the first- and second-class main routes on development is conspicuous.

  11. Methodologies for the modeling and simulation of biochemical networks, illustrated for signal transduction pathways: a primer.

    Science.gov (United States)

    ElKalaawy, Nesma; Wassal, Amr

    2015-03-01

    Biochemical networks depict the chemical interactions that take place among elements of living cells. They aim to elucidate how cellular behavior and functional properties of the cell emerge from the relationships between its components, i.e. molecules. Biochemical networks are largely characterized by dynamic behavior, and exhibit high degrees of complexity. Hence, the interest in such networks is growing and they have been the target of several recent modeling efforts. Signal transduction pathways (STPs) constitute a class of biochemical networks that receive, process, and respond to stimuli from the environment, as well as stimuli that are internal to the organism. An STP consists of a chain of intracellular signaling processes that ultimately result in generating different cellular responses. This primer presents the methodologies used for the modeling and simulation of biochemical networks, illustrated for STPs. These methodologies range from qualitative to quantitative, and include structural as well as dynamic analysis techniques. We describe the different methodologies, outline their underlying assumptions, and provide an assessment of their advantages and disadvantages. Moreover, publicly and/or commercially available implementations of these methodologies are listed as appropriate. In particular, this primer aims to provide a clear introduction and comprehensive coverage of biochemical modeling and simulation methodologies for the non-expert, with specific focus on relevant literature of STPs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
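
    At the quantitative end of the methodology spectrum described above, mass-action kinetics lead to ordinary differential equations that can be integrated numerically. A minimal example (toy rate constants, not taken from the primer) for a single phosphorylation/dephosphorylation cycle:

```python
# Minimal ODE-based, mass-action example of quantitative biochemical-network
# modeling: a single phosphorylation/dephosphorylation cycle integrated with
# SciPy. Rate constants are toy values chosen for illustration.
import numpy as np
from scipy.integrate import solve_ivp

k_phos, k_dephos = 1.0, 0.5      # assumed rate constants (1/s)

def cycle(t, y, signal=1.0):
    x, xp = y                    # unphosphorylated / phosphorylated protein
    rate = k_phos * signal * x - k_dephos * xp
    return [-rate, rate]

sol = solve_ivp(cycle, t_span=(0.0, 10.0), y0=[1.0, 0.0], dense_output=True)
print("steady-state phosphorylated fraction ~", round(sol.y[1, -1], 3))  # ~0.667
```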

  12. Simulated apoptosis/neurogenesis regulates learning and memory capabilities of adaptive neural networks.

    Science.gov (United States)

    Chambers, R Andrew; Potenza, Marc N; Hoffman, Ralph E; Miranker, Willard

    2004-04-01

    Characterization of neuronal death and neurogenesis in the adult brain of birds, humans, and other mammals raises the possibility that neuronal turnover represents a special form of neuroplasticity associated with stress responses, cognition, and the pathophysiology and treatment of psychiatric disorders. Multilayer neural network models capable of learning alphabetic character representations via incremental synaptic connection strength changes were used to assess additional learning and memory effects incurred by simulating coordinated apoptotic and neurogenic events in the middle layer. Using a consistent incremental learning capability across all neurons and experimental conditions, increasing the number of middle-layer neurons undergoing turnover increased network learning capacity for new information and increased forgetting of old information. Simulations also showed that specific patterns of neural turnover, based on individual neuronal connection characteristics or on the temporal-spatial pattern of neurons chosen for turnover during new learning, impact new learning performance. These simulations predict that apoptotic and neurogenic events could act together to produce specific learning and memory effects beyond those provided by ongoing mechanisms of connection plasticity in neuronal populations. Regulation of rates as well as patterns of neuronal turnover may serve an important function in tuning the informatic properties of plastic networks according to novel informational demands. Analogous regulation in the hippocampus may provide for adaptive cognitive and emotional responses to novel and stressful contexts, or operate suboptimally as a basis for psychiatric disorders. The implications of these elementary simulations for future biological and neural modeling research on apoptosis and neurogenesis are discussed.
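
    The turnover manipulation described above can be sketched as a weight-reset operation (toy network sizes and initialization, not the authors' model): between learning epochs, a random subset of middle-layer units has its incoming and outgoing connections re-initialized, mimicking coordinated apoptosis and neurogenesis:

```python
# Hedged sketch of simulated neuronal turnover in a multilayer network:
# a random subset of middle-layer units has its incoming and outgoing weights
# re-initialized between learning epochs. Sizes and scales are toy values.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 35, 20, 26          # e.g. 5x7 pixel letters -> 26 classes (assumed)
W_in = rng.normal(0.0, 0.1, (n_hidden, n_in))
W_out = rng.normal(0.0, 0.1, (n_out, n_hidden))

def turnover(W_in, W_out, fraction=0.2):
    """Replace a fraction of hidden units: wipe their learned connections."""
    replaced = rng.choice(n_hidden, size=int(fraction * n_hidden), replace=False)
    W_in[replaced, :] = rng.normal(0.0, 0.1, (len(replaced), n_in))
    W_out[:, replaced] = rng.normal(0.0, 0.1, (n_out, len(replaced)))
    return replaced

print("units turned over:", turnover(W_in, W_out))
```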

  13. Inverse simulation system for manual-controlled rendezvous and docking based on artificial neural network

    Science.gov (United States)

    Zhou, Wanmeng; Wang, Hua; Tang, Guojin; Guo, Shuai

    2016-09-01

    The time-consuming experimental method for handling qualities assessment cannot meet the increasingly fast design requirements of manned space flight. As a tool for aircraft handling qualities research, the model-predictive-control structured inverse simulation (MPC-IS) has potential applications in the aerospace field for guiding astronauts' operations and evaluating handling qualities more effectively. Therefore, this paper establishes an MPC-IS for manual-controlled rendezvous and docking (RVD) and proposes a novel artificial neural network inverse simulation system (ANN-IS) to further decrease the computational cost. The novel system is obtained by replacing the inverse model of the MPC-IS with an artificial neural network. The optimal neural network was trained by the genetic Levenberg-Marquardt algorithm and finally determined by the Levenberg-Marquardt algorithm. In order to validate the MPC-IS and ANN-IS, manual-controlled RVD experiments were carried out on the simulator. Comparisons between simulation results and experimental data demonstrated the validity of the two systems and the high computational efficiency of the ANN-IS.

  14. Phase-dependent stimulation effects on bursting activity in a neural network cortical simulation.

    Science.gov (United States)

    Anderson, William S; Kudela, Pawel; Weinberg, Seth; Bergey, Gregory K; Franaszczuk, Piotr J

    2009-03-01

    A neural network simulation with realistic cortical architecture has been used to study synchronized bursting as a seizure representation. This model has the property that bursting epochs arise and cease spontaneously, and bursting epochs can be induced by external stimulation. We have used this simulation to study the time-frequency properties of the evolving bursting activity, as well as effects due to network stimulation. The model represents a cortical region of 1.6 mm x 1.6 mm and includes seven neuron classes organized by cortical layer, inhibitory or excitatory properties, and electrophysiological characteristics. There are a total of 65,536 modeled single-compartment neurons that operate according to a version of Hodgkin-Huxley dynamics. The intercellular wiring is based on histological studies and our previous modeling efforts. The bursting phase is characterized by a flat frequency spectrum. Stimulation pulses are applied to this modeled network, with the electric field of a 1 mm radius circular electrode represented mathematically in the simulation. A phase dependence of the post-stimulation quiescence is demonstrated, with local relative maxima in efficacy occurring before or during the network depolarization phase of the underlying activity. Brief periods of network insensitivity to stimulation are also demonstrated. The phase dependence was irregular and did not reach statistical significance when averaged over the full 2.5 s of simulated bursting investigated. This result provides a comparison with previous in vivo studies, which have also demonstrated increased efficacy of stimulation when pulses are applied at the peak of the local field potential during cortical afterdischarges. The network bursting is synchronous across the different neuron classes represented, up to an uncertainty of 10 ms. Studies performed with an excitatory chandelier cell component demonstrated increased synchronous bursting in the model, as predicted from

  15. An Efficient Neural Network Based Modeling Method for Automotive EMC Simulation

    Science.gov (United States)

    Frank, Florian; Weigel, Robert

    2011-09-01

    This paper presents a newly developed methodology for VHDL-AMS model integration into SPICE-based EMC simulations. To this end, the VHDL-AMS model, which is available in a compiled version only, is characterized under typical loading conditions, and afterwards a neural network based technique is applied to convert the characteristic voltage and current data into an equivalent circuit in SPICE syntax. After the explanation of the whole method and the presentation of a newly developed switched state-space dynamic neural network model, the entire analysis process is demonstrated using a typical application from the automotive industry.

  16. A Cut Cell Method for Simulating Spatial Models of Biochemical Reaction Networks in Arbitrary Geometries.

    Science.gov (United States)

    Strychalski, Wanda; Adalsteinsson, David; Elston, Timothy C

    2010-01-01

    Cells use signaling networks consisting of multiple interacting proteins to respond to changes in their environment. In many situations, such as chemotaxis, spatial and temporal information must be transmitted through the network. Recent computational studies have emphasized the importance of cellular geometry in signal transduction, but have been limited in their ability to accurately represent complex cell morphologies. We present a finite volume method that addresses this problem. Our method uses Cartesian cut cells and is second order in space and time. We use our method to simulate several models of signaling systems in realistic cell morphologies obtained from live cell images and examine the effects of geometry on signal transduction.

  17. The Living With a Star Space Environment Testbed Program

    Science.gov (United States)

    Barth, Janet; LaBel, Kenneth; Day, John H. (Technical Monitor)

    2001-01-01

    NASA has initiated the Living With a Star (LWS) Program to develop the scientific understanding needed to address the aspects of the connected Sun-Earth system that affect life and society. The program architecture includes science missions, theory and modeling, and Space Environment Testbeds (SET). This paper discusses the Space Environment Testbeds. The goal of the SET program is to improve the engineering approach to accommodate and/or mitigate the effects of solar variability on spacecraft design and operations. The SET program will infuse new technologies into space programs through the collection of data in space and the subsequent design and validation of technologies. Examples of these technologies are cited and discussed.

  18. Design and Analysis of FKSI Nulling Interferometry Testbed

    Science.gov (United States)

    Wilson, Mark; Crooke, Julie; Howard, Joseph; Martino, Anthony; Danchi, William

    2004-01-01

    The Fourier Kelvin Stellar Interferometer is a space-borne mission whose purpose is to validate the existence of previously detected extrasolar giant planets (EGPs) and determine the age and primary atmospheric constituents of these EGPs. It consists of two collecting telescopes followed by a Mach-Zehnder interferometer working in a wavelength range of 3-8 microns. To support this concept, a testbed is being built at NASA Goddard Space Flight Center to demonstrate the feasibility of achieving the required nulling ratio (10^-4) across the waveband. This paper describes the design and performance analysis of the testbed. Considerations such as polarization, pupil overlap, and optical path length control are discussed.

  19. Training Knowledge Bots for Physics-Based Simulations Using Artificial Neural Networks

    Science.gov (United States)

    Samareh, Jamshid A.; Wong, Jay Ming

    2014-01-01

    Millions of complex physics-based simulations are required for the design of an aerospace vehicle. These simulations are usually performed by highly trained and skilled analysts, who execute, monitor, and steer each simulation. Analysts rely heavily on broad experience that may have taken 20-30 years to accumulate. In addition, the simulation software is complex in nature, requiring significant computational resources. Simulations of systems of systems become even more complex and are beyond human capacity to effectively learn their behavior. IBM has developed machines that can learn and compete successfully with a chess grandmaster and with the most successful Jeopardy! contestants. These machines are capable of learning some complex problems much faster than humans can. In this paper, we propose using artificial neural networks to train knowledge bots to identify the idiosyncrasies of simulation software and recognize patterns that can lead to successful simulations. We examine the use of knowledge bots for applications of computational fluid dynamics (CFD), trajectory analysis, commercial finite-element analysis software, and slosh propellant dynamics. We show that machine learning algorithms can be used to learn the idiosyncrasies of computational simulations and identify regions of instability without including any additional information about their mathematical form or applied discretization approaches.

  20. DECISION WITH ARTIFICIAL NEURAL NETWORKS IN DISCRETE EVENT SIMULATION MODELS ON A TRAFFIC SYSTEM

    Directory of Open Access Journals (Sweden)

    Marília Gonçalves Dutra da Silva

    2016-04-01

    This work demonstrates the use of a mechanism for developing discrete-event simulation models that perform decision operations through the implementation of an artificial neural network. Actions that involve complex operations performed by a human agent in a process, for example, are often modeled in simplified form with the usual mechanisms of simulation software. A traffic system controlled by a traffic officer, with a flow of vehicles and pedestrians, was therefore chosen to demonstrate the proposed solution. Through a module built in the simulation software itself, it was possible to connect the intelligent decision algorithm to the simulation model. The results showed that the model responded as expected when subjected to actions that required different decisions to maintain the operation of the system under changes in the flow of people and vehicles.
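
    The mechanism can be sketched as a small feed-forward network called at each decision event of a discrete-event traffic simulation to choose which stream is served next (placeholder weights and inputs; in the paper the network would be trained and embedded in the simulation software's own module):

```python
# Toy sketch of hooking a neural decision function into a discrete-event
# traffic model: at each decision event the network maps queue lengths to a
# choice of which stream gets the right of way. Weights are placeholders;
# a trained network would be used in practice.
import numpy as np

def decide(queue_vehicles, queue_pedestrians, W1, b1, W2, b2):
    """Return 0 to serve vehicles, 1 to serve pedestrians."""
    x = np.array([queue_vehicles, queue_pedestrians], dtype=float)
    hidden = np.tanh(W1 @ x + b1)
    scores = W2 @ hidden + b2
    return int(np.argmax(scores))

rng = np.random.default_rng(3)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)   # placeholder (untrained) weights
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

# one simulated decision event with illustrative queue lengths
print("serve:", ["vehicles", "pedestrians"][decide(12, 3, W1, b1, W2, b2)])
```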