WorldWideScience

Sample records for network computing environments

  1. HeNCE: A Heterogeneous Network Computing Environment

    Directory of Open Access Journals (Sweden)

    Adam Beguelin

    1994-01-01

    Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.
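
    As a rough illustration of the graph idea described above (and not HeNCE's actual graph language), the following Python sketch represents nodes as ordinary subroutines and arcs as data dependencies, then executes the graph in dependency order; the task names and functions are hypothetical. Independent nodes are exactly the ones a system like HeNCE could run in parallel.

        # Minimal sketch of a HeNCE-style task graph: nodes wrap conventional
        # subroutines, arcs encode data dependencies. The task names and functions
        # below are hypothetical illustrations, not part of HeNCE itself.
        from graphlib import TopologicalSorter

        def load_data():    return [1.0, 2.0, 3.0]
        def scale(xs):      return [2.0 * x for x in xs]
        def offset(xs):     return [x + 1.0 for x in xs]
        def combine(a, b):  return [x + y for x, y in zip(a, b)]

        # Arcs: each node lists the nodes it depends on (its inputs).
        graph = {"scale": {"load"}, "offset": {"load"}, "combine": {"scale", "offset"}}
        funcs = {"load": load_data, "scale": scale, "offset": offset, "combine": combine}

        results = {}
        for node in TopologicalSorter(graph).static_order():
            deps = sorted(graph.get(node, ()))
            results[node] = funcs[node](*(results[d] for d in deps))

        print(results["combine"])  # independent nodes (scale, offset) could run in parallel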

  2. Network Computer Technology. Phase I: Viability and Promise within NASA's Desktop Computing Environment

    Science.gov (United States)

    Paluzzi, Peter; Miller, Rosalind; Kurihara, West; Eskey, Megan

    1998-01-01

    Over the past several months, major industry vendors have made a business case for the network computer as a win-win solution toward lowering total cost of ownership. This report provides results from Phase I of the Ames Research Center network computer evaluation project. It identifies factors to be considered for determining cost of ownership; further, it examines where, when, and how network computer technology might fit in NASA's desktop computing architecture.

  3. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment

    Science.gov (United States)

    2014-01-01

    Background: To improve on the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt automated reverse engineering procedures instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, it is advocated to adopt cloud computing as a promising solution, the most popular approach being the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms that infer large gene networks. Results: This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and that the computation time can be largely reduced. Conclusions: Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel
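
    The following Python sketch illustrates only the map/reduce idea behind parallel fitness evaluation of candidate network-parameter vectors; it uses multiprocessing as a stand-in for Hadoop MapReduce, and the toy fitness function and data are assumptions, not GeneNetWeaver output or the authors' GA-PSO code.

        # Map step: score candidates in parallel. Reduce step: keep the best one.
        import random
        from multiprocessing import Pool

        TARGET = [0.2, -0.5, 0.9, 0.1]          # "observed" network parameters (toy data)

        def fitness(candidate):                  # map step: score one candidate
            return sum((c - t) ** 2 for c, t in zip(candidate, TARGET)), candidate

        def random_candidate():
            return [random.uniform(-1, 1) for _ in TARGET]

        if __name__ == "__main__":
            population = [random_candidate() for _ in range(200)]
            with Pool() as pool:                 # evaluate the population in parallel
                scored = pool.map(fitness, population)
            best_error, best = min(scored)       # reduce step: keep the best candidate
            print(best_error, best)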

  4. Personal Computer Local Area Network Security in an Academic Environment

    Science.gov (United States)

    1989-12-01

    The San Francisco Examiner ran an article by John Dvorak on Sunday, August 6th, titled "Viruses Make Me Sick". The author speaks of the...humidity or foreign object destruction (e.g., a drink spilled into the keyboard). Unfortunately, these areas can be tough to guard against. User training is...inserted into a floppy drive or a favorite soft drink is placed two inches from a keyboard. Instead, upon introduction to the network labs

  5. APINetworks: A general API for the treatment of complex networks in arbitrary computational environments

    Science.gov (United States)

    Niño, Alfonso; Muñoz-Caro, Camelia; Reyes, Sebastián

    2015-11-01

    The last decade witnessed a great development of the structural and dynamic study of complex systems described as a network of elements. In this approach, systems are described as a set of possibly heterogeneous entities or agents (the network nodes) interacting in possibly different ways (defining the network edges). In this context, it is of practical interest to model and handle not only static and homogeneous networks but also dynamic, heterogeneous ones. Depending on the size and type of the problem, these networks may require different computational approaches involving sequential, parallel or distributed systems, with or without the use of disk-based data structures. In this work, we develop an Application Programming Interface (APINetworks) for the modeling and treatment of general networks in arbitrary computational environments. To minimize dependency between components, we decouple the network structure from its function, using different packages for grouping sets of related tasks. The structural package, the one in charge of building and handling the network structure, is the core element of the system. In this work, we focus on this structural component of the API. We apply an object-oriented approach that makes use of inheritance and polymorphism. In this way, we can model static and dynamic networks with heterogeneous elements in the nodes and heterogeneous interactions in the edges. In addition, this approach permits a unified treatment of different computational environments. Tests performed on a C++11 version of the structural package show that, on current standard computers, the system can handle, in main memory, directed and undirected linear networks formed by tens of millions of nodes and edges. Our results compare favorably to those of existing tools.
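
    A minimal Python sketch of the object-oriented idea described above (inheritance and polymorphism for heterogeneous nodes and edges); the class names are illustrative assumptions, not the actual APINetworks interface, which is implemented in C++11.

        # Heterogeneous node types share a common interface through inheritance, so the
        # container can treat them polymorphically. Names are illustrative only.
        class Node:
            def __init__(self, node_id):
                self.node_id = node_id
                self.edges = []

        class PersonNode(Node):                      # one kind of heterogeneous node...
            def __init__(self, node_id, name):
                super().__init__(node_id)
                self.name = name

        class SensorNode(Node):                      # ...and another
            def __init__(self, node_id, location):
                super().__init__(node_id)
                self.location = location

        class Edge:                                  # heterogeneous interactions
            def __init__(self, source, target, weight=1.0):
                self.source, self.target, self.weight = source, target, weight
                source.edges.append(self)

        class Network:
            def __init__(self):
                self.nodes, self.edges = {}, []
            def add_node(self, node):                # any Node subclass works here
                self.nodes[node.node_id] = node
            def add_edge(self, src_id, dst_id, **kw):
                edge = Edge(self.nodes[src_id], self.nodes[dst_id], **kw)
                self.edges.append(edge)
                return edge

        net = Network()
        net.add_node(PersonNode(1, "alice"))
        net.add_node(SensorNode(2, "greenhouse"))
        net.add_edge(1, 2, weight=0.7)
        print(len(net.nodes), len(net.edges))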

  6. SiGN: large-scale gene network estimation environment for high performance computing.

    Science.gov (United States)

    Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer" which is planned to achieve 10 petaflops in 2012, and other high performance computing environments including Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and therefore are designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/ .

  7. Network-based Parallel Retrieval Onboard Computing Environment for Sensor Systems Deployed on NASA Unmanned Aircraft Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Remote Sensing Solutions proposes to develop the Network-based Parallel Retrieval Onboard Computing Environment for Sensor Systems (nPROCESS) for deployment on...

  8. computer networks

    Directory of Open Access Journals (Sweden)

    N. U. Ahmed

    2002-01-01

    In this paper, we construct a new dynamic model for the Token Bucket (TB) algorithm used in computer networks and use a systems approach for its analysis. This model is then augmented by adding a dynamic model for a multiplexor at an access node where the TB exercises a policing function. In the model, traffic policing, multiplexing and network utilization are formally defined. Based on the model, we study such issues as quality of service (QoS), traffic sizing and network dimensioning. We also propose an algorithm using feedback control to improve QoS and network utilization. Applying MPEG video traces as the input traffic to the model, we verify the usefulness and effectiveness of our model.
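
    A minimal token bucket policer, sketched in Python to illustrate the policing function discussed above; the rate and depth values are illustrative assumptions, not parameters from the paper.

        # Tokens accumulate at a fixed rate up to the bucket depth; a packet conforms
        # only if enough tokens are available, otherwise it is dropped or marked.
        import time

        class TokenBucket:
            def __init__(self, rate, capacity):
                self.rate = rate            # tokens added per second
                self.capacity = capacity    # bucket depth (burst size)
                self.tokens = capacity
                self.last = time.monotonic()

            def allow(self, packet_size):
                now = time.monotonic()
                self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= packet_size:
                    self.tokens -= packet_size   # conforming packet: consume tokens
                    return True
                return False                     # non-conforming packet: police it

        tb = TokenBucket(rate=1000.0, capacity=1500.0)
        print([tb.allow(500) for _ in range(5)])  # early bursts pass, later ones are policed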

  9. Computational design of genomic transcriptional networks with adaptation to varying environments

    Science.gov (United States)

    Carrera, Javier; Elena, Santiago F.; Jaramillo, Alfonso

    2012-01-01

    Transcriptional profiling has been widely used as a tool for unveiling the coregulations of genes in response to genetic and environmental perturbations. These coregulations have been used, in a few instances, to infer global transcriptional regulatory models. Here, using the large amount of transcriptomic information available for the bacterium Escherichia coli, we seek to understand the design principles determining the regulation of its transcriptome. Combining transcriptomic and signaling data, we develop an evolutionary computational procedure that allows obtaining alternative genomic transcriptional regulatory networks (GTRNs) that still maintain their adaptability to dynamic environments. We apply our methodology to an E. coli GTRN and show that it could be rewired to simpler transcriptional regulatory structures. These rewired GTRNs still maintain the global physiological response to fluctuating environments. Rewired GTRNs contain 73% fewer regulated operons. Genes with similar functions and coordinated patterns of expression across environments are clustered into longer regulated operons. These synthetic GTRNs are more sensitive and show a more robust response to challenging environments. This result illustrates that the natural configuration of the E. coli GTRN does not necessarily result from selection for robustness to environmental perturbations, but that evolutionary contingencies may have been important as well. We also discuss the limitations of our methodology in the context of the demand theory. Our procedure will be useful as a novel way to analyze global transcription regulation networks and in synthetic biology for the de novo design of genomes. PMID:22927389

  10. Hyperswitch Communication Network Computer

    Science.gov (United States)

    Peterson, John C.; Chow, Edward T.; Priel, Moshe; Upchurch, Edwin T.

    1993-01-01

    Hyperswitch Communications Network (HCN) computer is a prototype multiple-processor computer being developed. It incorporates an improved version of the hyperswitch communication network described in "Hyperswitch Network For Hypercube Computer" (NPO-16905). It is designed to support high-level software and expansion of itself. The HCN computer is a message-passing, multiple-instruction/multiple-data computer offering significant advantages over older single-processor and bus-based multiple-processor computers with respect to price/performance ratio, reliability, availability, and manufacturing. The design of the HCN operating-system software provides a flexible computing environment accommodating both parallel and distributed processing. It also achieves a balance among the following competing factors: performance in processing and communications, ease of use, and tolerance of (and recovery from) faults.

  11. On the relevance of efficient, integrated computer and network monitoring in HEP distributed online environment

    CERN Document Server

    Carvalho, D F; Delgado, V; Albert, J N; Bellas, N; Javello, J; Miere, Y; Ruffinoni, D; Smith, G

    1996-01-01

    Large scientific equipment is controlled by computer systems whose complexity is growing, driven on the one hand by the volume and variety of the information, its distributed nature, and the sophistication of its treatment, and on the other hand by the fast evolution of the computer and network market. Such systems are generically called Large-Scale Distributed Data Intensive Information Systems or, for those dealing more with real-time control, Distributed Computer Control Systems (DCCS). Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as client-server applications. In this framework the monitoring of the computer nodes, the communications network and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC, in view, the aim is to integrate the various functions of DCCS monitoring into one general purpose Multi-layer ...

  12. Proposed Network Intrusion Detection System Based on Fuzzy C-Means Algorithm in Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Shawq Malik Mehibs

    2017-12-01

    Nowadays cloud computing has become an integral part of the IT industry. It provides a working environment that allows users to share data and resources over the internet. Because cloud computing is a virtual grouping of resources offered over the internet, it raises various issues related to security and privacy. Intrusion detection is therefore very important for detecting outsider and insider intruders of the cloud with a high detection rate and a low false-positive alarm rate. This work proposes a network intrusion detection module based on the fuzzy c-means algorithm. The KDD99 dataset is used for the experiments. The proposed system is characterized by a high detection rate with a low false-positive alarm rate.
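
    A minimal fuzzy c-means clustering sketch in Python, showing the kind of algorithm the proposed module builds on; the toy data are random vectors, not KDD99 records, and the anomaly-flagging comment is an assumption about how memberships could be used.

        # Classic fuzzy c-means: alternate between updating cluster centers from the
        # weighted memberships and updating memberships from distances to the centers.
        import numpy as np

        def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=50, seed=0):
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), n_clusters))
            U /= U.sum(axis=1, keepdims=True)                       # random initial memberships
            for _ in range(n_iter):
                Um = U ** m
                centers = (Um.T @ X) / Um.sum(axis=0)[:, None]      # update cluster centers
                d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-10
                U = 1.0 / (d ** (2 / (m - 1)))                      # update memberships
                U /= U.sum(axis=1, keepdims=True)
            return centers, U

        X = np.vstack([np.random.randn(50, 3), np.random.randn(50, 3) + 5.0])
        centers, memberships = fuzzy_c_means(X)
        print(centers)   # records with low membership in "normal" clusters could be flagged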

  13. Computing environment logbook

    Science.gov (United States)

    Osbourn, Gordon C; Bouchard, Ann M

    2012-09-18

    A computing environment logbook logs events occurring within a computing environment. The events are displayed as a history of past events within the logbook of the computing environment. The logbook provides search functionality to search through the history of past events to find one or more selected past events, and further, enables an undo of the one or more selected past events.
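
    A small Python sketch of the logbook behavior described above (log events, search the history, undo a selected event); the event names and undo actions are hypothetical.

        # Record events, search the history, and undo a selected past event.
        class Logbook:
            def __init__(self):
                self.history = []                       # ordered log of past events

            def log(self, description, undo_action=None):
                self.history.append({"description": description, "undo": undo_action})

            def search(self, keyword):
                return [e for e in self.history if keyword in e["description"]]

            def undo(self, event):
                if event["undo"]:
                    event["undo"]()                     # revert the selected past event
                self.history.remove(event)

        workspace = {"report.txt": "draft"}
        book = Logbook()
        book.log("created report.txt", undo_action=lambda: workspace.pop("report.txt"))
        book.log("opened editor")

        for event in book.search("report"):
            book.undo(event)
        print(workspace)   # {} because the creation has been undone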

  14. Computer networks monitoring

    OpenAIRE

    Antončič, Polona

    2012-01-01

    The present thesis, entitled Computer Networks Monitoring, introduces the basics of computer networks, the purpose of and methods for collecting data from networking devices, software for system monitoring, and the case of monitoring a real network with tens of network devices. Networks represent an important part of modern information technology and serve for the exchange of data and resources, which makes their faultless operation crucially important. Correct and efficient sys...

  15. Introduction to computer networking

    CERN Document Server

    Robertazzi, Thomas G

    2017-01-01

    This book gives a broad look at both fundamental networking technology and new areas that support it and use it. It is a concise introduction to the most prominent, recent technological topics in computer networking. Topics include network technology such as wired and wireless networks, enabling technologies such as data centers, software defined networking, cloud and grid computing and applications such as networks on chips, space networking and network security. The accessible writing style and non-mathematical treatment make this a useful book for the student, network and communications engineer, computer scientist and IT professional. • Features a concise, accessible treatment of computer networking, focusing on new technological topics; • Provides non-mathematical introduction to networks in their most common forms today; • Includes new developments in switching, optical networks, WiFi, Bluetooth, LTE, 5G, and quantum cryptography.

  16. Printing in Ubiquitous Computing Environments

    NARCIS (Netherlands)

    Karapantelakis, Athanasios; Delvic, Alisa; Zarifi Eslami, Mohammed; Khamit, Saltanat

    Document printing has long been considered an indispensable part of the workspace. While this process is considered trivial and simple for environments where resources are ample (e.g. desktop computers connected to printers within a corporate network), it becomes complicated when applied in a mobile

  17. Basics of Computer Networking

    CERN Document Server

    Robertazzi, Thomas

    2012-01-01

    Springer Brief Basics of Computer Networking provides a non-mathematical introduction to the world of networks. This book covers both technology for wired and wireless networks. Coverage includes transmission media, local area networks, wide area networks, and network security. Written in a very accessible style for the interested layman by the author of a widely used textbook with many years of experience explaining concepts to the beginner.

  18. Computer network defense system

    Science.gov (United States)

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb

    2017-08-22

    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines in an operating network in a deception network to form a group of cloned virtual machines in the deception network when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines was in the operating network. The computer system moves network connections for the group of the virtual machines in the operating network used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, thereby enabling protection of the group of the virtual machines from actions performed by the adversary.

  19. Computer-communication networks

    CERN Document Server

    Meditch, James S

    1983-01-01

    Computer-Communication Networks presents a collection of articles the focus of which is on the field of modeling, analysis, design, and performance optimization. It discusses the problem of modeling the performance of local area networks under file transfer. It addresses the design of multi-hop, mobile-user radio networks. Some of the topics covered in the book are the distributed packet switching queuing network design, some investigations on communication switching techniques in computer networks and the minimum hop flow assignment and routing subject to an average message delay constraint

  20. Data Logistics in Network Computing

    CERN Multimedia

    CERN. Geneva; Marquina, Miguel Angel

    2005-01-01

    In distributed computing environments, performance is often dominated by the time that it takes to move data over a network. In the case of data-centric applications, or Data Grids, this problem of data movement becomes one of the overriding concerns. This talk describes techniques for improving data movement in Grid environments that we refer to as 'logistics.' We demonstrate that by using storage and cooperative forwarding 'in' the network, we can improve end to end throughput in many cases. Our approach offers clear performance benefits for high-bandwidth, high-latency networks. This talk will introduce the Logistical Session Layer (LSL) and provide experimental results from that system.

  1. A computational model for path loss in wireless sensor networks in orchard environments.

    Science.gov (United States)

    Anastassiu, Hristos T; Vougioukas, Stavros; Fronimos, Theodoros; Regen, Christian; Petrou, Loukas; Zude, Manuela; Käthner, Jana

    2014-03-12

    A computational model for radio wave propagation through tree orchards is presented. Trees are modeled as collections of branches, geometrically approximated by cylinders, whose dimensions are determined on the basis of measurements in a cherry orchard. Tree canopies are modeled as dielectric spheres of appropriate size. A single row of trees was modeled by creating copies of a representative tree model positioned on top of a rectangular, lossy dielectric slab that simulated the ground. The complete scattering model, including soil and trees, enhanced by periodicity conditions corresponding to the array, was characterized via a commercial computational software tool that simulates the wave propagation by means of the Finite Element Method. The attenuation of the simulated signal was compared to measurements taken in the cherry orchard using two ZigBee receiver-transmitter modules. Near the top of the tree canopies (at 3 m), the predicted attenuation was close to the measured one, only slightly underestimated. However, at 1.5 m the solver underestimated the measured attenuation significantly, especially when leaves were present and at longer distances. This suggests that the effects of scattering from neighboring tree rows need to be incorporated into the model. However, complex geometries result in ill-conditioned linear systems that affect the solver's convergence.
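
    For illustration only, the sketch below uses a simple empirical log-distance path-loss model rather than the paper's Finite Element model; the reference loss and path-loss exponent are assumed values.

        # Log-distance path loss: PL(d) = PL(d0) + 10 * n * log10(d / d0).
        # Denser foliage is usually captured by a larger exponent n.
        import math

        def path_loss_db(distance_m, d0=1.0, pl0_db=40.0, exponent=3.0):
            return pl0_db + 10.0 * exponent * math.log10(distance_m / d0)

        for d in (5, 10, 20, 40):
            print(f"{d:>3} m: {path_loss_db(d):.1f} dB")
        # A larger exponent would be needed near the ground (1.5 m), consistent with the
        # stronger attenuation measured there when leaves are present.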

  2. Optimal monitoring of computer networks

    Energy Technology Data Exchange (ETDEWEB)

    Fedorov, V.V.; Flanagan, D.

    1997-08-01

    The authors apply ideas from optimal design theory to the very specific area of monitoring large computer networks. The behavior of these networks is so complex and uncertain that it is quite natural to use the statistical methods of experimental design, which originated in such areas as biology, the behavioral sciences and agriculture, where the random character of phenomena is a crucial component and systems are too complicated to be described by sophisticated deterministic models. They want to emphasize that only the first steps have been completed, and relatively simple underlying concepts about network functions have been used. Their immediate goal is to initiate studies focused on developing efficient experimental design techniques which can be used by practitioners working with large networks operating and evolving in a random environment.

  3. Computer Networks and Globalization

    Directory of Open Access Journals (Sweden)

    J. Magliaro

    2007-07-01

    Communication and information computer networks connect the world in ways that make globalization more natural and inequity more subtle. As educators, we look at these phenomena holistically, analyzing them from the realist's view, thus exploring tensions, (in)equity and (in)justice, and from the idealist's view, thus embracing connectivity, convergence and the development of a collective consciousness. In an increasingly market-driven world we find examples of openness and human generosity that are based on networks, specifically the Internet. After addressing open movements in publishing, the software industry and education, we describe the possibility of a dialectic equilibrium between globalization and indigenousness in view of ecologically designed future smart networks.

  4. Social Networks and the Environment

    OpenAIRE

    Julio Videras

    2013-01-01

    This review discusses empirical research on social networks and the environment; it summarizes findings from representative studies and the conceptual frameworks social scientists use to examine the role of social networks. The article presents basic concepts in social network analysis, summarizes common challenges of empirical research on social networks, and outlines areas for future research. Finally, the article discusses the normative and positive meanings of social networks.

  5. Networking for the Environment

    DEFF Research Database (Denmark)

    Dickel, Petra; Hörisch, Jacob; Ritter, Thomas

    2018-01-01

    Although the public debate on the environmental orientation of firms has intensified, there is a lack of understanding about the consequences of that orientation, especially in terms of its impact on firms' networking behavior. In order to fill this gap, this paper analyzes the impact of external and internal environmental orientation on start-ups' network characteristics, because networks are both vital for the success of start-ups and resource demanding. More specifically, the effects of environmental orientation on networking frequency and network size among start-ups are analyzed. Empirical data from 248 technology-based start-ups shows that those firms with a strong external environmental orientation have significantly higher networking frequencies and build larger networks. Conversely, a strong internal environmental orientation is linked to smaller networks. Thus, the results highlight...

  6. Scientific Visualization in High Speed Network Environments

    Science.gov (United States)

    Vaziri, Arsi; Kutler, Paul (Technical Monitor)

    1997-01-01

    In several cases, new visualization techniques have vastly increased the researcher's ability to analyze and comprehend data. Similarly, the role of networks in providing an efficient supercomputing environment has become more critical and continues to grow at a faster rate than the increase in the processing capabilities of supercomputers. A close relationship between scientific visualization and high-speed networks in providing an important link to support efficient supercomputing is identified. The two technologies are driven by the increasing complexity and volume of supercomputer data. The interaction of scientific visualization and high-speed networks in a Computational Fluid Dynamics simulation/visualization environment is described. Current capabilities supported by high-speed networks, supercomputers, and high-performance graphics workstations at the Numerical Aerodynamic Simulation Facility (NAS) at NASA Ames Research Center are described. Applied research in providing a supercomputer visualization environment to support future computational requirements is summarized.

  7. NASA's unique networking environment

    Science.gov (United States)

    Johnson, Marjory J.

    1988-01-01

    Networking is an infrastructure technology; it is a tool for NASA to support its space and aeronautics missions. Some of NASA's networking problems are shared by the commercial and/or military communities, and can be solved by working with these communities. However, some of NASA's networking problems are unique and will not be addressed by these other communities. Individual characteristics of NASA's space-mission networking environment are examined, the combination of all these characteristics that distinguishes NASA's networking systems from either commercial or military systems is explained, and some research areas that are important for NASA to pursue are outlined.

  8. Computing networks from cluster to cloud computing

    CERN Document Server

    Vicat-Blanc, Pascale; Guillier, Romaric; Soudan, Sebastien

    2013-01-01

    "Computing Networks" explores the core of the new distributed computing infrastructures we are using today:  the networking systems of clusters, grids and clouds. It helps network designers and distributed-application developers and users to better understand the technologies, specificities, constraints and benefits of these different infrastructures' communication systems. Cloud Computing will give the possibility for millions of users to process data anytime, anywhere, while being eco-friendly. In order to deliver this emerging traffic in a timely, cost-efficient, energy-efficient, and

  9. Computational network design from functional specifications

    KAUST Repository

    Peng, Chi Han

    2016-07-11

    Connectivity and layout of underlying networks largely determine agent behavior and usage in many environments. For example, transportation networks determine the flow of traffic in a neighborhood, whereas building floorplans determine the flow of people in a workspace. Designing such networks from scratch is challenging as even local network changes can have large global effects. We investigate how to computationally create networks starting from only high-level functional specifications. Such specifications can be in the form of network density, travel time versus network length, traffic type, destination location, etc. We propose an integer programming-based approach that guarantees that the resultant networks are valid by fulfilling all the specified hard constraints and that they score favorably in terms of the objective function. We evaluate our algorithm in two different design settings, street layouts and floorplans, to demonstrate that diverse networks can emerge purely from high-level functional specifications.
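
    The toy Python sketch below mimics the spirit of the formulation (hard constraints plus an objective) by brute force: choose candidate street segments that keep all sites connected while minimizing total length. The sites, candidate edges and brute-force search are illustrative assumptions, not the paper's integer program.

        # Hard constraint: all sites connected. Objective: minimize total segment length.
        from itertools import combinations
        from math import dist

        sites = {"A": (0, 0), "B": (2, 0), "C": (2, 2), "D": (0, 2)}
        candidates = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C"), ("B", "D")]

        def connected(edges):
            if not edges:
                return len(sites) <= 1
            adj = {n: set() for n in sites}
            for u, v in edges:
                adj[u].add(v); adj[v].add(u)
            seen, stack = set(), [next(iter(sites))]
            while stack:
                n = stack.pop()
                if n not in seen:
                    seen.add(n)
                    stack.extend(adj[n] - seen)
            return seen == set(sites)

        best = None
        for k in range(len(candidates) + 1):
            for subset in combinations(candidates, k):
                if connected(subset):
                    length = sum(dist(sites[u], sites[v]) for u, v in subset)
                    if best is None or length < best[0]:
                        best = (length, subset)

        print(best)   # shortest connected layout; a real solver scales far beyond brute force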

  10. Environment Aware Cellular Networks

    KAUST Repository

    Ghazzai, Hakim

    2015-02-01

    The unprecedented rise of mobile user demand over the years has led to an enormous growth of the energy consumption of wireless networks as well as of greenhouse gas emissions, which are currently estimated to be around 70 million tons per year. This significant growth of energy consumption impels network companies to pay huge bills which represent around half of their operating expenditures. Therefore, many service providers, including mobile operators, are looking for new and modern green solutions to help reduce their expenses as well as the level of their CO2 emissions. Base stations are the most power-greedy elements in cellular networks: they drain around 80% of the total network energy consumption even during low-traffic periods. Thus, there is a growing need to develop more energy-efficient techniques to enhance the green performance of future 4G/5G cellular networks. Due to the problem of traffic load fluctuations in cellular networks during different periods of the day and between different areas (shopping or business districts and residential areas), the base station sleeping strategy has been one of the most popular research topics in green communications. In this presentation, we present several practical green techniques that provide significant gains for mobile operators. Indeed, combined with the base station sleeping strategy, these techniques achieve not only a minimization of the fossil fuel consumption but also an enhancement of mobile operator profits. We start with an optimized cell planning method that considers varying spatial and temporal user densities. We then use optimal transport theory in order to define the cell boundaries such that the network total transmit power is reduced. Afterwards, we exploit the features of the modern electrical grid, the smart grid, as a new tool of power management for cellular networks and we optimize the energy procurement from multiple energy retailers characterized by different prices and pollutant

  11. Computing and networking at JINR

    CERN Document Server

    Zaikin, N S; Strizh, T A

    2001-01-01

    This paper describes the computing and networking facilities at the Joint Institute for Nuclear Research. The Joint Institute for Nuclear Research (JINR) is an international intergovernmental organization located in Dubna, a small town on the bank of the Volga river 120 km north from Moscow. At present JINR has 18 Member States. The Institute consists of 7 scientific Laboratories and some subdivisions. JINR has scientific cooperation with such scientific centres as CERN, FNAL, DESY etc. and is equipped with the powerful and fast computation means integrated into the worldwide computer networks. The Laboratory of Information Technologies (LIT) is responsible for Computing and Networking at JINR. (5 refs).

  12. Administration of remote computer networks

    OpenAIRE

    Fjeldbo, Stig Jarle

    2005-01-01

    Master's programme in network and system administration. Today's computer networks have gone from typically being small local area networks to wide area networks, where users and servers are interconnected with each other from all over the world. This development has gradually expanded as bandwidth has become higher and cheaper. But when dealing with network traffic, bandwidth is only one of the important properties. Delay, jitter and reliability are also important properties for t...

  13. Understanding and designing computer networks

    CERN Document Server

    King, Graham

    1995-01-01

    Understanding and Designing Computer Networks considers the ubiquitous nature of data networks, with particular reference to internetworking and the efficient management of all aspects of networked integrated data systems. In addition it looks at the next phase of networking developments; efficiency and security are covered in the sections dealing with data compression and data encryption; and future examples of network operations, such as network parallelism, are introduced.A comprehensive case study is used throughout the text to apply and illustrate new techniques and concepts as th

  14. Computer Networks A Systems Approach

    CERN Document Server

    Peterson, Larry L

    2011-01-01

    This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big pictur

  15. RNEDE: Resilient Network Design Environment

    Energy Technology Data Exchange (ETDEWEB)

    Venkat Venkatasubramanian; Tanu Malik; Arun Giridh; Craig Rieger; Keith Daum; Miles McQueen

    2010-08-01

    Modern living is more and more dependent on the intricate web of critical infrastructure systems. The failure or damage of such systems can cause huge disruptions. Traditional design of this web of critical infrastructure systems was based on the principles of functionality and reliability. However, it is increasingly being realized that such design objectives are not sufficient. Threats, disruptions and faults often compromise the network, taking away the benefits of an efficient and reliable design. Thus, traditional network design parameters must be combined with self-healing mechanisms to obtain a resilient design of the network. In this paper, we present RNEDE, a resilient network design environment that not only optimizes the network for performance but also tolerates fluctuations in its structure that result from external threats and disruptions. The environment evaluates a set of remedial actions to bring a compromised network to an optimal level of functionality. The environment includes a visualizer that enables the network administrator to be aware of the current state of the network and the suggested remedial actions at all times.

  16. On computer vision in wireless sensor networks.

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Nina M.; Ko, Teresa H.

    2004-09-01

    Wireless sensor networks allow detailed sensing of otherwise unknown and inaccessible environments. While it would be beneficial to include cameras in a wireless sensor network because images are so rich in information, the power cost of transmitting an image across the wireless network can dramatically shorten the lifespan of the sensor nodes. This paper describes a new paradigm for the incorporation of imaging into wireless networks. Rather than focusing on transmitting images across the network, we show how an image can be processed locally for key features using simple detectors. Contrasted with traditional event detection systems that trigger an image capture, this enables a new class of sensors which uses a low-power imaging sensor to detect a variety of visual cues. Sharing these features among relevant nodes cues specific actions to better provide information about the environment. We report on various existing techniques developed for traditional computer vision research which can aid in this work.
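
    A small Python sketch of the local-processing idea: instead of transmitting a whole image, a node runs a simple change detector and shares only a few numbers. The synthetic frames and threshold are assumptions for illustration.

        # Frame differencing as a cheap visual cue: report only a compact summary.
        import numpy as np

        def motion_features(previous, current, threshold=30):
            diff = np.abs(current.astype(int) - previous.astype(int))
            changed = diff > threshold                      # simple change detector
            if not changed.any():
                return None                                 # nothing worth reporting
            ys, xs = np.nonzero(changed)
            return {                                        # a few bytes to transmit
                "changed_fraction": float(changed.mean()),
                "centroid": (float(xs.mean()), float(ys.mean())),
            }

        prev = np.zeros((64, 64), dtype=np.uint8)
        curr = prev.copy()
        curr[20:30, 40:50] = 200                            # synthetic moving object
        print(motion_features(prev, curr))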

  17. Risks in Networked Computer Systems

    OpenAIRE

    Klingsheim, André N.

    2008-01-01

    Networked computer systems yield great value to businesses and governments, but also create risks. The eight papers in this thesis highlight vulnerabilities in computer systems that lead to security and privacy risks. A broad range of systems is discussed in this thesis: Norwegian online banking systems, the Norwegian Automated Teller Machine (ATM) system during the 90's, mobile phones, web applications, and wireless networks. One paper also comments on legal risks to bank cust...

  18. Wireless Computational Networking Architectures

    Science.gov (United States)

    2013-12-01

    [2] T. Ho, M. Medard, R. Kotter, D. Karger, M. Effros, J. Shi, and B. Leong, “A Random Linear Network Coding Approach to Multicast,” IEEE...218, January 2008. [10] R. Kotter and F. R. Kschischang, “Coding for Errors and Erasures in Random Network Coding,” IEEE Transactions on...Systems, Johns Hopkins University, Baltimore, Maryland, 2011. 6. B. W. Suter and Z. Yan, U.S. Patent Pending 13/949,319, Rank Deficient Decoding

  19. Computing with Spiking Neuron Networks

    NARCIS (Netherlands)

    H. Paugam-Moisy; S.M. Bohte (Sander); G. Rozenberg; T.H.W. Baeck (Thomas); J.N. Kok (Joost)

    2012-01-01

    Spiking Neuron Networks (SNNs) are often referred to as the 3rd generation of neural networks. Highly inspired by natural computing in the brain and recent advances in neurosciences, they derive their strength and interest from an accurate modeling of synaptic interactions
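
    A minimal leaky integrate-and-fire neuron in Python, the usual first example of the spiking model referred to above; parameter values are illustrative assumptions.

        # Leaky integrate-and-fire: the membrane potential integrates input, leaks toward
        # rest, and emits a spike (then resets) whenever it crosses the threshold.
        import numpy as np

        dt, tau, v_rest, v_thresh, v_reset = 1e-3, 20e-3, 0.0, 1.0, 0.0
        steps = 200
        v = v_rest
        input_current = np.full(steps, 1.2)      # constant drive above threshold
        spike_times = []

        for t in range(steps):
            dv = (-(v - v_rest) + input_current[t]) * (dt / tau)   # leaky integration
            v += dv
            if v >= v_thresh:                                      # fire and reset
                spike_times.append(t * dt)
                v = v_reset

        print(f"{len(spike_times)} spikes, first at {spike_times[0]:.3f} s")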

  20. CSNS computing environment Based on OpenStack

    Science.gov (United States)

    Li, Yakang; Qi, Fazhi; Chen, Gang; Wang, Yanming; Hong, Jianshu

    2017-10-01

    Cloud computing allows for more flexible configuration of IT resources and optimized hardware utilization; it can also provide computing services according to actual need. We are applying this computing mode to the China Spallation Neutron Source (CSNS) computing environment. Firstly, the CSNS experiment and its computing scenarios and requirements are introduced in this paper. Secondly, the design and practice of a cloud computing platform based on OpenStack are demonstrated from the aspects of the cloud computing system framework, network, storage and so on. Thirdly, some improvements we made to OpenStack are discussed further. Finally, the current status of the CSNS cloud computing environment is summarized at the end of this paper.

  1. High-Bandwidth Tactical-Network Data Analysis in a High-Performance-Computing (HPC) Environment: Device Status Data

    Science.gov (United States)

    2015-09-01

    ...statements on any database table or select the entire table at a user-specified interval. Hydra records its data in binary large object (BLOb) files. The data are organized into cuts inside the BLOb files. Typically each cut represents one poll on the network.

  2. Computational Social Network Analysis

    CERN Document Server

    Hassanien, Aboul-Ella

    2010-01-01

    Presents insight into the social behaviour of animals (including the study of animal tracks and learning by members of the same species). This book provides web-based evidence of social interaction, perceptual learning, information granulation and the behaviour of humans and affinities between web-based social networks

  3. Analysis of computer networks

    CERN Document Server

    Gebali, Fayez

    2015-01-01

    This textbook presents the mathematical theory and techniques necessary for analyzing and modeling high-performance global networks, such as the Internet. The three main building blocks of high-performance networks are links, switching equipment connecting the links together, and software employed at the end nodes and intermediate switches. This book provides the basic techniques for modeling and analyzing these last two components. Topics covered include, but are not limited to: Markov chains and queuing analysis, traffic modeling, interconnection networks and switch architectures and buffering strategies. • Provides techniques for modeling and analysis of network software and switching equipment; • Discusses design options used to build efficient switching equipment; • Includes many worked examples of the application of discrete-time Markov chains to communication systems; • Covers the mathematical theory and techniques necessary for ana...

  4. Computer Network Security- The Challenges of Securing a Computer Network

    Science.gov (United States)

    Scotti, Vincent, Jr.

    2011-01-01

    This article is intended to give the reader an overall perspective on what it takes to design, implement, enforce and secure a computer network in the federal and corporate world to ensure the confidentiality, integrity and availability of information. While we will be giving you an overview of network design and security, this article will concentrate on the technology and human factors of securing a network and the challenges faced by those doing so. It will cover the large number of policies and the limits of technology and physical efforts to enforce such policies.

  5. Collective network for computer structures

    Energy Technology Data Exchange (ETDEWEB)

    Blumrich, Matthias A [Ridgefield, CT; Coteus, Paul W [Yorktown Heights, NY; Chen, Dong [Croton On Hudson, NY; Gara, Alan [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Takken, Todd E [Brewster, NY; Steinmacher-Burow, Burkhard D [Wernau, DE; Vranas, Pavlos M [Bedford Hills, NY

    2011-08-16

    A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network and class structures. The global collective network may be configured to provide global barrier and interrupt functionality in an asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to the needs of a processing algorithm.

  6. Airborne Cloud Computing Environment (ACCE)

    Science.gov (United States)

    Hardman, Sean; Freeborn, Dana; Crichton, Dan; Law, Emily; Kay-Im, Liz

    2011-01-01

    Airborne Cloud Computing Environment (ACCE) is JPL's internal investment to improve the return on airborne missions by improving the development performance of the data system and the return on the captured science data. The investment is to develop a common science data system capability for airborne instruments that encompasses the end-to-end lifecycle covering planning, provisioning of data system capabilities, and support for scientific analysis, in order to improve the quality, cost effectiveness, and capabilities to enable new scientific discovery and research in earth observation.

  7. Dynamic Optical Networks for Future Internet Environments

    Science.gov (United States)

    Matera, Francesco

    2014-05-01

    This article reports an overview of the evolution of the optical network scenario, taking into account the exponential growth of connected devices, big data, and cloud computing that is driving a concrete transformation impacting the information and communication technology world. This hyper-connected scenario is deeply affecting relationships between individuals, enterprises, citizens, and public administrations, fostering innovative use cases in practically any environment and market, and introducing new opportunities and new challenges. The successful realization of this hyper-connected scenario depends on different elements of the ecosystem. In particular, it builds on connectivity and functionalities allowed by converged next-generation networks and their capacity to support and integrate with the Internet of Things, machine-to-machine communication, and cloud computing. This article aims at providing some hints of this scenario in order to help analyze the impacts on optical system and network issues and requirements. In particular, the role of the software-defined network is investigated by taking into account all scenarios regarding data centers, cloud computing, and machine-to-machine communication, and trying to illustrate all the advantages that could be introduced by advanced optical communications.

  8. Environments for online maritime simulators with cloud computing capabilities

    Science.gov (United States)

    Raicu, Gabriel; Raicu, Alexandra

    2016-12-01

    This paper presents the cloud computing environments, network principles and methods for graphical development in realistic naval simulation, naval robotics and virtual interactions. The aim of this approach is to achieve a good simulation quality in large networked environments using open source solutions designed for educational purposes. Realistic rendering of maritime environments requires near real-time frameworks with enhanced computing capabilities during distance interactions. E-Navigation concepts coupled with the last achievements in virtual and augmented reality will enhance the overall experience leading to new developments and innovations. We have to deal with a multiprocessing situation using advanced technologies and distributed applications using remote ship scenario and automation of ship operations.

  9. Markov Networks in Evolutionary Computation

    CERN Document Server

    Shakya, Siddhartha

    2012-01-01

    Markov networks and other probabilistic graphical models have recently received an upsurge in attention from the evolutionary computation community, particularly in the area of estimation of distribution algorithms (EDAs). EDAs have arisen as one of the most successful experiences in the application of machine learning methods in optimization, mainly due to their efficiency in solving complex real-world optimization problems and their suitability for theoretical analysis. This book focuses on the different steps involved in the conception, implementation and application of EDAs that use Markov networks, and undirected models in general. It can serve as a general introduction to EDAs but also covers an important current void in the study of these algorithms by explaining the specificities and benefits of modeling optimization problems by means of undirected probabilistic models. All major developments to date in the progressive introduction of Markov network based EDAs are reviewed in the book. Hot current researc...
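
    For orientation, the Python sketch below runs a simple univariate EDA (UMDA) on a toy onemax problem; Markov-network EDAs, the subject of the book, replace this independent-bit model with an undirected graphical model, so only the estimate-sample-select loop carries over.

        # UMDA: sample from a probabilistic model, select the best individuals,
        # re-estimate the model, and repeat.
        import random

        n_bits, pop_size, n_select, generations = 20, 100, 30, 40
        probs = [0.5] * n_bits                                  # initial univariate model

        def sample():
            return [1 if random.random() < p else 0 for p in probs]

        for _ in range(generations):
            population = [sample() for _ in range(pop_size)]
            population.sort(key=sum, reverse=True)              # fitness = number of ones
            selected = population[:n_select]
            probs = [sum(ind[i] for ind in selected) / n_select  # re-estimate distribution
                     for i in range(n_bits)]

        print(max(sum(ind) for ind in (sample() for _ in range(10))))   # close to n_bits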

  10. A Multilayer Model of Computer Networks

    OpenAIRE

    Shchurov, Andrey A.

    2015-01-01

    The fundamental concept of applying the system methodology to network analysis declares that network architecture should take into account services and applications which this network provides and supports. This work introduces a formal model of computer networks on the basis of the hierarchical multilayer networks. In turn, individual layers are represented as multiplex networks. The concept of layered networks provides conditions of top-down consistency of the model. Next, we determined the...

  11. System administration of ATLAS TDAQ computing environment

    Energy Technology Data Exchange (ETDEWEB)

    Adeel-Ur-Rehman, A [National Centre for Physics, Islamabad (Pakistan); Bujor, F; Dumitrescu, A; Dumitru, I; Leahu, M; Valsan, L [Politehnica University of Bucharest (Romania); Benes, J [Zapadoceska Univerzita v Plzni (Czech Republic); Caramarcu, C [National Institute of Physics and Nuclear Engineering (Romania); Dobson, M; Unel, G [University of California at Irvine (United States); Oreshkin, A [St. Petersburg Nuclear Physics Institute (Russian Federation); Popov, D [Max-Planck-Institut fuer Kernphysik, Heidelberg (Germany); Zaytsev, A, E-mail: Alexandr.Zaytsev@cern.c [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation)

    2010-04-01

    This contribution gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which deals with administration of the TDAQ computing environment supporting the High Level Trigger, Event Filter and other subsystems of the ATLAS detector operating on the LHC collider at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, about 40 multi-screen user interface machines installed in the control rooms, and various hardware and service monitoring machines as well. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The software distribution requirements are matched by a two-level NFS-based solution. The hardware and network monitoring systems of ATLAS TDAQ are based on NAGIOS with a MySQL cluster behind it for accounting and storing the collected monitoring data, IPMI tools, CERN LANDB and dedicated tools developed by the group, e.g. ConfdbUI. The user management schema deployed in the TDAQ environment is founded on the authentication and role management system based on LDAP. External access to the ATLAS online computing facilities is provided by means of gateways supplied with an accounting system as well. Current activities of the group include deployment of the centralized storage system, testing and validating hardware solutions for future use within the ATLAS TDAQ environment including new multi-core blade servers, developing GUI tools for user authentication and roles management, testing and validating 64-bit OS, and upgrading the existing TDAQ hardware components, authentication servers and the gateways.

  12. Personal computer local networks report

    CERN Document Server

    1991-01-01

    Please note this is a Short Discount publication. Since the first microcomputer local networks of the late 1970's and early 80's, personal computer LANs have expanded in popularity, especially since the introduction of IBM's first PC in 1981. The late 1980s saw a maturing in the industry, with only a few vendors maintaining a large share of the market. This report is intended to give the reader a thorough understanding of the technology used to build these systems ... from cable to chips ... to ... protocols to servers. The report also fully defines PC LANs and the marketplace, with in-

  13. Terminal-oriented computer-communication networks.

    Science.gov (United States)

    Schwartz, M.; Boorstyn, R. R.; Pickholtz, R. L.

    1972-01-01

    Four examples of currently operating computer-communication networks are described in this tutorial paper. They include the TYMNET network, the GE Information Services network, the NASDAQ over-the-counter stock-quotation system, and the Computer Sciences Infonet. These networks all use programmable concentrators for combining a multiplicity of terminals. Included in the discussion for each network is a description of the overall network structure, the handling and transmission of messages, communication requirements, routing and reliability consideration where applicable, operating data and design specifications where available, and unique design features in the area of computer communications.

  14. Computer network defense through radial wave functions

    Science.gov (United States)

    Malloy, Ian J.

    The purpose of this research is to synthesize basic and fundamental findings in quantum computing, as applied to the attack and defense of conventional computer networks. The concept focuses on uses of radio waves as a shield for, and attack against traditional computers. A logic bomb is analogous to a landmine in a computer network, and if one was to implement it as non-trivial mitigation, it will aid computer network defense. As has been seen in kinetic warfare, the use of landmines has been devastating to geopolitical regions in that they are severely difficult for a civilian to avoid triggering given the unknown position of a landmine. Thus, the importance of understanding a logic bomb is relevant and has corollaries to quantum mechanics as well. The research synthesizes quantum logic phase shifts in certain respects using the Dynamic Data Exchange protocol in software written for this work, as well as a C-NOT gate applied to a virtual quantum circuit environment by implementing a Quantum Fourier Transform. The research focus applies the principles of coherence and entanglement from quantum physics, the concept of expert systems in artificial intelligence, principles of prime number based cryptography with trapdoor functions, and modeling radio wave propagation against an event from unknown parameters. This comes as a program relying on the artificial intelligence concept of an expert system in conjunction with trigger events for a trapdoor function relying on infinite recursion, as well as system mechanics for elliptic curve cryptography along orbital angular momenta. Here trapdoor both denotes the form of cipher, as well as the implied relationship to logic bombs.

  15. Conducting network penetration and espionage in a global environment

    CERN Document Server

    Middleton, Bruce

    2014-01-01

    When it's all said and done, penetration testing remains the most effective way to identify security vulnerabilities in computer networks. Conducting Network Penetration and Espionage in a Global Environment provides detailed guidance on how to perform effective penetration testing of computer networks-using free, open source, and commercially available tools, including Backtrack, Metasploit, Wireshark, Nmap, Netcat, and Nessus. It also considers exploits and other programs using Python, PERL, BASH, PHP, Ruby, and Windows PowerShell.The book taps into Bruce Middleton's decades of experience wi

  16. Computation Environments (2) Persistently Evolutionary Semantics

    OpenAIRE

    Ramezanian, Rasoul

    2012-01-01

    In the manuscript titled "Computation environment (1)", we introduced a notion called computation environment as an interactive model for computation and complexity theory. In this model, Turing machines are not autonomous entities and find their meanings through the interaction between a computist and a universal processor, and thus due to evolution of the universal processor, the meanings of Turing machines could change. In this manuscript, we discuss persistently evolutionary intensions. W...

  17. A survey of computational steering environments

    NARCIS (Netherlands)

    J.D. Mulder (Jurriaan); J.J. van Wijk (Jack); R. van Liere (Robert)

    1998-01-01

    Computational steering is a powerful concept that allows scientists to interactively control a computational process during its execution. In this paper, a survey of computational steering environments for the on-line steering of ongoing scientific and engineering simulations is

  18. Computer networks ISE a systems approach

    CERN Document Server

    Peterson, Larry L

    2007-01-01

    Computer Networks, 4E is the only introductory computer networking book written by authors who have had first-hand experience with many of the protocols discussed in the book, who have actually designed some of them as well, and who are still actively designing the computer networks today. This newly revised edition continues to provide an enduring, practical understanding of networks and their building blocks through rich, example-based instruction. The authors' focus is on the why of network design, not just the specifications comprising today's systems but how key technologies and p

  19. A DECENTRALIZED DYNAMIC LOAD BALANCING FOR COMPUTATIONAL GRID ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    R. Chellamani

    2013-07-01

    Full Text Available With the rapid development of high-speed wide-area networks and powerful yet low-cost computational resources, grid computing has emerged as an attractive computing paradigm. The computational grid is a new parallel and distributed computing paradigm that provides resources for large scientific computing applications. The main techniques that are most suitable to cope with the dynamic nature of the grid are the effective utilization of grid resources and the distribution of application load among multiple resources in a grid environment. This paper addresses the problem of scheduling and load balancing in a grid environment. A decentralized dynamic load balancing algorithm is proposed that combines the strong points of neighbor-based and cluster-based load balancing techniques. The algorithm estimates system parameters such as resource processing capacity, load on each resource and transfer delay for scheduling and load balancing. A set of simulation experiments shows that the proposed algorithm provides a significant performance improvement over existing algorithms.
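
    To make the estimate concrete, the sketch below (hypothetical names and values, not the authors' implementation) dispatches a job to the neighbouring resource with the smallest estimated completion time, computed from processing capacity, current load and transfer delay as the abstract describes.

      from dataclasses import dataclass

      @dataclass
      class Resource:
          name: str
          capacity: float        # instructions per second the node can process
          queued_work: float     # instructions already waiting on the node
          transfer_delay: float  # seconds to ship the job's input to the node

      def pick_resource(job_size, resources):
          """Return the resource with the smallest estimated completion time."""
          def estimate(r):
              return r.transfer_delay + (r.queued_work + job_size) / r.capacity
          return min(resources, key=estimate)

      neighbours = [Resource("grid-a", 2.0e9, 6.0e9, 0.05),
                    Resource("grid-b", 1.0e9, 1.0e9, 0.20),
                    Resource("grid-c", 4.0e9, 9.0e9, 0.10)]
      print(pick_resource(3.0e9, neighbours).name)   # grid-c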

  20. Collaborative virtual reality environments for computational science and design.

    Energy Technology Data Exchange (ETDEWEB)

    Papka, M. E.

    1998-02-17

    The authors are developing a networked, multi-user, virtual-reality-based collaborative environment coupled to one or more petaFLOPs computers, enabling the interactive simulation of 10^9 atom systems. The purpose of this work is to explore the requirements for this coupling. Through the design, development, and testing of such systems, they hope to gain knowledge that allows computational scientists to discover and analyze their results more quickly and in a more intuitive manner.

  1. Computer Network Defense Through Radial Wave Functions

    OpenAIRE

    Malloy, Ian

    2016-01-01

    The purpose of this research was to synthesize basic and fundamental findings in quantum computing, as applied to the attack and defense of conventional computer networks. The concept focuses on uses of radio waves as a shield for, and an attack against, traditional computers. A logic bomb is analogous to a landmine in a computer network, and if one were to implement it as a non-trivial mitigation, it would aid computer network defense. As has been seen in kinetic warfare, the use of landmines has be...

  2. Computer network and knowledge sharing. Computer network to chishiki kyoyu

    Energy Technology Data Exchange (ETDEWEB)

    Yoshimura, S. (The University of Tokyo, Tokyo (Japan))

    1991-10-20

    The information system has changed from the on-line database as a simple form of knowledge sharing, used in the times when devices were expensive, to dialogue-type approaches as a result of TSS advancement. This paper describes the advantages in and methods of utilizing personal computer communications from the standpoint of a person engaged in chemistry education. Electronic mail has a number of advantages: you can reach a person as immediately as by telephone but need not interrupt the receiver's work, and it is easier than writing a letter. The electronic bulletin board in particular has the practical benefit that ''someone who happens to know the answer can reply''. The Japan Chemical Society has opened the ''Square of Chemistry'' on NIFTY Serve. Although the Society provides information, it is important that the participants actively make proposals and provide topics. Such a network is expanding to a worldwide scale.

  3. Mobile Agents in Networking and Distributed Computing

    CERN Document Server

    Cao, Jiannong

    2012-01-01

    The book focuses on mobile agents, which are computer programs that can autonomously migrate between network sites. This text introduces the concepts and principles of mobile agents, provides an overview of mobile agent technology, and focuses on applications in networking and distributed computing.

  4. Automated classification of computer network attacks

    CSIR Research Space (South Africa)

    Van Heerden, R

    2013-11-01

    Full Text Available In this paper we demonstrate how an automated reasoner, HermiT, is used to classify instances of computer network based attacks in conjunction with a network attack ontology. The ontology describes different types of network attacks through classes...

  5. Embedding Moodle into Ubiquitous Computing Environments

    NARCIS (Netherlands)

    Glahn, Christian; Specht, Marcus

    2010-01-01

    Glahn, C., & Specht, M. (2010). Embedding Moodle into Ubiquitous Computing Environments. In M. Montebello, et al. (Eds.), 9th World Conference on Mobile and Contextual Learning (MLearn2010) (pp. 100-107). October, 19-22, 2010, Valletta, Malta.

  6. Conditions for Productive Learning in Network Learning Environments

    DEFF Research Database (Denmark)

    Ponti, M.; Dirckinck-Holmfeld, Lone; Lindström, B.

    2004-01-01

    The Kaleidoscope1 Jointly Executed Integrating Research Project (JEIRP) on Conditions for Productive Networked Learning Environments is developing and elaborating conceptual understandings of Computer Supported Collaborative Learning (CSCL) emphasizing the use of cross-cultural comparative......: Pedagogical design and the dialectics of the digital artefacts, the concept of collaboration, ethics/trust, identity and the role of scaffolding of networked learning environments.   The JEIRP is motivated by the fact that many networked learning environments in various European educational settings...... are designed without a deep understanding of the pedagogical, communicative and collaborative conditions embedded in networked learning. Despite the existence of good theoretical views pointing to a social understanding of learning, rather than a traditional individualistic and information processing approach...

  7. Intelligent computing for sustainable energy and environment

    Energy Technology Data Exchange (ETDEWEB)

    Li, Kang [Queen' s Univ. Belfast (United Kingdom). School of Electronics, Electrical Engineering and Computer Science; Li, Shaoyuan; Li, Dewei [Shanghai Jiao Tong Univ., Shanghai (China). Dept. of Automation; Niu, Qun (eds.) [Shanghai Univ. (China). School of Mechatronic Engineering and Automation

    2013-07-01

    Fast track conference proceedings. State of the art research. Up to date results. This book constitutes the refereed proceedings of the Second International Conference on Intelligent Computing for Sustainable Energy and Environment, ICSEE 2012, held in Shanghai, China, in September 2012. The 60 full papers presented were carefully reviewed and selected from numerous submissions and present theories and methodologies as well as the emerging applications of intelligent computing in sustainable energy and environment.

  8. Integrating network awareness in ATLAS distributed computing

    CERN Document Server

    De, K; The ATLAS collaboration; Klimentov, A; Maeno, T; Mckee, S; Nilsson, P; Petrosyan, A; Vukotic, I; Wenaus, T

    2014-01-01

    A crucial contributor to the success of the massively scaled global computing system that delivers the analysis needs of the LHC experiments is the networking infrastructure upon which the system is built. The experiments have been able to exploit excellent high-bandwidth networking in adapting their computing models for the most efficient utilization of resources. New advanced networking technologies now becoming available such as software defined networks hold the potential of further leveraging the network to optimize workflows and dataflows, through proactive control of the network fabric on the part of high level applications such as experiment workload management and data management systems. End to end monitoring of networking and data flow performance further allows applications to adapt based on real time conditions. We will describe efforts underway in ATLAS on integrating network awareness at the application level, particularly in workload management.

  9. Building a Network Based Laboratory Environment

    Directory of Open Access Journals (Sweden)

    Sea Shuan Luo

    2009-12-01

    Full Text Available This paper presents a comparative study about the development of a network based laboratory environment in the “Unix introduction” course for the undergraduate students. The study results and the response from the students from 2005 to 2006 will be used to better understand what kind of method is more suitable for students. We also use the data collected to adjust our teaching strategy and try to build up a network based laboratory environment.

  10. Digital Immersive Virtual Environments and Instructional Computing

    Science.gov (United States)

    Blascovich, Jim; Beall, Andrew C.

    2010-01-01

    This article reviews theory and research relevant to the development of digital immersive virtual environment-based instructional computing systems. The review is organized within the context of a multidimensional model of social influence and interaction within virtual environments that models the interaction of four theoretical factors: theory…

  11. Network Management of the SPLICE Computer Network.

    Science.gov (United States)

    1982-12-01

    and the Lawrence Livermore National Laboratory Octopus Network [Ref. 24]. Additionally, the Distributed Network Control Systems 200 and 330...

  12. Massivizing Networked Virtual Environments on Clouds

    NARCIS (Netherlands)

    Shen, S.

    2015-01-01

    Networked Virtual Environments (NVEs) are virtual environments where physically distributed, Internet-connected users can interact and socialize with others. The most popular NVEs are online games, which have hundreds of millions of users and a global market of tens of billions Euros per year.

  13. Reliable Interconnection Networks for Parallel Computers

    Science.gov (United States)

    1991-10-01

    Technical Report 1294: Reliable Interconnection Networks for Parallel Computers. Funding numbers N00014-80-C-0622, N00014-85-K-0124 and N00014-91-J-1698. Subject terms: networks, fault tolerance, parallel computers, reliable routers. 78 pages. (Only the report documentation page is legible in this record.)

  14. Parallel computing and networking; Heiretsu keisanki to network

    Energy Technology Data Exchange (ETDEWEB)

    Asakawa, E.; Tsuru, T. [Japan National Oil Corp., Tokyo (Japan); Matsuoka, T. [Japan Petroleum Exploration Co. Ltd., Tokyo (Japan)

    1996-05-01

    This paper describes the trend of parallel computers used in geophysical exploration. Around 1993, parallel computers first began to be used for geophysical exploration. In those days the computers were classified mainly as MIMD (multiple instruction stream, multiple data stream), SIMD (single instruction stream, multiple data stream) and the like. Parallel computers were publicized in the 1994 meeting of the Geophysical Exploration Society as a `high precision imaging technology`. Concerning libraries for parallel computers, there was a shift to PVM (parallel virtual machine) in 1993 and to MPI (message passing interface) in 1995. In addition, FORTRAN90 compilers were released with support for data-parallel and vector computers. In 1993, the networks used were Ethernet, FDDI, CDDI and HIPPI. In 1995, OC-3 products based on ATM began to propagate. However, ATM remained an interoffice high-speed network because ATM service had not yet spread to the public network. 1 ref.

  15. Portability and networked learning environments

    NARCIS (Netherlands)

    Collis, Betty; de Diana, I.P.F.

    1994-01-01

    Abstract The portability of educational software is defined as the likelihood of software usage, with or without adaptation, in an educational environment different from that for which it was originally designed and produced. Barriers and research relevant to the portability of electronic learning

  16. New computing systems, future computing environment, and their implications on structural analysis and design

    Science.gov (United States)

    Noor, Ahmed K.; Housner, Jerrold M.

    1993-01-01

    Recent advances in computer technology that are likely to impact structural analysis and design of flight vehicles are reviewed. A brief summary is given of the advances in microelectronics, networking technologies, and in the user-interface hardware and software. The major features of new and projected computing systems, including high performance computers, parallel processing machines, and small systems, are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed. The impact of the advances in computer technology on structural analysis and the design of flight vehicles is described. A scenario for future computing paradigms is presented, and the near-term needs in the computational structures area are outlined.

  17. Large Scale Evolution of Convolutional Neural Networks Using Volunteer Computing

    OpenAIRE

    Desell, Travis

    2017-01-01

    This work presents a new algorithm called evolutionary exploration of augmenting convolutional topologies (EXACT), which is capable of evolving the structure of convolutional neural networks (CNNs). EXACT is in part modeled after the neuroevolution of augmenting topologies (NEAT) algorithm, with notable exceptions to allow it to scale to large scale distributed computing environments and evolve networks with convolutional filters. In addition to multithreaded and MPI versions, EXACT has been ...

  18. Computer networking a top-down approach

    CERN Document Server

    Kurose, James

    2017-01-01

    Unique among computer networking texts, the Seventh Edition of the popular Computer Networking: A Top Down Approach builds on the author’s long tradition of teaching this complex subject through a layered approach in a “top-down manner.” The text works its way from the application layer down toward the physical layer, motivating readers by exposing them to important concepts early in their study of networking. Focusing on the Internet and the fundamentally important issues of networking, this text provides an excellent foundation for readers interested in computer science and electrical engineering, without requiring extensive knowledge of programming or mathematics. The Seventh Edition has been updated to reflect the most important and exciting recent advances in networking.

  19. Social network analysis of study environment

    Directory of Open Access Journals (Sweden)

    Blaženka Divjak

    2010-06-01

    Full Text Available The student working environment influences student learning and achievement level. In this respect, social aspects of students’ formal and non-formal learning play a special role in the learning environment. The main research problem of this paper is to find out whether students' academic performance influences their position in different student social networks, and to identify other predictors of this position. In the process of problem solving we use Social Network Analysis (SNA), based on data collected from students at the Faculty of Organization and Informatics, University of Zagreb. There are two data samples: the basic sample (N=27) and the extended sample (N=52). We collected data on socio-demographic position, academic performance, learning and motivation styles, student status (full-time/part-time), attitudes towards individual and team work, and informal cooperation. Afterwards, five different networks (exchange of learning materials, teamwork, informal communication, basic and aggregated social network) were constructed. These networks were analyzed with different metrics, the most important being betweenness, closeness and degree centrality. The main result is, firstly, that position in a social network cannot be forecast by academic success alone and, secondly, that part-time students tend to form separate groups that are poorly connected with full-time students. In general, the position of a student in the social networks of the study environment can influence student learning as well as his or her future employability, and is therefore worth investigating.

  20. Conceptual metaphors in computer networking terminology ...

    African Journals Online (AJOL)

    Lakoff & Johnson, 1980) is used as a basic framework for analysing and explaining the occurrence of metaphor in the terminology used by computer networking professionals in the information technology (IT) industry. An analysis of linguistic ...

  1. Computer Network Equipment for Intrusion Detection Research

    National Research Council Canada - National Science Library

    Ye, Nong

    2000-01-01

    .... To test the process model, the system-level intrusion detection techniques and the working prototype of the intrusion detection system, a set of computer and network equipment has been purchased...

  2. System/360 Computer Assisted Network Scheduling (CANS) System

    Science.gov (United States)

    Brewer, A. C.

    1972-01-01

    Computer assisted scheduling techniques that produce conflict-free and efficient schedules have been developed and implemented to meet the needs of the Manned Space Flight Network. The CANS system provides effective management of resources in a complex scheduling environment. The system is an automated tool for resource scheduling, controlling, planning, and information storage and retrieval.

  3. Computational Complexity of Bosons in Linear Networks

    Science.gov (United States)

    2017-03-01

    Final Report AFRL-AFOSR-JP-TR-2017-0020, Computational Complexity of Bosons in Linear Networks, Andrew White, The University of Queensland, covering 02 Mar 2013 to 01 Mar 2016. From the abstract: direct exploration of the effect of partial distinguishability on the complexity class of the resulting sampling distribution. Our demultiplexed source

  4. Open Problems in Network-aware Data Management in Exa-scale Computing and Terabit Networking Era

    Energy Technology Data Exchange (ETDEWEB)

    Balman, Mehmet; Byna, Surendra

    2011-12-06

    Accessing and managing large amounts of data is a great challenge in collaborative computing environments where resources and users are geographically distributed. Recent advances in network technology led to next-generation high-performance networks, allowing high-bandwidth connectivity. Efficient use of the network infrastructure is necessary in order to address the increasing data and compute requirements of large-scale applications. We discuss several open problems, evaluate emerging trends, and articulate our perspectives in network-aware data management.

  5. Human-Computer Interaction and Virtual Environments

    Science.gov (United States)

    Noor, Ahmed K. (Compiler)

    1995-01-01

    The proceedings of the Workshop on Human-Computer Interaction and Virtual Environments are presented along with a list of attendees. The objectives of the workshop were to assess the state-of-technology and level of maturity of several areas in human-computer interaction and to provide guidelines for focused future research leading to effective use of these facilities in the design/fabrication and operation of future high-performance engineering systems.

  6. Computer Networks and African Studies Centers.

    Science.gov (United States)

    Kuntz, Patricia S.

    The use of electronic communication in the 12 Title VI African Studies Centers is discussed, and the networks available for their use are reviewed. It is argued that the African Studies Centers should be on the cutting edge of contemporary electronic communication and that computer networks should be a fundamental aspect of their programs. An…

  7. A computer network attack taxonomy and ontology

    CSIR Research Space (South Africa)

    Van Heerden, RP

    2012-01-01

    Full Text Available of attacks, means that an attack could be mitigated accordingly. The authors extend a previous, initial taxonomy of computer network attacks which forms the basis of a proposed network attack ontology in this paper. The objective of this ontology...

  8. Virtual Network Computing Testbed for Cybersecurity Research

    Science.gov (United States)

    2015-08-17

    Final Report for award W911NF-12-1-0393 (61504-CS-RIP.2). The abstract is not legible in this record; the surviving text is a reference fragment: [8] Pullen, J. M., 2000. The network workbench: network simulation software for academic investigation of Internet concepts. Comput

  9. EFFICIENCY METRICS COMPUTING IN COMBINED SENSOR NETWORKS

    OpenAIRE

    Luntovskyy, Andriy; Vasyutynskyy, Volodymyr

    2014-01-01

    This paper discusses the computer-aided design of combined networks for offices and building automation systems based on diverse wired and wireless standards. The design requirements for these networks are often contradictive and have to consider performance, energy and cost efficiency together. For usual office communication, quality of service is more important. In the wireless sensor networks, the energy efficiency is a critical requirement to ensure their long life, to reduce maintenance ...

  10. Optical interconnection networks for high-performance computing systems.

    Science.gov (United States)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  11. Autonomic computing enabled cooperative networked design

    CERN Document Server

    Wodczak, Michal

    2014-01-01

    This book introduces the concept of autonomic computing driven cooperative networked system design from an architectural perspective. As such it leverages and capitalises on the relevant advancements in both the realms of autonomic computing and networking by welding them closely together. In particular, a multi-faceted Autonomic Cooperative System Architectural Model is defined which incorporates the notion of Autonomic Cooperative Behaviour being orchestrated by the Autonomic Cooperative Networking Protocol of a cross-layer nature. The overall proposed solution not only advocates for the inc

  12. Spontaneous ad hoc mobile cloud computing network.

    Science.gov (United States)

    Lacuesta, Raquel; Lloret, Jaime; Sendra, Sandra; Peñalver, Lourdes

    2014-01-01

    Cloud computing helps users and companies to share computing resources instead of having local servers or personal devices to handle the applications. Smart devices are becoming one of the main information processing devices. Their computing features are reaching levels that let them create a mobile cloud computing network. But sometimes they are not able to create it and collaborate actively in the cloud because it is difficult for them to easily build a spontaneous network and configure its parameters. For this reason, in this paper we present the design and deployment of a spontaneous ad hoc mobile cloud computing network. To achieve this, we have developed a trusted algorithm that is able to manage the activity of the nodes when they join and leave the network. The paper shows the network procedures and classes that have been designed. Our simulation results using Castalia show that our proposal offers good efficiency and network performance even when using a high number of nodes.
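
    As a rough illustration of the join/leave management described above (the paper's trusted algorithm is not reproduced here; names and trust values are invented), a coordinator node might keep a registry of member devices with simple trust scores:

      class SpontaneousCloud:
          """Toy membership registry for a spontaneous mobile cloud."""

          def __init__(self, trust_threshold=0.5):
              self.trust_threshold = trust_threshold
              self.members = {}                       # node id -> trust score

          def join(self, node_id, introduced_by=None):
              # A node vouched for by an existing member starts with more trust.
              trust = 0.7 if introduced_by in self.members else 0.5
              self.members[node_id] = trust
              return trust >= self.trust_threshold    # admitted to the cloud?

          def leave(self, node_id):
              self.members.pop(node_id, None)         # release the node's slot

      cloud = SpontaneousCloud()
      cloud.join("phone-1")
      cloud.join("tablet-2", introduced_by="phone-1")
      cloud.leave("phone-1")
      print(sorted(cloud.members))                    # ['tablet-2']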

  13. Spontaneous Ad Hoc Mobile Cloud Computing Network

    Directory of Open Access Journals (Sweden)

    Raquel Lacuesta

    2014-01-01

    Full Text Available Cloud computing helps users and companies to share computing resources instead of having local servers or personal devices to handle the applications. Smart devices are becoming one of the main information processing devices. Their computing features are reaching levels that let them create a mobile cloud computing network. But sometimes they are not able to create it and collaborate actively in the cloud because it is difficult for them to easily build a spontaneous network and configure its parameters. For this reason, in this paper we present the design and deployment of a spontaneous ad hoc mobile cloud computing network. To achieve this, we have developed a trusted algorithm that is able to manage the activity of the nodes when they join and leave the network. The paper shows the network procedures and classes that have been designed. Our simulation results using Castalia show that our proposal offers good efficiency and network performance even when using a high number of nodes.

  14. Algorithms and networking for computer games

    CERN Document Server

    Smed, Jouni

    2006-01-01

    Algorithms and Networking for Computer Games is an essential guide to solving the algorithmic and networking problems of modern commercial computer games, written from the perspective of a computer scientist. Combining algorithmic knowledge and game-related problems, the authors discuss all the common difficulties encountered in game programming. The first part of the book tackles algorithmic problems by presenting how they can be solved practically. As well as "classical" topics such as random numbers, tournaments and game trees, the authors focus on how to find a path in, create the terrai

  15. Computer methods in electric network analysis

    Energy Technology Data Exchange (ETDEWEB)

    Saver, P.; Hajj, I.; Pai, M.; Trick, T.

    1983-06-01

    The computational algorithms utilized in power system analysis have more than just a minor overlap with those used in electronic circuit computer aided design. This paper describes the computer methods that are common to both areas and highlights the differences in application through brief examples. Recognizing this commonality has stimulated the exchange of useful techniques in both areas and has the potential of fostering new approaches to electric network analysis through the interchange of ideas.
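
    One example of an algorithm shared by power system analysis and circuit CAD is nodal analysis: assemble the node conductance matrix G and the injected-current vector i, then solve G v = i for the node voltages. A minimal sketch with assumed example values:

      import numpy as np

      # Two-node resistive network: conductances (siemens) 0.5 from node 1 to
      # ground, 0.2 from node 2 to ground, 1.0 between the two nodes, and a
      # 1 A current source injected into node 1.
      G = np.array([[0.5 + 1.0, -1.0],
                    [-1.0,       0.2 + 1.0]])
      i = np.array([1.0, 0.0])

      v = np.linalg.solve(G, i)      # node voltages
      print(v)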

  16. Computer network time synchronization the network time protocol

    CERN Document Server

    Mills, David L

    2006-01-01

    What started with the sundial has, thus far, been refined to a level of precision based on atomic resonance: Time. Our obsession with time is evident in this continued scaling down to nanosecond resolution and beyond. But this obsession is not without warrant. Precision and time synchronization are critical in many applications, such as air traffic control and stock trading, and pose complex and important challenges in modern information networks.Penned by David L. Mills, the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol
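
    The heart of the protocol's on-wire calculation is small enough to show here: from the four timestamps of one request/response exchange, the client estimates its clock offset and the round-trip delay. A sketch of those standard formulas (example timestamps invented):

      def ntp_offset_delay(t0, t1, t2, t3):
          """t0 client send, t1 server receive, t2 server send, t3 client receive."""
          offset = ((t1 - t0) + (t2 - t3)) / 2.0   # estimated clock offset (s)
          delay = (t3 - t0) - (t2 - t1)            # round-trip network delay (s)
          return offset, delay

      # Example: the server clock runs about 0.125 s ahead of the client.
      print(ntp_offset_delay(10.000, 10.150, 10.160, 10.060))   # (0.125, 0.05)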

  17. NNETS - NEURAL NETWORK ENVIRONMENT ON A TRANSPUTER SYSTEM

    Science.gov (United States)

    Villarreal, J.

    1994-01-01

    The primary purpose of NNETS (Neural Network Environment on a Transputer System) is to provide users a high degree of flexibility in creating and manipulating a wide variety of neural network topologies at processing speeds not found in conventional computing environments. To accomplish this purpose, NNETS supports back propagation and back propagation related algorithms. The back propagation algorithm used is an implementation of Rumelhart's Generalized Delta Rule. NNETS was developed on the INMOS Transputer. NNETS predefines a Back Propagation Network, a Jordan Network, and a Reinforcement Network to assist users in learning and defining their own networks. The program also allows users to configure other neural network paradigms from the NNETS basic architecture. The Jordan network is basically a feed forward network that has the outputs connected to a pseudo input layer. The state of the network is dependent on the inputs from the environment plus the state of the network. The Reinforcement network learns via a scalar feedback signal called reinforcement. The network propagates forward randomly. The environment looks at the outputs of the network to produce a reinforcement signal that is fed back to the network. NNETS was written for the INMOS C compiler D711B version 1.3 or later (MS-DOS version). A small portion of the software was written in the OCCAM language to perform the communications routing between processors. NNETS is configured to operate on a 4 X 10 array of Transputers in sequence with a Transputer based graphics processor controlled by a master IBM PC 286 (or better) Transputer. A RGB monitor is required which must be capable of 512 X 512 resolution. It must be able to receive red, green, and blue signals via BNC connectors. NNETS is meant for experienced Transputer users only. The program is distributed on 5.25 inch 1.2Mb MS-DOS format diskettes. NNETS was developed in 1991. Transputer and OCCAM are registered trademarks of Inmos Corporation. MS
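
    The back propagation that NNETS implements follows Rumelhart's Generalized Delta Rule. A compact NumPy sketch of that rule for a single hidden layer (illustrative only; it is not the Transputer implementation) is:

      import numpy as np

      rng = np.random.default_rng(0)
      sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
      add_bias = lambda a: np.hstack([a, np.ones((a.shape[0], 1))])

      # XOR training set
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      T = np.array([[0], [1], [1], [0]], dtype=float)

      W1 = rng.normal(size=(3, 4))               # (2 inputs + bias) -> 4 hidden units
      W2 = rng.normal(size=(5, 1))               # (4 hidden + bias) -> 1 output
      eta = 0.5                                   # learning rate

      for _ in range(20000):
          H = sigmoid(add_bias(X) @ W1)           # forward pass
          Y = sigmoid(add_bias(H) @ W2)
          delta_out = (T - Y) * Y * (1 - Y)       # generalized delta rule
          delta_hid = (delta_out @ W2[:-1].T) * H * (1 - H)
          W2 += eta * add_bias(H).T @ delta_out   # weight updates
          W1 += eta * add_bias(X).T @ delta_hid

      print(np.round(Y.ravel(), 2))               # typically close to [0, 1, 1, 0]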

  18. Social networks a framework of computational intelligence

    CERN Document Server

    Chen, Shyi-Ming

    2014-01-01

    This volume provides the audience with an updated, in-depth and highly coherent material on the conceptually appealing and practically sound information technology of Computational Intelligence applied to the analysis, synthesis and evaluation of social networks. The volume involves studies devoted to key issues of social networks including community structure detection in networks, online social networks, knowledge growth and evaluation, and diversity of collaboration mechanisms.  The book engages a wealth of methods of Computational Intelligence along with well-known techniques of linear programming, Formal Concept Analysis, machine learning, and agent modeling.  Human-centricity is of paramount relevance and this facet manifests in many ways including personalized semantics, trust metric, and personal knowledge management; just to highlight a few of these aspects. The contributors to this volume report on various essential applications including cyber attacks detection, building enterprise social network...

  19. Social Environments, Sexual Networking and Adolescents ...

    African Journals Online (AJOL)

    This study investigated adolescents' social environments, different strategies manipulated for sexual networking and the effect on adolescents' heterosexual relationship in Lagos metropolis. The total sample for the study comprised 80 male and female adolescents randomly selected from two mixed secondary schools.

  20. Managing records in networked environment using EDRMS ...

    African Journals Online (AJOL)

    Managing records in networked environment using EDRMS applications: a case study of Rand Water. ... and archival story of Rand Water representing water sector challenges and opportunities that organisations such as this are faced with in consideration of the ever-improving technologies and strategies at their disposal.

  1. Six Networks on a Universal Neuromorphic Computing Substrate

    Science.gov (United States)

    Pfeil, Thomas; Grübl, Andreas; Jeltsch, Sebastian; Müller, Eric; Müller, Paul; Petrovici, Mihai A.; Schmuker, Michael; Brüderle, Daniel; Schemmel, Johannes; Meier, Karlheinz

    2013-01-01

    In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality. PMID:23423583

  2. Resource management in mobile computing environments

    CERN Document Server

    Mavromoustakis, Constandinos X; Mastorakis, George

    2014-01-01

    This book reports the latest advances on the design and development of mobile computing systems, describing their applications in the context of modeling, analysis and efficient resource management. It explores the challenges on mobile computing and resource management paradigms, including research efforts and approaches recently carried out in response to them to address future open-ended issues. The book includes 26 rigorously refereed chapters written by leading international researchers, providing the readers with technical and scientific information about various aspects of mobile computing, from basic concepts to advanced findings, reporting the state-of-the-art on resource management in such environments. It is mainly intended as a reference guide for researchers and practitioners involved in the design, development and applications of mobile computing systems, seeking solutions to related issues. It also represents a useful textbook for advanced undergraduate and graduate courses, addressing special t...

  3. Electricity market price forecasting by grid computing optimizing artificial neural networks

    OpenAIRE

    Niimura, T.; Ozawa, K.; Sakamoto, N.

    2007-01-01

    This paper presents a grid computing approach to parallel-process a neural network time-series model for forecasting electricity market prices. A grid computing environment introduced in a university computing laboratory provides access to otherwise underused computing resources. The grid computing of the neural network model not only processes several times faster than a single iterative process, but also provides chances of improving forecasting accuracy. Results of numerical tests using re...

  4. Professional networking using computer-mediated communication.

    Science.gov (United States)

    Washer, Peter

    Traditionally, professionals have networked with others in their field through attending conferences, professional organizations, direct mailing, and via the workplace. Recently, there have been new possibilities to network with other professionals using the internet. This article looks at the possibilities that the internet offers for professional networking, particularly e-mailing lists, newsgroups and membership databases, and compares them against more traditional methods of professional networking. The different types of computer-mediated communication are discussed and their relative merits and disadvantages are examined. The benefits and potential pitfalls of internet professional networking, as it relates to the nursing profession, are examined. Practical advice is offered on how the internet can be used as a means to foster professional networks of academic, clinical or research interests.

  5. Computational Ecosystems in a Changing Environment

    Science.gov (United States)

    Glance, Natalie; Hogg, Tad; Huberman, Bernardo A.

    We study the adaptive behavior of a computational ecosystem in the presence of time-periodic resource utilities as seen, for example in the day-night load variations of computer use and in the price fluctuations of seasonal products. We do so within the context of the Huberman-Hogg model of such systems. The dynamics is studied for the cases of competitive and cooperative payoff functions with time-modulated resource utilities, and the system’s adaptability is measured by tracking its performance in response to a time-varying environment,

  6. Natural computing for vehicular networks

    OpenAIRE

    Toutouh El Alamin, Jamal

    2016-01-01

    This thesis addresses the intelligent design of solutions for the deployment of vehicular ad hoc networks (VANETs). These are wireless communication networks formed mainly by vehicles and road infrastructure elements. VANETs offer the opportunity to develop revolutionary applications in the field of road safety and efficiency. Being such a novel domain, a series of open questions remains, such as the design of the infrastruct...

  7. InSAR Scientific Computing Environment

    Science.gov (United States)

    Gurrola, E. M.; Rosen, P. A.; Sacco, G.; Zebker, H. A.; Simons, M.; Sandwell, D. T.

    2010-12-01

    The InSAR Scientific Computing Environment (ISCE) is a software development effort in its second year within the NASA Advanced Information Systems and Technology program. The ISCE will provide a new computing environment for geodetic image processing for InSAR sensors that will enable scientists to reduce measurements directly from radar satellites and aircraft to new geophysical products without first requiring them to develop detailed expertise in radar processing methods. The environment can serve as the core of a centralized processing center to bring Level-0 raw radar data up to Level-3 data products, but is adaptable to alternative processing approaches for science users interested in new and different ways to exploit mission data. The NRC Decadal Survey-recommended DESDynI mission will deliver data of unprecedented quantity and quality, making possible global-scale studies in climate research, natural hazards, and Earth's ecosystem. The InSAR Scientific Computing Environment is planned to become a key element in processing DESDynI data into higher level data products and it is expected to enable a new class of analyses that take greater advantage of the long time and large spatial scales of these new data, than current approaches. At the core of ISCE is both legacy processing software from the JPL/Caltech ROI_PAC repeat-pass interferometry package as well as a new InSAR processing package containing more efficient and more accurate processing algorithms being developed at Stanford for this project that is based on experience gained in developing processors for missions such as SRTM and UAVSAR. Around the core InSAR processing programs we are building object-oriented wrappers to enable their incorporation into a more modern, flexible, extensible software package that is informed by modern programming methods, including rigorous componentization of processing codes, abstraction and generalization of data models, and a robust, intuitive user interface with

  8. Machine learning based Intelligent cognitive network using fog computing

    Science.gov (United States)

    Lu, Jingyang; Li, Lun; Chen, Genshe; Shen, Dan; Pham, Khanh; Blasch, Erik

    2017-05-01

    In this paper, a Cognitive Radio Network (CRN) based on artificial intelligence is proposed to distribute the limited radio spectrum resources more efficiently. The CRN framework can analyze the time-sensitive signal data close to the signal source using fog computing with different types of machine learning techniques. Depending on the computational capabilities of the fog nodes, different features and machine learning techniques are chosen to optimize spectrum allocation. Also, the computing nodes send the periodic signal summary which is much smaller than the original signal to the cloud so that the overall system spectrum source allocation strategies are dynamically updated. Applying fog computing, the system is more adaptive to the local environment and robust to spectrum changes. As most of the signal data is processed at the fog level, it further strengthens the system security by reducing the communication burden of the communications network.

  9. Human-Computer Interaction in Smart Environments

    Science.gov (United States)

    Paravati, Gianluca; Gatteschi, Valentina

    2015-01-01

    Here, we provide an overview of the content of the Special Issue on “Human-computer interaction in smart environments”. The aim of this Special Issue is to highlight technologies and solutions encompassing the use of mass-market sensors in current and emerging applications for interacting with Smart Environments. Selected papers address this topic by analyzing different interaction modalities, including hand/body gestures, face recognition, gaze/eye tracking, biosignal analysis, speech and activity recognition, and related issues.

  10. Computing chemical organizations in biological networks.

    Science.gov (United States)

    Centler, Florian; Kaleta, Christoph; di Fenizio, Pietro Speroni; Dittrich, Peter

    2008-07-15

    Novel techniques are required to analyze computational models of intracellular processes as they increase steadily in size and complexity. The theory of chemical organizations has recently been introduced as such a technique that links the topology of biochemical reaction network models to their dynamical repertoire. The network is decomposed into algebraically closed and self-maintaining subnetworks called organizations. They form a hierarchy representing all feasible system states including all steady states. We present three algorithms to compute the hierarchy of organizations for network models provided in SBML format. Two of them compute the complete organization hierarchy, while the third one uses heuristics to obtain a subset of all organizations for large models. While the constructive approach computes the hierarchy starting from the smallest organization in a bottom-up fashion, the flux-based approach employs self-maintaining flux distributions to determine organizations. A runtime comparison on 16 different network models of natural systems showed that neither of the two exhaustive algorithms is superior in all cases. Studying a 'genome-scale' network model with 762 species and 1193 reactions, we demonstrate how the organization hierarchy helps to uncover the model structure and allows the model's quality to be evaluated, for example by detecting components and subsystems of the model whose maintenance is not explained by the model. All data and a Java implementation that plugs into the Systems Biology Workbench are available from http://www.minet.uni-jena.de/csb/prj/ot/tools.
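
    An organization is a set of species that is closed (no reaction among its members produces a species outside the set) and self-maintaining. The closure test is easy to express; the sketch below (self-maintenance, which needs a flux computation, is omitted) checks it for toy reactions:

      def is_closed(species, reactions):
          """species: set of names; reactions: list of (reactants, products) sets."""
          for reactants, products in reactions:
              if reactants <= species and not products <= species:
                  return False
          return True

      reactions = [({"a", "b"}, {"c"}),      # a + b -> c
                   ({"c"}, {"a"})]           # c -> a
      print(is_closed({"a", "b", "c"}, reactions))   # True
      print(is_closed({"a", "b"}, reactions))        # False: produces c outside the set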

  11. International Symposium on Computing and Network Sustainability

    CERN Document Server

    Akashe, Shyam

    2017-01-01

    The book is a compilation of technical papers presented at the International Research Symposium on Computing and Network Sustainability (IRSCNS 2016), held in Goa, India on 1st and 2nd July 2016. The areas covered in the book are sustainable computing and security, sustainable systems and technologies, sustainable methodologies and applications, sustainable network applications and solutions, user-centered services and systems, and mobile data management. The novel and recent technologies presented in the book will be helpful for researchers and industry in their advanced work.

  12. Computation, cryptography, and network security

    CERN Document Server

    Rassias, Michael

    2015-01-01

    Analysis, assessment, and data management are core competencies for operations research analysts. This volume addresses a number of issues and developed methods for improving those skills. It is an outgrowth of a conference held in April 2013 at the Hellenic Military Academy, and brings together a broad variety of mathematical methods and theories with several applications. It discusses directions and pursuits of scientists that pertain to engineering sciences. It also presents the theoretical background required for algorithms and techniques applied to a large variety of concrete problems. A number of open questions as well as new future areas are also highlighted. This book will appeal to operations research analysts, engineers, community decision makers, academics, the military community, practitioners sharing the current “state-of-the-art,” and analysts from coalition partners. Topics covered include Operations Research, Games and Control Theory, Computational Number Theory and Information Securi...

  13. Student Motivation in Computer Networking Courses

    Directory of Open Access Journals (Sweden)

    Wen-Jung Hsin

    2007-01-01

    Full Text Available This paper introduces several hands-on projects that have been used to motivate students in learning various computer networking concepts. These projects are shown to be very useful and applicable to the learners’ daily tasks and activities such as emailing, Web browsing, and online shopping and banking, and lead to an unexpected byproduct, self-motivation.

  14. Computational Modeling of Complex Protein Activity Networks

    NARCIS (Netherlands)

    Schivo, Stefano; Leijten, Jeroen; Karperien, Marcel; Post, Janine N.; Prignet, Claude

    2017-01-01

    Because of the numerous entities interacting, the complexity of the networks that regulate cell fate makes it impossible to analyze and understand them using the human brain alone. Computational modeling is a powerful method to unravel complex systems. We recently described the development of a

  15. Student Motivation in Computer Networking Courses

    Directory of Open Access Journals (Sweden)

    Wen-Jung Hsin, PhD

    2007-08-01

    Full Text Available This paper introduces several hands-on projects that have been used to motivate students in learning various computer networking concepts. These projects are shown to be very useful and applicable to the learners’ daily tasks and activities such as emailing, Web browsing, and online shopping and banking, and lead to an unexpected byproduct, self-motivation.

  16. CONCEPTUAL GENERALIZATION OF STRUCTURAL ORGANIZATION OF COMPUTER NETWORKS MEDICAL SCHOOL

    Directory of Open Access Journals (Sweden)

    O. P. Mintser

    2014-01-01

    Full Text Available The basic principles of the structural organization of computer networks in schools are presented. The integration of universities into the modern infrastructure of the information society is justified. The structural organization of computer networks is described in detail, and the effectiveness of implementing automated library information systems is shown. A large dynamic growth in the technical and personal readiness of students to use the virtual educational space is reported. In this regard, universities are required to populate the educational environment of a modern virtual university in advance, including multimedia resources for professional education programs in their fields. On the basis of information and educational environments, the virtual representations of universities should be organized into distributed resource centers that avoid duplication of effort in the development of innovative educational technologies, provide mutual exchange of results and further development of open continuous professional education, and ensure accessibility, modularity and mobility of training and retraining of specialists.

  17. Grid Computing Environment using a Beowulf Cluster

    Science.gov (United States)

    Alanis, Fransisco; Mahmood, Akhtar

    2003-10-01

    Custom-made Beowulf clusters using PCs are currently replacing expensive supercomputers to carry out complex scientific computations. At the University of Texas - Pan American, we built an 8 Gflops Beowulf Cluster for doing HEP research using RedHat Linux 7.3 and the LAM-MPI middleware. We will describe how we built and configured our Cluster, which we have named the Sphinx Beowulf Cluster. We will describe the results of our cluster benchmark studies and the run-time plots of several parallel application codes that were compiled in C on the cluster using the LAM-XMPI graphics user environment. We will demonstrate a "simple" prototype grid environment, where we will submit and run parallel jobs remotely across multiple cluster nodes over the internet from the presentation room at Texas Tech University. The Sphinx Beowulf Cluster will be used for Monte Carlo grid test-bed studies for the LHC-ATLAS high energy physics experiment. Grid is a new IT concept for the next generation of the "Super Internet" for high-performance computing. The Grid will allow scientists worldwide to view and analyze huge amounts of data flowing from the large-scale experiments in High Energy Physics. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, and data sources.
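
    For readers unfamiliar with the message-passing style used on such clusters, a minimal mpi4py sketch (not the C codes run on the Sphinx cluster) spreads a Monte Carlo estimate of pi across the processes:

      # Run with, e.g.:  mpiexec -n 8 python mc_pi.py
      import random
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      samples_per_rank = 1_000_000

      random.seed(comm.Get_rank())                        # different stream per process
      hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
                 for _ in range(samples_per_rank))

      total_hits = comm.reduce(hits, op=MPI.SUM, root=0)  # gather partial counts
      if comm.Get_rank() == 0:
          total = samples_per_rank * comm.Get_size()
          print("pi is approximately", 4.0 * total_hits / total)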

  18. Enhancing the Understanding of Computer Networking Courses through Software Tools

    OpenAIRE

    Dafalla, Z. I.; Balaji, R. D.

    2015-01-01

    Computer networking is an important specialization in Information and Communication Technologies. However, imparting the right knowledge to students can be a challenging task because there is not enough time to deliver lengthy labs during normal lecture hours. Augmenting the use of physical machines with software tools helps students learn beyond the limited lab sessions within higher institutions of learning throughout the world. The Institutions focus mo

  19. Synchronized Pair Configuration in Virtualization-Based Lab for Learning Computer Networks

    Science.gov (United States)

    Kongcharoen, Chaknarin; Hwang, Wu-Yuin; Ghinea, Gheorghita

    2017-01-01

    More studies are concentrating on using virtualization-based labs to facilitate computer or network learning concepts. Some benefits are lower hardware costs and greater flexibility in reconfiguring computer and network environments. However, few studies have investigated effective mechanisms for using virtualization fully for collaboration.…

  20. Non-harmful insertion of data mimicking computer network attacks

    Energy Technology Data Exchange (ETDEWEB)

    Neil, Joshua Charles; Kent, Alexander; Hash, Jr, Curtis Lee

    2016-06-21

    Non-harmful data mimicking computer network attacks may be inserted in a computer network. Anomalous real network connections may be generated between a plurality of computing systems in the network. Data mimicking an attack may also be generated. The generated data may be transmitted between the plurality of computing systems using the real network connections and measured to determine whether an attack is detected.
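
    A toy illustration of the idea (not the patented method; host names are invented) is to insert benign records that merely look like a scanning attack and then confirm that a simple fan-out detector notices them:

      from itertools import cycle, islice

      hosts = ["host-a", "host-b", "host-c", "host-d"]

      def mimic_scan(source, n):
          """Benign records shaped like a scan: one source fanning out to many targets."""
          targets = cycle(h for h in hosts if h != source)
          return [{"src": source, "dst": dst, "mimic": True}
                  for dst in islice(targets, n)]

      # Insert the mimicking records and check a trivial fan-out detector.
      traffic = mimic_scan("host-a", 5)
      fan_out = len({r["dst"] for r in traffic if r["src"] == "host-a"})
      print("scan-like fan-out:", fan_out)          # 3 distinct destinations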

  1. [Renewal of NIHS computer network system].

    Science.gov (United States)

    Segawa, Katsunori; Nakano, Tatsuya; Saito, Yoshiro

    2012-01-01

    The updated version of the National Institute of Health Sciences Computer Network System (NIHS-NET) is described. In order to reduce its electric power consumption, the main server system was newly built using virtual machine technology. The services that each machine provided in the previous network system should be maintained as much as possible; thus, an individual server was constructed for each service, because a virtual server often shows lower performance than a physical server. As a result, although the number of virtual servers increased and the network communication among the servers became more complicated, the conventional services were maintained, the security level was somewhat improved, and electric power was saved. The updated NIHS-NET incorporates multiple security countermeasures. To make maximal use of these measures, network security awareness on the part of all users is expected.

  2. Computer simulation of spacecraft/environment interaction

    CERN Document Server

    Krupnikov, K K; Mileev, V N; Novikov, L S; Sinolits, V V

    1999-01-01

    This report presents some examples of a computer simulation of spacecraft interaction with the space environment. We analysed a set of data on electron and ion fluxes measured in 1991-1994 on the geostationary satellite GORIZONT-35. The influence of spacecraft eclipse and device eclipse by the solar-cell panel on spacecraft charging was investigated. A simple method was developed for the estimation of spacecraft potentials in LEO. Effects of various particle flux impacts and spacecraft orientation are discussed. A computer engineering model for the calculation of space radiation is presented. This model is used as a client/server model with a WWW interface, including spacecraft model description and results representation based on the virtual reality markup language.

  3. Requirements for a Network Storage Service in a supercomputer environment

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, S.M.

    1991-09-26

    Sandia National Laboratories has completed a requirements study for a networked mass storage system. The areas of user functionality, network connectivity, and performance were analyzed to determine specifications for a Network Storage Service to operate in a supercomputer environment. 4 refs.

  4. Human-Computer Interaction in Smart Environments

    Directory of Open Access Journals (Sweden)

    Gianluca Paravati

    2015-08-01

    Full Text Available Here, we provide an overview of the content of the Special Issue on “Human-computer interaction in smart environments”. The aim of this Special Issue is to highlight technologies and solutions encompassing the use of mass-market sensors in current and emerging applications for interacting with Smart Environments. Selected papers address this topic by analyzing different interaction modalities, including hand/body gestures, face recognition, gaze/eye tracking, biosignal analysis, speech and activity recognition, and related issues.

  5. An Optimal Path Computation Architecture for the Cloud-Network on Software-Defined Networking

    Directory of Open Access Journals (Sweden)

    Hyunhun Cho

    2015-05-01

    Full Text Available Legacy networks do not expose precise information about the network domain, for reasons of scalability, management and commercial policy, and it is therefore very hard to compute an optimal path to the destination. To meet the new network requirements arising from today’s changing ICT environment, the concept of software-defined networking (SDN) has been developed as a technological alternative to overcome the limitations of the legacy network structure and to introduce innovative concepts. The purpose of this paper is to propose an application that calculates the optimal paths for general data transmission and real-time audio/video transmission, which constitute the major services of the National Research & Education Network (NREN), in the SDN environment. The proposed SDN routing computation (SRC) application is designed and applied in a multi-domain network for the efficient use of resources, selection of the optimal path between the multiple domains and optimal establishment of end-to-end connections.
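
    Conceptually, the path computation reduces to a shortest-path search over a weighted topology the controller has learned. A sketch using Dijkstra's algorithm over a hypothetical two-domain topology (link costs invented; the SRC application itself may weight links differently):

      import heapq

      def shortest_path(graph, src, dst):
          """graph: {node: {neighbor: cost}}; returns (total cost, path)."""
          queue, seen = [(0, src, [src])], set()
          while queue:
              cost, node, path = heapq.heappop(queue)
              if node == dst:
                  return cost, path
              if node in seen:
                  continue
              seen.add(node)
              for nxt, w in graph[node].items():
                  if nxt not in seen:
                      heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
          return float("inf"), []

      # Two domains joined by an inter-domain link (costs could encode delay or load).
      topology = {"a1": {"a2": 1, "x": 4}, "a2": {"a1": 1, "x": 1},
                  "x":  {"a1": 4, "a2": 1, "b1": 2}, "b1": {"x": 2, "b2": 1},
                  "b2": {"b1": 1}}
      print(shortest_path(topology, "a1", "b2"))   # (5, ['a1', 'a2', 'x', 'b1', 'b2'])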

  6. A Three-Dimensional Computational Model of Collagen Network Mechanics

    Science.gov (United States)

    Lee, Byoungkoo; Zhou, Xin; Riching, Kristin; Eliceiri, Kevin W.; Keely, Patricia J.; Guelcher, Scott A.; Weaver, Alissa M.; Jiang, Yi

    2014-01-01

    Extracellular matrix (ECM) strongly influences cellular behaviors, including cell proliferation, adhesion, and particularly migration. In cancer, the rigidity of the stromal collagen environment is thought to control tumor aggressiveness, and collagen alignment has been linked to tumor cell invasion. While the mechanical properties of collagen at both the single fiber scale and the bulk gel scale are quite well studied, how the fiber network responds to local stress or deformation, both structurally and mechanically, is poorly understood. This intermediate scale knowledge is important to understanding cell-ECM interactions and is the focus of this study. We have developed a three-dimensional elastic collagen fiber network model (bead-and-spring model) and studied fiber network behaviors for various biophysical conditions: collagen density, crosslinker strength, crosslinker density, and fiber orientation (random vs. prealigned). We found the best-fit crosslinker parameter values using shear simulation tests in a small strain region. Using this calibrated collagen model, we simulated both shear and tensile tests in a large linear strain region for different network geometry conditions. The results suggest that network geometry is a key determinant of the mechanical properties of the fiber network. We further demonstrated how the fiber network structure and mechanics evolve with a local deformation, mimicking the effect of pulling by a pseudopod during cell migration. Our computational fiber network model is a step toward a full biomechanical model of cellular behaviors in various ECM conditions. PMID:25386649
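
    The paper's bead-and-spring model is not reproduced here; the following is only a toy sketch of the general idea, assuming a 2D random fiber network of harmonic springs subjected to a small affine shear. All parameters (bead count, connection radius, stiffness, shear) are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bead-and-spring network: random beads in a unit square,
# springs (fibers/crosslinks) between nearby beads.
n_beads = 60
pos = rng.random((n_beads, 2))
springs = [(i, j) for i in range(n_beads) for j in range(i + 1, n_beads)
           if np.linalg.norm(pos[i] - pos[j]) < 0.2]
rest = {s: np.linalg.norm(pos[s[0]] - pos[s[1]]) for s in springs}
k = 1.0  # hypothetical spring stiffness

def elastic_energy(coords):
    """Sum of harmonic spring energies 1/2 * k * (|r_ij| - L0)^2."""
    return sum(0.5 * k * (np.linalg.norm(coords[i] - coords[j]) - rest[(i, j)]) ** 2
               for i, j in springs)

# Apply a small affine shear gamma and report the stored elastic energy.
gamma = 0.05
sheared = pos.copy()
sheared[:, 0] += gamma * sheared[:, 1]
print(f"{len(springs)} springs, energy after {gamma:.0%} shear:",
      round(elastic_energy(sheared), 4))
```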

  7. Fuzzy logic, neural networks, and soft computing

    Science.gov (United States)

    Zadeh, Lotfi A.

    1994-01-01

    The past few years have witnessed a rapid growth of interest in a cluster of modes of modeling and computation which may be described collectively as soft computing. The distinguishing characteristic of soft computing is that its primary aims are to achieve tractability, robustness, low cost, and high MIQ (machine intelligence quotient) through an exploitation of the tolerance for imprecision and uncertainty. Thus, in soft computing what is usually sought is an approximate solution to a precisely formulated problem or, more typically, an approximate solution to an imprecisely formulated problem. A simple case in point is the problem of parking a car. Generally, humans can park a car rather easily because the final position of the car is not specified exactly. If it were specified to within, say, a few millimeters and a fraction of a degree, it would take hours or days of maneuvering and precise measurements of distance and angular position to solve the problem. What this simple example points to is the fact that, in general, high precision carries a high cost. The challenge, then, is to exploit the tolerance for imprecision by devising methods of computation which lead to an acceptable solution at low cost. By its nature, soft computing is much closer to human reasoning than the traditional modes of computation. At this juncture, the major components of soft computing are fuzzy logic (FL), neural network theory (NN), and probabilistic reasoning techniques (PR), including genetic algorithms, chaos theory, and part of learning theory. Increasingly, these techniques are used in combination to achieve significant improvement in performance and adaptability. Among the important application areas for soft computing are control systems, expert systems, data compression techniques, image processing, and decision support systems. It may be argued that it is soft computing, rather than the traditional hard computing, that should be viewed as the foundation for artificial

  8. Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces

    Science.gov (United States)

    Ellman, Alvin; Carlton, Magdi

    1993-01-01

    The Network Operations Control Center (NOCC) of the DSN is responsible for scheduling the resources of DSN, and monitoring all multi-mission spacecraft tracking activities in real-time. Operations performs this job with computer systems at JPL connected to over 100 computers at Goldstone, Australia and Spain. The old computer system became obsolete, and the first version of the new system was installed in 1991. Significant improvements to the computer-human interfaces became the dominant theme for the replacement project. Major issues required innovative problem solving. Among these issues were: How to present several thousand data elements on displays without overloading the operator? What is the best graphical representation of DSN end-to-end data flow? How to operate the system without memorizing mnemonics of hundreds of operator directives? Which computing environment will meet the competing performance requirements? This paper presents the technical challenges, engineering solutions, and results of the NOCC computer-human interface design.

  9. Spiking network simulation code for petascale computers

    Science.gov (United States)

    Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M.; Plesser, Hans E.; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz

    2014-01-01

    Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today. PMID:25346682

  10. Spiking network simulation code for petascale computers.

    Science.gov (United States)

    Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M; Plesser, Hans E; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz

    2014-01-01

    Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.

  11. International Symposium on Complex Computing-Networks

    CERN Document Server

    Sevgi, L; CCN2005; Complex computing networks: Brain-like and wave-oriented electrodynamic algorithms

    2006-01-01

    This book uniquely combines new advances in electromagnetic and circuits & systems theory. It integrates both fields regarding computational aspects of common interest. Emphasized subjects are those methods which mimic brain-like and electrodynamic behaviour; among these are cellular neural networks, chaos and chaotic dynamics, attractor-based computation and stream ciphers. The book contains carefully selected contributions from the Symposium CCN2005. Pictures from the bestowal of Honorary Doctorate degrees to Leon O. Chua and Leopold B. Felsen are included.

  12. InSAR Scientific Computing Environment on the Cloud

    Science.gov (United States)

    Rosen, P. A.; Shams, K. S.; Gurrola, E. M.; George, B. A.; Knight, D. S.

    2012-12-01

    orchestrate jobs across a large number of machines. We executed ISCE in a distributed cloud environment across a multiplicity of machines with a variety of characteristics in CPU, memory, virtualization technology, and disk and network I/O models to assess the optimal execution environment for ISCE. In this paper, we describe our results and project the implications of cloud computing on InSAR processing applications into the future. We also present the distributed architecture and the orchestration framework to perform time-series analysis with ISCE. The authors would like to thank the Earth Science Technology Office at NASA for support of the ISCE software development, and the High End Computing program for cloud computing support. This work was performed at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA.

  13. Fast computation of minimum hybridization networks.

    Science.gov (United States)

    Albrecht, Benjamin; Scornavacca, Celine; Cenci, Alberto; Huson, Daniel H

    2012-01-15

    Hybridization events in evolution may lead to incongruent gene trees. One approach to determining possible interspecific hybridization events is to compute a hybridization network that attempts to reconcile incongruent gene trees using a minimum number of hybridization events. We describe how to compute a representative set of minimum hybridization networks for two given bifurcating input trees, using a parallel algorithm and provide a user-friendly implementation. A simulation study suggests that our program performs significantly better than existing software on biologically relevant data. Finally, we demonstrate the application of such methods in the context of the evolution of the Aegilops/Triticum genera. The algorithm is implemented in the program Dendroscope 3, which is freely available from www.dendroscope.org and runs on all three major operating systems.

  14. Integrating Wireless Sensor Networks with Computational Grids

    Science.gov (United States)

    Preve, Nikolaos

    Wireless sensor networks (WSNs) have developed greatly and demonstrated their significance in a wide range of important applications, such as the acquisition and processing of information from the physical world. The evolution of Grid computing has been based on the coordination of distributed and shared resources. A Sensor Grid network can integrate these two leading technologies, enabling real-time sensor data collection and the sharing of computational and storage grid resources for sensor data processing and management. Several issues have arisen from this integration which challenge the modern design of sensor grids. In order to address these issues, in this paper we propose a sensor grid architecture and support it with a testbed that focuses on the design issues and on the improvement of our sensor grid architecture design.

  15. Reach and get capability in a computing environment

    Science.gov (United States)

    Bouchard, Ann M [Albuquerque, NM; Osbourn, Gordon C [Albuquerque, NM

    2012-06-05

    A reach and get technique includes invoking a reach command from a reach location within a computing environment. A user can then navigate to an object within the computing environment and invoke a get command on the object. In response to invoking the get command, the computing environment is automatically navigated back to the reach location and the object copied into the reach location.
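
    As a rough illustration of the interaction pattern described above, the following hypothetical sketch models a reach location and a get command that copies an object back to it; the class and method names are invented and do not correspond to the patented implementation.

```python
class Workspace:
    """Toy model of the reach-and-get interaction (all names hypothetical)."""

    def __init__(self):
        self.locations = {}          # location name -> list of contained objects
        self._reach_location = None  # where 'reach' was invoked

    def reach(self, location):
        # Remember where the user started; navigation elsewhere may follow.
        self.locations.setdefault(location, [])
        self._reach_location = location

    def get(self, obj):
        # Copy the object into the remembered reach location and
        # 'navigate' back to it automatically.
        if self._reach_location is None:
            raise RuntimeError("reach must be invoked before get")
        self.locations[self._reach_location].append(obj)
        return self._reach_location

ws = Workspace()
ws.reach("report.doc")                 # invoke reach at the destination
returned_to = ws.get("figure3.png")    # navigate to an object and get it
print(returned_to, ws.locations["report.doc"])  # report.doc ['figure3.png']
```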

  16. The research of computer network security and protection strategy

    Science.gov (United States)

    He, Jian

    2017-05-01

    With the widespread popularity of computer network applications, network security has also received a high degree of attention. The factors affecting network safety are complex, and doing network security well is systematic work that poses a high challenge. Addressing the safety and reliability problems of computer network systems, this paper draws on practical work experience to discuss the threats to network security, security technologies, and system design principles, and offers suggestions and measures intended to help the broad base of computer network users enhance their security awareness and master basic network security techniques.

  17. Computer Applications and Virtual Environments (CAVE)

    Science.gov (United States)

    1993-01-01

    Virtual Reality (VR) can provide cost effective methods to design and evaluate components and systems for maintenance and refurbishment operations. Marshall Space Flight Center (MSFC) is beginning to utilize VR for design analysis in the X-34 experimental reusable space vehicle. Analysts at MSFC's Computer Applications and Virtual Environments (CAVE) used Head Mounted Displays (HMD) (pictured), spatial trackers and gesture inputs as a means to animate or inhabit a properly sized virtual human model. These models are used in a VR scenario as a way to determine functionality of space and maintenance requirements for the virtual X-34. The primary function of the virtual X-34 mockup is to support operations development and design analysis for engine removal, the engine compartment and the aft fuselage. This capability provides general visualization support to engineers and designers at MSFC and to the System Design Freeze Review at Orbital Sciences Corporation (OSC).

  18. Using satellite communications for a mobile computer network

    Science.gov (United States)

    Wyman, Douglas J.

    1993-01-01

    The topics discussed include the following: patrol car automation, mobile computer network, network requirements, network design overview, MCN mobile network software, MCN hub operation, mobile satellite software, hub satellite software, the benefits of patrol car automation, the benefits of satellite mobile computing, and national law enforcement satellite.

  19. Analysis of Computer Network Information Based on "Big Data"

    Science.gov (United States)

    Li, Tianli

    2017-11-01

    With the development of the current era, computer networks and big data have gradually become part of people's lives. People use computers to bring convenience to their daily lives, but at the same time there are many network information problems that require attention. This paper analyzes the information security of computer networks from a "big data" perspective and puts forward some solutions.

  20. Design and implementation of a local computer network

    Energy Technology Data Exchange (ETDEWEB)

    Fortune, P. J.; Lidinsky, W. P.; Zelle, B. R.

    1977-01-01

    An intralaboratory computer communications network was designed and is being implemented at Argonne National Laboratory. Parameters which were considered to be important in the network design are discussed; and the network, including its hardware and software components, is described. A discussion of the relationship between computer networks and distributed processing systems is also presented. The problems which the network is designed to solve and the consequent network structure represent considerations which are of general interest. 5 figures.

  1. History Matching in Parallel Computational Environments

    Energy Technology Data Exchange (ETDEWEB)

    Steven Bryant; Sanjay Srinivasan; Alvaro Barrera; Yonghwee Kim; Sharad Yadav

    2006-08-31

    A novel methodology for delineating multiple reservoir domains for the purpose of history matching in a distributed computing environment has been proposed. A fully probabilistic approach to perturb permeability within the delineated zones is implemented. The combination of robust schemes for identifying reservoir zones and distributed computing significantly increases the accuracy and efficiency of the probabilistic approach. The information pertaining to the permeability variations in the reservoir that is contained in dynamic data is calibrated in terms of a deformation parameter rD. This information is merged with the prior geologic information in order to generate permeability models consistent with the observed dynamic data as well as the prior geology. The relationship between dynamic response data and reservoir attributes may vary in different regions of the reservoir due to spatial variations in reservoir attributes, well configuration, flow constraints, etc. The probabilistic approach then has to account for multiple rD values in different regions of the reservoir. In order to delineate reservoir domains that can be characterized with different rD parameters, principal component analysis (PCA) of the Hessian matrix has been done. The Hessian matrix summarizes the sensitivity of the objective function at a given step of the history matching to model parameters. It also measures the interaction of the parameters in affecting the objective function. The basic premise of PC analysis is to isolate the most sensitive and least correlated regions. The eigenvectors obtained during the PCA are suitably scaled and appropriate grid block volume cut-offs are defined such that the resultant domains are neither too large (which increases interactions between domains) nor too small (implying ineffective history matching). The delineation of domains requires calculation of the Hessian, which could be computationally costly and also restricts the current
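
    The record above describes delineating domains by principal component analysis of the Hessian; a minimal sketch of that step might look as follows, assuming a precomputed sensitivity (Hessian-like) matrix over grid blocks and assigning each block to the scaled eigenvector on which it loads most strongly. The matrix, sizes, and the two-domain cut are placeholders, not the authors' actual workflow.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical symmetric Hessian of the history-matching objective
# with respect to permeability in n_blocks grid blocks.
n_blocks = 12
A = rng.normal(size=(n_blocks, n_blocks))
hessian = (A + A.T) / 2

# Principal component analysis via eigen-decomposition.
eigvals, eigvecs = np.linalg.eigh(hessian)
order = np.argsort(np.abs(eigvals))[::-1]                     # most sensitive directions first
leading = eigvecs[:, order[:2]] * np.abs(eigvals[order[:2]])  # suitably scaled eigenvectors

# Assign each grid block to the domain (eigenvector) on which it loads most
# strongly, i.e. each domain would get its own deformation parameter rD.
domains = np.argmax(np.abs(leading), axis=1)
print("grid-block domain labels:", domains)
```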

  2. History Matching in Parallel Computational Environments

    Energy Technology Data Exchange (ETDEWEB)

    Steven Bryant; Sanjay Srinivasan; Alvaro Barrera; Sharad Yadav

    2005-10-01

    A novel methodology for delineating multiple reservoir domains for the purpose of history matching in a distributed computing environment has been proposed. A fully probabilistic approach to perturb permeability within the delineated zones is implemented. The combination of robust schemes for identifying reservoir zones and distributed computing significantly increases the accuracy and efficiency of the probabilistic approach. The information pertaining to the permeability variations in the reservoir that is contained in dynamic data is calibrated in terms of a deformation parameter rD. This information is merged with the prior geologic information in order to generate permeability models consistent with the observed dynamic data as well as the prior geology. The relationship between dynamic response data and reservoir attributes may vary in different regions of the reservoir due to spatial variations in reservoir attributes, well configuration, flow constraints, etc. The probabilistic approach then has to account for multiple rD values in different regions of the reservoir. In order to delineate reservoir domains that can be characterized with different rD parameters, principal component analysis (PCA) of the Hessian matrix has been done. The Hessian matrix summarizes the sensitivity of the objective function at a given step of the history matching to model parameters. It also measures the interaction of the parameters in affecting the objective function. The basic premise of PC analysis is to isolate the most sensitive and least correlated regions. The eigenvectors obtained during the PCA are suitably scaled and appropriate grid block volume cut-offs are defined such that the resultant domains are neither too large (which increases interactions between domains) nor too small (implying ineffective history matching). The delineation of domains requires calculation of the Hessian, which could be computationally costly and also restricts the current approach to

  3. Stochastic Simulation of Biomolecular Networks in Dynamic Environments.

    Science.gov (United States)

    Voliotis, Margaritis; Thomas, Philipp; Grima, Ramon; Bowsher, Clive G

    2016-06-01

    Simulation of biomolecular networks is now indispensable for studying biological systems, from small reaction networks to large ensembles of cells. Here we present a novel approach for stochastic simulation of networks embedded in the dynamic environment of the cell and its surroundings. We thus sample trajectories of the stochastic process described by the chemical master equation with time-varying propensities. A comparative analysis shows that existing approaches can either fail dramatically, or else can impose impractical computational burdens due to numerical integration of reaction propensities, especially when cell ensembles are studied. Here we introduce the Extrande method which, given a simulated time course of dynamic network inputs, provides a conditionally exact and several orders-of-magnitude faster simulation solution. The new approach makes it feasible to demonstrate-using decision-making by a large population of quorum sensing bacteria-that robustness to fluctuations from upstream signaling places strong constraints on the design of networks determining cell fate. Our approach has the potential to significantly advance both understanding of molecular systems biology and design of synthetic circuits.
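
    A minimal sketch of the thinning idea behind an Extrande-style simulation is shown below for a single reaction channel, assuming a global upper bound on the time-varying propensity rather than the per-horizon bound of the full method; the reaction, rates, and bound are hypothetical.

```python
import math
import random

random.seed(0)

def extrande(propensity, bound, x0, stoich, t_end):
    """Thinning-based simulation of one reaction channel with a time-varying
    propensity a(t, x) <= bound ('extra reaction' style acceptance step)."""
    t, x, trajectory = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        t += random.expovariate(bound)          # candidate firing time at rate B
        if t >= t_end:
            break
        if random.random() < propensity(t, x) / bound:
            x += stoich                          # real reaction fires
            trajectory.append((t, x))
        # else: the 'extra' (virtual) reaction fires and the state is unchanged
    return trajectory

# Hypothetical birth process whose rate is modulated by a dynamic input signal.
rate = lambda t, x: 2.0 + 1.5 * math.sin(t)      # always <= 3.5
traj = extrande(rate, bound=3.5, x0=0, stoich=+1, t_end=10.0)
print("events:", len(traj) - 1, "final copy number:", traj[-1][1])
```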

  4. Stochastic Simulation of Biomolecular Networks in Dynamic Environments.

    Directory of Open Access Journals (Sweden)

    Margaritis Voliotis

    2016-06-01

    Full Text Available Simulation of biomolecular networks is now indispensable for studying biological systems, from small reaction networks to large ensembles of cells. Here we present a novel approach for stochastic simulation of networks embedded in the dynamic environment of the cell and its surroundings. We thus sample trajectories of the stochastic process described by the chemical master equation with time-varying propensities. A comparative analysis shows that existing approaches can either fail dramatically, or else can impose impractical computational burdens due to numerical integration of reaction propensities, especially when cell ensembles are studied. Here we introduce the Extrande method which, given a simulated time course of dynamic network inputs, provides a conditionally exact and several orders-of-magnitude faster simulation solution. The new approach makes it feasible to demonstrate-using decision-making by a large population of quorum sensing bacteria-that robustness to fluctuations from upstream signaling places strong constraints on the design of networks determining cell fate. Our approach has the potential to significantly advance both understanding of molecular systems biology and design of synthetic circuits.

  5. Scholarly Communication in the Network Environment: Issues of Principle, Policy and Practice.

    Science.gov (United States)

    Kahin, Brian

    1992-01-01

    Discussion of legal and ethical issues raised by the growth of research networking focuses on two general areas: (1) communication, prepublication, and publication; and (2) the network as a distribution environment. Issues considered include joint authorship, rights in computer conferencing, derivative works, control of dissemination, site…

  6. Proposed Network Intrusion Detection System In Cloud Environment Based on Back Propagation Neural Network

    Directory of Open Access Journals (Sweden)

    Shawq Malik Mehibs

    2017-12-01

    Full Text Available Cloud computing is a distributed architecture that provides computing facilities and storage resources as a service over the internet. This low-cost service fulfills the basic requirements of users. Because of the open nature of cloud computing and the services it introduces, intruders can impersonate legitimate users and misuse cloud resources and services. To detect intruders and suspicious activities in and around the cloud computing environment, an intrusion detection system is used to discover illegitimate users and suspicious actions by monitoring different user activities on the network. This work proposes a back-propagation artificial neural network to construct a network intrusion detection system in the cloud environment. The proposed module was evaluated with the KDD'99 dataset; the experimental results show a promising approach to detecting attacks with a high detection rate and a low false alarm rate.
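
    The paper's exact network architecture and preprocessing are not given in the abstract; the sketch below only illustrates the general approach, training a small back-propagation (multi-layer perceptron) classifier on a placeholder feature matrix that stands in for preprocessed KDD'99 connection records.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Placeholder for preprocessed KDD'99 connection records:
# rows are connections, columns numeric features, labels 0 = normal, 1 = attack.
X = rng.normal(size=(2000, 20))
y = (X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)

# A small feed-forward network trained with back-propagation.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

print("detection accuracy on held-out data:",
      round(clf.score(scaler.transform(X_test), y_test), 3))
```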

  7. Computational capabilities of graph neural networks.

    Science.gov (United States)

    Scarselli, Franco; Gori, Marco; Tsoi, Ah Chung; Hagenbuchner, Markus; Monfardini, Gabriele

    2009-01-01

    In this paper, we will consider the approximation properties of a recently introduced neural network model called the graph neural network (GNN), which can be used to process structured data inputs, e.g., acyclic graphs, cyclic graphs, and directed or undirected graphs. This class of neural networks implements a function tau(G,n) ∈ IR^m that maps a graph G and one of its nodes n onto an m-dimensional Euclidean space. We characterize the functions that can be approximated by GNNs, in probability, up to any prescribed degree of precision. This set contains the maps that satisfy a property called preservation of the unfolding equivalence, and includes most of the practically useful functions on graphs; the only known exception is when the input graph contains particular patterns of symmetries when unfolding equivalence may not be preserved. The result can be considered an extension of the universal approximation property established for the classic feedforward neural networks (FNNs). Some experimental examples are used to show the computational capabilities of the proposed model.
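
    The trained GNNs of the paper are not reproduced here; the following toy sketch only illustrates the shape of the map tau(G, n): node states are updated by repeated neighborhood aggregation and the state of one node is read out as an m-dimensional vector. The weights are random placeholders, not a learned model.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_node_output(adj, features, node, dim=8, out_dim=3, iters=10):
    """Toy graph neural network: iterate a simple neighborhood-averaging
    transition function toward an (approximate) fixed point, then map the
    state of `node` to an out_dim-dimensional output tau(G, n)."""
    n = adj.shape[0]
    W_self = rng.normal(scale=0.3, size=(features.shape[1], dim))
    W_nbr = rng.normal(scale=0.3, size=(dim, dim))
    W_out = rng.normal(size=(dim, out_dim))
    state = np.zeros((n, dim))
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    for _ in range(iters):
        neighbor_mean = adj @ state / deg
        state = np.tanh(features @ W_self + neighbor_mean @ W_nbr)
    return state[node] @ W_out

# Small undirected example graph (adjacency matrix) with 1-D node labels.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
features = np.array([[0.1], [0.4], [0.7], [1.0]])
print(gnn_node_output(adj, features, node=2))
```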

  8. Distributed computations in a dynamic, heterogeneous Grid environment

    Science.gov (United States)

    Dramlitsch, Thomas

    2003-06-01

    In order to face the rapidly increasing need for computational resources of various scientific and engineering applications, one has to think of new ways to make more efficient use of the world's current computational resources. In this respect, the growing speed of wide area networks made a new kind of distributed computing possible: metacomputing or (distributed) Grid computing. This is a rather new and uncharted field in computational science. The rapidly increasing speed of networks even outperforms the average increase of processor speed: processor speeds double on average every 18 months whereas network bandwidths double every 9 months. Due to this development of local and wide area networks, Grid computing will certainly play a key role in the future of parallel computing. This type of distributed computing, however, differs from traditional parallel computing in many ways since it has to deal with many problems not occurring in classical parallel computing. Those problems are, for example, heterogeneity, authentication and slow networks, to mention only a few. Some of those problems, e.g. the allocation of distributed resources along with the provision of information about these resources to the application, have already been addressed by the Globus software. Unfortunately, as far as we know, hardly any application or middleware software takes advantage of this information, since most parallelizing algorithms for finite differencing codes are implicitly designed for single supercomputer or cluster execution. We show that although it is possible to apply classical parallelizing algorithms in a Grid environment, in most cases the observed efficiency of the executed code is very poor. In this work we are closing this gap. In our thesis, we will - show that an execution of classical parallel codes in Grid environments is possible but very slow - analyze this situation of bad performance, nail down bottlenecks in communication, remove unnecessary overhead and

  9. Aluminium in Biological Environments: A Computational Approach

    Science.gov (United States)

    Mujika, Jon I; Rezabal, Elixabete; Mercero, Jose M; Ruipérez, Fernando; Costa, Dominique; Ugalde, Jesus M; Lopez, Xabier

    2014-01-01

    The increased availability of aluminium in biological environments, due to human intervention in the last century, raises concerns on the effects that this so far “excluded from biology” metal might have on living organisms. Consequently, the bioinorganic chemistry of aluminium has emerged as a very active field of research. This review will focus on our contributions to this field, based on computational studies that can yield an understanding of the aluminum biochemistry at a molecular level. Aluminium can interact and be stabilized in biological environments by complexing with both low molecular mass chelants and high molecular mass peptides. The speciation of the metal is, nonetheless, dictated by the hydrolytic species dominant in each case and which vary according to the pH condition of the medium. In blood, citrate and serum transferrin are identified as the main low molecular mass and high molecular mass molecules interacting with aluminium. The complexation of aluminium to citrate and the subsequent changes exerted on the deprotonation pathways of its titratable groups will be discussed along with the mechanisms for the intake and release of aluminium in serum transferrin at two pH conditions, physiological neutral and endosomatic acidic. Aluminium can substitute other metals, in particular magnesium, in protein buried sites and trigger conformational disorder and alteration of the protonation states of the protein's sidechains. A detailed account of the interaction of aluminium with proteic sidechains will be given. Finally, it will be described how aluminium can exert oxidative stress by stabilizing superoxide radicals either as mononuclear aluminium or clustered in boehmite. The possibility of promotion of the Fenton reaction, and production of hydroxyl radicals will also be discussed. PMID:24757505

  10. ALUMINIUM IN BIOLOGICAL ENVIRONMENTS: A COMPUTATIONAL APPROACH

    Directory of Open Access Journals (Sweden)

    Jon I Mujika

    2014-03-01

    Full Text Available The increased availability of aluminium in biological environments, due to human intervention in the last century, raises concerns on the effects that this so far “excluded from biology” metal might have on living organisms. Consequently, the bioinorganic chemistry of aluminium has emerged as a very active field of research. This review will focus on our contributions to this field, based on computational studies that can yield an understanding of the aluminum biochemistry at a molecular level. Aluminium can interact and be stabilized in biological environments by complexing with both low molecular mass chelants and high molecular mass peptides. The speciation of the metal is, nonetheless, dictated by the hydrolytic species dominant in each case and which vary according to the pH condition of the medium. In blood, citrate and serum transferrin are identified as the main low molecular mass and high molecular mass molecules interacting with aluminium. The complexation of aluminium to citrate and the subsequent changes exerted on the deprotonation pathways of its titratable groups will be discussed along with the mechanisms for the intake and release of aluminium in serum transferrin at two pH conditions, physiological neutral and endosomatic acidic. Aluminium can substitute other metals, in particular magnesium, in protein buried sites and trigger conformational disorder and alteration of the protonation states of the protein's sidechains. A detailed account of the interaction of aluminium with proteic sidechains will be given. Finally, it will be described how aluminium can exert oxidative stress by stabilizing superoxide radicals either as mononuclear aluminium or clustered in boehmite. The possibility of promotion of the Fenton reaction, and production of hydroxyl radicals will also be discussed.

  11. Social sciences via network analysis and computation

    CERN Document Server

    Kanduc, Tadej

    2015-01-01

    In recent years information and communication technologies have gained significant importance in the social sciences. Because there is such rapid growth of knowledge, methods and computer infrastructure, research can now seamlessly connect interdisciplinary fields such as business process management, data processing and mathematics. This study presents some of the latest results, practices and state-of-the-art approaches in network analysis, machine learning, data mining, data clustering and classifications in the contents of social sciences. It also covers various real-life examples such as t

  12. Computer network security and cyber ethics

    CERN Document Server

    Kizza, Joseph Migga

    2014-01-01

    In its 4th edition, this book remains focused on increasing public awareness of the nature and motives of cyber vandalism and cybercriminals, the weaknesses inherent in cyberspace infrastructure, and the means available to protect ourselves and our society. This new edition aims to integrate security education and awareness with discussions of morality and ethics. The reader will gain an understanding of how the security of information in general and of computer networks in particular, on which our national critical infrastructure and, indeed, our lives depend, is based squarely on the individ

  13. Some queuing network models of computer systems

    Science.gov (United States)

    Herndon, E. S.

    1980-01-01

    Queuing network models of a computer system operating with a single workload type are presented. Program algorithms are adapted for use on the Texas Instruments SR-52 programmable calculator. By slightly altering the algorithm to process the G and H matrices row by row instead of column by column, six devices and an unlimited job/terminal population could be handled on the SR-52. Techniques are also introduced for handling a simple load dependent server and for studying interactive systems with fixed multiprogramming limits.
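
    The SR-52 convolution algorithms (the G and H matrix computations) referenced above are not reproduced here; as a substitute illustration of the same kind of closed queueing-network calculation, the sketch below uses exact Mean Value Analysis, a standard alternative that yields throughput, response time, and queue lengths. The service demands and think time are hypothetical.

```python
def mva(service_demands, n_jobs, think_time=0.0):
    """Exact Mean Value Analysis for a closed single-class queueing network.
    service_demands[k] = mean service demand at queueing station k."""
    n_stations = len(service_demands)
    queue_len = [0.0] * n_stations
    for n in range(1, n_jobs + 1):
        # Residence time at each station with n jobs in the network.
        resid = [d * (1.0 + q) for d, q in zip(service_demands, queue_len)]
        throughput = n / (think_time + sum(resid))
        queue_len = [throughput * r for r in resid]
    return throughput, resid, queue_len

# Hypothetical system: CPU and two disks, 10 interactive terminals, 5 s think time.
X, R, Q = mva(service_demands=[0.05, 0.08, 0.04], n_jobs=10, think_time=5.0)
print(f"throughput = {X:.3f} jobs/s, response time = {sum(R):.3f} s")
```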

  14. WEB BASED LEARNING OF COMPUTER NETWORK COURSE

    Directory of Open Access Journals (Sweden)

    Hakan KAPTAN

    2004-04-01

    Full Text Available As a result of developments in the Internet and computer fields, web-based education has become an area in which many improvement and research studies are carried out. In this study, web-based education materials are explained for a multimedia animation and simulation aided Computer Networks course in Technical Education Faculties. The course content is formed from university course books, web-based education materials and the technology web pages of companies. The content is composed of texts, pictures and figures to increase student motivation and to facilitate learning, and some topics are supported by animations. Furthermore, to help explain the working principles of routing algorithms and congestion control algorithms, simulators are constructed for interactive learning.

  15. Amigo - Ambient Intelligence for the networked home environment

    NARCIS (Netherlands)

    Janse, M.D.

    2008-01-01

    The Amigo project develops open, standardized, interoperable middleware and attractive user services for the networked home environment. Fifteen of Europe's leading companies and research organizations in mobile and home networking, software development, consumer electronics and domestic appliances

  16. Interaction Network Estimation: Predicting Problem-Solving Diversity in Interactive Environments

    Science.gov (United States)

    Eagle, Michael; Hicks, Drew; Barnes, Tiffany

    2015-01-01

    Intelligent tutoring systems and computer aided learning environments aimed at developing problem solving produce large amounts of transactional data which make it a challenge for both researchers and educators to understand how students work within the environment. Researchers have modeled student-tutor interactions using complex networks in…

  17. Choice Of Computer Networking Cables And Their Effect On Data ...

    African Journals Online (AJOL)

    Computer networking is the order of the day in this Information and Communication Technology (ICT) age. Although a network can be established through a wireless device, most local connections are made using cables. There are three main computer-networking cables, namely coaxial cable, unshielded twisted pair cable and the optic ...

  18. AN EVALUATION AND IMPLEMENTATION OF COLLABORATIVE AND SOCIAL NETWORKING TECHNOLOGIES FOR COMPUTER EDUCATION

    Directory of Open Access Journals (Sweden)

    Ronnie Cheung

    2011-06-01

    Full Text Available We have developed a collaborative and social networking environment that integrates the knowledge and skills in communication and computing studies with a multimedia development project. The outcomes of the students’ projects show that computer literacy can be enhanced through a cluster of communication, social, and digital skills. Experience in implementing a web-based social networking environment shows that the new media is an effective means of enriching knowledge by sharing in computer literacy projects. The completed assignments, projects, and self-reflection reports demonstrate that the students were able to achieve the learning outcomes of a computer literacy course in multimedia development. The students were able to assess the effectiveness of a variety of media through the development of media presentations in a web-based, social-networking environment. In the collaborative and social-networking environment, students were able to collaborate and communicate with their team members to solve problems, resolve conflicts, make decisions, and work as a team to complete tasks. Our experience has shown that social networking environments are effective for computer literacy education, and the development of the new media is emerging as the core knowledge for computer literacy education.

  19. Computational Aspects of Sensor Network Protocols (Distributed Sensor Network Simulator

    Directory of Open Access Journals (Sweden)

    Vasanth Iyer

    2009-08-01

    Full Text Available In this work, we model sensor networks as an unsupervised learning and clustering process. We classify nodes according to their static distribution to form known class densities (CCPD). These densities are chosen from specific cross-layer features which maximize the lifetime of power-aware routing algorithms. To circumvent the computational complexities of a power-aware communication stack we introduce path-loss models at the nodes only for high-density deployments. We study the cluster heads and formulate the data handling capacity for an expected deployment, and use localized probability models to fuse the data with its side information before transmission. Each cluster head therefore has a unique Pmax, but not all cluster heads have the same measured value. In a lossless mode, if there are no faults in the sensor network, then we can show that the highest probability given by Pmax is ambiguous if its frequency is ≤ n/2; otherwise it can be determined by a local function. We further show that event detection at the cluster heads can be modelled with a pattern of 2^m, where m is the number of bits, can be a correlated pattern of 2 bits, and for a tight lower bound we use 3-bit Huffman codes which have entropy < 1. These local algorithms are further studied to optimize power use and fault detection and to maximize the distributed routing algorithm used at the higher layers. From these bounds, in a large network it is observed that the power dissipation is network-size invariant. The performance of the routing algorithms is based solely on the success of finding healthy nodes in a large distribution. It is also observed that if the network size is kept constant and the nodes are deployed more densely, then the local path-loss model affects the performance of the routing algorithms. We also obtain the maximum intensity of transmitting nodes for a given category of routing algorithms under an outage constraint, i.e., the lifetime of the sensor network.

  20. On Distributed Computation in Noisy Random Planar Networks

    OpenAIRE

    Kanoria, Y.; Manjunath, D.

    2007-01-01

    We consider distributed computation of functions of distributed data in random planar networks with noisy wireless links. We present a new algorithm for computation of the maximum value which is order optimal in the number of transmissions and computation time. We also adapt the histogram computation algorithm of Ying et al. to make the histogram computation time optimal.

  1. Mobile Computing and Ubiquitous Networking: Concepts, Technologies and Challenges.

    Science.gov (United States)

    Pierre, Samuel

    2001-01-01

    Analyzes concepts, technologies and challenges related to mobile computing and networking. Defines basic concepts of cellular systems. Describes the evolution of wireless technologies that constitute the foundations of mobile computing and ubiquitous networking. Presents characterization and issues of mobile computing. Analyzes economical and…

  2. Chemical Reaction Networks for Computing Polynomials.

    Science.gov (United States)

    Salehi, Sayed Ahmad; Parhi, Keshab K; Riedel, Marc D

    2017-01-20

    Chemical reaction networks (CRNs) provide a fundamental model in the study of molecular systems. Widely used as formalism for the analysis of chemical and biochemical systems, CRNs have received renewed attention as a model for molecular computation. This paper demonstrates that, with a new encoding, CRNs can compute any set of polynomial functions subject only to the limitation that these functions must map the unit interval to itself. These polynomials can be expressed as linear combinations of Bernstein basis polynomials with positive coefficients less than or equal to 1. In the proposed encoding approach, each variable is represented using two molecular types: a type-0 and a type-1. The value is the ratio of the concentration of type-1 molecules to the sum of the concentrations of type-0 and type-1 molecules. The proposed encoding naturally exploits the expansion of a power-form polynomial into a Bernstein polynomial. Molecular encoders for converting any input in a standard representation to the fractional representation as well as decoders for converting the computed output from the fractional to a standard representation are presented. The method is illustrated first for generic CRNs; then chemical reactions designed for an example are mapped to DNA strand-displacement reactions.
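
    A small sketch of the encoding side of this scheme is given below: a power-form polynomial on [0, 1] is converted to Bernstein-basis coefficients, and the polynomial is evaluated at an input expressed in the fractional (type-1 versus type-0) representation. The mapping to actual chemical or DNA strand-displacement reactions is not reproduced; the example polynomial is made up.

```python
from math import comb

def power_to_bernstein(coeffs):
    """Convert power-form coefficients [a0, a1, ...] of a degree-n polynomial
    on [0, 1] to Bernstein-basis coefficients b0..bn."""
    n = len(coeffs) - 1
    return [sum(comb(i, k) / comb(n, k) * coeffs[k] for k in range(i + 1))
            for i in range(n + 1)]

def evaluate_fractional(bernstein, x1, x0):
    """Evaluate the polynomial at x = [X1] / ([X0] + [X1]), the fractional
    encoding in which a value is the ratio of type-1 to total concentration."""
    x = x1 / (x0 + x1)
    n = len(bernstein) - 1
    return sum(b * comb(n, i) * x**i * (1 - x)**(n - i)
               for i, b in enumerate(bernstein))

# Example: p(x) = 0.25 + 0.5*x**2 maps [0, 1] into itself.
b = power_to_bernstein([0.25, 0.0, 0.5])
print("Bernstein coefficients:", b)              # [0.25, 0.25, 0.75], all in [0, 1]
print(evaluate_fractional(b, x1=3.0, x0=1.0))    # p(0.75) = 0.53125
```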

  3. Security Technologies for Open Networking Environments (STONE)

    Energy Technology Data Exchange (ETDEWEB)

    Muftic, Sead

    2005-03-31

    Under this project SETECS performed research, created the design and the initial prototype of three groups of security technologies: (a) middleware security platform, (b) Web services security, and (c) group security system. The results of the project indicate that the three types of security technologies can be used either individually or in combination, which enables effective and rapid deployment of a number of secure applications in open networking environments. The middleware security platform represents a set of object-oriented security components providing various functions to handle basic cryptography, X.509 certificates, S/MIME and PKCS No.7 encapsulation formats, secure communication protocols, and smart cards. The platform has been designed in the form of security engines, including a Registration Engine, a Certification Engine, an Authorization Engine, and a Secure Group Applications Engine. By creating a middleware security platform consisting of multiple independent components the following advantages have been achieved: object orientation, modularity, simplified development and testing, portability, and simplified extensions. The middleware security platform has been fully designed and a preliminary Java-based prototype has been created for the Microsoft Windows operating system. The Web services security system, designed in the project, consists of technologies and applications that provide authentication (i.e., single sign-on), authorization, and federation of identities in an open networking environment. The system is based on OASIS SAML and XACML standards for secure Web services. Its topology comprises three major components: the Domain Security Server (DSS), which is the main building block of the system; the Secure Application Server (SAS); and the Secure Client. In addition to the SAML and XACML engines, the authorization system consists of two sets of components: an Authorization Administration System and an Authorization Enforcement System. Federation of identities in multi

  4. Wireless sensor networks in a maritime environment.

    NARCIS (Netherlands)

    Kavelaars, W.; Maris, M.

    2005-01-01

    In the recent years, there has been a lot of research on sensor networks-and their applications. In particular for monitoring large and potentially hostile areas these networks have proven to be very useful. The technique of land-based sensor networks can be extrapolated to sensing buoys at sea or

  5. Planning and management of cloud computing networks

    Science.gov (United States)

    Larumbe, Federico

    The evolution of the Internet has a great impact on a large part of the population. People use it to communicate, query information, receive news, work, and for entertainment. Its extraordinary usefulness as a communication medium made the number of applications and technological resources explode. However, that network expansion comes at the cost of significant power consumption. If the power consumption of telecommunication networks and data centers were considered as the power consumption of a country, it would rank 5th in the world. Furthermore, the number of servers in the world is expected to grow by a factor of 10 between 2013 and 2020. This context motivates us to study techniques and methods to allocate cloud computing resources in an optimal way with respect to cost, quality of service (QoS), power consumption, and environmental impact. The results we obtained from our test cases show that besides minimizing capital expenditures (CAPEX) and operational expenditures (OPEX), the response time can be reduced up to 6 times, power consumption by 30%, and CO2 emissions by a factor of 60. Cloud computing provides dynamic access to IT resources as a service. In this paradigm, programs are executed in servers connected to the Internet that users access from their computers and mobile devices. The first advantage of this architecture is to reduce the time of application deployment and improve interoperability, because a new user only needs a web browser and does not need to install software on local computers with specific operating systems. Second, applications and information are available from everywhere and with any device with Internet access. Also, servers and IT resources can be dynamically allocated depending on the number of users and workload, a feature called elasticity. This thesis studies the resource management of cloud computing networks and is divided into three main stages. We start by analyzing the planning of cloud computing networks to get a

  6. 2013 International Conference on Computer Engineering and Network

    CERN Document Server

    Zhu, Tingshao

    2014-01-01

    This book aims to examine innovation in the fields of computer engineering and networking. The book covers important emerging topics in computer engineering and networking, and it will help researchers and engineers improve their knowledge of state-of-art in related areas. The book presents papers from The Proceedings of the 2013 International Conference on Computer Engineering and Network (CENet2013) which was held on July 20-21, in Shanghai, China.

  7. AUTOMATIC CONTROL OF INTELLECTUAL RIGHTS IN THE GLOBAL COMPUTER NETWORKS

    OpenAIRE

    Anatoly P. Yakimaho; Victoriya V. Bessarabova

    2013-01-01

    The problems of using intellectual property objects in global computer networks are stated. The main attention is focused on ways of solving the problems that arise when working in computer networks. Legal problems of the information society are considered. An analysis of global computer networks as venues for the organization of collective management of copyrights on a world scale is carried out. Issues of creating a system of automatic control of the property rights of authors and ...

  8. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  9. InSAR Scientific Computing Environment

    Science.gov (United States)

    Rosen, Paul A.; Sacco, Gian Franco; Gurrola, Eric M.; Zebker, Howard A.

    2011-01-01

    This computing environment is the next generation of geodetic image processing technology for repeat-pass Interferometric Synthetic Aperture (InSAR) sensors, identified by the community as a needed capability to provide flexibility and extensibility in reducing measurements from radar satellites and aircraft to new geophysical products. This software allows users of interferometric radar data the flexibility to process from Level 0 to Level 4 products using a variety of algorithms and for a range of available sensors. There are many radar satellites in orbit today delivering to the science community data of unprecedented quantity and quality, making possible large-scale studies in climate research, natural hazards, and the Earth's ecosystem. The proposed DESDynI mission, now under consideration by NASA for launch later in this decade, would provide time series and multiimage measurements that permit 4D models of Earth surface processes so that, for example, climate-induced changes over time would become apparent and quantifiable. This advanced data processing technology, applied to a global data set such as from the proposed DESDynI mission, enables a new class of analyses at time and spatial scales unavailable using current approaches. This software implements an accurate, extensible, and modular processing system designed to realize the full potential of InSAR data from future missions such as the proposed DESDynI, existing radar satellite data, as well as data from the NASA UAVSAR (Uninhabited Aerial Vehicle Synthetic Aperture Radar), and other airborne platforms. The processing approach has been re-thought in order to enable multi-scene analysis by adding new algorithms and data interfaces, to permit user-reconfigurable operation and extensibility, and to capitalize on codes already developed by NASA and the science community. The framework incorporates modern programming methods based on recent research, including object-oriented scripts controlling legacy and

  10. Parallel Evolutionary Peer-to-Peer Networking in Realistic Environments

    Directory of Open Access Journals (Sweden)

    Kei Ohnishi

    2017-01-01

    Full Text Available In the present paper we first conduct simulations of the parallel evolutionary peer-to-peer (P2P) networking technique (referred to as P-EP2P) that we previously proposed, using models of realistic environments to examine whether P-EP2P is practical. Environments are here represented by what users have and want in the network, and P-EP2P adapts the P2P network topologies to the present environment in an evolutionary manner. The simulation results show that it is hard for P-EP2P to adapt the network topologies to some realistic environments. Then, based on the discussions of the results, we propose a strategy for better adaptability of P-EP2P to the realistic environments. The strategy first judges whether evolutionary adaptation of the network topologies is likely to occur in the present environment, and if it judges so, it actually tries to achieve evolutionary adaptation of the network topologies. Otherwise, it brings random change to the network topologies. The simulation results indicate that P-EP2P with the proposed strategy can better adapt the network topologies to the realistic environments. The main contribution of the study is to present such a promising way to realize an evolvable network in which the evolution direction is given by users.

  11. Mobility management for SIP sessions in a heterogeneous network environment

    NARCIS (Netherlands)

    Romijn, Willem A.; Plas, Dirk-Jaap; Bijwaard, D.; Meeuwissen, Erik; van Ooijen, Gijs

    2004-01-01

    The next generation of communication networks is expected to create a heterogeneous network environment encompassing an ever-increasing number of different access networks and end-user terminals that will enable the introduction of and provide access to numerous feature-rich end-user services. It is

  12. ARACHNE: A neural-neuroglial network builder with remotely controlled parallel computing

    Science.gov (United States)

    Rusakov, Dmitri A.; Savtchenko, Leonid P.

    2017-01-01

    Creating and running realistic models of neural networks has hitherto been a task for computing professionals rather than experimental neuroscientists. This is mainly because such networks usually engage substantial computational resources, the handling of which requires specific programing skills. Here we put forward a newly developed simulation environment ARACHNE: it enables an investigator to build and explore cellular networks of arbitrary biophysical and architectural complexity using the logic of NEURON and a simple interface on a local computer or a mobile device. The interface can control, through the internet, an optimized computational kernel installed on a remote computer cluster. ARACHNE can combine neuronal (wired) and astroglial (extracellular volume-transmission driven) network types and adopt realistic cell models from the NEURON library. The program and documentation (current version) are available at GitHub repository https://github.com/LeonidSavtchenko/Arachne under the MIT License (MIT). PMID:28362877

  13. DETECTING NETWORK ATTACKS IN COMPUTER NETWORKS BY USING DATA MINING METHODS

    OpenAIRE

    Platonov, V. V.; Semenov, P. O.

    2016-01-01

    The article describes an approach to the development of an intrusion detection system for computer networks. It is shown that the use of several data mining methods and tools can improve the efficiency of protecting computer networks against network attacks, thanks to the combination of the benefits of signature detection and anomaly detection and the possibility of adapting the system to the hardware and software structure of the computer network.

  14. Email networks and the spread of computer viruses

    Science.gov (United States)

    Newman, M. E.; Forrest, Stephanie; Balthrop, Justin

    2002-09-01

    Many computer viruses spread via electronic mail, making use of computer users' email address books as a source for email addresses of new victims. These address books form a directed social network of connections between individuals over which the virus spreads. Here we investigate empirically the structure of this network using data drawn from a large computer installation, and discuss the implications of this structure for the understanding and prevention of computer virus epidemics.
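
    The empirical address-book network studied in the paper is not available here; the toy sketch below only illustrates the mechanism, spreading a virus over a small synthetic directed address-book graph in an SI-style cascade with an arbitrary probability that a recipient opens the attachment.

```python
import random

random.seed(1)

# Toy directed 'address book' network: user -> set of addresses they hold.
n_users = 200
address_book = {u: set(random.sample(range(n_users), k=5)) - {u}
                for u in range(n_users)}

def spread(network, patient_zero, open_prob=0.5):
    """SI-style cascade: an infected user mails everyone in their address book;
    each recipient opens the attachment (and becomes infected) with open_prob."""
    infected = {patient_zero}
    frontier = [patient_zero]
    while frontier:
        new_frontier = []
        for u in frontier:
            for v in network[u]:
                if v not in infected and random.random() < open_prob:
                    infected.add(v)
                    new_frontier.append(v)
        frontier = new_frontier
    return infected

outbreak = spread(address_book, patient_zero=0)
print(f"{len(outbreak)} of {n_users} users infected")
```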

  15. An Overview of Computer Network security and Research Technology

    OpenAIRE

    Rathore, Vandana

    2016-01-01

    The rapid development in the field of computer networks and systems brings both convenience and security threats for users. Security threats include network security and data security. Network security refers to the reliability, confidentiality, integrity and availability of the information in the system. The main objective of network security is to maintain the authenticity, integrity, confidentiality, availability of the network. This paper introduces the details of the technologies used in...

  16. Computers and the Environment: Minimizing the Carbon Footprint

    Science.gov (United States)

    Kaestner, Rich

    2009-01-01

    Computers can be good and bad for the environment; one can maximize the good and minimize the bad. When dealing with environmental issues, it's difficult to ignore the computing infrastructure. With an operations carbon footprint equal to the airline industry's, computer energy use is only part of the problem; everyone is also dealing with the use…

  17. Integrating labview into a distributed computing environment.

    Energy Technology Data Exchange (ETDEWEB)

    Kasemir, K. U. (Kay-Uwe); Pieck, M. (Martin); Dalesio, L. R. (Leo R.)

    2001-01-01

    Being easy to learn and well suited for a self-contained desktop laboratory setup, the National Instruments LabVIEW environment is preferred by many casual programmers for developing their logic. An ActiveX interface is presented that allows integration into a plant-wide distributed environment based on the Experimental Physics and Industrial Control System (EPICS). This paper discusses the design decisions and provides performance information, especially considering requirements for the Spallation Neutron Source (SNS) diagnostics system.

  18. A synthetic computational environment: To control the spread of respiratory infections in a virtual university

    Science.gov (United States)

    Ge, Yuanzheng; Chen, Bin; liu, Liang; Qiu, Xiaogang; Song, Hongbin; Wang, Yong

    2018-02-01

    An individual-based computational environment provides an effective solution for studying complex social events by reconstructing scenarios. Challenges remain in reconstructing the virtual scenarios and reproducing the complex evolution. In this paper, we propose a framework to reconstruct a synthetic computational environment, reproduce an epidemic outbreak, and evaluate management interventions in a virtual university. The reconstructed computational environment includes four fundamental components: the synthetic population, behavior algorithms, multiple social networks, and the geographic campus environment. In the virtual university, influenza H1N1 transmission experiments are conducted, and gradually enhanced interventions are evaluated and compared quantitatively. The experiment results indicate that the reconstructed virtual environment provides a solution for reproducing complex emergencies and evaluating policies to be executed in the real world.
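
    A minimal sketch of the kind of experiment the record describes: an influenza-like infection spread over a synthetic contact network, with and without an intervention. The network model, transmission and recovery probabilities below are illustrative assumptions, not the paper's calibrated campus model.

```python
# Toy agent-based spread on a synthetic contact network; all parameters are
# illustrative assumptions, not calibrated to the paper's virtual university.
import random

def make_contacts(n=2000, k=8, seed=1):
    rng = random.Random(seed)
    return {i: rng.sample([j for j in range(n) if j != i], k) for i in range(n)}

def simulate(contacts, p_transmit, p_recover=0.2, days=120, seed=2):
    rng = random.Random(seed)
    state = {i: "S" for i in contacts}
    for i in rng.sample(list(contacts), 5):
        state[i] = "I"                      # index cases
    total_infected = 5
    for _ in range(days):
        new_state = dict(state)
        for i, s in state.items():
            if s == "I":
                for j in contacts[i]:       # contacts of an infectious individual
                    if state[j] == "S" and rng.random() < p_transmit:
                        new_state[j] = "I"
                if rng.random() < p_recover:
                    new_state[i] = "R"
        total_infected += sum(1 for i in contacts
                              if state[i] == "S" and new_state[i] == "I")
        state = new_state
    return total_infected

contacts = make_contacts()
print("baseline total infections:    ", simulate(contacts, p_transmit=0.03))
print("with 50% reduced transmission:", simulate(contacts, p_transmit=0.015))
```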

  19. Computational intelligence synergies of fuzzy logic, neural networks and evolutionary computing

    CERN Document Server

    Siddique, Nazmul

    2013-01-01

    Computational Intelligence: Synergies of Fuzzy Logic, Neural Networks and Evolutionary Computing presents an introduction to some of the cutting edge technological paradigms under the umbrella of computational intelligence. Computational intelligence schemes are investigated with the development of a suitable framework for fuzzy logic, neural networks and evolutionary computing, neuro-fuzzy systems, evolutionary-fuzzy systems and evolutionary neural systems. Applications to linear and non-linear systems are discussed with examples. Key features: Covers all the aspect

  20. Network Intelligence Based on Network State Information for Connected Vehicles Utilizing Fog Computing

    Directory of Open Access Journals (Sweden)

    Seongjin Park

    2017-01-01

    Full Text Available This paper proposes a method to take advantage of fog computing and SDN in the connected vehicle environment, where communication channels are unstable and the topology changes frequently. A controller knows the current state of the network by maintaining the most recent network topology. Of all the information collected by the controller in the mobile environment, node mobility information is particularly important. Thus, we divide nodes into three classes according to their mobility types and use their related attributes to efficiently manage the mobile connections. Our approach utilizes mobility information to reduce control message overhead by adjusting the period of beacon messages and to support efficient failure recovery. Two recovery mechanisms are proposed: one recovers connection failures using only mobility information, and the other uses a real-time scheduling algorithm to recover services for vehicles that lose their connection when a fog server fails. The real-time scheduling method is first described and then evaluated; the results show that our scheme is effective in the connected vehicle environment. We then demonstrate the reduction of control overhead and the connection recovery using a network simulator. The simulation results show that control message overhead and failure recovery time are decreased by approximately 55% and 5%, respectively.
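
    A small sketch of the mobility-class idea mentioned above; the three classes, their beacon periods and the position-prediction rule are illustrative assumptions rather than the scheme's actual parameters.

```python
# Illustrative only: class boundaries, beacon periods, and the recovery helper
# are assumed values, not the paper's configuration.
from dataclasses import dataclass

BEACON_PERIOD_S = {"static": 10.0, "low_mobility": 3.0, "high_mobility": 1.0}

@dataclass
class Node:
    node_id: str
    speed_mps: float
    last_position: tuple

    @property
    def mobility_class(self) -> str:
        if self.speed_mps < 0.5:
            return "static"
        return "low_mobility" if self.speed_mps < 10 else "high_mobility"

    def beacon_period(self) -> float:
        """Slower beacons for stable nodes reduce control-message overhead."""
        return BEACON_PERIOD_S[self.mobility_class]

def predict_position(node: Node, heading: tuple, dt: float) -> tuple:
    """Use mobility information to guess where a silent node has moved, so the
    controller can re-attach it to a nearby fog node without a full search."""
    x, y = node.last_position
    hx, hy = heading
    return (x + hx * node.speed_mps * dt, y + hy * node.speed_mps * dt)

car = Node("veh-17", speed_mps=14.0, last_position=(120.0, 40.0))
print(car.mobility_class, car.beacon_period())            # high_mobility 1.0
print(predict_position(car, heading=(1.0, 0.0), dt=2.0))  # (148.0, 40.0)
```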

  1. Just-in-time multimedia distribution in a mobile computing environment

    OpenAIRE

    O'Grady, Michael J; O'Hare, G. M. P.

    2004-01-01

    Disseminating multimedia content to users in a mobile computing environment such that they receive it in an appropriate and timely manner is fundamental to the success of mobile information systems. Too often, however, this endeavour is hindered by the poor data rates supported by wireless telecommunications networks and by the limited computational resources available on mobile devices. We describe an approach to overcome these limitations, which is based on extremely dynamic and...

  2. Marine Corps Private Cloud Computing Environment Strategy

    Science.gov (United States)

    2012-05-15

    leveraging economies of scale through the MCEITS PCCE, the Marine Corps will measure consumed IT resources more effectively, increase or decrease...flexible broad network access, resource pooling, elastic provisioning and measured services. By leveraging economies of scale the Marine Corps will be able...

  3. Network Patch Cables Demystified: A Super Activity for Computer Networking Technology

    Science.gov (United States)

    Brown, Douglas L.

    2004-01-01

    This article de-mystifies network patch cable secrets so that people can connect their computers and transfer those pesky files--without screaming at the cables. It describes a network cabling activity that can offer students a great hands-on opportunity for working with the tools, techniques, and media used in computer networking. Since the…

  4. Wireless sensor networks in a maritime environment

    Science.gov (United States)

    Kavelaars, W.; Maris, M.

    2005-10-01

    In recent years, there has been a great deal of research on sensor networks and their applications. In particular, for monitoring large and potentially hostile areas these networks have proven to be very useful. The technique of land-based sensor networks can be extrapolated to sensing buoys at sea or in harbors. This is a novel and intriguing addition to existing maritime monitoring systems. At TNO, much research effort has gone into developing sensor networks. In this paper, the TNOdes sensor network is presented. Its practical employability is demonstrated in a surveillance application deploying 50 nodes. The system is capable of tracking persons in a field, as would be the situation around a military compound. A camera system is triggered by the sensors and zooms in on the detected moving objects. It is described how this system could be modified to create a wireless buoy network. Typical applications of a wireless (and potentially mobile) buoy network are bay-area surveillance, mine detection, identification and ship protection.

  5. Throughput capacity computation model for hybrid wireless networks

    African Journals Online (AJOL)

    wireless networks. We present in this paper a computational model for obtaining throughput capacity for hybrid wireless networks. For a hybrid network with n nodes and m base stations, we observe through simulation that the throughput capacity increases linearly with the base station infrastructure connected by the wired ...

  6. Novel Ethernet Based Optical Local Area Networks for Computer Interconnection

    NARCIS (Netherlands)

    Radovanovic, Igor; van Etten, Wim; Taniman, R.O.; Kleinkiskamp, Ronny

    2003-01-01

    In this paper we present new optical local area networks for fiber-to-the-desk application. The presented networks are expected to provide a solution for bringing optical fiber all the way to computers. To bring the overall implementation costs down, we have based our networks on short-wavelength optical

  7. 4th International Conference on Computer Engineering and Networks

    CERN Document Server

    2015-01-01

    This book aims to examine innovation in the fields of computer engineering and networking. The book covers important emerging topics in computer engineering and networking, and it will help researchers and engineers improve their knowledge of the state of the art in related areas. The book presents papers from the 4th International Conference on Computer Engineering and Networks (CENet2014) held July 19-20, 2014 in Shanghai, China. It covers emerging topics for computer engineering and networking, discusses how to improve productivity by using the latest advanced technologies, and examines innovation in the fields of computer engineering and networking.

  8. Main control computer security model of closed network systems protection against cyber attacks

    Science.gov (United States)

    Seymen, Bilal

    2014-06-01

    This paper presents a model that brings data input/output under control in closed network systems, maintains the system securely, and controls the flow of information through a Main Control Computer, which also protects network traffic against cyber-attacks. The network, which can be controlled single-handedly thanks to the system designed to enable network users to enter data into the system or to extract data from it securely, is intended to minimize security gaps. Moreover, a data input/output record can be kept by means of the user account assigned to each user, and retroactive tracking can be carried out if requested. Because the cyber-security measures that need to be taken for each computer on the network require high cost, this model is intended to provide a cost-effective working environment, provided only that the Main Control Computer has up-to-date hardware.

  9. Research computing in a distributed cloud environment

    Energy Technology Data Exchange (ETDEWEB)

    Fransham, K; Agarwal, A; Armstrong, P; Bishop, A; Charbonneau, A; Desmarais, R; Hill, N; Gable, I; Gaudet, S; Goliath, S; Impey, R; Leavett-Brown, C; Ouellete, J; Paterson, M; Pritchet, C; Penfold-Brown, D; Podaima, W; Schade, D; Sobie, R J, E-mail: fransham@uvic.ca

    2010-11-01

    The recent increase in availability of Infrastructure-as-a-Service (IaaS) computing clouds provides a new way for researchers to run complex scientific applications. However, using cloud resources for a large number of research jobs requires significant effort and expertise. Furthermore, running jobs on many different clouds presents even more difficulty. In order to make it easy for researchers to deploy scientific applications across many cloud resources, we have developed a virtual machine resource manager (Cloud Scheduler) for distributed compute clouds. In response to a user's job submission to a batch system, the Cloud Scheduler manages the distribution and deployment of user-customized virtual machines across multiple clouds. We describe the motivation for and implementation of a distributed cloud using the Cloud Scheduler that is spread across both commercial and dedicated private sites, and present some early results of scientific data analysis using the system.

  10. Research computing in a distributed cloud environment

    Science.gov (United States)

    Fransham, K.; Agarwal, A.; Armstrong, P.; Bishop, A.; Charbonneau, A.; Desmarais, R.; Hill, N.; Gable, I.; Gaudet, S.; Goliath, S.; Impey, R.; Leavett-Brown, C.; Ouellete, J.; Paterson, M.; Pritchet, C.; Penfold-Brown, D.; Podaima, W.; Schade, D.; Sobie, R. J.

    2010-11-01

    The recent increase in availability of Infrastructure-as-a-Service (IaaS) computing clouds provides a new way for researchers to run complex scientific applications. However, using cloud resources for a large number of research jobs requires significant effort and expertise. Furthermore, running jobs on many different clouds presents even more difficulty. In order to make it easy for researchers to deploy scientific applications across many cloud resources, we have developed a virtual machine resource manager (Cloud Scheduler) for distributed compute clouds. In response to a user's job submission to a batch system, the Cloud Scheduler manages the distribution and deployment of user-customized virtual machines across multiple clouds. We describe the motivation for and implementation of a distributed cloud using the Cloud Scheduler that is spread across both commercial and dedicated private sites, and present some early results of scientific data analysis using the system.

  11. Constructing Precisely Computing Networks with Biophysical Spiking Neurons.

    Science.gov (United States)

    Schwemmer, Michael A; Fairhall, Adrienne L; Denéve, Sophie; Shea-Brown, Eric T

    2015-07-15

    While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denéve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denéve, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks
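
    A toy one-dimensional sketch of the stated principle (a neuron emits a spike only when doing so reduces the error between the signal and the network's readout); the decoder weights, decay time constant and test signal are illustrative choices, not the biophysical model developed in the paper.

```python
# Toy 1-D "spike to reduce readout error" rule; constants are illustrative.
import numpy as np

T, dt, tau = 2000, 0.001, 0.05
n_neurons = 20
D = np.where(np.arange(n_neurons) < n_neurons // 2, 0.1, -0.1)  # decoder weights

x = np.sin(2 * np.pi * np.linspace(0, 2, T))   # signal to be represented
r = np.zeros(n_neurons)                         # filtered spike trains
xhat = np.zeros(T)
for t in range(T):
    r *= np.exp(-dt / tau)                      # readout/synaptic decay
    est = D @ r
    # greedy rule: fire the neuron whose spike most reduces the squared error,
    # but only if firing actually reduces it
    errors = (x[t] - (est + D)) ** 2
    i = int(np.argmin(errors))
    if errors[i] < (x[t] - est) ** 2:
        r[i] += 1.0
    xhat[t] = D @ r
print("mean squared readout error:", float(np.mean((x - xhat) ** 2)))
```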

  12. Towards A Novel Environment For Simulation Of Quantum Computing

    Directory of Open Access Journals (Sweden)

    Joanna Patrzyk

    2015-01-01

    Full Text Available In this paper we analyze existing quantum computer simulation techniques and their realizations to minimize the impact of the exponential complexity of simulated quantum computations. As a result of this investigation, we propose a quantum computer simulator with an integrated development environment - QuIDE - supporting the development of algorithms for future quantum computers. The simulator simplifies building and testing quantum circuits and helps to understand quantum algorithms in an efficient way. The development environment provides flexibility of source code editing and ease of graphical building of circuit diagrams. We also describe and analyze the complexity of the algorithms used for simulation and present performance results of the simulator as well as results of its deployment during university classes.
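
    The following is not QuIDE itself, but a minimal state-vector simulation (preparing a Bell state) that illustrates why naive simulation cost grows with 2 to the power of the number of qubits, which is the exponential complexity the paper seeks to mitigate.

```python
# Minimal state-vector simulator: the state has 2**n amplitudes, so memory and
# gate cost grow exponentially with the number of qubits n.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

def apply_1q(state, gate, target, n):
    """Apply a single-qubit gate by building its full 2**n x 2**n operator."""
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, gate if q == target else I)
    return op @ state

def apply_cnot(state, control, target, n):
    """Apply CNOT by permuting amplitudes where the control bit is 1."""
    new = state.copy()
    for idx in range(2 ** n):
        if (idx >> (n - 1 - control)) & 1:
            flipped = idx ^ (1 << (n - 1 - target))
            new[idx] = state[flipped]
    return new

n = 2
state = np.zeros(2 ** n)
state[0] = 1.0                                 # |00>
state = apply_1q(state, H, target=0, n=n)      # Hadamard on qubit 0
state = apply_cnot(state, control=0, target=1, n=n)
print(np.round(state, 3))                      # ~[0.707, 0, 0, 0.707]: Bell state
```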

  13. Second International Conference on Advanced Computing, Networking and Informatics

    CERN Document Server

    Mohapatra, Durga; Konar, Amit; Chakraborty, Aruna

    2014-01-01

    Advanced Computing, Networking and Informatics are three distinct and mutually exclusive disciplines of knowledge with no apparent sharing/overlap among them. However, their convergence is observed in many real world applications, including cyber-security, internet banking, healthcare, sensor networks, cognitive radio, pervasive computing amidst many others. This two-volume proceedings explore the combined use of Advanced Computing and Informatics in the next generation wireless networks and security, signal and image processing, ontology and human-computer interfaces (HCI). The two volumes together include 148 scholarly papers, which have been accepted for presentation from over 640 submissions in the second International Conference on Advanced Computing, Networking and Informatics, 2014, held in Kolkata, India during June 24-26, 2014. The first volume includes innovative computing techniques and relevant research results in informatics with selective applications in pattern recognition, signal/image process...

  14. Constructing Neuronal Network Models in Massively Parallel Environments

    Directory of Open Access Journals (Sweden)

    Tammo Ippen

    2017-05-01

    Full Text Available Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.

  15. Constructing Neuronal Network Models in Massively Parallel Environments.

    Science.gov (United States)

    Ippen, Tammo; Eppler, Jochen M; Plesser, Hans E; Diesmann, Markus

    2017-01-01

    Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.

  16. WaveJava: Wavelet-based network computing

    Science.gov (United States)

    Ma, Kun; Jiao, Licheng; Shi, Zhuoer

    1997-04-01

    Wavelet theory is powerful, but its successful application still needs suitable programming tools. Java is a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multi-threaded, dynamic language. This paper addresses the design and development of a cross-platform software environment for experimenting with and applying wavelet theory. WaveJava, a wavelet class library designed using object-oriented programming, is developed to take advantage of wavelet features such as multi-resolution analysis and parallel processing in network computing. A new application architecture is designed for the net-wide distributed client-server environment. The data are transmitted as multi-resolution packets. At the distributed sites around the net, these data packets undergo matching or recognition processing in parallel. The results are fed back to determine the next operation, so more robust results can be obtained quickly. WaveJava is easy to use and to extend for special applications. This paper gives a solution for a distributed fingerprint information processing system. It also fits other net-based multimedia information processing, such as network libraries, remote teaching and filmless picture archiving and communications.
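
    WaveJava itself is a Java class library; the sketch below only illustrates the multi-resolution packet idea in Python, assuming the third-party PyWavelets package: a coarse approximation is delivered first and refined as detail packets arrive.

```python
# Multi-resolution "packet" illustration using PyWavelets (an assumption, not
# part of WaveJava): decompose a signal, then reconstruct it progressively as
# detail coefficients are received.
import numpy as np
import pywt

signal = np.sin(np.linspace(0, 8 * np.pi, 256)) \
         + 0.1 * np.random.default_rng(0).normal(size=256)

coeffs = pywt.wavedec(signal, "db4", level=3)          # [cA3, cD3, cD2, cD1]
packets = [("approx L3", coeffs[0])] \
          + [(f"detail L{3 - k}", c) for k, c in enumerate(coeffs[1:])]

# Receiver side: start from the coarse packet and add detail packets one by one.
received = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
for level, (name, c) in enumerate(packets[1:], start=1):
    received[level] = c
    reconstruction = pywt.waverec(received, "db4")
    err = np.max(np.abs(reconstruction[: len(signal)] - signal))
    print(f"after adding {name}: max reconstruction error = {err:.4f}")
```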

  17. Teaching Network Security in a Virtual Learning Environment

    Science.gov (United States)

    Bergstrom, Laura; Grahn, Kaj J.; Karlstrom, Krister; Pulkkis, Goran; Astrom, Peik

    2004-01-01

    This article presents a virtual course with the topic network security. The course has been produced by Arcada Polytechnic as a part of the production team Computer Networks, Telecommunication and Telecommunication Systems in the Finnish Virtual Polytechnic. The article begins with an introduction to the evolution of the information security…

  18. Reliability of computer memories in radiation environment

    Directory of Open Access Journals (Sweden)

    Fetahović Irfan S.

    2016-01-01

    Full Text Available The aim of this paper is to examine the radiation hardness of magnetic (Toshiba MK4007 GAL) and semiconductor (AT 27C010 EPROM and AT 28C010 EEPROM) computer memories in a radiation field. The magnetic memories were examined in a neutron radiation field, and the semiconductor memories in a gamma radiation field. The obtained results show a high radiation hardness of the magnetic memories. On the other hand, it has been shown that semiconductor memories are significantly more sensitive and that radiation can lead to significant damage to their functionality. [Project of the Ministry of Science of the Republic of Serbia, no. 171007]

  19. Ubiquitous Computing in Physico-Spatial Environments

    DEFF Research Database (Denmark)

    Dalsgård, Peter; Eriksson, Eva

    2007-01-01

    Interaction design of pervasive and ubiquitous computing (UC) systems must take into account physico-spatial issues as technology is implemented into our physical surroundings. In this paper we discuss how one conceptual framework for understanding interaction in context, Activity Theory (AT), frames the role of space. We point to the fact that AT treats space primarily in terms of analyzing the role of space before designing IT systems and evaluating spatial effects of IT systems in use contexts after the design phase. We consequently identify a gap, in that the role of space is not recognized...

  20. Smart Sensor Network System For Environment Monitoring

    Directory of Open Access Journals (Sweden)

    Javed Ali Baloch

    2012-07-01

    Full Text Available SSN (Smart Sensor Network) systems could be used to monitor buildings with modern infrastructure, plant sites with chemical pollution, horticulture, natural habitats, wastewater management and modern transport systems. To sense attributes of phenomena and make decisions on the basis of the sensed values is the primary goal of such systems. In this paper a smart, spatially aware sensor system is presented: a system that can continuously monitor the network to observe its functionality, trigger alerts to the base station if a change in the system occurs, and provide feedback periodically, on demand or even continuously, depending on the nature of the application. The results of the simulation trials presented in this paper exhibit the performance of Smart Spatially Aware Sensor Networks.

  1. Center for Advanced Energy Studies: Computer Assisted Virtual Environment (CAVE)

    Data.gov (United States)

    Federal Laboratory Consortium — The laboratory contains a four-walled 3D computer assisted virtual environment - or CAVE TM — that allows scientists and engineers to literally walk into their data...

  2. Computational Tool for Aerothermal Environment Around Transatmospheric Vehicles Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The goal of this Project is to develop a high-fidelity computational tool for accurate prediction of aerothermal environment on transatmospheric vehicles. This...

  3. Realistic Modeling of Wireless Network Environments

    Science.gov (United States)

    2015-03-01

    The digital signal processor (DSP) card (Figure 3: DSP Card) provides centralized computation; a signal conversion module is also described.

  4. Network selection, Information filtering and Scalable computation

    Science.gov (United States)

    Ye, Changqing

    This dissertation explores two application scenarios of the sparsity pursuit method on large-scale data sets. The first scenario is classification and regression in analyzing high-dimensional structured data, where predictors correspond to nodes of a given directed graph. This arises, for instance, in the identification of disease genes for Parkinson's disease from a network of candidate genes. In such a situation, the directed graph describes dependencies among the genes, where the directions of edges represent certain causal effects. Key to high-dimensional structured classification and regression is how to utilize dependencies among predictors as specified by the directions of the graph. In this dissertation, we develop a novel method that fully takes into account such dependencies formulated through certain nonlinear constraints. We apply the proposed method to two applications: feature selection in large-margin binary classification and in linear regression. We implement the proposed method through difference convex programming for the cost function and constraints. Finally, theoretical and numerical analyses suggest that the proposed method achieves the desired objectives. An application to disease gene identification is presented. The second application scenario is personalized information filtering, which extracts the information specifically relevant to a user, predicting his/her preference over a large number of items based on the opinions of users who think alike or on the items' content. This problem is cast into the framework of regression and classification, where we introduce novel partial latent models to integrate additional user-specific and content-specific predictors for higher predictive accuracy. In particular, we factorize a user-over-item preference matrix into a product of two matrices, each representing a user's preference and an item preference by users. Then we propose a likelihood method to seek a sparsest latent factorization, from a class of over
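
    A compact sketch of the latent-factorization idea described above (a sparse user-over-item preference matrix factorized into user and item factor matrices); the squared loss, L1 penalty weight and gradient steps are illustrative assumptions, not the dissertation's likelihood method.

```python
# Illustrative low-rank factorization with an L1 sparsity penalty on the factors;
# data and hyperparameters are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, rank = 50, 40, 5
R = rng.integers(1, 6, size=(n_users, n_items)).astype(float)
mask = rng.random((n_users, n_items)) < 0.2          # only 20% of ratings observed

U = 0.1 * rng.normal(size=(n_users, rank))
V = 0.1 * rng.normal(size=(n_items, rank))
lr, lam = 0.01, 0.05                                  # step size and L1 weight

for epoch in range(300):
    E = mask * (U @ V.T - R)                          # error on observed entries only
    gU = E @ V + lam * np.sign(U)                     # L1 penalty promotes sparse factors
    gV = E.T @ U + lam * np.sign(V)
    U -= lr * gU
    V -= lr * gV

rmse = np.sqrt(np.mean((mask * (U @ V.T - R)) ** 2) / np.mean(mask))
print("observed-entry RMSE:", round(float(rmse), 3))
```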

  5. Computational Genetic Regulatory Networks Evolvable, Self-organizing Systems

    CERN Document Server

    Knabe, Johannes F

    2013-01-01

    Genetic Regulatory Networks (GRNs) in biological organisms are primary engines for cells to enact their engagements with environments, via incessant, continually active coupling. In differentiated multicellular organisms, tremendous complexity has arisen in the course of evolution of life on earth. Engineering and science have so far achieved no working system that can compare with this complexity, depth and scope of organization. Abstracting the dynamics of genetic regulatory control to a computational framework in which artificial GRNs in artificial simulated cells differentiate while connected in a changing topology, it is possible to apply Darwinian evolution in silico to study the capacity of such developmental/differentiated GRNs to evolve. In this volume an evolutionary GRN paradigm is investigated for its evolvability and robustness in models of biological clocks, in simple differentiated multicellularity, and in evolving artificial developing 'organisms' which grow and express an ontogeny starting fr...

  6. Home-Network Security Model in Ubiquitous Environment

    OpenAIRE

    Dong-Young Yoo; Jong-Whoi Shin; Jin-Young Choi

    2007-01-01

    Social interest in and demand for the Home-Network have been increasing greatly. Although various services are being introduced to respond to such demands, they can cause serious security problems when linked to an open network such as the Internet. This paper reviews the security requirements to protect service users, under the assumption that the Home-Network environment is connected to the Internet, and then proposes a security model based on these requirements. The proposed security mode...

  7. Managing the Environment : Effects of network ambition on agency performance

    NARCIS (Netherlands)

    Akkerman, A.; Torenvlied, R.

    2011-01-01

    The literature on network management in the public sector reports positive effects of network activity on agency performance. Current studies, however, show no differences between specific types of contacts in an agency's environment. The present article adopts an explorative design to study the

  8. Mobile Sensor Networks for Inspection Tasks in Harsh Industrial Environments

    NARCIS (Netherlands)

    Mulder, Jacob; Wang, Xinyu; Ferwerda, Franke; Cao, Ming

    Recent advances in sensor technology have enabled the fast development of mobile sensor networks operating in various unknown and sometimes hazardous environments. In this paper, we introduce one integrative approach to design, analyze and test distributed control algorithms to coordinate a network

  9. An overview of computer viruses in a research environment

    Science.gov (United States)

    Bishop, Matt

    1991-01-01

    The threat of attack by computer viruses is in reality a very small part of a much more general threat, specifically threats aimed at subverting computer security. Here, computer viruses are examined as malicious logic in a research and development environment. A relation is drawn between the viruses and various models of security and integrity. Current research techniques aimed at controlling the threats posed to computer systems by viruses in particular, and malicious logic in general, are examined. Finally, a brief examination of the vulnerabilities of research and development systems that malicious logic and computer viruses may exploit is undertaken.

  10. Crowdsourcing the nodulation gene network discovery environment.

    Science.gov (United States)

    Li, Yupeng; Jackson, Scott A

    2016-05-26

    The Legumes (Fabaceae) are an economically and ecologically important group of plant species with the conspicuous capacity for symbiotic nitrogen fixation in root nodules, specialized plant organs containing symbiotic microbes. With the aim of understanding the underlying molecular mechanisms leading to nodulation, many efforts are underway to identify nodulation-related genes and determine how these genes interact with each other. In order to accurately and efficiently reconstruct the nodulation gene network, a crowdsourcing platform, CrowdNodNet, was created. The platform implements the jQuery and vis.js JavaScript libraries, so that users are able to interactively visualize and edit the gene network, and easily access information about the network, e.g. gene lists, gene interactions and gene functional annotations. In addition, all the gene information is written on MediaWiki pages, enabling users to edit and contribute to the network curation. Utilizing the continuously updated, collaboratively written, and community-reviewed Wikipedia model, the platform could, in a short time, become a comprehensive knowledge base of nodulation-related pathways. The platform could also be used for other biological processes, and thus has great potential for integrating and advancing our understanding of the functional genomics and systems biology of any process for any species. The platform is available at http://crowd.bioops.info/ , and the source code can be openly accessed at https://github.com/bioops/crowdnodnet under MIT License.

  11. 3rd International Conference on Advanced Computing, Networking and Informatics

    CERN Document Server

    Mohapatra, Durga; Chaki, Nabendu

    2016-01-01

    Advanced Computing, Networking and Informatics are three distinct and mutually exclusive disciplines of knowledge with no apparent sharing/overlap among them. However, their convergence is observed in many real world applications, including cyber-security, internet banking, healthcare, sensor networks, cognitive radio, pervasive computing amidst many others. This two volume proceedings explore the combined use of Advanced Computing and Informatics in the next generation wireless networks and security, signal and image processing, ontology and human-computer interfaces (HCI). The two volumes together include 132 scholarly articles, which have been accepted for presentation from over 550 submissions in the Third International Conference on Advanced Computing, Networking and Informatics, 2015, held in Bhubaneswar, India during June 23–25, 2015.

  12. Modeling and performance analysis for composite network–compute service provisioning in software-defined cloud environments

    Directory of Open Access Journals (Sweden)

    Qiang Duan

    2015-08-01

    Full Text Available The crucial role of networking in Cloud computing calls for a holistic vision of both networking and computing systems that leads to composite network–compute service provisioning. Software-Defined Network (SDN is a fundamental advancement in networking that enables network programmability. SDN and software-defined compute/storage systems form a Software-Defined Cloud Environment (SDCE that may greatly facilitate composite network–compute service provisioning to Cloud users. Therefore, networking and computing systems need to be modeled and analyzed as composite service provisioning systems in order to obtain thorough understanding about service performance in SDCEs. In this paper, a novel approach for modeling composite network–compute service capabilities and a technique for evaluating composite network–compute service performance are developed. The analytic method proposed in this paper is general and agnostic to service implementation technologies; thus is applicable to a wide variety of network–compute services in SDCEs. The results obtained in this paper provide useful guidelines for federated control and management of networking and computing resources to achieve Cloud service performance guarantees.

  13. Purple Computational Environment With Mappings to ACE Requirements for the General Availability User Environment Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Barney, B; Shuler, J

    2006-08-21

    Purple is an Advanced Simulation and Computing (ASC) funded massively parallel supercomputer located at Lawrence Livermore National Laboratory (LLNL). The Purple Computational Environment documents the capabilities and the environment provided for the FY06 LLNL Level 1 General Availability Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Los Alamos National Laboratory, Lawrence Livermore National Laboratory, and Sandia National Laboratories, but also documents needs of the LLNL and Alliance users working in the unclassified environment. Additionally, the Purple Computational Environment maps the provided capabilities to the Trilab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the General Availability user environment capabilities of the ASC community. Appendix A lists these requirements and includes a description of ACE requirements met and those requirements that are not met for each section of this document. The Purple Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the Tri-lab community.

  14. Dynamics of Bottlebrush Networks: A Computational Study

    Science.gov (United States)

    Dobrynin, Andrey; Cao, Zhen; Sheiko, Sergei

    We study the dynamics of deformation of bottlebrush networks using molecular dynamics simulations and theoretical calculations. Analysis of our simulation results shows that the dynamics of bottlebrush network deformation can be described by a Rouse model for polydisperse networks with an effective Rouse time of the bottlebrush network strand, τR = τ0Ns²(Nsc + 1), where Ns is the number-average degree of polymerization of the bottlebrush backbone strands between crosslinks, Nsc is the degree of polymerization of the side chains and τ0 is a characteristic monomeric relaxation time. At time scales t smaller than the Rouse time, t < τR, the stress relaxes through the Rouse modes of the strands, while at longer times, due to the crosslinks, the network response is purely elastic with shear modulus G(t) = G0, where G0 is the equilibrium shear modulus at small deformation. The stress evolution in the bottlebrush networks can be described by a universal function of t/τR. NSF DMR-1409710.
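
    As a worked illustration of the quoted Rouse-time scaling, with assumed values that are not taken from the simulations:

```latex
% Illustrative numbers only: the effect of side chains on the strand Rouse time
% \tau_R = \tau_0 N_s^2 (N_{sc} + 1).
\[
  N_s = 100,\quad N_{sc} = 0:\qquad \tau_R = \tau_0 \times 100^2 \times 1 = 1.0\times 10^{4}\,\tau_0,
\]
\[
  N_s = 100,\quad N_{sc} = 20:\qquad \tau_R = \tau_0 \times 100^2 \times 21 = 2.1\times 10^{5}\,\tau_0,
\]
% so grafting side chains of 20 monomers slows the slowest strand relaxation mode
% by a factor of 21 while leaving the t/\tau_R scaling of the stress response unchanged.
```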

  15. Risk, Privacy, and Security in Computer Networks

    OpenAIRE

    Årnes, Andre

    2006-01-01

    With an increasingly digitally connected society comes complexity, uncertainty, and risk. Network monitoring, incident management, and digital forensics is of increasing importance with the escalation of cybercrime and other network supported serious crimes. New laws and regulations governing electronic communications, cybercrime, and data retention are being proposed, continuously requiring new methods and tools. This thesis introduces a novel approach to real-time network risk assessmen...

  16. SOCIAL NETWORKS AS THE ENVIRONMENT EDUCATION

    Directory of Open Access Journals (Sweden)

    Wojsław Czupryński

    2016-12-01

    Full Text Available The emergence of the global Internet has changed the way the entire human population communicates. The Internet has become a platform on which human societies build their lives, and over the last few years traditional communication has been replaced by social networks. Today, social networks are the subject of many debates concerning their advantages, their disadvantages and what they may bring in the future. Such portals are not only a means of communication, entertainment and spending free time, but also a source of social and humanistic knowledge. At the same time, social media can pose a considerable risk to those who use them. This paper discusses the detrimental effects that social networks can bring. A series of deviant behaviors caused by the use of such portals is also presented; they often become a dysfunctional generator of actions that manifest themselves among the youth. Consequently, there is a need to take action to stop the growth of this phenomenon among young people; the primary activities at this level are prevention and education in the family.

  17. Intrusion Detection System Inside Grid Computing Environment (IDS-IGCE)

    OpenAIRE

    Basappa B. Kodada; Ramesh Nayak; Raghavendra Prabhu; Suresha D

    2012-01-01

    Grid Computing is an important information technology that enables global resource sharing to solve large-scale problems. It is based on networks and enables large-scale aggregation and sharing of computational, data, sensor and other resources across institutional boundaries. The Globus Toolkit integrated with Web services presents OGSA (Open Grid Services Architecture) as the standard service grid architecture. In OGSA, everything is abstracted as a service, including ...

  18. Towards molecular computers that operate in a biological environment

    Science.gov (United States)

    Kahan, Maya; Gil, Binyamin; Adar, Rivka; Shapiro, Ehud

    2008-07-01

    Even though electronic computers are the only computer species we are accustomed to, the mathematical notion of a programmable computer has nothing to do with electronics. In fact, Alan Turing's notional computer [A.M. Turing, On computable numbers, with an application to the Entscheidungsproblem, Proc. Lond. Math. Soc. 42 (1936) 230-265], which marked in 1936 the birth of modern computer science and still stands at its heart, has greater similarity to natural biomolecular machines such as the ribosome and polymerases than to electronic computers. This similarity led to the investigation of DNA-based computers [C.H. Bennett, The thermodynamics of computation - Review, Int. J. Theoret. Phys. 21 (1982) 905-940; L.M. Adleman, Molecular computation of solutions to combinatorial problems, Science 266 (1994) 1021-1024]. Although parallelism, sequence-specific hybridization and storage capacity, inherent to DNA and RNA molecules, can be exploited in molecular computers to solve complex mathematical problems [Q. Ouyang, et al., DNA solution of the maximal clique problem, Science 278 (1997) 446-449; R.J. Lipton, DNA solution of hard computational problems, Science 268 (1995) 542-545; R.S. Braich, et al., Solution of a 20-variable 3-SAT problem on a DNA computer, Science 296 (2002) 499-502; Q. Liu, et al., DNA computing on surfaces, Nature 403 (2000) 175-179; D. Faulhammer, et al., Molecular computation: RNA solutions to chess problems, Proc. Natl. Acad. Sci. USA 97 (2000) 1385-1389; C. Mao, et al., Logical computation using algorithmic self-assembly of DNA triple-crossover molecules, Nature 407 (2000) 493-496; A.J. Ruben, et al., The past, present and future of molecular computing, Nat. Rev. Mol. Cell. Biol. 1 (2000) 69-72], we believe that the more significant potential of molecular computers lies in their ability to interact directly with a biochemical environment such as the bloodstream and living cells. From this perspective, even simple molecular computations may have

  19. Defamation Charges in a Networked Environment.

    Science.gov (United States)

    Ferencz, Susan K.

    1997-01-01

    Considers how civil law might treat claims of defamation arising from computer newsgroup postings. Concludes that newsgroup postings will probably be treated as a hybrid of print and broadcast media, and that newsgroup users will vigorously and aggressively protect freedoms of speech and press. While traditional defenses to defamation charges will…

  20. Computing properties of stable configurations of thermodynamic binding networks

    OpenAIRE

    Breik, Keenan; Prakash, Lakshmi; Thachuk, Chris; Heule, Marijn; Soloveichik, David

    2017-01-01

    Models of molecular computing generally embed computation in kinetics--the specific time evolution of a chemical system. However, if the desired output is not thermodynamically stable, basic physical chemistry dictates that thermodynamic forces will drive the system toward error throughout the computation. The Thermodynamic Binding Network (TBN) model was introduced to formally study how the thermodynamic equilibrium can be made consistent with the desired computation, and it idealizes bindin...

  1. Artificial Neural Network Metamodels of Stochastic Computer Simulations

    Science.gov (United States)

    1994-08-10

  2. Wireless Networks: New Meaning to Ubiquitous Computing.

    Science.gov (United States)

    Drew, Wilfred, Jr.

    2003-01-01

    Discusses the use of wireless technology in academic libraries. Topics include wireless networks; standards (IEEE 802.11); wired versus wireless; why libraries implement wireless technology; wireless local area networks (WLANs); WLAN security; examples of wireless use at Indiana State University and Morrisville College (New York); and useful…

  3. Uncoupled Analysis of Stochastic Reaction Networks in Fluctuating Environments

    Science.gov (United States)

    Zechner, Christoph; Koeppl, Heinz

    2014-01-01

    The dynamics of stochastic reaction networks within cells are inevitably modulated by factors considered extrinsic to the network such as, for instance, the fluctuations in ribosome copy numbers for a gene regulatory network. While several recent studies demonstrate the importance of accounting for such extrinsic components, the resulting models are typically hard to analyze. In this work we develop a general mathematical framework that allows to uncouple the network from its dynamic environment by incorporating only the environment's effect onto the network into a new model. More technically, we show how such fluctuating extrinsic components (e.g., chemical species) can be marginalized in order to obtain this decoupled model. We derive its corresponding process- and master equations and show how stochastic simulations can be performed. Using several case studies, we demonstrate the significance of the approach. PMID:25473849

  4. CFD Optimization on Network-Based Parallel Computer System

    Science.gov (United States)

    Cheung, Samson H.; VanDalsem, William (Technical Monitor)

    1994-01-01

    Combining multiple engineering workstations into a network-based heterogeneous parallel computer allows application of aerodynamic optimization with advanced computational fluid dynamics codes, which is computationally expensive on a mainframe supercomputer. This paper introduces a nonlinear quasi-Newton optimizer designed for this network-based heterogeneous parallel computer using software called Parallel Virtual Machine (PVM). The paper introduces the methodology behind coupling a Parabolized Navier-Stokes flow solver to the nonlinear optimizer. This parallel optimization package has been applied to reduce the wave drag of a body of revolution and a wing/body configuration, with results of 5% to 6% drag reduction.
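
    The sketch below is not the PVM-based package itself; it illustrates the same idea with assumed stand-ins: a quasi-Newton (BFGS) driver whose expensive objective evaluations for the finite-difference gradient are distributed across worker processes, with a cheap analytic function standing in for the flow solver.

```python
# Not the paper's optimizer: an illustrative parallel finite-difference BFGS
# driver; the "drag" objective is a cheap stand-in for a CFD solve.
import numpy as np
from multiprocessing import Pool
from scipy.optimize import minimize

def drag(shape_params):
    """Stand-in for an expensive flow-solver evaluation of wave drag."""
    x = np.asarray(shape_params)
    return float(np.sum((x - 0.3) ** 2) + 0.1 * np.sum(np.sin(5 * x) ** 2))

def parallel_grad(x, pool, h=1e-6):
    """Central-difference gradient with all solver runs evaluated in parallel."""
    points = []
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        points += [x + e, x - e]
    values = pool.map(drag, points)
    return np.array([(values[2 * i] - values[2 * i + 1]) / (2 * h)
                     for i in range(len(x))])

if __name__ == "__main__":
    x0 = np.zeros(4)
    with Pool(4) as pool:
        result = minimize(drag, x0, method="BFGS",
                          jac=lambda x: parallel_grad(x, pool))
    print("optimal shape parameters:", np.round(result.x, 3),
          "drag:", round(result.fun, 5))
```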

  5. Phoebus: Network Middleware for Next-Generation Network Computing

    Energy Technology Data Exchange (ETDEWEB)

    Martin Swany

    2012-06-16

    The Phoebus project investigated algorithms, protocols, and middleware infrastructure to improve end-to-end performance in high speed, dynamic networks. The Phoebus system essentially serves as an adaptation point for networks with disparate capabilities or provisioning. This adaptation can take a variety of forms including acting as a provisioning agent across multiple signaling domains, providing transport protocol adaptation points, and mapping between distributed resource reservation paradigms and the optical network control plane. We have successfully developed the system and demonstrated benefits. The Phoebus system was deployed in Internet2 and in ESnet, as well as in GEANT2, RNP in Brazil and over international links to Korea and Japan. Phoebus is a system that implements a new protocol and associated forwarding infrastructure for improving throughput in high-speed dynamic networks. It was developed to serve the needs of large DOE applications on high-performance networks. The idea underlying the Phoebus model is to embed Phoebus Gateways (PGs) in the network as on-ramps to dynamic circuit networks. The gateways act as protocol translators that allow legacy applications to use dedicated paths with high performance.

  6. Collaborative Service Selection via Ensemble Learning in Mixed Mobile Network Environments

    Directory of Open Access Journals (Sweden)

    Yuyu Yin

    2017-07-01

    Full Text Available Mobile service selection is an important but challenging problem in service and mobile computing. Quality of service (QoS) prediction is a critical step in service selection in 5G network environments. Traditional methods, such as collaborative filtering (CF), suffer from a series of defects, such as failing to handle data sparsity. In mobile network environments, abnormal QoS data are likely to result in inferior prediction accuracy. Unfortunately, these problems have not attracted enough attention, especially in a mixed mobile network environment with different network configurations, generations, or types. An ensemble learning method for predicting missing QoS in 5G network environments is proposed in this paper. There are two key principles: one is the newly proposed similarity computation method for identifying similar neighbors; the other is the extended ensemble learning model for discovering and filtering fake neighbors from the preliminary neighbor set. Moreover, three prediction models are also proposed: two individual models and one combination model. They are used to exploit similar user neighbors and similar service neighbors, respectively. Experimental results conducted on two real-world datasets show our approaches can produce superior prediction accuracy.
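
    A simple sketch of neighbor-based QoS prediction of the kind discussed above; plain Pearson similarity and a fixed threshold stand in for the paper's proposed similarity computation and fake-neighbor filtering.

```python
# Illustrative neighbor-based QoS prediction; data, similarity measure, and the
# filtering threshold are assumptions, not the paper's ensemble method.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_services = 30, 25
qos = rng.uniform(0.1, 2.0, size=(n_users, n_services))      # e.g. response time (s)
observed = rng.random((n_users, n_services)) < 0.3            # sparse observations
Q = np.where(observed, qos, np.nan)

def pearson(u, v):
    common = ~np.isnan(u) & ~np.isnan(v)
    if common.sum() < 3:
        return 0.0
    a, b = u[common], v[common]
    if a.std() == 0 or b.std() == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])

def predict(Q, user, service, k=5, min_sim=0.2):
    sims = np.array([pearson(Q[user], Q[v]) for v in range(Q.shape[0])])
    sims[user] = 0.0
    # keep only sufficiently similar neighbors that observed this service
    candidates = [v for v in np.argsort(-sims)
                  if sims[v] > min_sim and not np.isnan(Q[v, service])][:k]
    if not candidates:
        return float(np.nanmean(Q[:, service]))               # fall back to service mean
    w = sims[candidates]
    return float(np.dot(w, Q[candidates, service]) / w.sum())

print("predicted QoS for user 0, service 3:", round(predict(Q, 0, 3), 3))
```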

  7. Impact of indoor environment on path loss in body area networks.

    Science.gov (United States)

    Hausman, Sławomir; Januszkiewicz, Łukasz

    2014-10-20

    In this paper the influence of an example indoor environment on narrowband radio channel path loss for body area networks operating around 2.4 GHz is investigated using computer simulations and on-site measurements. In contrast to other similar studies, the simulation model included both a numerical human body phantom and its environment: room walls, floor and ceiling. As an example, radio signal attenuation between two different configurations of transceivers with dipole antennas placed in the direct vicinity of a human body (on-body scenario) is analyzed by computer simulations for several types of reflecting environments. In the analyzed case the propagation environments comprised a human body and office room walls. As a reference environment for comparison, free space with only a conducting ground plane, modelling a steel-mesh-reinforced concrete floor, was chosen. The transmitting and receiving antennas were placed in two on-body configurations: chest-back and chest-arm. Path loss vs. frequency simulation results obtained using the Finite Difference Time Domain (FDTD) method and a multi-tissue anthropomorphic phantom were compared to results of measurements taken with a vector network analyzer on a human subject located in an average-size empty cuboidal office room. A comparison of path loss values in the different environment variants gives some qualitative and quantitative insight into the adequacy of a simplified indoor environment model for representing the indoor body area network channel.
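
    For orientation, the free-space path loss commonly used as a comparison baseline in such studies can be computed directly (this is not the FDTD body-area simulation itself; the antenna separations are illustrative):

```python
# Free-space (Friis-style) path-loss reference at 2.4 GHz; distances are
# illustrative on-body antenna separations, not the paper's measurement points.
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

for d in (0.3, 0.5, 1.0):   # metres
    print(f"d = {d} m: FSPL at 2.4 GHz = {fspl_db(d, 2.4e9):5.1f} dB")
```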

  8. Networked Mobilities and Performative Urban Environments

    DEFF Research Database (Denmark)

    Jensen, Ole B.

    Physical mobility has an important cultural dimension to contemporary life. The movement of objects, signs, and people constitutes material sites of networked relationships. However, as an increasing number of mobility practices are making up our everyday life experiences, the movement is much more than a travel from point A to point B. The mobile experiences of the contemporary society are practices that are meaningful and normatively embedded. That is to say, mobility is seen as a cultural phenomenon shaping notions of self and other as well as the relationship to sites and places. Furthermore, this opens a field of explorations into broader issues of democracy, multiple publics, and new mobile (electronic and material) agoras, pointing towards a critical re-interpretation of contemporary politics of space and mobility.

  9. Portable Operating Systems for Network Computers: Distributed Operating Systems Support for Group Communications.

    Science.gov (United States)

    1985-10-31

  10. Computationally Efficient Neural Network Intrusion Security Awareness

    Energy Technology Data Exchange (ETDEWEB)

    Todd Vollmer; Milos Manic

    2009-08-01

    An enhanced version of an algorithm to provide anomaly-based intrusion detection alerts for cyber security state awareness is detailed. A unique aspect is the training of an error back-propagation neural network with intrusion detection rule features to provide a recognition basis. Network packet details are subsequently provided to the trained network to produce a classification. This leverages rule knowledge sets to produce classifications for anomaly-based systems. Several test cases executed on the ICMP protocol revealed a 60% identification rate of true positives. This rate matched the previous work, but 70% less memory was used and the run time was reduced to less than 1 second from 37 seconds.
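
    A sketch in the spirit of the record: an error back-propagation network trained on rule-derived packet features to flag anomalous ICMP traffic. The synthetic features, labels and network size are invented for illustration, not the paper's data or configuration.

```python
# Illustrative back-propagation classifier on invented ICMP-like packet features.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_packets(n, anomalous):
    """Features: payload size, ICMP type, packets/sec from source, TTL."""
    size = rng.normal(1200 if anomalous else 64, 50, n)
    icmp_type = rng.choice([8] if anomalous else [8, 0], n)
    rate = rng.normal(300 if anomalous else 2, 30 if anomalous else 1, n)
    ttl = rng.integers(30, 128, n)
    return np.column_stack([size, icmp_type, rate, ttl])

X = np.vstack([make_packets(500, False), make_packets(500, True)])
y = np.array([0] * 500 + [1] * 500)                 # 1 = anomalous
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X[:800], y[:800])                           # train on rule-style features
print("test accuracy:", round(clf.score(X[800:], y[800:]), 3))
```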

  11. A network-oriented business modeling environment

    Science.gov (United States)

    Bisconti, Cristian; Storelli, Davide; Totaro, Salvatore; Arigliano, Francesco; Savarino, Vincenzo; Vicari, Claudia

    The development of formal models related to the organizational aspects of an enterprise is fundamental when these aspects must be re-engineered and digitalized, especially when the enterprise is involved in the dynamics and value flows of a business network. Business modeling provides an opportunity to synthesize and make business processes, business rules and the structural aspects of an organization explicit, allowing business managers to control their complexity and guide an enterprise through effective decisional and strategic activities. This chapter discusses the main results of the TEKNE project in terms of software components that enable enterprises to configure, store, search and share models of any aspects of their business while leveraging standard and business-oriented technologies and languages to bridge the gap between the world of business people and IT experts and to foster effective business-to-business collaborations.

  12. Evolving ATLAS Computing For Today’s Networks

    CERN Document Server

    Campana, S; The ATLAS collaboration; Jezequel, S; Negri, G; Serfon, C; Ueda, I

    2012-01-01

    The ATLAS computing infrastructure was designed many years ago based on the assumption of rather limited network connectivity between computing centres. ATLAS sites have been organized in a hierarchical model, where only a static subset of all possible network links can be exploited and a static subset of well connected sites (CERN and the T1s) can cover important functional roles such as hosting master copies of the data. The pragmatic adoption of such a simplified approach, compared with a more relaxed scenario interconnecting all sites, was very beneficial during the commissioning of the ATLAS distributed computing system and essential in reducing the operational cost during the first two years of LHC data taking. In the meantime, networks evolved far beyond this initial scenario: while a few countries are still poorly connected with the rest of the WLCG infrastructure, most of the ATLAS computing centres are now efficiently interlinked. Our operational experience in running the computing infrastructure in ...

  13. Networks and Project Work: Alternative Pedagogies for Writing with Computers.

    Science.gov (United States)

    Susser, Bernard

    1993-01-01

    Describes three main uses of computers for writing as a social activity: networking, telecommunications, and project work. Examines advantages and disadvantages of teaching writing on a network. Argues that reports in the literature and the example of an English as a foreign language writing class show that project work shares most of the…

  14. Computer Networking Strategies for Building Collaboration among Science Educators.

    Science.gov (United States)

    Aust, Ronald

    The development and dissemination of science materials can be associated with technical delivery systems such as the Unified Network for Informatics in Teacher Education (UNITE). The UNITE project was designed to investigate ways for using computer networking to improve communications and collaboration among university schools of education and…

  15. Artificial neural networks modeling gene-environment interaction

    Directory of Open Access Journals (Sweden)

    Günther Frauke

    2012-05-01

    Full Text Available Abstract Background Gene-environment interactions play an important role in the etiological pathway of complex diseases. An appropriate statistical method for handling a wide variety of complex situations involving interactions between variables is still lacking, especially when continuous variables are involved. The aim of this paper is to explore the ability of neural networks to model different structures of gene-environment interactions. A simulation study is set up to compare neural networks with standard logistic regression models. Eight different structures of gene-environment interactions are investigated. These structures are characterized by penetrance functions that are based on sigmoid functions or on combinations of linear and non-linear effects of a continuous environmental factor and a genetic factor with main effect or with a masking effect only. Results In our simulation study, neural networks are more successful in modeling gene-environment interactions than logistic regression models. This outperformance is especially pronounced when modeling sigmoid penetrance functions, when distinguishing between linear and nonlinear components, and when modeling masking effects of the genetic factor. Conclusion Our study shows that neural networks are a promising approach for analyzing gene-environment interactions. Especially if no prior knowledge of the correct nature of the relationship between co-variables and response variable is present, neural networks provide a valuable alternative to regression methods that are limited to the analysis of linearly separable data.
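    The record compares neural networks with logistic regression on simulated penetrance functions. The toy simulation below (not the authors' study design; the penetrance function and sample size are invented) illustrates the core difficulty: a logistic model restricted to linear main effects cannot capture a gene-environment interaction generated by a sigmoid penetrance function, whereas a model with the interaction term, or a flexible learner such as a neural network, can:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 5000
        G = rng.integers(0, 2, n)            # binary genetic factor
        E = rng.normal(0.0, 1.0, n)          # continuous environmental factor

        # Hypothetical sigmoid penetrance driven mainly by the G x E interaction.
        penetrance = 1.0 / (1.0 + np.exp(-(-1.0 + 2.5 * G * E)))
        y = rng.binomial(1, penetrance)

        main = np.column_stack([G, E])           # main effects only
        inter = np.column_stack([G, E, G * E])   # main effects plus interaction term

        for name, X in [("main effects only", main), ("with G*E term", inter)]:
            acc = LogisticRegression(max_iter=1000).fit(X, y).score(X, y)
            print(f"{name}: training accuracy {acc:.3f}")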

  16. State of the Art of Network Security Perspectives in Cloud Computing

    Science.gov (United States)

    Oh, Tae Hwan; Lim, Shinyoung; Choi, Young B.; Park, Kwang-Roh; Lee, Heejo; Choi, Hyunsang

    Cloud computing is now regarded as one of the social phenomena that satisfy customers' needs. It is possible that the customers' needs and the primary principle of economy - gaining maximum benefit from minimum investment - are reflected in the realization of cloud computing. We are living in a connected society with a flood of information, and without computers connected to the Internet our activities and work of daily living would be impossible. Cloud computing is able to provide customers with custom-tailored features of application software and user environments based on the customers' needs by adopting on-demand outsourcing of computing resources through the Internet. It also provides cloud computing users with high-end computing power and expensive application software packages, and accordingly the users will access their data and the application software where they are located at the remote system. As the cloud computing system is connected to the Internet, network security issues of cloud computing must be considered before real-world service. In this paper, a survey and issues on network security in cloud computing are discussed from the perspective of real-world service environments.

  17. Fundamentals of computational intelligence neural networks, fuzzy systems, and evolutionary computation

    CERN Document Server

    Keller, James M; Fogel, David B

    2016-01-01

    This book covers the three fundamental topics that form the basis of computational intelligence: neural networks, fuzzy systems, and evolutionary computation. The text focuses on inspiration, design, theory, and practical aspects of implementing procedures to solve real-world problems. While other books in the three fields that comprise computational intelligence are written by specialists in one discipline, this book is co-written by a current or former Editor-in-Chief of IEEE Transactions on Neural Networks and Learning Systems, a former Editor-in-Chief of IEEE Transactions on Fuzzy Systems, and the founding Editor-in-Chief of IEEE Transactions on Evolutionary Computation. The coverage across the three topics is both uniform and consistent in style and notation. Discusses single-layer and multilayer neural networks, radial-basis function networks, and recurrent neural networks. Covers fuzzy set theory, fuzzy relations, fuzzy logic inference, fuzzy clustering and classification, fuzzy measures and fuzz...

  18. Neuromorphic computing applications for network intrusion detection systems

    Science.gov (United States)

    Garcia, Raymond C.; Pino, Robinson E.

    2014-05-01

    What is presented here is a sequence of evolving concepts for network intrusion detection. These concepts start with neuromorphic structures for XOR-based signature matching and conclude with a computationally based network intrusion detection system with an autonomous structuring algorithm. There is evidence that neuromorphic computation for network intrusion detection is fractal in nature under certain conditions. Specifically, the neural structure can take fractal form when simple neural structuring is autonomous. A neural structure is fractal by definition when its fractal dimension exceeds the synaptic matrix dimension. The authors introduce the use of the fractal dimension of the neuromorphic structure as a factor in the autonomous restructuring feedback loop.

  19. Trajectory Based Optimal Segment Computation in Road Network Databases

    DEFF Research Database (Denmark)

    Li, Xiaohui; Ceikute, Vaida; Jensen, Christian S.

    2013-01-01

    Given a road network, a set of existing facilities, and a collection of customer route traversals, an optimal segment query returns the optimal road network segment(s) for a new facility. We propose a practical framework for computing this query, where each route… Two algorithms, shown empirically to be scalable, adopt different approaches to computing the query: algorithm AUG uses graph augmentation, and ITE uses iterative road-network partitioning. Empirical studies with real data sets demonstrate that the algorithms are capable of offering high performance in realistic settings.
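    The optimal segment query described here takes a set of customer route traversals and existing facilities and returns the road segment best suited to a new facility. A naive counting baseline (not the AUG or ITE algorithms of the record; the segment ids and routes below are hypothetical) conveys the idea:

        from collections import Counter

        # Hypothetical data: each route is a list of road-segment ids; facilities
        # mark the segments where existing facilities are located.
        routes = [
            ["s1", "s2", "s3"],
            ["s2", "s3", "s4"],
            ["s5", "s2", "s6"],
        ]
        existing_facilities = {"s4"}

        counts = Counter()
        for route in routes:
            if existing_facilities.intersection(route):
                continue              # route already passes an existing facility
            counts.update(route)      # every segment on the route is a candidate

        best_segment, score = counts.most_common(1)[0]
        print(best_segment, score)    # segment most attractive for a new facility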

  20. Impact of Indoor Environment on Path Loss in Body Area Networks

    Science.gov (United States)

    Hausman, Sławomir; Januszkiewicz, Łukasz

    2014-01-01

    In this paper the influence of an example indoor environment on narrowband radio channel path loss for body area networks operating around 2.4 GHz is investigated using computer simulations and on-site measurements. In contrast to other similar studies, the simulation model included both a numerical human body phantom and its environment—room walls, floor and ceiling. As an example, radio signal attenuation between two different configurations of transceivers with dipole antennas placed in the direct vicinity of a human body (on-body scenario) is analyzed by computer simulations for several types of reflecting environments. In the analyzed case the propagation environments comprised a human body and office room walls. As a reference environment for comparison, free space with only a conducting ground plane, modelling a steel mesh reinforced concrete floor, was chosen. The transmitting and receiving antennas were placed in two on-body configurations: chest–back and chest–arm. Path loss vs. frequency simulation results obtained using the Finite Difference Time Domain (FDTD) method and a multi-tissue anthropomorphic phantom were compared to results of measurements taken with a vector network analyzer with a human subject located in an average-size empty cuboidal office room. A comparison of path loss values in different environment variants gives some qualitative and quantitative insight into the adequacy of a simplified indoor environment model for the indoor body area network channel representation. PMID:25333289

  1. Impact of Indoor Environment on Path Loss in Body Area Networks

    Directory of Open Access Journals (Sweden)

    Sławomir Hausman

    2014-10-01

    Full Text Available In this paper the influence of an example indoor environment on narrowband radio channel path loss for body area networks operating around 2.4 GHz is investigated using computer simulations and on-site measurements. In contrast to other similar studies, the simulation model included both a numerical human body phantom and its environment—room walls, floor and ceiling. As an example, radio signal attenuation between two different configurations of transceivers with dipole antennas placed in the direct vicinity of a human body (on-body scenario) is analyzed by computer simulations for several types of reflecting environments. In the analyzed case the propagation environments comprised a human body and office room walls. As a reference environment for comparison, free space with only a conducting ground plane, modelling a steel mesh reinforced concrete floor, was chosen. The transmitting and receiving antennas were placed in two on-body configurations: chest–back and chest–arm. Path loss vs. frequency simulation results obtained using the Finite Difference Time Domain (FDTD) method and a multi-tissue anthropomorphic phantom were compared to results of measurements taken with a vector network analyzer with a human subject located in an average-size empty cuboidal office room. A comparison of path loss values in different environment variants gives some qualitative and quantitative insight into the adequacy of a simplified indoor environment model for the indoor body area network channel representation.

  2. Classification and Analysis of Computer Network Traffic

    DEFF Research Database (Denmark)

    Bujlow, Tomasz

    2014-01-01

    Traffic monitoring and analysis can be done for multiple different reasons: to investigate the usage of network resources, assess the performance of network applications, adjust Quality of Service (QoS) policies in the network, log the traffic to comply with the law, or create realistic models… for traffic classification, which can be used for nearly real-time processing of big amounts of data using affordable CPU and memory resources. Other questions are related to methods for real-time estimation of the application Quality of Service (QoS) level based on the results obtained by the traffic… classifier. This thesis is focused on topics connected with traffic classification and analysis, while the work on methods for QoS assessment is limited to defining the connections with the traffic classification and proposing a general algorithm. We introduced the already known methods for traffic…

  3. Active system area networks for data intensive computations. Final report

    Energy Technology Data Exchange (ETDEWEB)

    None

    2002-04-01

    The goal of the Active System Area Networks (ASAN) project is to develop hardware and software technologies for the implementation of active system area networks (ASANs). The use of the term "active" refers to the ability of the network interfaces to perform application-specific as well as system level computations in addition to their traditional role of data transfer. This project adopts the view that the network infrastructure should be an active computational entity capable of supporting certain classes of computations that would otherwise be performed on the host CPUs. The result is a unique network-wide programming model where computations are dynamically placed within the host CPUs or the NIs depending upon the quality of service demands and network/CPU resource availability. The project seeks to demonstrate that such an approach is a better match for data intensive network-based applications and that the advent of low-cost powerful embedded processors and configurable hardware makes such an approach economically viable and desirable.

  4. Console Networks for Major Computer Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ophir, D; Shepherd, B; Spinrad, R J; Stonehill, D

    1966-07-22

    A concept for interactive time-sharing of a major computer system is developed in which satellite computers mediate between the central computing complex and the various individual user terminals. These techniques allow the development of a satellite system substantially independent of the details of the central computer and its operating system. Although the user terminals' roles may be rich and varied, the demands on the central facility are merely those of a tape drive or similar batched information transfer device. The particular system under development provides service for eleven visual display and communication consoles, sixteen general purpose, low rate data sources, and up to thirty-one typewriters. Each visual display provides a flicker-free image of up to 4000 alphanumeric characters or tens of thousands of points by employing a swept raster picture generating technique directly compatible with that of commercial television. Users communicate either by typewriter or a manually positioned light pointer.

  5. Electromagnetic field computation by network methods

    CERN Document Server

    Felsen, Leopold B; Russer, Peter

    2009-01-01

    This monograph proposes a systematic and rigorous treatment of electromagnetic field representations in complex structures. The book presents new strong models by combining important computational methods. This is the last book of the late Leopold Felsen.

  6. Realistic computer network simulation for network intrusion detection dataset generation

    Science.gov (United States)

    Payer, Garrett

    2015-05-01

    The KDD-99 Cup dataset is dead. While it can continue to be used as a toy example, the age of this dataset makes it all but useless for intrusion detection research and data mining. Many of the attacks used within the dataset are obsolete and do not reflect the features important for intrusion detection in today's networks. Creating a new dataset encompassing a large cross section of the attacks found on the Internet today could be useful, but would eventually fall to the same problem as the KDD-99 Cup; its usefulness would diminish after a period of time. To continue research into intrusion detection, the generation of new datasets needs to be as dynamic and as quick as the attacker. Simply examining existing network traffic and using domain experts such as intrusion analysts to label traffic is inefficient, expensive, and not scalable. The only viable methodology is simulation using technologies including virtualization, attack-toolsets such as Metasploit and Armitage, and sophisticated emulation of threat and user behavior. Simulating actual user behavior and network intrusion events dynamically not only allows researchers to vary scenarios quickly, but enables online testing of intrusion detection mechanisms by interacting with data as it is generated. As new threat behaviors are identified, they can be added to the simulation to make quicker determinations as to the effectiveness of existing and ongoing network intrusion technology, methodology and models.

  7. 1st International Conference on Signal, Networks, Computing, and Systems

    CERN Document Server

    Mohapatra, Durga; Nagar, Atulya; Sahoo, Manmath

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented in the first International Conference on Signal, Networks, Computing, and Systems (ICSNCS 2016) held at Jawaharlal Nehru University, New Delhi, India during February 25–27, 2016. The book is organized into two volumes and primarily focuses on theory and applications in the broad areas of communication technology, computer science and information security. The book aims to bring together the latest scientific research works of academic scientists, professors, research scholars and students in the areas of signal, networks, computing and systems detailing the practical challenges encountered and the solutions adopted.

  8. Predicting user movements in heterogeneous indoor environments by reservoir computing

    OpenAIRE

    Bacciu, Davide; Barsocchi, Paolo; Chessa, Stefano; Gallicchio, Claudio; Micheli, Alessio

    2011-01-01

    Anticipating user localization by making accurate predictions on its indoor movement patterns is a fundamental challenge for achieving higher degrees of personalization and reactivity in smart-home environments. We propose an approach to real-time movement forecasting founded on the efficient Reservoir Computing paradigm, predicting user movements based on streams of Received Signal Strengths collected by wireless motes distributed in the home environment. The ability of the system to genera...

  9. Dynamical Systems Theory for Transparent Symbolic Computation in Neuronal Networks

    OpenAIRE

    Carmantini, Giovanni Sirio

    2017-01-01

    In this thesis, we explore the interface between symbolic and dynamical system computation, with particular regard to dynamical system models of neuronal networks. In doing so, we adhere to a definition of computation as the physical realization of a formal system, where we say that a dynamical system performs a computation if a correspondence can be found between its dynamics on a vectorial space and the formal system’s dynamics on a symbolic space. Guided by this definition, we characterize...

  10. CX: A Scalable, Robust Network for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Peter Cappello

    2002-01-01

    Full Text Available CX, a network-based computational exchange, is presented. The system's design integrates variations of ideas from other researchers, such as work stealing, non-blocking tasks, eager scheduling, and space-based coordination. The object-oriented API is simple, compact, and cleanly separates application logic from the logic that supports interprocess communication and fault tolerance. Computations, of course, run to completion in the presence of computational hosts that join and leave the ongoing computation. Such hosts, or producers, use task caching and prefetching to overlap computation with interprocessor communication. To break a potential task server bottleneck, a network of task servers is presented. Even though task servers are envisioned as reliable, the self-organizing, scalable network of n servers, described as a sibling-connected height-balanced fat tree, tolerates a sequence of n-1 server failures. Tasks are distributed throughout the server network via a simple "diffusion" process. CX is intended as a test bed for research on automated silent auctions, reputation services, authentication services, and bonding services. CX also provides a test bed for algorithm research into network-based parallel computation.

  11. Signaling networks: information flow, computation, and decision making.

    Science.gov (United States)

    Azeloglu, Evren U; Iyengar, Ravi

    2015-04-01

    Signaling pathways come together to form networks that connect receptors to many different cellular machines. Such networks not only receive and transmit signals but also process information. The complexity of these networks requires the use of computational models to understand how information is processed and how input-output relationships are determined. Two major computational approaches used to study signaling networks are graph theory and dynamical modeling. Both approaches are useful; network analysis (application of graph theory) helps us understand how the signaling network is organized and what its information-processing capabilities are, whereas dynamical modeling helps us determine how the system changes in time and space upon receiving stimuli. Computational models have helped us identify a number of emergent properties that signaling networks possess. Such properties include ultrasensitivity, bistability, robustness, and noise-filtering capabilities. These properties endow cell-signaling networks with the ability to ignore small or transient signals and/or amplify signals to drive cellular machines that spawn numerous physiological functions associated with different cell states. Copyright © 2015 Cold Spring Harbor Laboratory Press; all rights reserved.

  12. Liner shipping hub network design in a competitive environment

    DEFF Research Database (Denmark)

    Gelareh, Shahin; Nickel, Stefan; Pisinger, David

    2010-01-01

    A mixed integer programming formulation is proposed for hub-and-spoke network design in a competitive environment. It addresses the competition between a newcomer liner service provider and an existing dominating operator, both operating on hub-and-spoke networks. The newcomer company maximizes its market share—which depends on the service time and transportation cost—by locating a predefined number of hubs at candidate ports and designing its network. While general-purpose solvers do not solve instances of even small size, an accelerated Lagrangian method combined with a primal heuristic obtains…

  13. Liner Shipping Hub Network Design in a Competitive Environment

    DEFF Research Database (Denmark)

    Gelareh, Shahin; Nickel, Stefan; Pisinger, David

    A new mixed integer programming formulation is proposed for hub-and-spoke network design in a competitive environment. It addresses competition between a newcomer liner service provider and an alliance, both operating on hub-and-spoke networks. The newcomer company maximizes its market share — proportional to service time and transportation cost — by locating a predefined number of hubs at candidate ports and designing its network. While general-purpose solvers do not solve instances of even small size, an accelerated Lagrangian method coupled with a primal heuristic obtains very good bounds. Our…

  14. Integrating Network Management for Cloud Computing Services

    Science.gov (United States)

    2015-06-01

    DeviceConfigIsControllable is calculated based on whether the device is powered up, whether the device is reachable via SSH/Telnet from the management network… lines of C# and C++ code, plus a number of internal libraries. At its core, it is a highly-available RESTful web service with persistent storage. Below…

  15. Propagation models for computing biochemical reaction networks

    OpenAIRE

    Henzinger, Thomas A; Mateescu, Maria

    2011-01-01

    We introduce propagation models, a formalism designed to support general and efficient data structures for the transient analysis of biochemical reaction networks. We give two use cases for propagation abstract data types: the uniformization method and numerical integration. We also sketch an implementation of a propagation abstract data type, which uses abstraction to approximate states.

  16. Predictive Control of Networked Multiagent Systems via Cloud Computing.

    Science.gov (United States)

    Liu, Guo-Ping

    2017-01-18

    This paper studies the design and analysis of networked multiagent predictive control systems via cloud computing. A cloud predictive control scheme for networked multiagent systems (NMASs) is proposed to achieve consensus and stability simultaneously and to compensate for network delays actively. The design of the cloud predictive controller for NMASs is detailed. The analysis of the cloud predictive control scheme gives the necessary and sufficient conditions of stability and consensus of closed-loop networked multiagent control systems. The proposed scheme is verified to characterize the dynamical behavior and control performance of NMASs through simulations. The outcome provides a foundation for the development of cooperative and coordinative control of NMASs and its applications.
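    The record does not give the controller equations, but a common way a networked predictive controller compensates a known delay is to roll the last received state forward through the buffered control inputs before computing the feedback action. A generic sketch under that assumption (the model, gain and delay below are illustrative, not taken from the paper):

        import numpy as np

        # Hypothetical discrete-time model x_{k+1} = A x_k + B u_k.
        A = np.array([[1.0, 0.1], [0.0, 1.0]])
        B = np.array([[0.0], [0.1]])
        K = np.array([[2.0, 1.5]])        # illustrative stabilising state-feedback gain
        d = 3                             # network delay in steps (assumed known)

        def delayed_feedback(x_delayed, u_buffer):
            """Predict the current state across the delay, then apply u = -K x_pred."""
            x_pred = x_delayed.copy()
            for u in u_buffer[-d:]:       # inputs applied while the measurement was in transit
                x_pred = A @ x_pred + B @ u
            return -K @ x_pred

        # Example use with a made-up delayed measurement and input history.
        x_delayed = np.array([[1.0], [0.0]])
        u_buffer = [np.array([[0.0]])] * d
        print(delayed_feedback(x_delayed, u_buffer))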

  17. Secure Enclaves: An Isolation-centric Approach for Creating Secure High Performance Computing Environments

    Energy Technology Data Exchange (ETDEWEB)

    Aderholdt, Ferrol [Tennessee Technological Univ., Cookeville, TN (United States); Caldwell, Blake A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hicks, Susan Elaine [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Koch, Scott M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Naughton, III, Thomas J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Pelfrey, Daniel S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Pogge, James R [Tennessee Technological Univ., Cookeville, TN (United States); Scott, Stephen L [Tennessee Technological Univ., Cookeville, TN (United States); Shipman, Galen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Sorrillo, Lawrence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-01-01

    High performance computing environments are often used for a wide variety of workloads ranging from simulation, data transformation and analysis, and complex workflows to name just a few. These systems may process data at various security levels but in so doing are often enclaved at the highest security posture. This approach places significant restrictions on the users of the system even when processing data at a lower security level and exposes data at higher levels of confidentiality to a much broader population than otherwise necessary. The traditional approach of isolation, while effective in establishing security enclaves poses significant challenges for the use of shared infrastructure in HPC environments. This report details current state-of-the-art in virtualization, reconfigurable network enclaving via Software Defined Networking (SDN), and storage architectures and bridging techniques for creating secure enclaves in HPC environments.

  18. Low Computational Complexity Network Coding For Mobile Networks

    DEFF Research Database (Denmark)

    Heide, Janus

    2012-01-01

    …flow coding technique. One of the key challenges of this technique is its inherent computational complexity, which can lead to high computational load and energy consumption, in particular on the mobile platforms that are the target platform in this work. To increase the coding throughput several… library and will be available for researchers and students in the future. Chapter 1 introduces motivating examples and the state of the art when this work commenced. In Chapter 2 selected publications are presented and how their content is related. Chapter 3 presents the main outcome of the work and briefly…

  19. Automation of multi-agent control for complex dynamic systems in heterogeneous computational network

    Science.gov (United States)

    Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan

    2017-01-01

    The rapid progress of high-performance computing entails new challenges related to solving large scientific problems for various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue, and control systems that operate in networks are especially affected by it. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automation of the problem solving. Advantages of the proposed approach are demonstrated on the example of parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automation of the multi-agent control of such systems in parallel mode with various degrees of detail.

  20. Modelling Mobility in Mobile AD-HOC Network Environments ...

    African Journals Online (AJOL)

    We show how to implement the random waypoint mobility model for ad-hoc networks without pausing, through a more efficient and reliable computer simulation, using MATrix LABoratory 7.5.0 (R2007b). Simulation results obtained verify the correctness of the model. Keywords: stationary, random waypoint, simulation, ...
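    A minimal Python sketch of the random waypoint model without pause times, the model the record implements (the original work used MATLAB 7.5.0; the area size, speed range and step length below are arbitrary choices):

        import numpy as np

        rng = np.random.default_rng(1)
        area = np.array([1000.0, 1000.0])      # simulation area in metres (assumed)
        v_min, v_max = 1.0, 20.0               # speed range in m/s (assumed)
        dt, steps = 1.0, 3600                  # time step and number of steps

        pos = rng.uniform(0, area)             # initial node position
        trace = [pos.copy()]
        waypoint = rng.uniform(0, area)
        speed = rng.uniform(v_min, v_max)

        for _ in range(steps):
            direction = waypoint - pos
            dist = np.linalg.norm(direction)
            if dist < speed * dt:              # waypoint reached: pick a new one immediately
                pos = waypoint                 # (no pause time, as in the record)
                waypoint = rng.uniform(0, area)
                speed = rng.uniform(v_min, v_max)
            else:
                pos = pos + direction / dist * speed * dt
            trace.append(pos.copy())

        print(len(trace), trace[-1])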

  1. INTELLIGENT MULTI-AGENT PLATFORM WITHIN COLLABORATIVE NETWORKED ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Adina-Georgeta CREŢAN

    2016-06-01

    Full Text Available This paper proposes an agent-based intelligent platform to model and support parallel and concurrent negotiations among organizations acting in the same industrial market. The underlying complexity is to model the dynamic environment where multi-attribute and multi-participant negotiations are racing over a set of heterogeneous resources. The metaphor Interaction Abstract Machines (IAMs) is used to model the parallelism and the non-deterministic aspects of the negotiation processes that occur in a Collaborative Networked Environment.

  2. AGENT-BASED NEGOTIATION PLATFORM IN COLLABORATIVE NETWORKED ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Adina-Georgeta CREȚAN

    2014-05-01

    Full Text Available This paper proposes an agent-based platform to model and support parallel and concurrent negotiations among organizations acting in the same industrial market. The underlying complexity is to model the dynamic environment where multi-attribute and multi-participant negotiations are racing over a set of heterogeneous resources. The metaphor Interaction Abstract Machines (IAMs) is used to model the parallelism and the non-deterministic aspects of the negotiation processes that occur in a Collaborative Networked Environment.

  3. Field test of wireless sensor network in the nuclear environment

    Energy Technology Data Exchange (ETDEWEB)

    Li, L., E-mail: lil@aecl.ca [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada); Wang, Q.; Bari, A. [Univ. of Western Ontario, London, Ontario (Canada); Deng, C.; Chen, D. [Univ. of Electronic Science and Technology of China, Chengdu, Sichuan (China); Jiang, J. [Univ. of Western Ontario, London, Ontario (Canada); Alexander, Q.; Sur, B. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada)

    2014-06-15

    Wireless sensor networks (WSNs) are appealing options for the health monitoring of nuclear power plants due to their low cost and flexibility. Before they can be used in highly regulated nuclear environments, their reliability in the nuclear environment and compatibility with existing devices have to be assessed. In situ electromagnetic interference tests, wireless signal propagation tests, and nuclear radiation hardness tests conducted on candidate WSN systems at AECL Chalk River Labs are presented. The results are favourable to WSN in nuclear applications. (author)

  4. Computational path planner for product assembly in complex environments

    Science.gov (United States)

    Shang, Wei; Liu, Jianhua; Ning, Ruxin; Liu, Mi

    2013-03-01

    Assembly path planning is a crucial problem in assembly related design and manufacturing processes. Sampling based motion planning algorithms are used for computational assembly path planning. However, the performance of such algorithms may degrade considerably in environments with complex product structure, narrow passages or other challenging scenarios. A computational path planner for automatic assembly path planning in complex 3D environments is presented. The global planning process is divided into three phases based on the environment, and specific algorithms are proposed and utilized in each phase to solve the challenging issues. A novel ray test based stochastic collision detection method is proposed to evaluate the intersection between two polyhedral objects. This method avoids fake collisions in conventional methods and relaxes the geometric constraint when a part has to be removed with surface contact with other parts. A refined history based rapidly-exploring random tree (RRT) algorithm, which biases the growth of the tree based on its planning history, is proposed and employed in the planning phase where the path is simple but the space is highly constrained. A novel adaptive RRT algorithm is developed for path planning problems with challenging scenarios and uncertain environments. With extending values assigned to each tree node and extending schemes applied, the tree can adapt its growth to explore complex environments more efficiently. Experiments on the key algorithms are carried out and comparisons are made between the conventional path planning algorithms and the presented ones. The comparison results show that, based on the proposed algorithms, the path planner can compute assembly paths in challenging complex environments more efficiently and with a higher success rate. This research provides references for the study of computational assembly path planning in complex environments.
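    The planner builds on rapidly-exploring random trees. A plain 2D RRT sketch (not the history-based or adaptive variants proposed in the record; the obstacle, bounds and parameters are made up) shows the basic sampling-and-steering loop:

        import random, math

        obstacles = [((4, 4), 1.5)]                 # hypothetical circular obstacles: (centre, radius)
        start, goal = (0.0, 0.0), (9.0, 9.0)
        step, max_iter, goal_tol = 0.5, 5000, 0.5

        def collision_free(p):
            return all(math.dist(p, c) > r for c, r in obstacles)

        def steer(a, b):
            """Move from a towards b by at most one step."""
            d = math.dist(a, b)
            if d <= step:
                return b
            t = step / d
            return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

        random.seed(0)
        nodes, parent = [start], {start: None}
        for _ in range(max_iter):
            sample = goal if random.random() < 0.05 else (random.uniform(0, 10), random.uniform(0, 10))
            nearest = min(nodes, key=lambda n: math.dist(n, sample))
            new = steer(nearest, sample)
            if not collision_free(new):
                continue
            nodes.append(new)
            parent[new] = nearest
            if math.dist(new, goal) < goal_tol:     # reconstruct the path back to the start
                path = [new]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                print("path found with", len(path), "nodes")
                break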

  5. Human-Computer Interaction (HCI) in Educational Environments: Implications of Understanding Computers as Media.

    Science.gov (United States)

    Berg, Gary A.

    2000-01-01

    Reviews literature in the field of human-computer interaction (HCI) as it applies to educational environments. Topics include the origin of HCI; human factors; usability; computer interface design; goals, operations, methods, and selection (GOMS) models; command language versus direct manipulation; hypertext; visual perception; interface…

  6. Wearable Notification via Dissemination Service in a Pervasive Computing Environment

    Science.gov (United States)

    2015-09-01

    …electricity to computing. He explains that hundreds of volts coursing through wires in walls may have been intimidating at one time, but are now accepted… XE22 and Moto 360 smartwatch were used for testing purposes. Android Studio 0.8.6 was used as an integrated development environment (IDE) for…

  7. Student Perspectives of Computer Literacy Education in an International Environment

    Science.gov (United States)

    Vasilache, Simona

    2016-01-01

    Computer literacy education is an integral part of early university education (it often starts at the high school level). A wide variety of university course structures and teaching styles exist and, at the same time, the knowledge levels of incoming students are varied. This issue is even more pressing in an international environment. This paper…

  8. Computer modeling of dosimetric pattern in aquatic environment of ...

    African Journals Online (AJOL)

    The dose distribution functions for the three sources of radiation in the environment have been reviewed. The model representing the geometry of aquatic organisms has been employed in computationally solving for the dose rates to aquatic organisms, with emphasis on the coastal areas of Nigeria where oil exploration ...

  9. Semiotics of Interactive and Manipulative Graphics in Computer Learning Environments.

    Science.gov (United States)

    Levonen, Jarmo J.

    1995-01-01

    Proposes that a semiotic approach can be used as a supplementary method in assessing the denotations and connotations of the signs and their relationships in computer learning environments. Utilizes the semiotic framework to study visual images, especially how multiple, interactive, and manipulative statistical representations affect the…

  10. A Matchmaking Strategy Of Mixed Resource On Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Wisam Elshareef

    2015-08-01

    Full Text Available Abstract Today cloud computing has become a key technology for online allotment of computing resources and online storage of user data at a lower cost, where computing resources are available all the time over the Internet on a pay-per-use basis. Recently there is a growing need for resource management strategies in a cloud computing environment that encompass both end-user satisfaction and a high job submission throughput with appropriate scheduling. One of the major and essential issues in resource management is the matchmaking that allocates incoming tasks to suitable virtual machines. The main objective of this paper is to propose a matchmaking strategy between the incoming requests and the various resources in the cloud environment in order to satisfy the requirements of users and to balance the workload on resources. Load balancing is an important aspect of resource management in a cloud computing environment. This paper therefore proposes a dynamic weight active monitor (DWAM) load balancing algorithm, which allocates incoming requests on the fly to all available virtual machines in an efficient manner in order to achieve better performance parameters such as response time, processing time and resource utilization. The feasibility of the proposed algorithm is analyzed using the CloudSim simulator, which proves the superiority of the proposed DWAM algorithm over its counterparts in the literature. Simulation results demonstrate that the proposed algorithm dramatically improves response time and data processing time and makes better use of resources compared with the Active Monitor and VM-assign algorithms.
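    The DWAM details are specific to the paper, but the core matchmaking idea, assigning each incoming request to the virtual machine with the most spare weighted capacity, can be sketched generically (the VM weights and request costs below are invented):

        # Hypothetical VM table: dynamic weight (relative capacity) and current load.
        vms = {
            "vm1": {"weight": 4.0, "load": 0.0},
            "vm2": {"weight": 2.0, "load": 0.0},
            "vm3": {"weight": 1.0, "load": 0.0},
        }

        def assign(request_cost):
            """Pick the VM with the lowest load-to-weight ratio and charge it the request cost."""
            vm_id = min(vms, key=lambda v: vms[v]["load"] / vms[v]["weight"])
            vms[vm_id]["load"] += request_cost
            return vm_id

        incoming = [1.0, 2.0, 1.5, 0.5, 3.0, 1.0]
        for cost in incoming:
            print(cost, "->", assign(cost))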

  11. Development of Computer Science Disciplines - A Social Network Analysis Approach

    CERN Document Server

    Pham, Manh Cuong; Jarke, Matthias

    2011-01-01

    In contrast to many other scientific disciplines, computer science considers conference publications. Conferences have the advantage of providing fast publication of papers and of bringing researchers together to present and discuss the paper with peers. Previous work on knowledge mapping focused on the map of all sciences or a particular domain based on ISI published JCR (Journal Citation Report). Although this data covers most of important journals, it lacks computer science conference and workshop proceedings. That results in an imprecise and incomplete analysis of the computer science knowledge. This paper presents an analysis on the computer science knowledge network constructed from all types of publications, aiming at providing a complete view of computer science research. Based on the combination of two important digital libraries (DBLP and CiteSeerX), we study the knowledge network created at journal/conference level using citation linkage, to identify the development of sub-disciplines. We investiga...

  12. FY 1999 Blue Book: Computing, Information, and Communications: Networked Computing for the 21st Century

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — U.S.research and development R and D in computing, communications, and information technologies has enabled unprecedented scientific and engineering advances,...

  13. GATE Monte Carlo simulation in a cloud computing environment

    Science.gov (United States)

    Rowedder, Blake Austin

    The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be significantly reduced to clinically feasible levels without the sizable investment of a local high performance cluster. This study investigated a reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53 minute long simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.
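    The record reports that runtime versus cluster size followed an inverse power model. A quick sketch of fitting such a model, T(n) = a * n^(-b), to measured runtimes; only the 1-node (53 min) and 20-node (3.1 min) points come from the record, the intermediate values below are made up:

        import numpy as np
        from scipy.optimize import curve_fit

        # (cluster size, runtime in minutes); intermediate points are hypothetical.
        nodes = np.array([1, 2, 5, 10, 20], dtype=float)
        runtime = np.array([53.0, 27.5, 11.8, 6.2, 3.1])

        def inverse_power(n, a, b):
            return a * n ** (-b)

        (a, b), _ = curve_fit(inverse_power, nodes, runtime, p0=(50.0, 1.0))
        print(f"T(n) ~= {a:.1f} * n^(-{b:.2f})")
        print("predicted runtime on 40 nodes:", inverse_power(40.0, a, b))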

  14. Managing Complex Battlespace Environments Using Attack the Network Methodologies

    DEFF Research Database (Denmark)

    Mitchell, Dr. William L.

    This paper examines the last 8 years of development and application of Attack the Network (AtN) intelligence methodologies for creating shared situational understanding of complex battlespace environments and the development of deliberate targeting frameworks. It will present a short history…, including their possible application at the national security level for managing longer strategic endeavors.

  15. The dynamic wave expansion neural network model for robot motion planning in time-varying environments.

    Science.gov (United States)

    Lebedev, Dmitry V; Steil, Jochen J; Ritter, Helge J

    2005-04-01

    We introduce a new type of neural network--the dynamic wave expansion neural network (DWENN)--for path generation in a dynamic environment for both mobile robots and robotic manipulators. Our model is parameter-free, computationally efficient, and its complexity does not explicitly depend on the dimensionality of the configuration space. We give a review of existing neural networks for trajectory generation in a time-varying domain, which are compared to the presented model. We demonstrate several representative simulative comparisons as well as the results of long-run comparisons in a number of randomly-generated scenes, which reveal that the proposed model yields dominantly shorter paths, especially in highly-dynamic environments.
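    DWENN generalizes the classical wave-expansion (wavefront) idea to time-varying environments. A minimal static wavefront planner on a grid (not the dynamic neural model itself, and the map below is invented) illustrates the underlying principle: expand a wave of increasing values outward from the target, then descend those values from the robot's cell:

        from collections import deque

        grid = [                      # 0 = free, 1 = obstacle (hypothetical map)
            [0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0],
            [0, 1, 1, 0],
        ]
        start, goal = (3, 0), (0, 0)
        rows, cols = len(grid), len(grid[0])

        # Breadth-first wave expansion from the goal assigns each free cell its distance-to-goal.
        wave = {goal: 0}
        queue = deque([goal])
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in wave:
                    wave[(nr, nc)] = wave[(r, c)] + 1
                    queue.append((nr, nc))

        # The path follows strictly decreasing wave values from the start to the goal
        # (assumes the start is reachable, which holds for this example map).
        path, cell = [start], start
        while cell != goal:
            cell = min((n for n in ((cell[0]+1, cell[1]), (cell[0]-1, cell[1]),
                                    (cell[0], cell[1]+1), (cell[0], cell[1]-1)) if n in wave),
                       key=wave.get)
            path.append(cell)
        print(path)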

  16. Dynamic Defensive Posture for Computer Network Defence

    Science.gov (United States)

    2006-12-01

    …algorithms for ranking the severity of attacks on the network and mechanisms for assigning a value to the elements… power outages and social engineering attacks. Because it has such a large knowledge base on which to draw, it can reason very thoroughly about network… service attacks, eavesdropping and sniffing attacks on data in transit, or data tampering; more complex still would be models of social engineering…

  17. Proposal of an Effective Computation Environment for the Traveling Salesman Problem Using Cloud Computing

    Science.gov (United States)

    Mizuno, Shinya; Iwamoto, Shogo; Yamaki, Naokazu

    Various methods have been proposed to solve the traveling salesman problem, referred to as the TSP. In order to solve the TSP, the cost metric (e.g., the travel time and distance) between nodes is needed. As we do not always have a specific criterion for the cost metric, we propose using a computation environment that is used all over the world: Google Maps. We think a cost metric obtained from Google Maps is a good, impartial value with little room for variation, making it easier and more efficient to make map information visible. Moreover, a scalable computation environment can be prepared by using cloud computing technology. We can even expand the TSP and calculate routes taken by multiple people. The numerical results show this computation environment to be effective.
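    A small sketch of the proposed setup: a travel-time matrix between nodes (in practice filled from a mapping service such as Google Maps; the values below are invented) combined with a simple nearest-neighbour TSP heuristic:

        # Hypothetical travel-time matrix in minutes, e.g. obtained from a mapping API.
        travel_time = [
            [0, 12, 30, 22],
            [12, 0, 17, 25],
            [30, 17, 0, 9],
            [22, 25, 9, 0],
        ]

        def nearest_neighbour_tour(matrix, start=0):
            """Greedy TSP heuristic: always visit the closest unvisited node next."""
            n = len(matrix)
            tour, visited = [start], {start}
            while len(tour) < n:
                last = tour[-1]
                nxt = min((j for j in range(n) if j not in visited), key=lambda j: matrix[last][j])
                tour.append(nxt)
                visited.add(nxt)
            tour.append(start)                  # return to the starting node
            return tour

        tour = nearest_neighbour_tour(travel_time)
        cost = sum(travel_time[a][b] for a, b in zip(tour, tour[1:]))
        print(tour, cost)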

  18. Characterization and Planning for Computer Network Operations

    Science.gov (United States)

    2010-07-01

    Cell phones, personal computers, laptops, and personal digital assistants represent a small number of the technology-based devices used around the… C. Simpson, editors. Assistive Technology and Artificial Intelligence, Applications in Robotics, User Interfaces and Natural Language Processing… retrieval agents: Experiments with automated web browsing. pages 13–18, 1995. [206] V. A. Siris and F. Papagalou. Application of anomaly detection…

  19. Wirelessly powered sensor networks and computational RFID

    CERN Document Server

    2013-01-01

    The Wireless Identification and Sensing Platform (WISP) is the first of a new class of RF-powered sensing and computing systems.  Rather than being powered by batteries, these sensor systems are powered by radio waves that are either deliberately broadcast or ambient.  Enabled by ongoing exponential improvements in the energy efficiency of microelectronics, RF-powered sensing and computing is rapidly moving along a trajectory from impossible (in the recent past), to feasible (today), toward practical and commonplace (in the near future). This book is a collection of key papers on RF-powered sensing and computing systems including the WISP.  Several of the papers grew out of the WISP Challenge, a program in which Intel Corporation donated WISPs to academic applicants who proposed compelling WISP-based projects.  The book also includes papers presented at the first WISP Summit, a workshop held in Berkeley, CA in association with the ACM Sensys conference, as well as other relevant papers. The book provides ...

  20. Towards a Versatile Tele-Education Platform for Computer Science Educators Based on the Greek School Network

    Science.gov (United States)

    Paraskevas, Michael; Zarouchas, Thomas; Angelopoulos, Panagiotis; Perikos, Isidoros

    2013-01-01

    Nowadays the growing need for highly qualified computer science educators in modern educational environments is commonplace. This study examines the potential use of the Greek School Network (GSN) to provide a robust and comprehensive e-training course for computer science educators in order to efficiently exploit advanced IT services and establish a…

  1. Computing Path Tables for Quickest Multipaths In Computer Networks

    Energy Technology Data Exchange (ETDEWEB)

    Grimmell, W.C.

    2004-12-21

    We consider the transmission of a message from a source node to a terminal node in a network with n nodes and m links where the message is divided into parts and each part is transmitted over a different path in a set of paths from the source node to the terminal node. Here each link is characterized by a bandwidth and delay. The set of paths together with their transmission rates used for the message is referred to as a multipath. We present two algorithms that produce a minimum-end-to-end message delay multipath path table that, for every message length, specifies a multipath that will achieve the minimum end-to-end delay. The algorithms also generate a function that maps the minimum end-to-end message delay to the message length. The time complexities of the algorithms are O(n^2((n^2/log n) + m) min(D_max, C_max)) and O(nm(C_max + n min(D_max, C_max))) when the link delays and bandwidths are non-negative integers. Here D_max and C_max are respectively the maximum link delay and maximum link bandwidth, and C_max and D_max are greater than zero.
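    Under the standard quickest-path model (assumed here), a single path with total link delay D and bottleneck bandwidth b delivers a message of length s in time D + s/b; when the message is split over several paths, the minimum completion time T is the smallest value for which sum_i b_i * max(0, T - D_i) >= s. The sketch below computes that minimum by bisection; it is a generic illustration of the delay model, not the paper's path-table algorithms:

        def multipath_delay(paths, sigma):
            """Minimum end-to-end delay for a message of length sigma split over
            paths given as (total_delay, bottleneck_bandwidth) pairs."""
            def deliverable(t):
                return sum(b * max(0.0, t - d) for d, b in paths)

            lo = min(d for d, _ in paths)
            hi = lo + sigma / max(b for _, b in paths) + 1.0
            while deliverable(hi) < sigma:          # grow the bracket until feasible
                hi *= 2.0
            for _ in range(60):                     # bisection on the completion time
                mid = (lo + hi) / 2.0
                if deliverable(mid) >= sigma:
                    hi = mid
                else:
                    lo = mid
            return hi

        # Two hypothetical paths (delay, bandwidth); message length 100 units.
        # Expected result: 8.4, since 5*(T-2) + 20*(T-5) = 100 gives T = 8.4.
        print(multipath_delay([(2.0, 5.0), (5.0, 20.0)], 100.0))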

  2. Dynamic social network analysis using conversational dynamics in social networking and microblogging environments

    Science.gov (United States)

    Stocco, Gabriel; Savell, Robert; Cybenko, George

    2010-04-01

    In many security environments, the textual content of communications may be unavailable. In these instances, it is often desirable to infer the status of the network and its component entities from patterns of communication flow. Conversational dynamics among entities in the network may provide insight into important aspects of the underlying social network such as the formational dynamics of group structures, the active state of these groups, individuals' roles within groups, and the likelihood of individual participation in conversations. To gain insight into the use of conversational dynamics to facilitate Dynamic Social Network Analysis, we explore the use of interevent timings to associate entities in the Twitter social networking and micro-blogging environment. Specifically, we use message timings to establish inter-nodal relationships among participants. In addition, we demonstrate a new visualization technique for tracking levels of coordination or synchronization within the community via measures of socio-temporal coherence of the participants.

  3. On Using Home Networks and Cloud Computing for a Future Internet of Things

    Science.gov (United States)

    Niedermayer, Heiko; Holz, Ralph; Pahl, Marc-Oliver; Carle, Georg

    In this position paper we state four requirements for a Future Internet and sketch our initial concept. The requirements: (1) more comfort, (2) integration of home networks, (3) resources like service clouds in the network, and (4) access anywhere on any machine. Future Internet needs future quality and future comfort. There need to be new possibilities for everyone. Our focus is on higher layers and related to the many overlay proposals. We consider them to run on top of a basic Future Internet core. A new user experience means to include all user devices. Home networks and services should be a fundamental part of the Future Internet. Home networks extend access and allow interaction with the environment. Cloud Computing can provide reliable resources beyond local boundaries. For access anywhere, we also need secure storage for data and profiles in the network, in particular for access with non-personal devices (Internet terminal, ticket machine, ...).

  4. The Poor Man's Guide to Computer Networks and their Applications

    DEFF Research Database (Denmark)

    Sharp, Robin

    2003-01-01

    These notes for DTU course 02220, Concurrent Programming, give an introduction to computer networks, with focus on the modern Internet. Basic Internet protocols such as IP, TCP and UDP are presented, and two Internet application protocols, SMTP and HTTP, are described in some detail. Techniques for network programming are described, with concrete examples in Java. Techniques considered include simple socket programming, RMI, Corba, and Web services with SOAP.

  5. Design, implementation and security of a typical educational laboratory computer network

    Directory of Open Access Journals (Sweden)

    Martin Pokorný

    2013-01-01

    Full Text Available A computer network used for laboratory training and for different types of network and security experiments represents a special environment where hazardous activities take place, which may not affect any production system or network. It is common that students need to have administrator privileges in this case, which makes the overall security and maintenance of such a network a difficult task. We present our solution, which has proved its usability for more than three years. First of all, four user requirements on the laboratory network are defined (access to educational network devices, to laboratory services, to the Internet, and administrator privileges of the end hosts), and four essential security rules are stipulated (enforceable end host security, controlled network access, level of network access according to the user privilege level, and rules for hazardous experiments), which protect the rest of the laboratory infrastructure as well as the outer university network and the Internet. The main part of the paper is dedicated to a design and implementation of these usability and security rules. We present a physical diagram of a typical laboratory network based on multiple circuits connecting end hosts to different networks, and a layout of rack devices. After that, a topological diagram of the network is described which is based on different VLANs and port-based access control using the IEEE 802.1x/EAP-TLS/RADIUS authentication to achieve the defined level of network access. In the second part of the paper, the latest innovation of our network is presented that covers a transition to system virtualization at the end host devices – inspiration came from a similar solution deployed at the Department of Telecommunications at Brno University of Technology. This improvement enables a greater flexibility in the end hosts maintenance and a simultaneous network access to the educational devices as well as to the Internet. In the end, a vision of a…

  6. Adaptive Management of Computing and Network Resources for Spacecraft Systems

    Science.gov (United States)

    Pfarr, Barbara; Welch, Lonnie R.; Detter, Ryan; Tjaden, Brett; Huh, Eui-Nam; Szczur, Martha R. (Technical Monitor)

    2000-01-01

    It is likely that NASA's future spacecraft systems will consist of distributed processes which will handle dynamically varying workloads in response to perceived scientific events, the spacecraft environment, spacecraft anomalies and user commands. Since all situations and possible uses of sensors cannot be anticipated during pre-deployment phases, an approach for dynamically adapting the allocation of distributed computational and communication resources is needed. To address this, we are evolving the DeSiDeRaTa adaptive resource management approach to enable reconfigurable ground and space information systems. The DeSiDeRaTa approach embodies a set of middleware mechanisms for adapting resource allocations, and a framework for reasoning about the real-time performance of distributed application systems. The framework and middleware will be extended to accommodate (1) the dynamic aspects of intra-constellation network topologies, and (2) the complete real-time path from the instrument to the user. We are developing a ground-based testbed that will enable NASA to perform early evaluation of adaptive resource management techniques without the expense of first deploying them in space. The benefits of the proposed effort are numerous, including the ability to use sensors in new ways not anticipated at design time; the production of information technology that ties the sensor web together; the accommodation of greater numbers of missions with fewer resources; and the opportunity to leverage the DeSiDeRaTa project's expertise, infrastructure and models for adaptive resource management for distributed real-time systems.

  7. Glycosylation Network Analysis Toolbox: a MATLAB-based environment for systems glycobiology

    Science.gov (United States)

    Liu, Gang; Neelamegham, Sriram

    2013-01-01

    Summary: Systems glycobiology studies the interaction of various pathways that regulate glycan biosynthesis and function. Software tools for the construction and analysis of such pathways are not yet available. We present GNAT, a platform-independent, user-extensible MATLAB-based toolbox that provides an integrated computational environment to construct, manipulate and simulate glycans and their networks. It enables integration of XML-based glycan structure data into SBML (Systems Biology Markup Language) files that describe glycosylation reaction networks. Curation and manipulation of networks is facilitated using class definitions and glycomics database query tools. High quality visualization of networks and their steady-state and dynamic simulation are also supported. Availability: The software package, including source code, help documentation and demonstrations, is available at http://sourceforge.net/projects/gnatmatlab/files/. Contact: neel@buffalo.edu or gangliu@buffalo.edu PMID:23230149

  8. Efficient Capacity Computation and Power Optimization for Relay Networks

    CERN Document Server

    Parvaresh, Farzad

    2011-01-01

    The capacity or approximations to capacity of various single-source single-destination relay network models has been characterized in terms of the cut-set upper bound. In principle, a direct computation of this bound requires evaluating the cut capacity over exponentially many cuts. We show that the minimum cut capacity of a relay network under some special assumptions can be cast as a minimization of a submodular function, and as a result, can be computed efficiently. We use this result to show that the capacity, or an approximation to the capacity within a constant gap for the Gaussian, wireless erasure, and Avestimehr-Diggavi-Tse deterministic relay network models can be computed in polynomial time. We present some empirical results showing that computing constant-gap approximations to the capacity of Gaussian relay networks with around 300 nodes can be done in order of minutes. For Gaussian networks, cut-set capacities are also functions of the powers assigned to the nodes. We consider a family of power o...
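
    For illustration, the sketch below evaluates the cut-set bound by brute force on a hypothetical four-node relay network; this is exactly the exponential enumeration over cuts that the authors' submodular-minimization result avoids, shown here only to make the quantity concrete.

        # Brute-force evaluation of the cut-set bound on a hypothetical 4-node
        # relay network (s=0, relays 1 and 2, destination d=3).  This is the
        # exponential enumeration that the submodular-minimization result avoids.
        from itertools import combinations

        capacity = {(0, 1): 2.0, (0, 2): 1.5, (1, 2): 1.0, (1, 3): 1.2, (2, 3): 2.5}
        nodes, s, d = {0, 1, 2, 3}, 0, 3
        relays = nodes - {s, d}

        def cut_value(source_side):
            # capacity of all edges crossing from the source side to its complement
            return sum(c for (u, v), c in capacity.items()
                       if u in source_side and v not in source_side)

        best = float("inf")
        for r in range(len(relays) + 1):
            for subset in combinations(relays, r):        # 2^|relays| cuts in total
                best = min(best, cut_value({s, *subset}))
        print("cut-set upper bound:", best)               # 3.5 for this toy network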

  9. Building Social Networks with Computer Networks: A New Deal for Teaching and Learning.

    Science.gov (United States)

    Thurston, Thomas

    2001-01-01

    Discusses the role of computer technology and Web sites in expanding social networks. Focuses on the New Deal Network using two examples: (1) uniting a Julia C. Lathrop Housing (Chicago, Illinois) resident with a university professor; and (2) saving the Hugo Gellert art murals at the Seward Park Coop Apartments (New York). (CMK)

  10. Service-oriented Software Defined Optical Networks for Cloud Computing

    Science.gov (United States)

    Liu, Yuze; Li, Hui; Ji, Yuefeng

    2017-10-01

    With the development of big data and cloud computing technology, the traditional software-defined network is facing new challenges (e.g., ubiquitous accessibility, higher bandwidth, more flexible management and greater security). This paper proposes a new service-oriented software defined optical network architecture, including a resource layer, a service abstract layer, a control layer and an application layer. We then dwell on the corresponding service providing method. Different service IDs are used to identify the services a device can offer. Finally, we experimentally verify that the proposed service providing method can be applied to transmit different services based on the service ID in the service-oriented software defined optical network.

  11. A local area computer network expert system framework

    Science.gov (United States)

    Dominy, Robert

    1987-01-01

    Over the past years an expert system called LANES, designed to detect and isolate faults in the Goddard-wide Hybrid Local Area Computer Network (LACN), was developed. As a result, the need for developing a more generic LACN fault isolation expert system has become apparent. An object oriented approach was explored to create a set of generic classes, objects, rules, and methods that would be necessary to meet this need. The object classes provide a convenient mechanism for separating high level information from low level network specific information. This approach yields a framework which can be applied to different network configurations and be easily expanded to meet new needs.

  12. Operational computer graphics in the flight dynamics environment

    Science.gov (United States)

    Jeletic, James F.

    1989-01-01

    Over the past five years, the Flight Dynamics Division of the National Aeronautics and Space Administration's (NASA's) Goddard Space Flight Center has incorporated computer graphics technology into its operational environment. In an attempt to increase the effectiveness and productivity of the Division, computer graphics software systems have been developed that display spacecraft tracking and telemetry data in 2-d and 3-d graphic formats that are more comprehensible than the alphanumeric tables of the past. These systems vary in functionality from real-time mission monitoring systems, to mission planning utilities, to system development tools. Here, the capabilities and architecture of these systems are discussed.

  13. Test experience on an ultrareliable computer communication network

    Science.gov (United States)

    Abbott, L. W.

    1984-01-01

    The dispersed sensor processing mesh (DSPM) is an experimental, ultra-reliable, fault-tolerant computer communications network that exhibits an organic-like ability to regenerate itself after suffering damage. The regeneration is accomplished by two routines - grow and repair. This paper discusses the DSPM concept for achieving fault tolerance and provides a brief description of the mechanization of both the experiment and the six-node experimental network. The main topic of this paper is the system performance of the growth algorithm contained in the grow routine. The characteristics imbued to DSPM by the growth algorithm are also discussed. Data from an experimental DSPM network and software simulation of larger DSPM-type networks are used to examine the inherent limitation on growth time by the growth algorithm and the relationship of growth time to network size and topology.

  14. Analytical Computation of the Epidemic Threshold on Temporal Networks

    Directory of Open Access Journals (Sweden)

    Eugenio Valdano

    2015-04-01

    Full Text Available The time variation of contacts in a networked system may fundamentally alter the properties of spreading processes and affect the condition for large-scale propagation, as encoded in the epidemic threshold. Despite the great interest in the problem for the physics, applied mathematics, computer science, and epidemiology communities, a full theoretical understanding is still missing and currently limited to the cases where the time-scale separation holds between spreading and network dynamics or to specific temporal network models. We consider a Markov chain description of the susceptible-infectious-susceptible process on an arbitrary temporal network. By adopting a multilayer perspective, we develop a general analytical derivation of the epidemic threshold in terms of the spectral radius of a matrix that encodes both network structure and disease dynamics. The accuracy of the approach is confirmed on a set of temporal models and empirical networks and against numerical results. In addition, we explore how the threshold changes when varying the overall time of observation of the temporal network, so as to provide insights on the optimal time window for data collection of empirical temporal networked systems. Our framework is of both fundamental and practical interest, as it offers novel understanding of the interplay between temporal networks and spreading dynamics.
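
    As a brief numerical sketch of the threshold condition described above, the snippet below builds the infection propagator as the time-ordered product of per-snapshot matrices (1 - mu)I + lam*A_t and locates, by bisection, the transmissibility at which its spectral radius crosses one. The snapshots and rates are synthetic placeholders, not the empirical networks used in the paper.

        # Sketch of the threshold condition: build the infection propagator
        # P = prod_t [(1 - mu) I + lam A_t] over synthetic snapshots and locate,
        # by bisection, the transmissibility at which its spectral radius hits 1.
        import numpy as np

        rng = np.random.default_rng(0)
        n, T, mu = 50, 20, 0.5
        snapshots = []
        for _ in range(T):
            a = np.triu((rng.random((n, n)) < 0.05).astype(float), 1)
            snapshots.append(a + a.T)                     # undirected snapshot

        def propagator_radius(lam):
            P = np.eye(n)
            for A in snapshots:
                P = ((1 - mu) * np.eye(n) + lam * A) @ P
            return np.abs(np.linalg.eigvals(P)).max()

        lo, hi = 0.0, 1.0
        for _ in range(40):                               # bisection on rho(P) = 1
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if propagator_radius(mid) < 1 else (lo, mid)
        print("estimated epidemic threshold:", round((lo + hi) / 2, 4))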

  15. Propagation of computer virus both across the Internet and external computers: A complex-network approach

    Science.gov (United States)

    Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi; Jin, Jian; He, Li

    2014-08-01

    Based on the assumption that external computers (particularly, infected external computers) are connected to the Internet, and by considering the influence of the Internet topology on computer virus spreading, this paper establishes a novel computer virus propagation model with a complex-network approach. This model possesses a unique (viral) equilibrium which is globally attractive. Some numerical simulations are also given to illustrate this result. Further study shows that the computers with higher node degrees are more susceptible to infection than those with lower node degrees. In this regard, some appropriate protective measures are suggested.
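
    A toy SIS-style simulation with invented parameters, shown below, echoes the paper's observation that high-degree nodes are more susceptible: on a scale-free graph, hubs spend far more time infected than low-degree nodes. This is a generic illustration, not the paper's propagation model.

        # Toy SIS-style epidemic on a scale-free (Barabasi-Albert) graph showing
        # that hubs spend much more time infected than low-degree nodes.
        # Parameters are invented; this is not the paper's propagation model.
        import random
        import networkx as nx

        random.seed(1)
        G = nx.barabasi_albert_graph(2000, 3, seed=1)
        beta, mu, steps = 0.05, 0.2, 200
        infected = set(random.sample(list(G.nodes), 20))
        time_infected = {v: 0 for v in G.nodes}

        for _ in range(steps):
            new_inf = set()
            for v in G.nodes:
                if v in infected:
                    continue
                k = sum(1 for u in G[v] if u in infected)      # infected neighbours
                if k and random.random() < 1 - (1 - beta) ** k:
                    new_inf.add(v)
            recovered = {v for v in infected if random.random() < mu}
            infected = (infected | new_inf) - recovered
            for v in infected:
                time_infected[v] += 1

        lo_deg = [time_infected[v] for v in G if G.degree(v) <= 3]
        hi_deg = [time_infected[v] for v in G if G.degree(v) >= 20]
        print("mean infected time, low degree :", sum(lo_deg) / len(lo_deg))
        print("mean infected time, high degree:", sum(hi_deg) / len(hi_deg))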

  16. Assessment of a human computer interface prototyping environment

    Science.gov (United States)

    Moore, Loretta A.

    1993-01-01

    A Human Computer Interface (HCI) prototyping environment with embedded evaluation capability has been successfully assessed which will be valuable in developing and refining HCI standards and evaluating program/project interface development, especially Space Station Freedom on-board displays for payload operations. The HCI prototyping environment is designed to include four components: (1) a HCI format development tool, (2) a test and evaluation simulator development tool, (3) a dynamic, interactive interface between the HCI prototype and simulator, and (4) an embedded evaluation capability to evaluate the adequacy of an HCI based on a user's performance.

  17. A computer-based building design support environment

    Energy Technology Data Exchange (ETDEWEB)

    Papamichael, K.; Selkowitz, S.E.

    1991-06-01

    Continuously decreasing cost has brought computers into most architectural and engineering offices, most commonly for activities such as drafting, accounting and word processing. Computers are used less often to predict the performance of design solutions. However, most performance simulation software packages are simplified versions of main-frame analytical tools, originally developed for research. Such software packages focus on specific design issues according to the research needs. Also, the data input requirements are complicated and incompatible with each other, and the output data are usually specialized and difficult to interpret. It is yet to be seen how the increasing memory and processing speed of computers, the two main advantages that computers have over the human brain, can be used to assist designers throughout the design process, allowing them to organize design projects electronically. We describe the design and initial implementation of a computer-based Building Design Support Environment whose structure and operation are derived from a detailed theoretical analysis of the design process, into the iterative and interactive activities that contribute towards the formulation of design criteria, the generation of potential solutions, and their evaluation. The identified design activities are characterized with respect to the nature of knowledge requirements and the degree to which they can be specified and delegated to computers. The results are considered as criteria to determine the level of automation and the interaction between designers and computers, to model the delegateable and non-delegateable activities, respectively. We believe this approach, when fully implemented, has a good chance of providing building designers with a powerful environment to enhance building design.

  18. Integrated Environment for Ubiquitous Healthcare and Mobile IPv6 Networks

    Science.gov (United States)

    Cagalaban, Giovanni; Kim, Seoksoo

    The development of Internet technologies based on the IPv6 protocol will allow real-time monitoring of people with health deficiencies and improve the independence of elderly people. This paper proposes a ubiquitous healthcare system for personalized healthcare services with the support of mobile IPv6 networks. Specifically, this paper discusses the integration of ubiquitous healthcare and wireless networks and its functional requirements. This allows an integrated environment where heterogeneous devices such as mobile devices and body sensors can continuously monitor patient status and communicate remotely with healthcare servers, physicians, and family members to effectively deliver healthcare services.

  19. Identifying failure in a tree network of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Pinnow, Kurt W.; Wallenfelt, Brian P.

    2010-08-24

    Methods, parallel computers, and products are provided for identifying failure in a tree network of a parallel computer. The parallel computer includes one or more processing sets including an I/O node and a plurality of compute nodes. For each processing set embodiments include selecting a set of test compute nodes, the test compute nodes being a subset of the compute nodes of the processing set; measuring the performance of the I/O node of the processing set; measuring the performance of the selected set of test compute nodes; calculating a current test value in dependence upon the measured performance of the I/O node of the processing set, the measured performance of the set of test compute nodes, and a predetermined value for I/O node performance; and comparing the current test value with a predetermined tree performance threshold. If the current test value is below the predetermined tree performance threshold, embodiments include selecting another set of test compute nodes. If the current test value is not below the predetermined tree performance threshold, embodiments include selecting from the test compute nodes one or more potential problem nodes and testing individually potential problem nodes and links to potential problem nodes.
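
    A schematic version of the per-processing-set test loop described above is sketched below. The record does not state how the I/O and compute measurements are combined into the "current test value", so the formula and the benchmark functions are placeholders.

        # Schematic version of the per-processing-set test loop described above.
        # The record does not state how the measurements combine into the
        # "current test value", so the formula and benchmarks are placeholders.
        import random

        def measure_io_node():                     # placeholder I/O-node benchmark
            return random.uniform(0.8, 1.0)

        def measure_compute_nodes(nodes):          # placeholder aggregate benchmark
            return sum(random.uniform(0.5, 1.0) for _ in nodes) / len(nodes)

        def find_problem_nodes(all_nodes, expected_io=1.0, threshold=0.7, sample=4):
            remaining = list(all_nodes)
            while remaining:
                test_set, remaining = remaining[:sample], remaining[sample:]
                current_test_value = (measure_io_node()
                                      * measure_compute_nodes(test_set)) / expected_io
                if current_test_value < threshold:
                    continue                       # below threshold: pick another set
                # otherwise single out potential problem nodes and test them one by one
                return [n for n in test_set if measure_compute_nodes([n]) < 0.6]
            return []

        print("suspect nodes:", find_problem_nodes(range(16)))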

  20. Regional Computation of TEC Using a Neural Network Model

    Science.gov (United States)

    Leandro, R. F.; Santos, M. C.

    2004-05-01

    One of the main sources of error in GPS measurements is ionospheric refraction. As a dispersive medium, the ionosphere allows its influence to be computed by using dual frequency receivers. In the case of single frequency receivers it is necessary to use models that tell us how large the ionospheric refraction is. The GPS broadcast message carries parameters of this model, namely the Klobuchar model. Dual frequency receivers allow the influence of the ionosphere on the GPS signal to be estimated by the computation of TEC (Total Electron Content) values, which have a direct relationship with the magnitude of the delay caused by the ionosphere. One alternative is to create a regional model based on a network of dual frequency receivers. In this case, the regional behaviour of the ionosphere is modelled in a way that makes it possible to estimate TEC values in or near this region. Such a regional model can be based on polynomials, for example. In this work we present a Neural Network-based model for the regional computation of TEC. The advantage of using a Neural Network is that it is not necessary to have a great knowledge of the behaviour of the modelled surface, due to the adaptation capability of the neural network training process, which is an iterative adjustment of the synaptic weights as a function of the residuals, using the training parameters. Therefore, previous knowledge of the modelled phenomena is important to define what kind of and how many parameters are needed to train the neural network so that reasonable results are obtained from the estimations. We have used data from the GPS tracking network in Brazil, and we have tested the accuracy of the new model at all locations where there is a station, assessing the efficiency of the model everywhere. TEC values were computed for each station of the network. After that the training data set for the test station was formed, with the TEC values of all others (all stations except the test one). The Neural Network was ...
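
    The record is cut off before the network details, so the following is only a generic sketch, with synthetic station data, of fitting a small multilayer perceptron to TEC values indexed by latitude, longitude and local time; it is not the authors' configuration.

        # Generic sketch: fit a small multilayer perceptron to synthetic TEC-like
        # data keyed by latitude, longitude and local time.  Inputs, architecture
        # and the TEC surface are all invented for illustration.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        lat = rng.uniform(-30, 5, 500)             # degrees
        lon = rng.uniform(-75, -35, 500)           # degrees
        lt = rng.uniform(0, 24, 500)               # local time, hours
        X = np.column_stack([lat, lon, lt])
        y = (30 + 20 * np.sin(np.pi * lt / 24) ** 2          # diurnal bulge
             - 0.3 * np.abs(lat + 12)                        # latitude trend
             + rng.normal(0, 1.5, 500))                      # noise, in TECU

        model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                             random_state=0).fit(X, y)
        print("predicted TEC (TECU):",
              round(float(model.predict([[-15.0, -47.9, 14.0]])[0]), 1))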

  1. Design, Implementation and Optimization of Innovative Internet Access Networks, based on Fog Computing and Software Defined Networking

    OpenAIRE

    Iotti, Nicola

    2017-01-01

    1. DESIGN In this dissertation we introduce a new approach to Internet access networks in public spaces, such as Wi-Fi network commonly known as Hotspot, based on Fog Computing (or Edge Computing), Software Defined Networking (SDN) and the deployment of Virtual Machines (VM) and Linux containers, on the edge of the network. In this vision we deploy specialized network elements, called Fog Nodes, on the edge of the network, able to virtualize the physical infrastructure and expose APIs to e...

  2. Small-world networks in neuronal populations: a computational perspective.

    Science.gov (United States)

    Zippo, Antonio G; Gelsomino, Giuliana; Van Duin, Pieter; Nencini, Sara; Caramenti, Gian Carlo; Valente, Maurizio; Biella, Gabriele E M

    2013-08-01

    The analysis of the brain in terms of integrated neural networks may offer insights on the reciprocal relation between structure and information processing. Even with inherent technical limits, many studies acknowledge neuron spatial arrangements and communication modes as key factors. In this perspective, we investigated the functional organization of neuronal networks by explicitly assuming a specific functional topology, the small-world network. We developed two different computational approaches. Firstly, we asked whether neuronal populations actually express small-world properties during a definite task, such as a learning task. For this purpose we developed the Inductive Conceptual Network (ICN), which is a hierarchical bio-inspired spiking network, capable of learning invariant patterns by using variable-order Markov models implemented in its nodes. As a result, we actually observed small-world topologies during learning in the ICN. Speculating that the expression of small-world networks is not solely related to learning tasks, we then built a de facto network assuming that the information processing in the brain may occur through functional small-world topologies. In this de facto network, synchronous spikes reflected functional small-world network dependencies. In order to verify the consistency of the assumption, we tested the null-hypothesis by replacing the small-world networks with random networks. As a result, only small world networks exhibited functional biomimetic characteristics such as timing and rate codes, conventional coding strategies and neuronal avalanches, which are cascades of bursting activities with a power-law distribution. Our results suggest that small-world functional configurations are liable to underpin brain information processing at neuronal level. Copyright © 2013 Elsevier Ltd. All rights reserved.
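
    As a quick illustration of the small-world property the study relies on (high clustering combined with short average path length), the sketch below compares a Watts-Strogatz graph with an equally dense random graph; it is a generic demonstration, not the ICN model itself.

        # Small-world property in numbers: a Watts-Strogatz graph keeps a high
        # clustering coefficient while its average shortest path stays short,
        # compared with a random graph of the same size and density.
        import networkx as nx

        n, k, p = 200, 8, 0.1
        sw = nx.watts_strogatz_graph(n, k, p, seed=42)
        rnd = nx.gnm_random_graph(n, sw.number_of_edges(), seed=42)

        for name, g in [("small-world", sw), ("random", rnd)]:
            if not nx.is_connected(g):
                g = g.subgraph(max(nx.connected_components(g), key=len))
            print(name,
                  "| clustering:", round(nx.average_clustering(g), 3),
                  "| avg path length:", round(nx.average_shortest_path_length(g), 3))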

  3. Quality control of computational fluid dynamics in indoor environments

    DEFF Research Database (Denmark)

    Sørensen, Dan Nørtoft; Nielsen, P. V.

    2003-01-01

    Computational fluid dynamics (CFD) is used routinely to predict air movement and distributions of temperature and concentrations in indoor environments. Modelling and numerical errors are inherent in such studies and must be considered when the results are presented. Here, we discuss modelling as...... the quality of CFD calculations, as well as guidelines for the minimum information that should accompany all CFD-related publications to enable a scientific judgment of the quality of the study....

  4. Department of the Navy Naval Networking Environment (NNE)-2016. Strategic Definition, Scope and Strategy Paper, Version 1.1

    Science.gov (United States)

    2008-05-13

    ...optimized network environment with many nodes not capable of operating in a globally networked environment. In today's changing environment of network... environment to enhance the Department's organizational flexibility and global awareness. This environment must facilitate the rapid information sharing...

  5. Network Management Services Based On The Openflow Environment

    Directory of Open Access Journals (Sweden)

    Paweł Wilk

    2014-01-01

    Full Text Available The subject of this article is network management through web service calls, which allows software applications to exert an influence on network traffic. In this manner, software can make independent decisions concerning the direction of requests so that they can be served as soon as possible. This is important because only proper cooperation including all architecture layers can ensure the best performance, especially when software that largely depends on computer networks and utilizes them heavily is involved. To demonstrate that the approach described above is feasible and can be useful at the same time, this article presents a switch-level load balancer developed using OpenFlow. Client software communicates with the balancer through REST web service calls, which are used to provide information on current machine load and its ability to serve incoming requests. The result is a cheap, highly customizable and extremely fast load balancer with considerable potential for further development.
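
    A sketch of the client side of the scheme described above is given below: a served machine periodically reports its load to the balancer through a REST call. The endpoint URL and JSON fields are hypothetical, since the article's actual API is not reproduced here.

        # Client-side sketch: a served machine periodically reports its load to
        # the balancer through a REST call.  The URL and JSON fields are
        # hypothetical; the article's actual API is not reproduced here.
        import json
        import time
        import urllib.request

        BALANCER_URL = "http://balancer.example.local:8080/api/load"   # hypothetical

        def report_load(host_id, load, accepting):
            payload = json.dumps({"host": host_id, "load": load,
                                  "accepting": accepting}).encode("utf-8")
            req = urllib.request.Request(BALANCER_URL, data=payload, method="POST",
                                         headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req, timeout=2) as resp:
                return resp.status

        if __name__ == "__main__":
            for _ in range(3):
                print("balancer replied:", report_load("worker-01", 0.42, True))
                time.sleep(5)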

  6. Job Superscheduler Architecture and Performance in Computational Grid Environments

    Science.gov (United States)

    Shan, Hongzhang; Oliker, Leonid; Biswas, Rupak

    2003-01-01

    Computational grids hold great promise in utilizing geographically separated heterogeneous resources to solve large-scale complex scientific problems. However, a number of major technical hurdles, including distributed resource management and effective job scheduling, stand in the way of realizing these gains. In this paper, we propose a novel grid superscheduler architecture and three distributed job migration algorithms. We also model the critical interaction between the superscheduler and autonomous local schedulers. Extensive performance comparisons with ideal, central, and local schemes using real workloads from leading computational centers are conducted in a simulation environment. Additionally, synthetic workloads are used to perform a detailed sensitivity analysis of our superscheduler. Several key metrics demonstrate that substantial performance gains can be achieved via smart superscheduling in distributed computational grids.

  7. A computational study of routing algorithms for realistic transportation networks

    Energy Technology Data Exchange (ETDEWEB)

    Jacob, R.; Marathe, M.V.; Nagel, K.

    1998-12-01

    The authors carry out an experimental analysis of a number of shortest path (routing) algorithms investigated in the context of the TRANSIMS (Transportation Analysis and Simulation System) project. The main focus of the paper is to study how various heuristic and exact solutions and their associated data structures affect the computational performance of software developed especially for realistic transportation networks. For this purpose the authors have used the Dallas Fort-Worth road network with a very high degree of resolution. The following general results are obtained: (1) they discuss and experimentally analyze various one-to-one shortest path algorithms, which include classical exact algorithms studied in the literature as well as heuristic solutions that are designed to take into account the geometric structure of the input instances; (2) they describe a number of extensions to the basic shortest path algorithm. These extensions were primarily motivated by practical problems arising in TRANSIMS and ITS (Intelligent Transportation Systems) related technologies. Extensions discussed include: (i) time dependent networks, (ii) multi-modal networks, (iii) networks with public transportation and associated schedules. Computational results are provided to empirically compare the efficiency of various algorithms. The studies indicate that a modified Dijkstra's algorithm is computationally fast and an excellent candidate for use in various transportation planning applications as well as ITS related technologies.
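
    For reference, the sketch below is a minimal one-to-one Dijkstra implementation with a binary heap and early termination at the target, the style of exact baseline such studies compare against; the graph is a toy example, not the Dallas Fort-Worth network.

        # Minimal one-to-one Dijkstra with a binary heap and early exit at the
        # target -- the style of exact baseline compared in such studies (toy graph).
        import heapq

        def dijkstra(graph, source, target):
            dist, prev = {source: 0.0}, {}
            heap = [(0.0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if u == target:                      # one-to-one query: stop early
                    break
                if d > dist.get(u, float("inf")):
                    continue                         # stale heap entry
                for v, w in graph.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            path, node = [target], target
            while node != source:
                node = prev[node]
                path.append(node)
            return dist[target], path[::-1]

        toy = {"A": [("B", 4), ("C", 2)], "C": [("B", 1), ("D", 7)],
               "B": [("D", 1)], "D": []}
        print(dijkstra(toy, "A", "D"))               # (4.0, ['A', 'C', 'B', 'D'])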

  8. Improving a Computer Networks Course Using the Partov Simulation Engine

    Science.gov (United States)

    Momeni, B.; Kharrazi, M.

    2012-01-01

    Computer networks courses are hard to teach as there are many details in the protocols and techniques involved that are difficult to grasp. Employing programming assignments as part of the course helps students to obtain a better understanding and gain further insight into the theoretical lectures. In this paper, the Partov simulation engine and…

  9. Biological networks 101: computational modeling for molecular biologists

    NARCIS (Netherlands)

    Scholma, Jetse; Schivo, Stefano; Urquidi Camacho, Ricardo A.; van de Pol, Jan Cornelis; Karperien, Hermanus Bernardus Johannes; Post, Janine Nicole

    2014-01-01

    Computational modeling of biological networks permits the comprehensive analysis of cells and tissues to define molecular phenotypes and novel hypotheses. Although a large number of software tools have been developed, the versatility of these tools is limited by mathematical complexities that

  10. Fish species recognition using computer vision and a neural network

    NARCIS (Netherlands)

    Storbeck, F.; Daan, B.

    2001-01-01

    A system is described to recognize fish species by computer vision and a neural network program. The vision system measures a number of features of fish as seen by a camera perpendicular to a conveyor belt. The features used here are the widths and heights at various locations along the fish. First

  11. Computing Nash Equilibrium in Wireless Ad Hoc Networks

    DEFF Research Database (Denmark)

    Bulychev, Peter E.; David, Alexandre; Larsen, Kim G.

    2012-01-01

    This paper studies the problem of computing Nash equilibrium in wireless networks modeled by Weighted Timed Automata. Such formalism comes together with a logic that can be used to describe complex features such as timed energy constraints. Our contribution is a method for solving this problem...

  12. High Performance Computing and Networking for Science--Background Paper.

    Science.gov (United States)

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    The Office of Technology Assessment is conducting an assessment of the effects of new information technologies--including high performance computing, data networking, and mass data archiving--on research and development. This paper offers a view of the issues and their implications for current discussions about Federal supercomputer initiatives…

  13. An Analysis of Attitudes toward Computer Networks and Internet Addiction.

    Science.gov (United States)

    Tsai, Chin-Chung; Lin, Sunny S. J.

    The purpose of this study was to explore the interplay between young people's attitudes toward computer networks and Internet addiction. After analyzing questionnaire responses of an initial sample of 615 Taiwanese high school students, 78 subjects, viewed as possible Internet addicts, were selected for further explorations. It was found that…

  14. Computer-Supported Modelling of Multi modal Transportation Networks Rationalization

    Directory of Open Access Journals (Sweden)

    Ratko Zelenika

    2007-09-01

    Full Text Available This paper deals with issues of shaping and functioning of computer programs in the modelling and solving of multimodal transportation network problems. A methodology of an integrated use of a programming language for mathematical modelling is defined, as well as spreadsheets for the solving of complex multimodal transportation network problems. The paper contains a comparison of the partial and integral methods of solving multimodal transportation networks. The basic hypothesis set forth in this paper is that the integral method results in better multimodal transportation network rationalization effects, whereas a multimodal transportation network model based on the integral method, once built, can be used as the basis for all kinds of transportation problems within multimodal transport. As opposed to linear transport problems, a multimodal transportation network can assume very complex shapes. This paper contains a comparison of the partial and integral approach to transportation network solving. In the partial approach, a straightforward model of a transportation network, which can be solved through the use of the Solver computer tool within the Excel spreadsheet interface, is quite sufficient. In the solving of a multimodal transportation problem through the integral method, it is necessary to apply sophisticated mathematical modelling programming languages which support the use of complex matrix functions and the processing of a vast amount of variables and limitations. The LINGO programming language is more abstract than the Excel spreadsheet, and it requires a certain programming knowledge. The definition and presentation of a problem logic within Excel, in a manner which is acceptable to computer software, is an ideal basis for modelling in the LINGO programming language, as well as a faster and more effective implementation of the mathematical model. This paper provides proof for the fact that it is more rational to solve the problem of multimodal transportation networks by
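
    As a neutral, language-independent illustration of the class of model being compared, the sketch below solves a deliberately small single-mode transportation problem as a linear program with SciPy; the paper's integral multimodal model is far larger and is built in Excel/Solver and LINGO rather than in Python.

        # A deliberately small single-mode transportation LP solved with SciPy,
        # illustrating the class of model built in Excel/Solver and LINGO above
        # (the paper's integral multimodal model is far richer than this toy).
        import numpy as np
        from scipy.optimize import linprog

        supply = [20, 30]                      # two origins
        demand = [10, 25, 15]                  # three destinations
        cost = np.array([[8, 6, 10],
                         [9, 12, 13]], float)  # unit transport costs

        c = cost.ravel()                       # x[i, j] flattened row-major
        A_eq, b_eq = [], []
        for i in range(2):                     # ship exactly the available supply
            row = np.zeros(6); row[i * 3:(i + 1) * 3] = 1
            A_eq.append(row); b_eq.append(supply[i])
        for j in range(3):                     # satisfy each demand exactly
            row = np.zeros(6); row[j::3] = 1
            A_eq.append(row); b_eq.append(demand[j])

        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6, method="highs")
        print("minimum total cost:", res.fun)
        print("shipment plan:\n", res.x.reshape(2, 3))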

  15. A Cognitive Approach to Network Monitoring in Heterogeneous Environments

    DEFF Research Database (Denmark)

    Mihovska, Albena D.

    2007-01-01

    Abstract— Introducing intelligence by means of cognition for managing, protecting, processing, and delivering of information in mobile communication systems is the way towards ubiquitous, converged and secure communications. In this context, this paper introduces the concept of quality of information (QoI). QoI means QoS while all the requirements for dependability, security, privacy and trust are satisfied at the highest possible level. This work proposes and describes an approach to network monitoring in a heterogeneous communication environment based on use of cognitive techniques ... for efficient resource allocation, provisioning of network resources or for detection of security violations into the traditional network monitoring approach. The paper describes the cognitive monitoring architecture, the required physical and logical entities, and their functionalities. Further, the paper ...

  16. Macro Monte Carlo: Clinical Implementation in a Distributed Computing Environment

    Science.gov (United States)

    Neuenschwander, H.; Volken, W.; Frei, D.; Cris, C.; Born, E.; Mini, R.

    The Monte Carlo (MC) method is the most accurate method for the calculation of dose distributions in radiotherapy treatment planning (RTP) for high energy electron beams, if the source of electrons and the patient geometry can be accurately modeled and a sufficiently large number of electron histories are simulated. Due to the long calculation times, MC methods have long been considered as impractical for clinical use. Two main advances have improved the situation and made clinical MC RTP feasible: The development of highly specialized radiotherapy MC systems, and the ever-falling price/performance ratio of computer hardware. Moreover, MC dose calculation codes can easily be parallelized, which allows their implementation as distributed computing systems in networked departments. This paper describes the implementation and clinical validation of the Macro Monte Carlo (MMC) method, a fast method for clinical electron beam treatment planning.

  17. Computer network time synchronization the network time protocol on earth and in space

    CERN Document Server

    Mills, David L

    2010-01-01

    Carefully coordinated, reliable, and accurate time synchronization is vital to a wide spectrum of fields-from air and ground traffic control, to buying and selling goods and services, to TV network programming. Ill-gotten time could even lead to the unimaginable and cause DNS caches to expire, leaving the entire Internet to implode on the root servers.Written by the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol on Earth and in Space, Second Edition addresses the technological infrastructure of time dissemination, distrib
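
    For orientation, the two quantities at the heart of a single NTP client/server exchange are the clock offset and the round-trip delay derived from four timestamps; the sketch below shows these standard formulas with made-up timestamp values.

        # The two quantities at the heart of one NTP client/server exchange,
        # computed from its four timestamps (the numbers below are made up).
        def ntp_offset_delay(t1, t2, t3, t4):
            """t1 client send, t2 server receive, t3 server send, t4 client receive."""
            offset = ((t2 - t1) + (t3 - t4)) / 2.0    # estimated client clock error
            delay = (t4 - t1) - (t3 - t2)             # round-trip network delay
            return offset, delay

        print(ntp_offset_delay(t1=100.000, t2=100.060, t3=100.062, t4=100.020))
        # -> offset 0.051 s (client clock behind), delay 0.018 s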

  18. Monitoring the Environment in a Lava Tube with a Wireless Sensor Network

    Science.gov (United States)

    Li, Y.; Jorgensen, A. M.; Wilson, J. L.; Rendon, N. M.

    2010-12-01

    Monitoring cave environments is important for several reasons. For instance, through the studies of cave environments, we can better protect cave ecology. Past experiments have monitored cave environments, although most of those were based on individual sensor nodes such as data loggers. In this paper we introduce and discuss a ZigBee wireless sensor network-based platform used for cave environment monitoring. The platform is based on a Freescale ZigBee evaluation kit. We carried out a proof-of-concept experiment in Junction Cave, a lava tube, at El Malpais National Monument in New Mexico. That experiment monitored temperature, humidity, and air turbulence inside the cave. The instrumentation consisted of a turbulence tower with five thermocouple-based sensors, reaching from the floor to the ceiling of the cave, temperature/humidity sensors distributed throughout the cave, and a low-power embedded Linux computer for data collection and storage. The experiment measured interesting air turbulence variations at different heights, which we related to weather changes outside the cave and human activities inside the cave. The experiment also observed variations of air temperature at different locations inside the cave. In this presentation we will discuss the instrumentation as well as interpretations of the observations. The experiment demonstrated that a ZigBee wireless sensor network-based monitoring system is a potentially feasible platform for a cave environment monitoring system. We also found that network reliability, node cost, and power consumption need to be improved for future systems.

  19. Computation emerges from adaptive synchronization of networking neurons.

    Directory of Open Access Journals (Sweden)

    Massimiliano Zanin

    Full Text Available The activity of networking neurons is largely characterized by the alternation of synchronous and asynchronous spiking sequences. One of the most relevant challenges that scientists are facing today is, then, relating that evidence with the fundamental mechanisms through which the brain computes and processes information, as well as with the arousal (or progress) of a number of neurological illnesses. In other words, the problem is how to associate an organized dynamics of interacting neural assemblies to a computational task. Here we show that computation can be seen as a feature emerging from the collective dynamics of an ensemble of networking neurons, which interact by means of adaptive dynamical connections. Namely, by associating logical states to synchronous neuronal dynamics, we show how the usual Boolean logic can be fully recovered, and a universal Turing machine can be constructed. Furthermore, we show that, besides the static binary gates, a wider class of logical operations can be efficiently constructed as the fundamental computational elements interact within an adaptive network, each operation being represented by a specific motif. Our approach qualitatively differs from past attempts to encode information and compute with complex systems, where computation was instead the consequence of the application of control loops enforcing a desired state into the specific system's dynamics. Being the result of an emergent process, the computation mechanism here described is not limited to a binary Boolean logic, but it can involve a much larger number of states. As such, our results can suggest new concepts for the understanding of the real computing processes taking place in the brain.

  20. Synchronization-based computation through networks of coupled oscillators

    Directory of Open Access Journals (Sweden)

    Daniel eMalagarriga

    2015-08-01

    Full Text Available The mesoscopic activity of the brain is strongly dynamical, while at the same time exhibiting remarkable computational capabilities. In order to examine how these two features coexist, here we show that the patterns of synchronized oscillations displayed by networks of neural mass models, representing cortical columns, can be used as substrates for Boolean computation. Our results reveal that different logical operations can be implemented by the same neural mass network at different times following the dynamics of the input. The results are reproduced experimentally with electronic circuits of coupled Chua oscillators, showing the robustness of this kind of computation to the intrinsic noise and parameter mismatch of the oscillators responsible for the functioning of the gates. We also show that the information-processing capabilities of coupled oscillations go beyond the simple juxtaposition of logic gates.

  1. A knowledge-based system with learning for computer communication network design

    Science.gov (United States)

    Pierre, Samuel; Hoang, Hai Hoc; Tropper-Hausen, Evelyne

    1990-01-01

    Computer communication network design is well-known as complex and hard. For that reason, the most effective methods used to solve it are heuristic. Weaknesses of these techniques are listed and a new approach based on artificial intelligence for solving this problem is presented. This approach is particularly recommended for large packet switched communication networks, in the sense that it permits a high degree of reliability and offers a very flexible environment dealing with many relevant design parameters such as link cost, link capacity, and message delay.

  2. Advances in neural networks computational and theoretical issues

    CERN Document Server

    Esposito, Anna; Morabito, Francesco

    2015-01-01

    This book collects research works that exploit neural networks and machine learning techniques from a multidisciplinary perspective. Subjects covered include theoretical, methodological and computational topics which are grouped together into chapters devoted to the discussion of novelties and innovations related to the field of Artificial Neural Networks as well as the use of neural networks for applications, pattern recognition, signal processing, and special topics such as the detection and recognition of multimodal emotional expressions and daily cognitive functions, and  bio-inspired memristor-based networks.  Providing insights into the latest research interest from a pool of international experts coming from different research fields, the volume becomes valuable to all those with any interest in a holistic approach to implement believable, autonomous, adaptive, and context-aware Information Communication Technologies.

  3. Connect the dot: Computing feed-links for network extension

    Directory of Open Access Journals (Sweden)

    Boris Aronov

    2011-12-01

    Full Text Available Road network analysis can require distance from points that are not on the network themselves. We study the algorithmic problem of connecting a point inside a face (region) of the road network to its boundary while minimizing the detour factor of that point to any point on the boundary of the face. We show that the optimal single connection (feed-link) can be computed in O(lambda_7(n) log n) time, where n is the number of vertices that bound the face and lambda_7(n) is the slightly superlinear maximum length of a Davenport-Schinzel sequence of order 7 on n symbols. We also present approximation results for placing more feed-links, deal with the case that there are obstacles in the face of the road network that contains the point to be connected, and present various related results.

  4. Computational modeling of signal transduction networks: a pedagogical exposition.

    Science.gov (United States)

    Prasad, Ashok

    2012-01-01

    We give a pedagogical introduction to computational modeling of signal transduction networks, starting from explaining the representations of chemical reactions by differential equations via the law of mass action. We discuss elementary biochemical reactions such as Michaelis-Menten enzyme kinetics and cooperative binding, and show how these allow the representation of large networks as systems of differential equations. We discuss the importance of looking for simpler or reduced models, such as network motifs or dynamical motifs within the larger network, and describe methods to obtain qualitative behavior by bifurcation analysis, using freely available continuation software. We then discuss stochastic kinetics and show how to implement easy-to-use methods of rule-based modeling for stochastic simulations. We finally suggest some methods for comprehensive parameter sensitivity analysis, and discuss the insights that it could yield. Examples, including code to try out, are provided based on a paper that modeled Ras kinetics in thymocytes.
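
    A worked example of the first steps described above: the Michaelis-Menten rate law, obtained from mass-action kinetics, written as a small ODE system and integrated numerically (parameter values are arbitrary and chosen only for illustration).

        # Michaelis-Menten enzyme kinetics, the elementary building block noted
        # above, written as a two-species ODE and integrated numerically
        # (parameter values are arbitrary, for illustration only).
        from scipy.integrate import solve_ivp

        Vmax, Km = 1.0, 0.5                      # maximal rate, Michaelis constant

        def rhs(t, y):
            S, P = y
            v = Vmax * S / (Km + S)              # Michaelis-Menten rate law
            return [-v, v]                       # substrate consumed, product formed

        sol = solve_ivp(rhs, (0, 10), [2.0, 0.0], dense_output=True)
        for t in (0, 2, 5, 10):
            S, P = sol.sol(t)
            print(f"t={t:>2}  substrate={S:5.3f}  product={P:5.3f}")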

  5. A distributed computing environment with support for constraint-based task scheduling and scientific experimentation

    Energy Technology Data Exchange (ETDEWEB)

    Ahrens, J.P.; Shapiro, L.G.; Tanimoto, S.L. [Univ. of Washington, Seattle, WA (United States). Dept. of Computer Science and Engineering

    1997-04-01

    This paper describes a computing environment which supports computer-based scientific research work. Key features include support for automatic distributed scheduling and execution and computer-based scientific experimentation. A new flexible and extensible scheduling technique that is responsive to a user's scheduling constraints, such as the ordering of program results and the specification of task assignments and processor utilization levels, is presented. An easy-to-use constraint language for specifying scheduling constraints, based on the relational database query language SQL, is described along with a search-based algorithm for fulfilling these constraints. A set of performance studies show that the environment can schedule and execute program graphs on a network of workstations as the user requests. A method for automatically generating computer-based scientific experiments is described. Experiments provide a concise method of specifying a large collection of parameterized program executions. The environment achieved significant speedups when executing experiments; for a large collection of scientific experiments an average speedup of 3.4 on an average of 5.5 scheduled processors was obtained.

  6. Two-round contributory group key exchange protocol for wireless network environments

    Directory of Open Access Journals (Sweden)

    Wu Tsu-Yang

    2011-01-01

    Full Text Available Abstract With the popularity of group-oriented applications, secure group communication has recently received much attention from cryptographic researchers. A group key exchange (GKE) protocol allows participants to cooperatively establish a group key that is used to encrypt and decrypt transmitted messages. Hence, GKE protocols can be used to provide secure group communication over a public network channel. However, most of the previously proposed GKE protocols deployed in wired networks are not fully suitable for wireless network environments with low-power computing devices. Subsequently, several GKE protocols suitable for mobile or wireless networks have been proposed. In this article, we propose a more efficient group key exchange protocol with dynamic joining and leaving. Under the decisional Diffie-Hellman (DDH), the computational Diffie-Hellman (CDH), and the hash function assumptions, we demonstrate that the proposed protocol is secure against passive attack and provides forward/backward secrecy for dynamic member joining/leaving. As compared with the recently proposed GKE protocols, our protocol provides better performance in terms of computational cost, round number, and communication cost.
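
    The record does not reproduce the protocol itself, so as a hedged illustration of what a two-round contributory GKE looks like, the sketch below implements the classic Burmester-Desmedt scheme over a toy prime group; it is educational only and is not the protocol proposed in the article.

        # Toy two-round contributory group key exchange in the classic
        # Burmester-Desmedt style, shown only to illustrate the primitive; the
        # prime is tiny and this is NOT the protocol proposed in the article.
        import random

        p = 2**61 - 1                            # toy prime modulus
        g = 3
        n = 5                                    # number of participants
        r = [random.randrange(2, p - 1) for _ in range(n)]

        def inv(a):                              # modular inverse (p is prime)
            return pow(a, p - 2, p)

        # Round 1: every participant i broadcasts z_i = g^{r_i}
        z = [pow(g, r[i], p) for i in range(n)]
        # Round 2: every participant i broadcasts X_i = (z_{i+1} / z_{i-1})^{r_i}
        X = [pow(z[(i + 1) % n] * inv(z[(i - 1) % n]) % p, r[i], p) for i in range(n)]

        def group_key(i):        # K = g^{r_0 r_1 + r_1 r_2 + ... + r_{n-1} r_0}
            key = pow(z[(i - 1) % n], n * r[i], p)
            for j in range(1, n):
                key = key * pow(X[(i + j - 1) % n], n - j, p) % p
            return key

        print("all participants agree on the key:",
              len({group_key(i) for i in range(n)}) == 1)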

  7. A computational method based on CVSS for quantifying the vulnerabilities in computer network

    Directory of Open Access Journals (Sweden)

    Shahriyar Mohammadi

    2014-10-01

    Full Text Available Network vulnerability taxonomy has become increasingly important in the area of information and data exchange, not only for its potential use in the identification of vulnerabilities but also in their assessment and prioritization. Computer networks play an important role in information and communication infrastructure. However, they are constantly exposed to a variety of vulnerability risks. In their attempts to create secure information exchange systems, scientists have concentrated on understanding the nature and typology of these vulnerabilities. Their efforts aimed at establishing secure networks have led to the development of a variety of methods and techniques for quantifying vulnerability. The objective of the present paper is to develop a method based on the second edition of the Common Vulnerability Scoring System (CVSS) for the quantification of computer network vulnerabilities. It is expected that the proposed model will help in the identification and effective management of vulnerabilities by their quantification.
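
    For concreteness, the CVSS v2 base-score equations that such a method builds on are sketched below; the coefficients are those commonly published for version 2 of the standard and should be checked against the official specification before any real use.

        # CVSS v2 base-score equations (coefficients as commonly published for
        # version 2; check them against the official specification before use).
        def cvss2_base(av, ac, au, c, i, a):
            impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
            exploitability = 20 * av * ac * au
            f = 0.0 if impact == 0 else 1.176
            return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

        # Example: AV=Network(1.0), AC=Low(0.71), Au=None(0.704),
        #          C=Partial(0.275), I=Partial(0.275), A=None(0.0)
        print(cvss2_base(av=1.0, ac=0.71, au=0.704, c=0.275, i=0.275, a=0.0))  # 6.4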

  8. InSAR Scientific Computing Environment - The Home Stretch

    Science.gov (United States)

    Rosen, P. A.; Gurrola, E. M.; Sacco, G.; Zebker, H. A.

    2011-12-01

    The Interferometric Synthetic Aperture Radar (InSAR) Scientific Computing Environment (ISCE) is a software development effort in its third and final year within the NASA Advanced Information Systems and Technology program. The ISCE is a new computing environment for geodetic image processing for InSAR sensors enabling scientists to reduce measurements directly from radar satellites to new geophysical products with relative ease. The environment can serve as the core of a centralized processing center to bring Level-0 raw radar data up to Level-3 data products, but is adaptable to alternative processing approaches for science users interested in new and different ways to exploit mission data. Upcoming international SAR missions will deliver data of unprecedented quantity and quality, making possible global-scale studies in climate research, natural hazards, and Earth's ecosystem. The InSAR Scientific Computing Environment has the functionality to become a key element in processing data from NASA's proposed DESDynI mission into higher level data products, supporting a new class of analyses that take advantage of the long time and large spatial scales of these new data. At the core of ISCE is a new set of efficient and accurate InSAR algorithms. These algorithms are placed into an object-oriented, flexible, extensible software package that is informed by modern programming methods, including rigorous componentization of processing codes, abstraction and generalization of data models. The environment is designed to easily allow user contributions, enabling an open source community to extend the framework into the indefinite future. ISCE supports data from nearly all of the available satellite platforms, including ERS, EnviSAT, Radarsat-1, Radarsat-2, ALOS, TerraSAR-X, and Cosmo-SkyMed. The code applies a number of parallelization techniques and sensible approximations for speed. It is configured to work on modern linux-based computers with gcc compilers and python

  9. Applying DNA computation to intractable problems in social network analysis.

    Science.gov (United States)

    Chen, Rick C S; Yang, Stephen J H

    2010-09-01

    From ancient times to the present day, social networks have played an important role in the formation of various organizations for a range of social behaviors. As such, social networks inherently describe the complicated relationships between elements around the world. Based on mathematical graph theory, social network analysis (SNA) has been developed in and applied to various fields such as Web 2.0 for Web applications and product developments in industries, etc. However, some definitions of SNA, such as finding a clique, N-clique, N-clan, N-club and K-plex, are NP-complete problems, which are not easily solved via traditional computer architecture. These challenges have restricted the uses of SNA. This paper provides DNA-computing-based approaches with inherently high information density and massive parallelism. Using these approaches, we aim to solve the three primary problems of social networks: N-clique, N-clan, and N-club. Their accuracy and feasible time complexities discussed in the paper will demonstrate that DNA computing can be used to facilitate the development of SNA. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
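
    For contrast with the DNA-computing approach, the sketch below shows the conventional brute-force clique check on a tiny hypothetical graph: it enumerates C(n, k) candidate subsets, precisely the exponential search that massive DNA parallelism is intended to absorb.

        # Conventional brute force for the clique decision problem that the paper
        # maps onto DNA operations: it enumerates C(n, k) candidate subsets, the
        # exponential search that massive DNA parallelism is meant to absorb.
        from itertools import combinations

        edges = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)}   # toy graph

        def adjacent(u, v):
            return (u, v) in edges or (v, u) in edges

        def has_clique(nodes, k):
            return any(all(adjacent(u, v) for u, v in combinations(sub, 2))
                       for sub in combinations(nodes, k))

        print(has_clique(range(5), 3))   # True: {1, 2, 3} forms a triangle
        print(has_clique(range(5), 4))   # False: no 4-clique in this graph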

  10. An environment-dependent transcriptional network specifies human microglia identity.

    Science.gov (United States)

    Gosselin, David; Skola, Dylan; Coufal, Nicole G; Holtman, Inge R; Schlachetzki, Johannes C M; Sajti, Eniko; Jaeger, Baptiste N; O'Connor, Carolyn; Fitzpatrick, Conor; Pasillas, Martina P; Pena, Monique; Adair, Amy; Gonda, David D; Levy, Michael L; Ransohoff, Richard M; Gage, Fred H; Glass, Christopher K

    2017-06-23

    Microglia play essential roles in central nervous system (CNS) homeostasis and influence diverse aspects of neuronal function. However, the transcriptional mechanisms that specify human microglia phenotypes are largely unknown. We examined the transcriptomes and epigenetic landscapes of human microglia isolated from surgically resected brain tissue ex vivo and after transition to an in vitro environment. Transfer to a tissue culture environment resulted in rapid and extensive down-regulation of microglia-specific genes that were induced in primitive mouse macrophages after migration into the fetal brain. Substantial subsets of these genes exhibited altered expression in neurodegenerative and behavioral diseases and were associated with noncoding risk variants. These findings reveal an environment-dependent transcriptional network specifying microglia-specific programs of gene expression and facilitate efforts to understand the roles of microglia in human brain diseases. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  11. A modular architecture for transparent computation in recurrent neural networks.

    Science.gov (United States)

    Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim

    2017-01-01

    Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, to which we would like to refer as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real-time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Review On Applications Of Neural Network To Computer Vision

    Science.gov (United States)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

    Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability of handling statistical data. This paper demonstrates the basic principles, typical models and their applications in this field. A variety of neural models, such as associative memory, multilayer back-propagation perceptron, self-stabilized adaptive resonance network, hierarchical structured neocognitron, high order correlator, network with gating control and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking and other vision processes. Most of the algorithms have been simulated on computers. Some have been implemented with special hardware. Some systems use features, such as edges and profiles, of images as the data form for input. Other systems use raw data as input signals to the networks. We will present some novel ideas contained in these approaches and provide a comparison of these methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating those low level functions to a high-level cognitive system, achieving invariances and other problems. Perspectives of applications of some human vision models and neural network models are analyzed.

  13. Analysis of Intrusion Detection and Attack Proliferation in Computer Networks

    Science.gov (United States)

    Rangan, Prahalad; Knuth, Kevin H.

    2007-11-01

    One of the popular models to describe computer worm propagation is the Susceptible-Infected (SI) model [1]. This model of worm propagation has been implemented on the simulation toolkit Network Simulator v2 (ns-2) [2]. The ns-2 toolkit has the capability to simulate networks of different topologies. The topology studied in this work, however, is that of a simple star topology. This work introduces our initial efforts to learn the relevant quantities describing an infection given synthetic data obtained from running the ns-2 worm model. We aim to use Bayesian methods to gain a predictive understanding of how computer infections spread in real-world network topologies. This understanding would greatly reinforce dissemination of targeted immunization strategies, which may prevent real-world epidemics. The data consist of reports of infection from a subset of nodes in a large network during an attack. The infection equation obtained from [1] enables us to derive a likelihood function for the infection reports. This prior information can be used in the Bayesian framework to obtain the posterior probabilities for network properties of interest, such as the rate at which nodes contact one another (also referred to as the contact rate or scan rate). Our preliminary analyses indicate an effective spread rate of only 1/5th of the actual scan rate used for a star-type topology. This implies that as the population becomes saturated with infected nodes the actual spread rate will become much less than the scan rate used in the simulation.
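
    The SI dynamics referenced above can be written in continuous form as dI/dt = beta * I * (N - I) / N; the short integration below shows the characteristic logistic growth of the infected population (the parameters are invented, not the ns-2 experiment settings).

        # The SI dynamics referenced above, dI/dt = beta * I * (N - I) / N,
        # integrated numerically: the infected count follows a logistic curve
        # (beta, N and I0 are invented, not the ns-2 simulation settings).
        import numpy as np
        from scipy.integrate import solve_ivp

        N, beta, I0 = 1000, 0.8, 1.0
        sol = solve_ivp(lambda t, y: [beta * y[0] * (N - y[0]) / N],
                        (0, 30), [I0], t_eval=np.linspace(0, 30, 7))
        for t, I in zip(sol.t, sol.y[0]):
            print(f"t={t:5.1f}  infected={I:8.1f}")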

  14. Information Sharing Mechanism among Mobile Agents In Ad-hoc Network Environment and Its Applications

    Directory of Open Access Journals (Sweden)

    Kunio Umetsuji

    2004-12-01

    Full Text Available Mobile agents are programs that can move from one site to another in a network with their data and states. Mobile agents are expected to be an essential tool in pervasive computing. In multi platform environment, it is important to communicate with mobile agents only using their universal or logical name not using their physical locations. More, in an ad-hoc network environment, an agent can migrate autonomously and communicate with other agents on demand. It is difficult that mobile agent grasps the position information on other agents correctly each other, because mobile agent processes a task while moving a network successively. In order to realize on-demand mutual communication among mobile agents without any centralized servers, we propose a new information sharing mechanism within mobile agents. In this paper, we present a new information sharing mechanism within mobile agents. The method is a complete peer based and requires no agent servers to manage mobile agent locations. Therefore, a mobile agent can get another mobile agent, communicate with it and shares information stored in the agent without any knowledge of the location of the target mobile agent. The basic idea of the mechanism is an introduction of Agent Ring, Agent Chain and Shadow Agent. With this mechanism, each agent can communicate with other agents in a server-less environment, which is suitable for ad-hoc agent network and an agent system can manage agents search and communications efficiently.

  15. Condor-COPASI: high-throughput computing for biochemical networks

    Directory of Open Access Journals (Sweden)

    Kent Edward

    2012-07-01

    Full Text Available Abstract Background Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage.
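
    As a hedged illustration of how a large parameter scan might be split into independent pieces for execution on a Condor pool, the snippet below chunks a scan and writes a generic HTCondor submit description; the file names and wrapper script are hypothetical, and this is not the actual Condor-COPASI implementation.

```python
# Split a hypothetical 10,000-point parameter scan into 100 independent chunks and
# write a generic HTCondor submit description that queues one job per chunk.
n_points, n_chunks = 10_000, 100
chunk_size = n_points // n_chunks

submit = (
    f"executable = run_chunk.sh\n"            # hypothetical wrapper that runs COPASI on one chunk
    f"arguments  = $(Process) {chunk_size}\n" # chunk index and size passed to the wrapper
    f"output     = chunk_$(Process).out\n"
    f"error      = chunk_$(Process).err\n"
    f"log        = scan.log\n"
    f"queue {n_chunks}\n"
)
with open("scan.sub", "w") as sub:
    sub.write(submit)
```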

  16. Computers and networks in the age of globalization

    DEFF Research Database (Denmark)

    Bloch Rasmussen, Leif; Beardon, Colin; Munari, Silvio

    In modernity, an individual identity was constituted from civil society, while in a globalized network society, human identity, if it develops at all, must grow from communal resistance. A communal resistance to an abstract conceptualized world, where there is no possibility for perception...... in a network society; the individual and knowledge-based organizations; human responsibility and technology; and exclusion and regeneration. This volume contains the edited proceedings of the Fifth World Conference on Human Choice and Computers (HCC-5), which was sponsored by the International Federation...

  17. Spatial Analysis Along Networks Statistical and Computational Methods

    CERN Document Server

    Okabe, Atsuyuki

    2012-01-01

    In the real world, there are numerous and various events that occur on and alongside networks, including the occurrence of traffic accidents on highways, the location of stores alongside roads, the incidence of crime on streets and the contamination along rivers. In order to carry out analyses of those events, the researcher needs to be familiar with a range of specific techniques. Spatial Analysis Along Networks provides a practical guide to the necessary statistical techniques and their computational implementation. Each chapter illustrates a specific technique, from Stochastic Point Process

  18. Smart photonic networks and computer security for image data

    Science.gov (United States)

    Campello, Jorge; Gill, John T.; Morf, Martin; Flynn, Michael J.

    1998-02-01

    Work reported here is part of a larger project on 'Smart Photonic Networks and Computer Security for Image Data', studying the interactions of coding and security, switching architecture simulations, and basic technologies. Coding and security: coding methods that are appropriate for data security in data fusion networks were investigated. These networks have several characteristics that distinguish them from other currently employed networks, such as Ethernet LANs or the Internet. The most significant characteristics are very high maximum data rates; predominance of image data; narrowcasting - transmission of data from one source to a designated set of receivers; data fusion - combining related data from several sources; simple sensor nodes with limited buffering. These characteristics affect both the lower level network design and the higher level coding methods. Data security encompasses privacy, integrity, reliability, and availability. Privacy, integrity, and reliability can be provided through encryption and coding for error detection and correction. Availability is primarily a network issue; network nodes must be protected against failure or routed around in the case of failure. One of the more promising techniques is the use of 'secret sharing'. We consider this method as a special case of our new space-time code diversity based algorithms for secure communication. These algorithms enable us to exploit parallelism and scalable multiplexing schemes to build photonic network architectures. A number of very high-speed switching and routing architectures and their relationships with very high performance processor architectures were studied. Indications are that routers for very high speed photonic networks can be designed using the very robust and distributed TCP/IP protocol, if suitable processor architecture support is available.
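
    Secret sharing of the kind mentioned above is commonly realized with Shamir's polynomial scheme; the sketch below is a generic textbook version (not the project's space-time code construction) that splits a secret into five shares, any three of which reconstruct it.

```python
import random

P = 2**13 - 1  # small prime field, chosen only for illustration

def make_shares(secret, k, n):
    """Shamir (k, n) sharing: evaluate a random degree-(k-1) polynomial at n points."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=1234, k=3, n=5)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares suffice -> 1234
```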

  19. MUPBED - interworking challenges in a multi-domain and multi-technology network environment

    DEFF Research Database (Denmark)

    Foisel, Hans-Martin; Spaeth, Jan; Cavazzoni, Carlo

    2007-01-01

    Today's data transport networks are evolving continuously towards customer-oriented and application-aware networks. This evolution happens in Europe in a highly diverse network environment, covering multiple network domains, layers, technologies, control and management approaches. In this paper......, the issues, challenges and the solutions developed in the IST project MUPBED ("Multi-Partner European Test Beds for Research Networking"; www.ist-mupbed.eu) for seamless interworking in a typical European heterogeneous network environment are described, addressing horizontal, interdomain, and vertical, inter...

  20. Collaborative editing within the pervasive collaborative computing environment

    Energy Technology Data Exchange (ETDEWEB)

    Perry, Marcia [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Agarwal, Deb [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2003-09-11

    Scientific collaborations are established for a wide variety of tasks for which several communication modes are necessary, including messaging, file-sharing, and collaborative editing. In this position paper, we describe our work on the Pervasive Collaborative Computing Environment (PCCE) which aims to facilitate scientific collaboration within widely distributed environments. The PCCE provides a persistent space in which collaborators can locate each other, exchange messages synchronously and asynchronously and archive conversations. Our current interest is in exploring research and development of shared editing systems with the goal of integrating this technology into the PCCE. We hope to inspire discussion of technology solutions for an integrated approach to synchronous and asynchronous communication and collaborative editing.

  1. Computers and networks in the age of globalization

    DEFF Research Database (Denmark)

    Bloch Rasmussen, Leif; Beardon, Colin; Munari, Silvio

    in a network society; the individual and knowledge-based organizations; human responsibility and technology; and exclusion and regeneration. This volume contains the edited proceedings of the Fifth World Conference on Human Choice and Computers (HCC-5), which was sponsored by the International Federation...... for Information Processing (IFIP) and held in Geneva, Switzerland in August 1998. Since the first HCC conference in 1974, IFIP's Technical Committee 9 has endeavoured to set the agenda for human choices and human actions vis-a-vis computers....

  2. Computer, Network, Software, and Hardware Engineering with Applications

    CERN Document Server

    Schneidewind, Norman F

    2012-01-01

    There are many books on computers, networks, and software engineering but none that integrate the three with applications. Integration is important because, increasingly, software dominates the performance, reliability, maintainability, and availability of complex computers and systems. Books on software engineering typically portray software as if it exists in a vacuum with no relationship to the wider system. This is wrong because a system is more than software. It is comprised of people, organizations, processes, hardware, and software. All of these components must be considered in an integr

  3. Partially blind instantly decodable network codes for lossy feedback environment

    KAUST Repository

    Sorour, Sameh

    2014-09-01

    In this paper, we study the multicast completion and decoding delay minimization problems for instantly decodable network coding (IDNC) in the case of lossy feedback. When feedback loss events occur, the sender falls into uncertainties about packet reception at the different receivers, which forces it to perform partially blind selections of packet combinations in subsequent transmissions. To determine efficient selection policies that reduce the completion and decoding delays of IDNC in such an environment, we first extend the perfect feedback formulation in our previous works to the lossy feedback environment, by incorporating the uncertainties resulting from unheard feedback events in these formulations. For the completion delay problem, we use this formulation to identify the maximum likelihood state of the network in events of unheard feedback and employ it to design a partially blind graph update extension to the multicast IDNC algorithm in our earlier work. For the decoding delay problem, we derive an expression for the expected decoding delay increment for any arbitrary transmission. This expression is then used to find the optimal policy that reduces the decoding delay in such a lossy feedback environment. Results show that our proposed solutions both outperform previously proposed approaches and achieve tolerable degradation even at relatively high feedback loss rates.
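
    The core IDNC constraint is that a transmitted XOR combination is instantly decodable for a receiver only if it contains at most one packet that the receiver is still missing. The minimal check below illustrates that property for a candidate combination; it is our own illustration, not the authors' graph-based selection algorithm.

```python
def instantly_decodable(combination, has_packets):
    """A coded combination (set of packet ids) is instantly decodable for a receiver
    iff it contains at most one packet the receiver does not yet hold."""
    missing = [p for p in combination if p not in has_packets]
    return len(missing) <= 1

# Receiver already holds packets 1 and 2; the combination {1, 3} is useful and
# instantly decodable, while {3, 4} would have to be buffered.
print(instantly_decodable({1, 3}, has_packets={1, 2}))  # True
print(instantly_decodable({3, 4}, has_packets={1, 2}))  # False
```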

  4. A Generic Context Management Framework for Personal Networking Environments

    DEFF Research Database (Denmark)

    Sanchez, Luis; Olsen, Rasmus Løvenstein; Bauer, Martin

    2006-01-01

    In this paper we introduce a high level architecture for a context management system for Personal Networks (PN). The main objective of the Context Management Framework (CMF) described in this paper is to support the interactions between context information sources and context aware components...... on their computational capabilities and their role within the system. We differentiate between Basic Context Nodes (BCN), Enhanced Context Nodes (ECN) and Context Management Nodes (CMN) within the CMF. CMNs operate on two levels, i.e., local/cluster level and PN level. In the paper we also describe how these entities...

  5. Supporting tactical intelligence using collaborative environments and social networking

    Science.gov (United States)

    Wollocko, Arthur B.; Farry, Michael P.; Stark, Robert F.

    2013-05-01

    Modern military environments place an increased emphasis on the collection and analysis of intelligence at the tactical level. The deployment of analytical tools at the tactical level helps support the Warfighter's need for rapid collection, analysis, and dissemination of intelligence. However, given the lack of experience and staffing at the tactical level, most of the available intelligence is not exploited. Tactical environments are staffed by a new generation of intelligence analysts who are well-versed in modern collaboration environments and social networking. An opportunity exists to enhance tactical intelligence analysis by exploiting these personnel strengths, but is dependent on appropriately designed information sharing technologies. Existing social information sharing technologies enable users to publish information quickly, but do not unite or organize information in a manner that effectively supports intelligence analysis. In this paper, we present an alternative approach to structuring and supporting tactical intelligence analysis that combines the benefits of existing concepts, and provide detail on a prototype system embodying that approach. Since this approach employs familiar collaboration support concepts from social media, it enables new-generation analysts to identify the decision-relevant data scattered among databases and the mental models of other personnel, increasing the timeliness of collaborative analysis. Also, the approach enables analysts to collaborate visually to associate heterogeneous and uncertain data within the intelligence analysis process, increasing the robustness of collaborative analyses. Utilizing this familiar dynamic collaboration environment, we hope to achieve a significant reduction of time and skill required to glean actionable intelligence in these challenging operational environments.

  6. Multi-objective optimization in computer networks using metaheuristics

    CERN Document Server

    Donoso, Yezid

    2007-01-01

    Metaheuristics are widely used to solve important practical combinatorial optimization problems. Many new multicast applications emerging from the Internet, such as TV over the Internet, radio over the Internet, and multipoint video streaming, require reduced bandwidth consumption, end-to-end delay, and packet loss ratio. Networks must therefore be designed to provide for these kinds of applications as well as for the resources they require to function. Multi-Objective Optimization in Computer Networks Using Metaheuristics provides a solution to the multi-objective problem in routing computer networks. It analyzes layer 3 (IP), layer 2 (MPLS), and layer 1 (GMPLS and wireless functions). In particular, it assesses basic optimization concepts, as well as several techniques and algorithms for the search for minima; examines the basic multi-objective optimization concepts and the way to solve them through traditional techniques and through several metaheuristics; and demonstrates how to analytically model the compu...
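
    Multi-objective routing of the kind the book addresses rests on Pareto dominance between candidate solutions. The sketch below filters a set of (delay, bandwidth consumption, loss ratio) vectors down to the non-dominated front; all numbers are invented for illustration.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better in at
    least one (all objectives are to be minimized here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

# Candidate routes scored as (end-to-end delay, bandwidth consumption, loss ratio).
routes = [(10, 5, 0.01), (12, 4, 0.02), (9, 6, 0.03), (15, 7, 0.05)]
print(pareto_front(routes))   # (15, 7, 0.05) is dominated and dropped
```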

  7. Advances in neural networks computational intelligence for ICT

    CERN Document Server

    Esposito, Anna; Morabito, Francesco; Pasero, Eros

    2016-01-01

    This carefully edited book puts emphasis on computational and artificial intelligence methods for learning and their applications in robotics, embedded systems, and ICT interfaces for psychological and neurological diseases. The book is a follow-up of the scientific workshop on Neural Networks (WIRN 2015) held in Vietri sul Mare, Italy, from the 20th to the 22nd of May 2015. The workshop, at its 27th edition, has become a traditional scientific event that brings together scientists from many countries and several scientific disciplines. Each chapter is an extended version of the original contribution presented at the workshop, and together with the reviewers' peer revisions it also benefits from the live discussion during the presentation. The content of the book is organized in the following sections. 1. Introduction, 2. Machine Learning, 3. Artificial Neural Networks: Algorithms and models, 4. Intelligent Cyberphysical and Embedded System, 5. Computational Intelligence Methods for Biomedical ICT in...

  8. Computers and networks in the age of globalization

    DEFF Research Database (Denmark)

    Bloch Rasmussen, Leif; Beardon, Colin; Munari, Silvio

    their lives in a diversity of social and cultural contexts. In so doing, the book tries to imagine in what kind of networks humans may choose and act based on the knowledge and empirical evidence presented in the papers. The topics covered in the book include: people and their changing values; citizens...... in a network society; the individual and knowledge-based organizations; human responsibility and technology; and exclusion and regeneration. This volume contains the edited proceedings of the Fifth World Conference on Human Choice and Computers (HCC-5), which was sponsored by the International Federation...... for Information Processing (IFIP) and held in Geneva, Switzerland in August 1998. Since the first HCC conference in 1974, IFIP's Technical Committee 9 has endeavoured to set the agenda for human choices and human actions vis-a-vis computers....

  9. Biological networks 101: computational modeling for molecular biologists.

    Science.gov (United States)

    Scholma, Jetse; Schivo, Stefano; Urquidi Camacho, Ricardo A; van de Pol, Jaco; Karperien, Marcel; Post, Janine N

    2014-01-01

    Computational modeling of biological networks permits the comprehensive analysis of cells and tissues to define molecular phenotypes and novel hypotheses. Although a large number of software tools have been developed, the versatility of these tools is limited by mathematical complexities that prevent their broad adoption and effective use by molecular biologists. This study clarifies the basic aspects of molecular modeling, how to convert data into useful input, as well as the number of time points and molecular parameters that should be considered for molecular regulatory models with both explanatory and predictive potential. We illustrate the necessary experimental preconditions for converting data into a computational model of network dynamics. This model requires neither a thorough background in mathematics nor precise data on intracellular concentrations, binding affinities or reaction kinetics. Finally, we show how an interactive model of crosstalk between signal transduction pathways in primary human articular chondrocytes allows insight into processes that regulate gene expression. © 2013 Elsevier B.V. All rights reserved.

  10. Computational study of noise in a large signal transduction network

    Directory of Open Access Journals (Sweden)

    Ruohonen Keijo

    2011-06-01

    Full Text Available Abstract Background Biochemical systems are inherently noisy due to the discrete reaction events that occur in a random manner. Although noise is often perceived as a disturbing factor, the system might actually benefit from it. In order to understand the role of noise better, its quality must be studied in a quantitative manner. Computational analysis and modeling play an essential role in this demanding endeavor. Results We implemented a large nonlinear signal transduction network combining protein kinase C, mitogen-activated protein kinase, phospholipase A2, and β isoform of phospholipase C networks. We simulated the network in 300 different cellular volumes using the exact Gillespie stochastic simulation algorithm and analyzed the results in both the time and frequency domain. In order to perform simulations in a reasonable time, we used modern parallel computing techniques. The analysis revealed that time and frequency domain characteristics depend on the system volume. The simulation results also indicated that there are several kinds of noise processes in the network, all of them representing different kinds of low-frequency fluctuations. In the simulations, the power of noise decreased on all frequencies when the system volume was increased. Conclusions We concluded that basic frequency domain techniques can be applied to the analysis of simulation results produced by the Gillespie stochastic simulation algorithm. This approach is suited not only to the study of fluctuations but also to the study of pure noise processes. Noise seems to have an important role in biochemical systems and its properties can be numerically studied by simulating the reacting system in different cellular volumes. Parallel computing techniques make it possible to run massive simulations in hundreds of volumes and, as a result, accurate statistics can be obtained from computational studies.
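
    For readers unfamiliar with the exact stochastic simulation algorithm used above, here is a minimal Gillespie sketch for a single reversible binding reaction; the rate constants and copy numbers are arbitrary and bear no relation to the authors' signal transduction model.

```python
import random, math

# Species counts for A + B <-> AB with mass-action propensities (toy parameters).
state = {"A": 100, "B": 100, "AB": 0}
k_on, k_off = 0.001, 0.1
t, t_end, trajectory = 0.0, 10.0, []

while t < t_end:
    a1 = k_on * state["A"] * state["B"]        # propensity of binding
    a2 = k_off * state["AB"]                   # propensity of unbinding
    a0 = a1 + a2
    if a0 == 0:
        break
    t += -math.log(1.0 - random.random()) / a0 # exponentially distributed waiting time
    if random.random() * a0 < a1:              # choose which reaction fires
        state["A"] -= 1; state["B"] -= 1; state["AB"] += 1
    else:
        state["A"] += 1; state["B"] += 1; state["AB"] -= 1
    trajectory.append((t, state["AB"]))

print("final AB count:", state["AB"])
```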

  11. Computational tools for large-scale biological network analysis

    OpenAIRE

    Pinto, José Pedro Basto Gouveia Pereira

    2012-01-01

    Doctoral thesis in Informatics. The surge of the field of Bioinformatics, among other contributions, provided biological researchers with powerful computational methods for processing and analysing the large amount of data coming from recent biological experimental techniques such as genome sequencing and other omics. Naturally, this led to the opening of new avenues of biological research, among which is the analysis of large-scale biological networks. The an...

  12. Computers and networks in the age of globalization

    DEFF Research Database (Denmark)

    Bloch Rasmussen, Leif; Beardon, Colin; Munari, Silvio

    In modernity, an individual identity was constituted from civil society, while in a globalized network society, human identity, if it develops at all, must grow from communal resistance. A communal resistance to an abstract conceptualized world, where there is no possibility for perception...... their lives in a diversity of social and cultural contexts. In so doing, the book tries to imagine in what kind of networks humans may choose and act based on the knowledge and empirical evidence presented in the papers. The topics covered in the book include: people and their changing values; citizens...... in a network society; the individual and knowledge-based organizations; human responsibility and technology; and exclusion and regeneration. This volume contains the edited proceedings of the Fifth World Conference on Human Choice and Computers (HCC-5), which was sponsored by the International Federation...

  13. Computer simulation of randomly cross-linked polymer networks

    CERN Document Server

    Williams, T P

    2002-01-01

    In this work, Monte Carlo and Stochastic Dynamics computer simulations of mesoscale models of randomly cross-linked networks were undertaken. Task parallel implementations of the lattice Monte Carlo Bond Fluctuation model and Kremer-Grest Stochastic Dynamics bead-spring continuum model were designed and used for this purpose. Lattice and continuum precursor melt systems were prepared and then cross-linked to varying degrees. The resultant networks were used to study structural changes during deformation and relaxation dynamics. The effects of a random network topology featuring a polydisperse distribution of strand lengths and an abundance of pendant chain ends were qualitatively compared to recently published work. A preliminary investigation into the effects of temperature on the structural and dynamical properties was also undertaken. Structural changes during isotropic swelling and uniaxial deformation revealed a pronounced non-affine deformation dependent on the degree of cross-linking. Fractal heterogeneiti...

  14. Preserving access to ALEPH computing environment via virtual machines

    Science.gov (United States)

    Coscetti, Simone; Boccali, Tommaso; Maggi, Marcello; Arezzini, Silvia

    2014-06-01

    The ALEPH Collaboration [1] took data at the LEP (CERN) electron-positron collider in the period 1989-2000, producing more than 300 scientific papers. While most of the Collaboration's activities have stopped in recent years, the data collected still has physics potential, with new theoretical models emerging that call for checks against data at the Z and WW production energies. An attempt to revive and preserve the ALEPH Computing Environment is presented; the aim is not only the preservation of the data files (usually called bit preservation), but of the full environment a physicist would need to perform brand new analyses. Technically, a Virtual Machine approach has been chosen, using the VirtualBox platform. Concerning simulated events, the full chain from event generators to physics plots is possible, and reprocessing of data events is also functioning. Interactive tools like the DALI event display can be used on both data and simulated events. The Virtual Machine approach is suited both for interactive usage and for massive computing using Cloud-like approaches.

  15. Development of a Computational Steering Framework for High Performance Computing Environments on Blue Gene/P Systems

    KAUST Repository

    Danani, Bob K.

    2012-07-01

    Computational steering has revolutionized the traditional workflow in high performance computing (HPC) applications. The standard workflow that consists of preparation of an application’s input, running of a simulation, and visualization of simulation results in a post-processing step is now transformed into a real-time interactive workflow that significantly reduces development and testing time. Computational steering provides the capability to direct or re-direct the progress of a simulation application at run-time. It allows modification of application-defined control parameters at run-time using various user-steering applications. In this project, we propose a computational steering framework for HPC environments that provides an innovative solution and easy-to-use platform, which allows users to connect and interact with running application(s) in real-time. This framework uses RealityGrid as the underlying steering library and adds several enhancements to the library to enable steering support for Blue Gene systems. Included in the scope of this project is the development of a scalable and efficient steering relay server that supports many-to-many connectivity between multiple steered applications and multiple steering clients. Steered applications can range from intermediate simulation and physical modeling applications to complex computational fluid dynamics (CFD) applications or advanced visualization applications. The Blue Gene supercomputer presents special challenges for remote access because the compute nodes reside on private networks. This thesis presents an implemented solution and demonstrates it on representative applications. Thorough implementation details and application enablement steps are also presented in this thesis to encourage direct usage of this framework.

  16. A Study of the Impact of Virtualization on the Computer Networks

    OpenAIRE

    Timalsena, Pratik

    2013-01-01

    Virtualization is an emerging and fast-advancing sector of information technology, and it is being popularly implemented worldwide. Computer networks are not isolated from the global impact of virtualization; virtualization is being deployed on computer networks to a great extent and has, in general, become an essential tool for them. This report presents a high-level overview of the impact of virtualization on computer networks. The report...

  17. Large-scale parallel genome assembler over cloud computing environment.

    Science.gov (United States)

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of traditional HPC cluster.
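
    The de Bruijn graph at the heart of assemblers such as GiGA links the (k-1)-mer prefix of each k-mer in a read to its (k-1)-mer suffix. The sketch below builds such a graph from a few toy reads; k and the reads are arbitrary, and the real assembler distributes this step over Hadoop/Giraph rather than running it on one machine.

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """Map each (k-1)-mer to the (k-1)-mers that follow it in some read."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])   # prefix -> suffix edge
    return graph

reads = ["ACGTAC", "CGTACG", "GTACGT"]       # toy error-free reads
for node, nexts in sorted(de_bruijn(reads, k=4).items()):
    print(node, "->", sorted(nexts))
```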

  18. A Multipurpose Key Agreement Scheme in Ubiquitous Computing Environments

    Directory of Open Access Journals (Sweden)

    Chin-Chen Chang

    2015-01-01

    Full Text Available Due to the rapid advancement of cryptographic techniques, the smart card has recently become a popular device because it is capable of storing and computing essential information with such properties as tamper resistance. However, many service providers must satisfy the user’s desire to be able to access services anytime and anywhere with the smart card computing devices. Therefore, multipurpose smart cards have become very popular identification tokens. In 2011, Wang et al. proposed an authentication and key agreement scheme for smart card use. Even so, two drawbacks still exist; that is, (1) the security requirement of mutual authentication has not been satisfied and (2) the authentication scheme cannot be used for multipurpose smart cards. In this paper, we propose an efficient and secure multipurpose, authenticated, key agreement scheme in which the user is required to register only once and can be authenticated without any registration center. Furthermore, the proposed scheme can be used in ubiquitous environments because of its low computation and communication overhead.

  19. Scholarly information discovery in the networked academic learning environment

    CERN Document Server

    Li, LiLi

    2014-01-01

    In the dynamic and interactive academic learning environment, students are required to have qualified information literacy competencies while critically reviewing print and electronic information. However, many undergraduates encounter difficulties in searching peer-reviewed information resources. Scholarly Information Discovery in the Networked Academic Learning Environment is a practical guide for students determined to improve their academic performance and career development in the digital age. Also written with academic instructors and librarians in mind who need to show their students how to access and search academic information resources and services, the book serves as a reference to promote information literacy instructions. This title consists of four parts, with chapters on the search for online and printed information via current academic information resources and services: part one examines understanding information and information literacy; part two looks at academic information delivery in the...

  20. Social Networks as Learning Environments for Higher Education

    Directory of Open Access Journals (Sweden)

    J.A.Cortés

    2014-09-01

    Full Text Available Learning is considered a social activity: a student does not learn only from the teacher and the textbook, or only in the classroom, but also from many other agents such as the media, peers, and society in general. Since the explosion of the Internet, information is within the reach of everyone, and this is where the main opportunity for new technologies applied to education lies, together with recent socialization trends that can be leveraged not only to inform daily practice but also as a tool for exploring different branches of educational research. One can foresee the future of higher education as a social learning environment, open and collaborative, where people construct knowledge in interaction with others in a comprehensive manner. The mobility and ubiquity that mobile devices provide enable connection from anywhere and at any time. Modern educational environments can be expected to extend the classroom into digital environments through mobile devices, so that students and teachers can build the teaching-learning process collectively. These partial results derive from a research project approved by CONADI at the Universidad Cooperativa de Colombia, "Social Networks: A teaching strategy in learning environments in higher education."

  1. Application of artificial neural networks in computer-aided diagnosis.

    Science.gov (United States)

    Liu, Bei

    2015-01-01

    Computer-aided diagnosis is a diagnostic procedure in which a radiologist uses the outputs of computer analysis of medical images as a second opinion in the interpretation of medical images, either to help with lesion detection or to help determine if the lesion is benign or malignant. Artificial neural networks (ANNs) are usually employed to formulate the statistical models for computer analysis. Receiver operating characteristic curves are used to evaluate the performance of the ANN alone, as well as the diagnostic performance of radiologists who take into account the ANN output as a second opinion. In this chapter, we use mammograms to illustrate how an ANN model is trained, tested, and evaluated, and how a radiologist should use the ANN output as a second opinion in CAD.
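
    The receiver operating characteristic evaluation described above can be computed directly from the ANN's output scores. The snippet below sweeps a decision threshold over toy scores to obtain (false-positive rate, true-positive rate) pairs and a trapezoidal AUC; all numbers are invented.

```python
import numpy as np

scores = np.array([0.95, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2])  # ANN outputs (toy)
labels = np.array([1,    1,   0,   1,   0,    0,   1,   0])    # 1 = malignant

# Sweep the decision threshold over every observed score (highest first).
thresholds = np.sort(np.unique(scores))[::-1]
tpr = [(scores >= t)[labels == 1].mean() for t in thresholds]  # sensitivity
fpr = [(scores >= t)[labels == 0].mean() for t in thresholds]  # 1 - specificity

# Area under the ROC curve via the trapezoidal rule (prepend the (0, 0) point).
auc = np.trapz([0.0] + tpr, [0.0] + fpr)
print("operating points:", list(zip(fpr, tpr)))
print("AUC:", round(float(auc), 3))
```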

  2. SNP by SNP by environment interaction network of alcoholism.

    Science.gov (United States)

    Zollanvari, Amin; Alterovitz, Gil

    2017-03-14

    Alcoholism has a strong genetic component. Twin studies have demonstrated the heritability of a large proportion of phenotypic variance of alcoholism ranging from 50-80%. The search for genetic variants associated with this complex behavior has epitomized sequence-based studies for nearly a decade. The limited success of genome-wide association studies (GWAS), possibly precipitated by the polygenic nature of complex traits and behaviors, however, has demonstrated the need for novel, multivariate models capable of quantitatively capturing interactions between a host of genetic variants and their association with non-genetic factors. In this regard, capturing the network of SNP by SNP or SNP by environment interactions has recently gained much interest. Here, we assessed 3,776 individuals to construct a network capable of detecting and quantifying the interactions within and between plausible genetic and environmental factors of alcoholism. In this regard, we propose the use of first-order dependence tree of maximum weight as a potential statistical learning technique to delineate the pattern of dependencies underpinning such a complex trait. Using a predictive based analysis, we further rank the genes, demographic factors, biological pathways, and the interactions represented by our SNP × SNP × E network. The proposed framework is quite general and can be potentially applied to the study of other complex traits.
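
    The "first-order dependence tree of maximum weight" is the classical Chow-Liu construction: estimate pairwise mutual information between variables and keep a maximum-weight spanning tree. The sketch below applies it to a few random binary toy variables; the data are purely illustrative and unrelated to the study's SNP and environment measures.

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Empirical mutual information between two discrete variables."""
    mi, n = 0.0, len(x)
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

rng = np.random.default_rng(1)
data = rng.integers(0, 2, size=(500, 5))      # 500 samples of 5 binary variables

# Kruskal-style maximum-weight spanning tree over pairwise mutual information.
edges = sorted(((mutual_information(data[:, i], data[:, j]), i, j)
                for i, j in combinations(range(5), 2)), reverse=True)
parent = list(range(5))
def find(v):
    while parent[v] != v:
        v = parent[v]
    return v

tree = []
for w, i, j in edges:
    ri, rj = find(i), find(j)
    if ri != rj:                               # keep the edge if it joins two components
        parent[ri] = rj
        tree.append((i, j, round(w, 4)))
print("dependence tree edges:", tree)
```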

  3. Wireless local area network in a prehospital environment

    Directory of Open Access Journals (Sweden)

    Grimes Gary J

    2004-08-01

    Full Text Available Abstract Background Wireless local area networks (WLANs) are considered the next generation of clinical data networks. They open the possibility for capturing clinical data in a prehospital setting (e.g., a patient's home) using various devices, such as personal digital assistants, laptops, digital electrocardiogram (EKG) machines, and even cellular phones, and transmitting the captured data to a physician or hospital. The transmission rate is crucial to the applicability of the technology in the prehospital setting. Methods We created two separate WLANs to simulate a virtual local area network environment such as in a patient's home or an emergency room (ER). The effects of different methods of data transmission, number of clients, and roaming among different access points on the file transfer rate were determined. Results The present results suggest that it is feasible to transfer small files such as patient demographics and EKG data from the patient's home to the ER at a reasonable speed. Encryption, user control, and access control were implemented and results discussed. Conclusions Implementing a WLAN in a centrally managed and multiple-layer-controlled access control server is the key to ensuring its security and accessibility. Future studies should focus on product capacity, speed, compatibility, interoperability, and security management.

  4. Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments

    Directory of Open Access Journals (Sweden)

    Jose M. Moya

    2012-08-01

    Full Text Available Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, impose constantly increasing computational demands in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities present a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.

  5. Scheduling Method of Data-Intensive Applications in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    Xiong Fu

    2015-01-01

    Full Text Available The virtualization of cloud computing improves the utilization of resources and energy, and a cloud user can deploy his/her own applications and related data on a pay-as-you-go basis. The communications between an application and a data storage node, as well as within the application, have a great impact on the execution efficiency of the application. The locations of the subtasks of an application and the data transferred between the subtasks are the main reasons why communication delay exists. The communication delay can affect the completion time of the application. In this paper, we take into account the data transmission time and communications between subtasks and propose a heuristic optimal virtual machine (VM) placement algorithm. Related simulations demonstrate that this algorithm can reduce the completion time of user tasks and ensure the feasibility and effectiveness of the overall network performance of applications when running in a cloud computing environment.
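
    A very rough way to picture such communication-aware placement is a greedy heuristic that assigns each subtask to the host that currently minimizes its estimated transfer cost to the data store and to already-placed peers. The sketch below, with invented latencies, is our own simplification and not the authors' algorithm.

```python
# Toy communication-aware placement: hosts are scored by the latency (ms) they add
# between a subtask, its data store, and subtasks already placed (invented numbers).
hosts = {"h1": {"to_data": 2.0}, "h2": {"to_data": 5.0}, "h3": {"to_data": 1.0}}
inter_host = {("h1", "h2"): 3.0, ("h1", "h3"): 1.0, ("h2", "h3"): 4.0}
link = lambda a, b: 0.0 if a == b else inter_host[tuple(sorted((a, b)))]

subtasks = [("t1", []), ("t2", ["t1"]), ("t3", ["t1", "t2"])]  # (name, peers it talks to)
placement = {}
for task, peers in subtasks:
    # Estimated delay = latency to the data store + latency to every already-placed peer.
    cost = {h: hosts[h]["to_data"] + sum(link(h, placement[p]) for p in peers)
            for h in hosts}
    placement[task] = min(cost, key=cost.get)

print(placement)
```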

  6. Emergence of microbial networks as response to hostile environments.

    Science.gov (United States)

    Madeo, Dario; Comolli, Luis R; Mocenni, Chiara

    2014-01-01

    The majority of microorganisms live in complex communities under varying conditions. One pivotal question in evolutionary biology is the emergence of cooperative traits and their sustainment in altered environments or in the presence of free-riders. Co-occurrence patterns in the spatial distribution of biofilms can help define species' identities, and systems biology tools are revealing networks of interacting microorganisms. However, networks of inter-dependencies involving micro-organisms in the planktonic phase may be just as important, with the added complexity that they are not bounded in space. An integrated approach linking imaging, "Omics" and modeling has the potential to enable new hypotheses and working models. In order to understand how cooperation can emerge and be maintained without abilities like memory or recognition we use evolutionary game theory as the natural framework to model cell-cell interactions arising from evolutive decisions. We consider a finite population distributed in a spatial domain (biofilm), and divided into two interacting classes with different traits. This interaction can be weighted by distance, and produces physical connections between two elements allowing them to exchange finite amounts of energy and matter. Available strategies to each individual of one class in the population are the propensities or "willingness" to connect to any individual of the other class. Following evolutionary game theory, we propose a mathematical model which explains the patterns of connections which emerge when individuals are able to find connection strategies that asymptotically optimize their fitness. The process explains the formation of a network for efficiently exchanging energy and matter among individuals and thus ensuring their survival in hostile environments.
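
    Evolutionary game dynamics of the kind used above are often summarized by the replicator equation, in which a strategy's share grows when its payoff exceeds the population average. The sketch below iterates that equation for a toy two-strategy ("connect" vs. "ignore") game; the payoff matrix is an arbitrary illustration, not the paper's spatial model.

```python
import numpy as np

# Payoff matrix for a toy two-strategy game: rows = focal strategy ("connect",
# "ignore"), columns = opponent strategy (invented values).
A = np.array([[3.0, 0.5],
              [2.0, 1.0]])

x = np.array([0.1, 0.9])          # initial shares of the two strategies
for _ in range(200):
    payoffs = A @ x               # expected payoff of each strategy
    avg = x @ payoffs             # population-average payoff
    x = x + 0.01 * x * (payoffs - avg)   # discrete-time replicator update
    x = x / x.sum()               # renormalize to keep shares on the simplex

print("long-run strategy shares:", np.round(x, 3))
```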

  7. Emergence of microbial networks as response to hostile environments

    Directory of Open Access Journals (Sweden)

    Dario eMadeo

    2014-08-01

    Full Text Available The majority of microorganisms live in complex communities under varying conditions. One pivotal question in evolutionary biology is the emergence of cooperative traits and their sustainment in altered environments or in the presence of free-riders. Co-occurrence patterns in the spatial distribution of biofilms can help define species' identities, and systems biology tools are revealing networks of interacting microorganisms. However, networks of inter-dependencies involving micro-organisms in the planktonic phase may be just as important, with the added complexity that they are not bounded in space. An integrated approach linking imaging, "Omics" and modeling has the potential to enable new hypotheses and working models. In order to understand how cooperation can emerge and be maintained without abilities like memory or recognition we use evolutionary game theory as the natural framework to model cell-cell interactions arising from evolutive decisions. We consider a finite population distributed in a spatial domain (biofilm), and divided into two interacting classes with different traits. This interaction can be weighted by distance, and produces physical connections between two elements allowing them to exchange finite amounts of energy and matter. Available strategies to each individual of one class in the population are the propensities or "willingness" to connect to any individual of the other class. Following evolutionary game theory, we propose a mathematical model which explains the patterns of connections which emerge when individuals are able to find connection strategies that asymptotically optimize their fitness. The process explains the formation of a network for efficiently exchanging energy and matter among individuals and thus ensuring their survival in hostile environments.

  8. Line-plane broadcasting in a data communications network of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Berg, Jeremy E.; Blocksome, Michael A.; Smith, Brian E.

    2010-06-08

    Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network.
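
    The line-plane broadcast can be visualized as three phases on a 3-D mesh: the root sends along its x-axis line, every node on that line sends along its y-axis line, and every node in the resulting plane sends along its z-axis line. The sketch below simulates those phases on a small mesh to confirm that every node is reached; the mesh dimensions are arbitrary.

```python
import itertools

DIMS = (4, 3, 2)                     # toy 4x3x2 mesh of compute nodes
root = (0, 0, 0)

def flood_axis(sources, axis):
    """Every source sends the message to all nodes along the given axis."""
    reached = set()
    for node in sources:
        for i in range(DIMS[axis]):
            reached.add(tuple(i if d == axis else node[d] for d in range(3)))
    return reached

line  = flood_axis({root}, axis=0)   # phase 1: along the first dimension
plane = flood_axis(line,  axis=1)    # phase 2: along the second dimension
cube  = flood_axis(plane, axis=2)    # phase 3: along the third dimension

assert cube == set(itertools.product(*map(range, DIMS)))
print(f"all {len(cube)} nodes reached in three phases")
```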

  9. Messenger in The Barn: networking in a learning environment

    Directory of Open Access Journals (Sweden)

    Malcolm Rutter

    2009-12-01

    Full Text Available This case study describes the use of a synchronous communication application (MSN Messenger) in a large academic computing environment. It draws on data from interviews, questionnaires and student marks to examine the link between use of the application and success measured through module marks. The relationship is not simple. Total abstainers and heavy users come out best, while medium level users do less well, indicating the influence of two factors. The discussion section suggests possible factors. The study also highlights the benefits of support and efficiency of communication that the application brings. Although there have been many studies of synchronous communication tool use in the office and in social life, this is one of the first to examine its informal use in an academic environment.

  10. Advanced Scientific Computing Research Network Requirements: ASCR Network Requirements Review Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Bacon, Charles [Argonne National Lab. (ANL), Argonne, IL (United States); Bell, Greg [ESnet, Berkeley, CA (United States); Canon, Shane [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dart, Eli [ESnet, Berkeley, CA (United States); Dattoria, Vince [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Goodwin, Dave [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Lee, Jason [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hicks, Susan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Holohan, Ed [Argonne National Lab. (ANL), Argonne, IL (United States); Klasky, Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lauzon, Carolyn [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Rogers, Jim [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Skinner, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tierney, Brian [ESnet, Berkeley, CA (United States)

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  11. Cloud-Centric and Logically Isolated Virtual Network Environment Based on Software-Defined Wide Area Network

    Directory of Open Access Journals (Sweden)

    Dongkyun Kim

    2017-12-01

    Full Text Available Recent development of distributed cloud environments requires advanced network infrastructure in order to facilitate network automation, virtualization, high performance data transfer, and secured access of end-to-end resources across regional boundaries. In order to meet these innovative cloud networking requirements, software-defined wide area network (SD-WAN) is primarily demanded to converge distributed cloud resources (e.g., virtual machines (VMs)) in a programmable and intelligent manner over distant networks. Therefore, this paper proposes a logically isolated networking scheme designed to integrate distributed cloud resources to dynamic and on-demand virtual networking over SD-WAN. The performance evaluation and experimental results of the proposed scheme indicate that virtual network convergence time is minimized in two different network models such as: (1) an operating OpenFlow-oriented SD-WAN infrastructure (KREONET-S), which is deployed on the advanced national research network in Korea, and (2) Mininet-based experimental and emulated networks.

  12. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Science.gov (United States)

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  13. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Directory of Open Access Journals (Sweden)

    Bruno Guazzelli Batista

    Full Text Available Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  14. The Worldviews Network: Transformative Global Change Education in Immersive Environments

    Science.gov (United States)

    Hamilton, H.; Yu, K. C.; Gardiner, N.; McConville, D.; Connolly, R.; Irving, L. S.

    2011-12-01

    Our modern age is defined by an astounding capacity to generate scientific information. From DNA to dark matter, human ingenuity and technologies create an endless stream of data about ourselves and the world of which we are a part. Yet we largely founder in transforming information into understanding, and understanding into rational action for our society as a whole. Earth and biodiversity scientists are especially frustrated by this impasse because the data they gather often point to a clash between Earth's capacity to sustain life and the decisions that humans make to garner the planet's resources. Immersive virtual environments offer an underexplored link in the translation of scientific data into public understanding, dialogue, and action. The Worldviews Network is a collaboration of scientists, artists, and educators focused on developing best practices for the use of immersive environments for science-based ecological literacy education. A central tenet of the Worldviews Network is that there are multiple ways to know and experience the world, so we are developing scientifically accurate, geographically relevant, and culturally appropriate programming to promote ecological literacy within informal science education programs across the United States. The goal of Worldviews Network is to offer transformative learning experiences, in which participants are guided on a process integrating immersive visual explorations, critical reflection and dialogue, and design-oriented approaches to action - or more simply, seeing, knowing, and doing. Our methods center on live presentations, interactive scientific visualizations, and sustainability dialogues hosted at informal science institutions. Our approach uses datasets from the life, Earth, and space sciences to illuminate the complex conditions that support life on earth and the ways in which ecological systems interact. We are leveraging scientific data from federal agencies, non-governmental organizations, and our

  15. Applications of the pipeline environment for visual informatics and genomics computations

    Directory of Open Access Journals (Sweden)

    Genco Alex

    2011-07-01

    Full Text Available Abstract Background Contemporary informatics and genomics research require efficient, flexible and robust management of large heterogeneous data, advanced computational tools, powerful visualization, reliable hardware infrastructure, interoperability of computational resources, and detailed data and analysis-protocol provenance. The Pipeline is a client-server distributed computational environment that facilitates the visual graphical construction, execution, monitoring, validation and dissemination of advanced data analysis protocols. Results This paper reports on the applications of the LONI Pipeline environment to address two informatics challenges - graphical management of diverse genomics tools, and the interoperability of informatics software. Specifically, this manuscript presents the concrete details of deploying general informatics suites and individual software tools to new hardware infrastructures, the design, validation and execution of new visual analysis protocols via the Pipeline graphical interface, and integration of diverse informatics tools via the Pipeline eXtensible Markup Language syntax. We demonstrate each of these processes using several established informatics packages (e.g., miBLAST, EMBOSS, mrFAST, GWASS, MAQ, SAMtools, Bowtie) for basic local sequence alignment and search, molecular biology data analysis, and genome-wide association studies. These examples demonstrate the power of the Pipeline graphical workflow environment to enable integration of bioinformatics resources which provide a well-defined syntax for dynamic specification of the input/output parameters and the run-time execution controls. Conclusions The LONI Pipeline environment http://pipeline.loni.ucla.edu provides a flexible graphical infrastructure for efficient biomedical computing and distributed informatics research. The interactive Pipeline resource manager enables the utilization and interoperability of diverse types of informatics resources. The

  16. [Forensic evidence-based medicine in computer communication networks].

    Science.gov (United States)

    Qiu, Yun-Liang; Peng, Ming-Qi

    2013-12-01

    As an important component of judicial expertise, forensic science is broad and highly specialized. With the development of network technology, the increase of information resources, and the improvement of people's legal consciousness, forensic scientists encounter many new problems and have been required to meet higher evidentiary standards in litigation. In view of this, an evidence-based concept should be established in forensic medicine. We should find the most suitable methods in the forensic science field and other related areas to solve specific problems in the evidence-based mode. Evidence-based practice can solve problems in the legal medical field, and it will play a great role in promoting the progress and development of forensic science. This article reviews the basic theory of evidence-based medicine and its effect, way, method, and evaluation in forensic medicine in order to discuss the application value of forensic evidence-based medicine in computer communication networks.

  17. Computational analysis of protein interaction networks for infectious diseases.

    Science.gov (United States)

    Pan, Archana; Lahiri, Chandrajit; Rajendiran, Anjana; Shanmugham, Buvaneswari

    2016-05-01

    Infectious diseases caused by pathogens, including viruses, bacteria and parasites, pose a serious threat to human health worldwide. Frequent changes in the pattern of infection mechanisms and the emergence of multidrug-resistant strains among pathogens have weakened the current treatment regimen. This necessitates the development of new therapeutic interventions to prevent and control such diseases. To meet this need, analysis of protein interaction networks (PINs) has gained importance as one of the promising strategies. The present review discusses various computational approaches for analysing PINs in the context of infectious diseases. Topology and modularity analyses of the networks and their biological relevance are covered, along with the state of host-pathogen and intra-pathogen protein interaction studies to date. This should provide useful insights to the research community, enabling it to design novel biomedicines against such infectious diseases. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  18. Reducing Computational Overhead of Network Coding with Intrinsic Information Conveying

    DEFF Research Database (Denmark)

    Heide, Janus; Zhang, Qi; Pedersen, Morten V.

    This paper investigates the possibility of intrinsic information conveying in network coding systems. The information is embedded into the coding vector by constructing the vector based on a set of predefined rules, and it can subsequently be retrieved by any receiver. The starting point...... is RLNC (Random Linear Network Coding), and the goal is to reduce the amount of coding operations at both the coding and decoding nodes while removing the need for dedicated signaling messages. In a traditional RLNC system, the coding operation takes up significant computational resources and adds...... to the overall energy consumption, which is particularly problematic for mobile battery-driven devices. In RLNC, coding is performed over a FF (Finite Field). We propose to divide this field into sub fields, and let each sub field signify some information or state. In order to embed the information correctly...
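    The coding-vector mechanics that the proposal builds on can be illustrated with a minimal sketch. The example below performs RLNC-style encoding over GF(2) (plain XOR arithmetic) rather than the larger field a practical system would use, and the packet length and generation size are illustrative assumptions; the paper's sub-field signalling rules are only indicated in a comment, not implemented.

```python
import random

PACKET_LEN = 8    # bytes per source packet (illustrative)
GENERATION = 4    # number of source packets combined per generation

def encode(packets):
    """Produce one RLNC-style coded packet over GF(2): a random XOR combination."""
    vector = [random.randint(0, 1) for _ in packets]
    while not any(vector):                     # an all-zero vector carries nothing
        vector = [random.randint(0, 1) for _ in packets]
    payload = bytearray(PACKET_LEN)
    for bit, pkt in zip(vector, packets):
        if bit:
            for i in range(PACKET_LEN):
                payload[i] ^= pkt[i]
    # In the scheme sketched in the abstract, the structure of the coding vector
    # itself (e.g., which sub-field the coefficients are drawn from) would also
    # signal some state to the receiver; that rule set is not reproduced here.
    return vector, bytes(payload)

if __name__ == "__main__":
    source = [bytes(random.randrange(256) for _ in range(PACKET_LEN))
              for _ in range(GENERATION)]
    vec, coded = encode(source)
    # A receiver collects coded packets until the vectors span GF(2)^GENERATION,
    # then solves the linear system to recover the generation.
    print("coding vector:", vec, "payload:", coded.hex())
```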

  19. Some key considerations in evolving a computer system and software engineering support environment for the space station program

    Science.gov (United States)

    Mckay, C. W.; Bown, R. L.

    1985-01-01

    The space station data management system involves networks of computing resources that must work cooperatively and reliably over an indefinite life span. This program requires a long schedule of modular growth and an even longer period of maintenance and operation. The development and operation of space station computing resources will involve a spectrum of systems and software life cycle activities distributed across a variety of hosts, an integration, verification, and validation host with test bed, and distributed targets. The requirement for the early establishment and use of an appropriate Computer Systems and Software Engineering Support Environment is identified. This environment will support the Research and Development Productivity challenges presented by the space station computing system.

  20. Traffic Flow Prediction Model for Large-Scale Road Network Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhaosheng Yang

    2014-01-01

    Full Text Available To increase the efficiency and precision of large-scale road network traffic flow prediction, a genetic algorithm-support vector machine (GA-SVM) model based on cloud computing is proposed in this paper, building on an analysis of the characteristics and shortcomings of the genetic algorithm and the support vector machine. In the cloud computing environment, the SVM parameters are first optimized by a parallel genetic algorithm, and this optimized parallel SVM model is then used to predict traffic flow. Using the traffic flow data of Haizhu District in Guangzhou City, the proposed model was verified and compared with the serial GA-SVM model and a parallel GA-SVM model based on MPI (message passing interface). The results demonstrate that the parallel GA-SVM model based on cloud computing has higher prediction accuracy, shorter running time, and higher speedup.
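    As a rough illustration of the sequential building block being parallelized, the sketch below tunes SVR parameters with a tiny genetic algorithm using scikit-learn on synthetic data. The parameter ranges, GA settings, and stand-in traffic series are assumptions made for illustration; the cloud/MPI parallelization that is the paper's actual contribution is not shown.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a traffic-flow series: flow as a function of time of day.
t = rng.uniform(0, 24, 300)
flow = 200 + 80 * np.sin(t / 24 * 2 * np.pi) + rng.normal(0, 10, t.size)
X, y = t.reshape(-1, 1), flow

def fitness(ind):
    """Mean cross-validated R^2 of an SVR with the individual's (C, gamma)."""
    c, gamma = np.exp(ind)            # individuals store log-parameters
    model = SVR(C=c, gamma=gamma)
    return cross_val_score(model, X, y, cv=3).mean()

# Minimal generational GA: keep the best individuals, mutate them with Gaussian noise.
pop = rng.uniform([-2, -6], [6, 2], size=(12, 2))   # log C in [-2, 6], log gamma in [-6, 2]
for generation in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    elite = pop[np.argsort(scores)[-4:]]             # 4 best individuals survive
    children = elite[rng.integers(0, 4, size=8)] + rng.normal(0, 0.3, size=(8, 2))
    pop = np.vstack([elite, children])

best = max(pop, key=fitness)
print("best C=%.3f gamma=%.4f" % tuple(np.exp(best)))
```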

  1. Symbolic dynamics and computation in model gene networks.

    Science.gov (United States)

    Edwards, R.; Siegelmann, H. T.; Aziza, K.; Glass, L.

    2001-03-01

    We analyze a class of ordinary differential equations representing a simplified model of a genetic network. In this network, the model genes control the production rates of other genes by a logical function. The dynamics in these equations are represented by a directed graph on an n-dimensional hypercube (n-cube) in which each edge is directed in a unique orientation. The vertices of the n-cube correspond to orthants of state space, and the edges correspond to boundaries between adjacent orthants. The dynamics in these equations can be represented symbolically. Starting from a point on the boundary between neighboring orthants, the equation is integrated until the boundary is crossed for a second time. Each different cycle, corresponding to a different sequence of orthants that are traversed during the integration of the equation always starting on a boundary and ending the first time that same boundary is reached, generates a different letter of the alphabet. A word consists of a sequence of letters corresponding to a possible sequence of orthants that arise from integration of the equation starting and ending on the same boundary. The union of the words defines the language. Letters and words correspond to analytically computable Poincare maps of the equation. This formalism allows us to define bifurcations of chaotic dynamics of the differential equation that correspond to changes in the associated language. Qualitative knowledge about the dynamics found by integrating the equation can be used to help solve the inverse problem of determining the underlying network generating the dynamics. This work places the study of dynamics in genetic networks in a context comprising both nonlinear dynamics and the theory of computation. (c) 2001 American Institute of Physics.
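    A minimal numerical sketch of the class of equations described (piecewise-linear "Glass" networks) is given below: each variable decays at unit rate toward a production level that depends only on the current orthant, and the symbolic record is simply the sequence of orthants visited. The particular three-gene logical rule and the integration settings are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def F(signs):
    """Logical production rates: each gene is switched on when another gene is OFF.
    This repressilator-like three-gene rule is only an illustrative choice."""
    s = signs
    return np.array([1.0 if not s[2] else -1.0,
                     1.0 if not s[0] else -1.0,
                     1.0 if not s[1] else -1.0])

def orthant(x):
    return tuple(x > 0)

# Euler integration of dx/dt = -x + F(orthant(x)); record each orthant transition.
x = np.array([0.5, -0.5, 0.5])
dt, symbols = 1e-3, [orthant(x)]
for _ in range(200_000):
    x = x + dt * (-x + F(orthant(x)))
    if orthant(x) != symbols[-1]:
        symbols.append(orthant(x))

print("orthant sequence (first 10 transitions):", symbols[:10])
```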

  2. System architecture for ubiquitous live video streaming in university network environment

    CSIR Research Space (South Africa)

    Dludla, AG

    2013-09-01

    Full Text Available The recent growth of ubiquitous computing brings to the networking discipline new classes of home, campus, and mobile networks. This would enable education service providers to provide services to learners anywhere, anytime and not only through...

  3. Trajectory Based Optimal Segment Computation in Road Network Databases

    DEFF Research Database (Denmark)

    Li, Xiaohui; Ceikute, Vaida; Jensen, Christian S.

    Finding a location for a new facility such that the facility attracts the maximal number of customers is a challenging problem. Existing studies either model customers as static sites and thus do not consider customer movement, or they focus on theoretical aspects and do not provide solutions tha...... that adopt different approaches to computing the query. Algorithm AUG uses graph augmentation, and ITE uses iterative road-network partitioning. Empirical studies with real data sets demonstrate that the algorithms are capable of offering high performance in realistic settings....

  4. Representing the environment 3.0. Maps, models, networks.

    Directory of Open Access Journals (Sweden)

    Letizia Bollini

    2014-05-01

    Full Text Available Web 3.0 is changing the way we live in and perceive the anthropized environment, creating a stratification of levels of experience mediated by devices. If the urban landscape is designed, shaped and planned space, there is also a social landscape that overwrites the territory with values, shared representations and images, and narratives of personal and collective history. Mobile technology introduces an additional parameter, a kind of non-place, which allows the coexistence of the here and the elsewhere in a sort of digital landscape. Maps, mental models, and the system of social networks thus become the way to present, be represented, and represent oneself in a kind of ideal coring of the co-present levels of physical, cognitive and collective space.

  5. NML Computation Algorithms for Tree-Structured Multinomial Bayesian Networks

    Directory of Open Access Journals (Sweden)

    Kontkanen Petri

    2007-01-01

    Full Text Available Typical problems in bioinformatics involve large discrete datasets. Therefore, in order to apply statistical methods in such domains, it is important to develop efficient algorithms suitable for discrete data. The minimum description length (MDL) principle is a theoretically well-founded, general framework for performing statistical inference. The mathematical formalization of MDL is based on the normalized maximum likelihood (NML) distribution, which has several desirable theoretical properties. In the case of discrete data, straightforward computation of the NML distribution requires exponential time with respect to the sample size, since the definition involves a sum over all the possible data samples of a fixed size. In this paper, we first review some existing algorithms for efficient NML computation in the case of multinomial and naive Bayes model families. Then we proceed by extending these algorithms to more complex, tree-structured Bayesian networks.
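    For reference, the NML distribution underlying these algorithms can be written explicitly; the expression below is the standard formulation from the general MDL literature (here for a model class and a single K-valued multinomial), quoted for orientation rather than from this particular paper.

```latex
P_{\mathrm{NML}}(x^n \mid \mathcal{M})
  = \frac{P\bigl(x^n \mid \hat{\theta}(x^n), \mathcal{M}\bigr)}{\mathcal{C}_n(\mathcal{M})},
\qquad
\mathcal{C}_n(\mathcal{M}) = \sum_{y^n} P\bigl(y^n \mid \hat{\theta}(y^n), \mathcal{M}\bigr).
% For a single K-valued multinomial with counts h_1, ..., h_K the numerator is
% \prod_{k=1}^{K} (h_k/n)^{h_k}, while the naive normalizer ranges over all K^n
% possible data vectors of length n, which is the exponential cost that the
% reviewed algorithms avoid.
```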

  6. An efficient algorithm for computing fixed length attractors based on bounded model checking in synchronous Boolean networks with biochemical applications.

    Science.gov (United States)

    Li, X Y; Yang, G W; Zheng, D S; Guo, W S; Hung, W N N

    2015-04-28

    Genetic regulatory networks are the key to understanding biochemical systems. The condition of a genetic regulatory network under different living environments can be modeled as a synchronous Boolean network. The attractors of these Boolean networks help biologists to identify determinant and stable factors. Existing methods identify attractors based on a random initial state or on the entire state space simultaneously; they cannot identify fixed length attractors directly, and their time complexity increases exponentially with respect to the number and length of attractors. This study uses bounded model checking to quickly locate fixed length attractors. Based on a SAT solver, we propose a new algorithm for efficiently computing fixed length attractors, which is more suitable for large Boolean networks and for networks with numerous attractors. Comparisons against the tool BooleNet in empirical experiments involving biochemical systems demonstrated the feasibility and efficiency of our approach.
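    The notion of a fixed length attractor in a synchronous Boolean network can be made concrete with a toy enumeration. The sketch below brute-forces all states of a small hypothetical three-gene network and reports attractors of exactly the requested length; it illustrates the object being searched for, not the SAT-based bounded model checking algorithm of the paper, which is what makes the search feasible for large networks.

```python
from itertools import product

# A hypothetical 3-gene synchronous Boolean network: the next value of each
# gene is a logical function of the current state (x0, x1, x2).
def step(state):
    x0, x1, x2 = state
    return (x1 and not x2,   # gene 0
            x0 or x2,        # gene 1
            not x0)          # gene 2

def attractors_of_length(n_genes, length):
    """Brute-force all 2^n states and keep cycles whose period is exactly `length`."""
    found = set()
    for state in product([False, True], repeat=n_genes):
        cycle, s = [state], step(state)
        while s != state and len(cycle) <= length:
            cycle.append(s)
            s = step(s)
        if s == state and len(cycle) == length:
            found.add(frozenset(cycle))   # rotation-independent canonical form
    return found

for attractor in attractors_of_length(3, 2):
    print(sorted(attractor))
```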

  7. Performance Analysis of CDMA Wireless Sensor Networks in Shadowed Environment

    Directory of Open Access Journals (Sweden)

    U. DATTA

    2010-07-01

    Full Text Available This paper evaluates the performance of a CDMA based wireless sensor network (WSN) with layered architecture in terms of QoS metrics such as outage probability, bit error rate (BER), throughput, and delay, considering correlation amongst interferers in a shadowed environment. Energy consumption for successful delivery of information is also evaluated under different channel conditions. Two kinds of interference, namely multiple access interference (MAI) and node interference (NI), are considered, and an infinite ARQ is assumed at the link layer between a sending node and the sink until successful transmission of packetized data. An appropriate analytical model of interference considering correlation amongst interferers, power control error and shadowing is developed for evaluating bit error rate (BER), packet error rate (PER), the average number of retransmissions for successful delivery of information, throughput, and delay in single hop communication. A simple energy model is used for evaluating the consumption of energy for successful delivery of information between source and sink. We also evaluate the sink capacity of wireless CDMA sensor networks considering a threshold BER. Sink capacity is defined as the maximum number of sensor nodes transmitting concurrently to the sink and located within the one hop layer. The impact of NI on the sink capacity is also indicated. The effects of node density, correlation amongst interferers and power control error (pce) on the performance of the WSN are investigated. Analytical results are supported by simulation results.
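    For orientation, the textbook starting point for this kind of evaluation is the standard Gaussian approximation of the bit error rate for a DS-CDMA link with K simultaneous users and processing gain N; it ignores shadowing, power control error, and interferer correlation, which are precisely the effects the paper's analytical model adds.

```latex
P_b \;\approx\; Q\!\left(\left[\frac{K-1}{3N} + \frac{N_0}{2E_b}\right]^{-1/2}\right),
\qquad
Q(z) = \frac{1}{\sqrt{2\pi}} \int_z^{\infty} e^{-t^2/2}\, dt .
```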

  8. Combining MLP and Using Decision Tree in Order to Detect the Intrusion into Computer Networks

    OpenAIRE

    Saba Sedigh Rad; Alireza Zebarjad

    2013-01-01

    The security of computer networks plays an important role in computer systems. The increasing use of computer networks brings greater exposure to penetration and destruction of systems. So, in order to keep systems away from these hazards, it is essential to use an intrusion detection system (IDS). Intrusion detection is performed in order to detect illicit use and misuse and to avoid damage to systems and computer networks by both external and internal intruders. Intrusi...
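    The truncated record does not say how the MLP and decision tree are combined. Purely as an illustration of one plausible combination, the sketch below feeds an MLP's class-probability outputs to a decision tree as extra features, using scikit-learn and synthetic data in place of real connection records; it is not presented as the authors' method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for labeled connection records (normal vs. intrusion).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: an MLP learns class probabilities from the raw features.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)

# Stage 2: a decision tree makes the final decision from the original features
# augmented with the MLP's probability estimates.
tree = DecisionTreeClassifier(max_depth=6, random_state=0)
tree.fit(np.hstack([X_tr, mlp.predict_proba(X_tr)]), y_tr)

pred = tree.predict(np.hstack([X_te, mlp.predict_proba(X_te)]))
print("detection accuracy: %.3f" % accuracy_score(y_te, pred))
```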

  9. Developing a virtualised testbed environment in preparation for testing of network based attacks

    CSIR Research Space (South Africa)

    Van Heerden, RP

    2013-11-01

    Full Text Available the authors to reset the simulation environment before each test and mitigated against the damage that an attack potentially inflicts on the test network. Without simulated network traffic, the virtualised network was too sterile. This resulted in any network...

  10. DLTAP: A Network-efficient Scheduling Method for Distributed Deep Learning Workload in Containerized Cluster Environment

    Directory of Open Access Journals (Sweden)

    Qiao Wei

    2017-01-01

    Full Text Available Deep neural networks (DNNs) have recently yielded strong results on a range of applications. Training these DNNs using a cluster of commodity machines is a promising approach since training is time consuming and compute-intensive. Furthermore, putting DNN tasks into containers on clusters would enable broader and easier deployment of DNN-based algorithms. Toward this end, this paper addresses the problem of scheduling DNN tasks in a containerized cluster environment. Efficiently scheduling data-parallel computation jobs like DNN training over containerized clusters is critical for job performance, system throughput, and resource utilization, and it becomes even more challenging with complex workloads. We propose a scheduling method called Deep Learning Task Allocation Priority (DLTAP) which makes scheduling decisions in a distributed manner; each decision takes the aggregation degree of parameter server and worker tasks into account, in particular to reduce cross-node network transmission traffic and, correspondingly, decrease DNN training time. We evaluate the DLTAP scheduling method using a state-of-the-art distributed DNN training framework on 3 benchmarks. The results show that the proposed method reduces cross-node network traffic by 12% on average and decreases DNN training time even with a cluster of low-end servers.
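    The abstract does not give the exact DLTAP priority function, so the sketch below is a hypothetical scoring rule with the same intent: when placing a task, prefer nodes that already host the job's complementary parameter-server or worker tasks so that parameter exchange stays on-node. The cluster state, capacities, and weights are all invented for illustration.

```python
# Hypothetical container cluster state: node -> list of (job, role) tasks it hosts.
cluster = {
    "node-a": [("job1", "ps")],
    "node-b": [("job1", "worker"), ("job2", "ps")],
    "node-c": [],
}
capacity = {"node-a": 4, "node-b": 4, "node-c": 4}

def placement_score(node, job, role):
    """Higher is better: reward co-location with the job's complementary tasks."""
    other = "worker" if role == "ps" else "ps"
    colocated = sum(1 for j, r in cluster[node] if j == job and r == other)
    free_slots = capacity[node] - len(cluster[node])
    if free_slots <= 0:
        return float("-inf")             # node is full
    return 10 * colocated + free_slots   # co-location dominates, ties broken by room

def schedule(job, role):
    node = max(cluster, key=lambda n: placement_score(n, job, role))
    cluster[node].append((job, role))
    return node

print(schedule("job1", "worker"))   # lands on node-a, next to job1's parameter server
```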

  11. Context-Aware Task Assignment in Ubiquitous Computing Environment - A Genetic Algorithm Based Approach

    NARCIS (Netherlands)

    Pawar, P.; Mei, H.; Widya, I.A.; van Beijnum, Bernhard J.F.; van Halteren, Aart

    2007-01-01

    With the advent of ubiquitous computing, a user is surrounded by a variety of devices including tiny sensor nodes, handheld mobile devices and powerful computers as well as diverse communication networks. In this networked society, the role of a human being is evolving from the data consumer to the

  12. Complex network problems in physics, computer science and biology

    Science.gov (United States)

    Cojocaru, Radu Ionut

    There is a close relation between physics and mathematics, and the exchange of ideas between these two sciences is well established. However, until a few years ago there was no such close relation between physics and computer science. Moreover, only recently have biologists started to use methods and tools from statistical physics to study the behavior of complex systems. In this thesis we concentrate on applying and analyzing several methods borrowed from computer science in biology, and we also use methods from statistical physics to solve hard problems from computer science. In recent years physicists have been interested in studying the behavior of complex networks. Physics is an experimental science in which theoretical predictions are compared to experiments. In this definition, the term prediction plays a very important role: although the system is complex, it is still possible to get predictions for its behavior, but these predictions are of a probabilistic nature. Spin glasses, lattice gases or the Potts model are a few examples of complex systems in physics. Spin glasses and many frustrated antiferromagnets map exactly to computer science problems in the NP-hard class defined in Chapter 1. In Chapter 1 we discuss a common result from artificial intelligence (AI) which shows that there are some problems which are NP-complete, with the implication that these problems are difficult to solve. We introduce a few well known hard problems from computer science (Satisfiability, Coloring, Vertex Cover together with Maximum Independent Set and Number Partitioning) and then discuss their mapping to problems from physics. In Chapter 2 we provide a short review of combinatorial optimization algorithms and their applications to ground state problems in disordered systems. We discuss the cavity method initially developed for studying the Sherrington-Kirkpatrick model of spin glasses. We extend this model to the study of a specific case of spin glass on the Bethe

  13. A Hitchhiker's Guide to the Turing Galaxy: on naming the age of the networked digital computer

    Directory of Open Access Journals (Sweden)

    GRASSMUCK, Volker

    2007-12-01

    Full Text Available The most commonly used name for our era is that of the 'information society', which is a rather unexpressive and, strictly speaking, tautological term. The informatics & society scholar Wolfgang Coy, following the example of McLuhan's Gutenberg Galaxy, has introduced the concept of the Turing Galaxy. The paper retraces the pre-history of the concept, its grounding in the fundamental breakthroughs of the British mathematician Alan M. Turing, the Turing Machine and the Turing Test, analyses the reception of the concept in a variety of fields of scholarship and asks for its value in the further debate on the knowledge environment of the networked computer.

  14. Implications of computer networking and the Internet for nurse education.

    Science.gov (United States)

    Ward, R

    1997-06-01

    This paper sets out the history of computer networking and its use in nursing and health care education, and places this in its wider historical and social context. The increasing availability and use of computer networks and the internet are producing a changing climate in education as well as in health care. Moves away from traditional face-to-face teaching with a campus institution to widely distributed interactive multimedia learning will affect the roles of students and teachers. The use of electronic mail, mailing lists and the World Wide Web are specifically considered, along with changes to library and information management skills, research methods, journal publication and the like. Issues about the quality, as well as quantity, of information available, are considered. As more and more organizations and institutions begin to use electronic communication methods, it becomes an increasingly important part of the curriculum at all levels, and may lead to fundamental changes in geographical and professional boundaries. A glossary of terms is provided for those not familiar with the technology, along with the contact details for mailing lists and World Wide Web pages mentioned.

  15. ComputerTown: A Do-It-Yourself Community Computer Project. [Computer Town, USA and Other Microcomputer Based Alternatives to Traditional Learning Environments].

    Science.gov (United States)

    Zamora, Ramon M.

    Alternative learning environments offering computer-related instruction are developing around the world. Storefront learning centers, museum-based computer facilities, and special theme parks are some of the new concepts. ComputerTown, USA! is a public access computer literacy project begun in 1979 to serve both adults and children in Menlo Park…

  16. Optimization of stochastic discrete systems and control on complex networks computational networks

    CERN Document Server

    Lozovanu, Dmitrii

    2014-01-01

    This book presents the latest findings on stochastic dynamic programming models and on solving optimal control problems in networks. It includes the authors' new findings on determining the optimal solution of discrete optimal control problems in networks and on solving game variants of Markov decision problems in the context of computational networks. First, the book studies the finite state space of Markov processes and reviews the existing methods and algorithms for determining the main characteristics in Markov chains, before proposing new approaches based on dynamic programming and combinatorial methods. Chapter two is dedicated to infinite horizon stochastic discrete optimal control models and Markov decision problems with average and expected total discounted optimization criteria, while Chapter three develops a special game-theoretical approach to Markov decision processes and stochastic discrete optimal control problems. In closing, the book's final chapter is devoted to finite horizon stochastic con...

  17. Modeling Computer Communication Networks in a Realistic 3D Environment

    Science.gov (United States)

    2010-03-01

    [Abstract not available. The extracted fragments describe a proprietary toolkit for modeling, animation, and rendering of 3D graphics used by numerous video game developers and television studios, note that visuals such as pie charts and illustrated diagrams let people process information more quickly and intuitively than text, and list figure titles including "Rendition of the connected battlefield," "Increasing graph readability," and "Sample wired network."]

  18. Applied and computational harmonic analysis on graphs and networks

    Science.gov (United States)

    Irion, Jeff; Saito, Naoki

    2015-09-01

    In recent years, the advent of new sensor technologies and social network infrastructure has provided huge opportunities and challenges for analyzing data recorded on such networks. In the case of data on regular lattices, computational harmonic analysis tools such as the Fourier and wavelet transforms have well-developed theories and proven track records of success. It is therefore quite important to extend such tools from the classical setting of regular lattices to the more general setting of graphs and networks. In this article, we first review basics of graph Laplacian matrices, whose eigenpairs are often interpreted as the frequencies and the Fourier basis vectors on a given graph. We point out, however, that such an interpretation is misleading unless the underlying graph is either an unweighted path or cycle. We then discuss our recent effort of constructing multiscale basis dictionaries on a graph, including the Hierarchical Graph Laplacian Eigenbasis Dictionary and the Generalized Haar-Walsh Wavelet Packet Dictionary, which are viewed as generalizations of the classical hierarchical block DCTs and the Haar-Walsh wavelet packets, respectively, to the graph setting. Finally, we demonstrate the usefulness of our dictionaries by using them to simultaneously segment and denoise 1-D noisy signals sampled on regular lattices, a problem where classical tools have difficulty.
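    For readers unfamiliar with the notation, the objects referred to above are the (unnormalized) graph Laplacian and the graph Fourier expansion built from its eigenvectors; this is standard background summarized here, not a result of the article.

```latex
L = D - W, \qquad D_{ii} = \sum_{j} W_{ij}, \qquad
L\,\phi_k = \lambda_k \phi_k, \quad 0 = \lambda_0 \le \lambda_1 \le \dots \le \lambda_{n-1},
% graph Fourier coefficients and reconstruction:
\hat{f}(k) = \langle f, \phi_k \rangle = \sum_{i=1}^{n} f(i)\,\phi_k(i),
\qquad
f = \sum_{k=0}^{n-1} \hat{f}(k)\,\phi_k .
```

    Here the eigenvalues play the role of (squared) frequencies; the article's caveat is that this frequency interpretation is only literal when the underlying graph is an unweighted path or cycle.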

  19. Eye tracking using artificial neural networks for human computer interaction.

    Science.gov (United States)

    Demjén, E; Aboši, V; Tomori, Z

    2011-01-01

    This paper describes an ongoing project that has the aim to develop a low cost application to replace a computer mouse for people with physical impairment. The application is based on an eye tracking algorithm and assumes that the camera and the head position are fixed. Color tracking and template matching methods are used for pupil detection. Calibration is provided by neural networks as well as by parametric interpolation methods. Neural networks use back-propagation for learning and bipolar sigmoid function is chosen as the activation function. The user's eye is scanned with a simple web camera with backlight compensation which is attached to a head fixation device. Neural networks significantly outperform parametric interpolation techniques: 1) the calibration procedure is faster as they require less calibration marks and 2) cursor control is more precise. The system in its current stage of development is able to distinguish regions at least on the level of desktop icons. The main limitation of the proposed method is the lack of head-pose invariance and its relative sensitivity to illumination (especially to incidental pupil reflections).
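    A minimal sketch of the calibration step is given below: a small network with tanh (bipolar sigmoid) hidden units, trained by backpropagation, maps detected pupil coordinates to screen coordinates. scikit-learn's MLPRegressor stands in for the authors' network, and the calibration data are synthetic assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic calibration set: pupil (px, py) observed while the user fixates
# known screen targets (sx, sy); a mild nonlinearity mimics camera geometry.
screen = rng.uniform(0, 1, size=(25, 2))                  # 25 calibration marks
pupil = 0.6 * screen + 0.1 * screen**2 + rng.normal(0, 0.005, screen.shape)

# Small MLP with tanh (bipolar sigmoid) hidden units, fitted by backpropagation.
net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=1)
net.fit(pupil, screen)

gaze = net.predict(pupil[:3])          # map pupil positions back to screen coords
print(np.round(gaze, 3))
print(np.round(screen[:3], 3))
```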

  20. Teaching Advanced Concepts in Computer Networks: VNUML-UM Virtualization Tool

    Science.gov (United States)

    Ruiz-Martinez, A.; Pereniguez-Garcia, F.; Marin-Lopez, R.; Ruiz-Martinez, P. M.; Skarmeta-Gomez, A. F.

    2013-01-01

    In the teaching of computer networks the main problem that arises is the high price and limited number of network devices the students can work with in the laboratories. Nowadays, with virtualization we can overcome this limitation. In this paper, we present a methodology that allows students to learn advanced computer network concepts through…

  1. Neural network architecture for cognitive navigation in dynamic environments.

    Science.gov (United States)

    Villacorta-Atienza, José Antonio; Makarov, Valeri A

    2013-12-01

    Navigation in time-evolving environments with moving targets and obstacles requires cognitive abilities widely demonstrated by even simplest animals. However, it is a long-standing challenging problem for artificial agents. Cognitive autonomous robots coping with this problem must solve two essential tasks: 1) understand the environment in terms of what may happen and how I can deal with this and 2) learn successful experiences for their further use in an automatic subconscious way. The recently introduced concept of compact internal representation (CIR) provides the ground for both the tasks. CIR is a specific cognitive map that compacts time-evolving situations into static structures containing information necessary for navigation. It belongs to the class of global approaches, i.e., it finds trajectories to a target when they exist but also detects situations when no solution can be found. Here we extend the concept of situations with mobile targets. Then using CIR as a core, we propose a closed-loop neural network architecture consisting of conscious and subconscious pathways for efficient decision-making. The conscious pathway provides solutions to novel situations if the default subconscious pathway fails to guide the agent to a target. Employing experiments with roving robots and numerical simulations, we show that the proposed architecture provides the robot with cognitive abilities and enables reliable and flexible navigation in realistic time-evolving environments. We prove that the subconscious pathway is robust against uncertainty in the sensory information. Thus if a novel situation is similar but not identical to the previous experience (because of, e.g., noisy perception) then the subconscious pathway is able to provide an effective solution.

  2. Usability Studies in Virtual and Traditional Computer Aided Design Environments for Fault Identification

    Science.gov (United States)

    2017-08-08

    ...the differences in interaction when compared with traditional human computer interfaces. This paper provides analysis via usability study methods...communicate their subjective opinions. Keywords: Usability Analysis; CAVE™ (Cave Automatic Virtual Environments); Human Computer Interface (HCI)

  3. Usability Studies In Virtual And Traditional Computer Aided Design Environments For Spatial Awareness

    Science.gov (United States)

    2017-08-08

    ...differences in interaction when compared with traditional human computer interfaces. This paper provides analysis via usability study methods...communicate their subjective opinions. Keywords: Usability Analysis; CAVE™ (Cave Automatic Virtual Environments); Human Computer Interface (HCI)

  4. Implementing interactive computing in an object-oriented environment

    Directory of Open Access Journals (Sweden)

    Frederic Udina

    2000-04-01

    Full Text Available Statistical computing in which input/output is driven by a Graphical User Interface is considered. A proposal is made for automatic control of computational flow to ensure that only strictly required computations are actually carried out. The computational flow is modeled by a directed graph for implementation in any object-oriented programming language with symbolic manipulation capabilities. A complete implementation example is presented to compute and display frequency-based piecewise linear density estimators such as histograms or frequency polygons.
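    The flow-control idea can be illustrated with a minimal dependency-graph sketch (in Python, rather than the object-oriented statistical environment used in the paper): each node caches its value and recomputes only when an upstream change has marked it dirty. The histogram-like pipeline at the end is an invented example.

```python
class Node:
    """A computation node: recomputes from its parents only when marked dirty."""
    def __init__(self, func, *parents):
        self.func, self.parents = func, parents
        self.children, self.cache, self.dirty = [], None, True
        for p in parents:
            p.children.append(self)

    def invalidate(self):
        self.dirty = True
        for c in self.children:          # propagate staleness downstream
            c.invalidate()

    def value(self):
        if self.dirty:                   # only strictly required work is redone
            self.cache = self.func(*(p.value() for p in self.parents))
            self.dirty = False
        return self.cache

# Example: data -> bin counts -> plotted heights (a histogram-like pipeline).
data = Node(lambda: [1, 2, 2, 3, 3, 3])
counts = Node(lambda xs: {v: xs.count(v) for v in set(xs)}, data)
heights = Node(lambda c: [c[k] for k in sorted(c)], counts)

print(heights.value())                   # computes the whole chain once
data.func = lambda: [1, 1, 2]            # the input changes...
data.invalidate()                        # ...so its descendants are marked dirty
print(heights.value())                   # ...and only then are they recomputed
```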

  5. High-performance computing and networking as tools for accurate emission computed tomography reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Passeri, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]; Formiconi, A.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]; De Cristofaro, M.T.E.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]; Pupi, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]; Meldolesi, U. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)]

    1997-04-01

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported on the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64 x 64) slices could be reconstructed from a set of 90 (64 x 64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. (orig.). With 4 figs., 1 tab.

  6. Multi-Language Programming Environments for High Performance Java Computing

    Directory of Open Access Journals (Sweden)

    Vladimir Getov

    1999-01-01

    Full Text Available Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community’s reliance on established scientific packages. As a consequence, programmers of high‐performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java‐to‐C Interface (JCI) tool which provides application programmers wishing to use Java with immediate accessibility to existing scientific packages. The JCI tool also facilitates rapid development and reuse of existing code. These benefits are provided at minimal cost to the programmer. While beneficial to the programmer, the additional advantages of mixed‐language programming in terms of application performance and portability are addressed in detail within the context of this paper. In addition, we discuss how the JCI tool is complementing other ongoing projects such as IBM’s High‐Performance Compiler for Java (HPCJ) and IceT’s metacomputing environment.

  7. Enrichment of Human-Computer Interaction in Brain-Computer Interfaces via Virtual Environments

    Directory of Open Access Journals (Sweden)

    Alonso-Valerdi Luz María

    2017-01-01

    Full Text Available Tridimensional representations stimulate cognitive processes that are the core and foundation of human-computer interaction (HCI). Those cognitive processes take place while a user navigates and explores a virtual environment (VE) and are mainly related to spatial memory storage, attention, and perception. VEs have many distinctive features (e.g., involvement, immersion, and presence) that can significantly improve HCI in highly demanding and interactive systems such as brain-computer interfaces (BCI). A BCI is a nonmuscular communication channel that attempts to reestablish the interaction between an individual and his/her environment. Although BCI research started in the sixties, this technology is not efficient or reliable yet for everyone at any time. Over the past few years, researchers have argued that the main BCI flaws could be associated with HCI issues. The evidence presented thus far shows that VEs can (1) set out working environmental conditions, (2) maximize the efficiency of BCI control panels, (3) implement navigation systems based not only on user intentions but also on user emotions, and (4) regulate user mental state to increase the differentiation between control and noncontrol modalities.

  8. Functional network reorganization during learning in a brain-computer interface paradigm.

    Science.gov (United States)

    Jarosiewicz, Beata; Chase, Steven M; Fraser, George W; Velliste, Meel; Kass, Robert E; Schwartz, Andrew B

    2008-12-09

    Efforts to study the neural correlates of learning are hampered by the size of the network in which learning occurs. To understand the importance of learning-related changes in a network of neurons, it is necessary to understand how the network acts as a whole to generate behavior. Here we introduce a paradigm in which the output of a cortical network can be perturbed directly and the neural basis of the compensatory changes studied in detail. Using a brain-computer interface, dozens of simultaneously recorded neurons in the motor cortex of awake, behaving monkeys are used to control the movement of a cursor in a three-dimensional virtual-reality environment. This device creates a precise, well-defined mapping between the firing of the recorded neurons and an expressed behavior (cursor movement). In a series of experiments, we force the animal to relearn the association between neural firing and cursor movement in a subset of neurons and assess how the network changes to compensate. We find that changes in neural activity reflect not only an alteration of behavioral strategy but also the relative contributions of individual neurons to the population error signal.

  9. Leveraging Fog Computing for Scalable IoT Datacenter Using Spine-Leaf Network Topology

    Directory of Open Access Journals (Sweden)

    K. C. Okafor

    2017-01-01

    Full Text Available With the Internet of Everything (IoE) paradigm that gathers almost every object online, huge traffic workload, bandwidth, security, and latency issues remain a concern for IoT users in today’s world. Besides, the scalability requirements found in the current IoT data processing (in the cloud) can hardly be used for applications such as assisted living systems, Big Data analytic solutions, and smart embedded applications. This paper proposes an extended cloud IoT model that optimizes bandwidth while allowing edge devices (Internet-connected objects/devices) to smartly process data without relying on a cloud network. Its integration with a massively scaled spine-leaf (SL) network topology is highlighted. This is contrasted with a legacy multitier layered architecture housing network services and routing policies. The perspective offered in this paper explains how low-latency and bandwidth intensive applications can transfer data to the cloud (and then back to the edge application) without impacting QoS performance. Consequently, a spine-leaf Fog computing network (SL-FCN) is presented for reducing latency and network congestion issues in a highly distributed and multilayer virtualized IoT datacenter environment. This approach is cost-effective as it maximizes bandwidth while maintaining redundancy and resiliency against failures in mission critical applications.

  10. Computationally efficient measure of topological redundancy of biological and social networks

    Science.gov (United States)

    Albert, Réka; Dasgupta, Bhaskar; Hegde, Rashmi; Sivanathan, Gowri Sangeetha; Gitter, Anthony; Gürsoy, Gamze; Paul, Pradyut; Sontag, Eduardo

    2011-09-01

    It is well known that biological and social interaction networks have a varying degree of redundancy, though a consensus on the precise cause of this is so far lacking. In this paper, we introduce a topological redundancy measure for labeled directed networks that is formal, computationally efficient, and applicable to a variety of directed networks such as cellular signaling, metabolic and social interaction networks. We demonstrate the computational efficiency of our measure by computing its value and statistical significance on a number of biological and social networks with up to several thousands of nodes and edges. Our results suggest a number of interesting observations: (1) Social networks are more redundant than their biological counterparts, (2) transcriptional networks are less redundant than signaling networks, (3) the topological redundancy of the C. elegans metabolic network is largely due to its inclusion of currency metabolites, and (4) the redundancy of signaling networks is highly (negatively) correlated with the monotonicity of their dynamics.

  11. 10 CFR 73.54 - Protection of digital computer and communication systems and networks.

    Science.gov (United States)

    2010-01-01

    Protection of digital computer and communication systems and networks. By November 23, 2009 each licensee currently licensed to... provide high assurance that digital computer and communication systems and networks are adequately...

  12. Representing spatial information in a computational model for network management

    Science.gov (United States)

    Blaisdell, James H.; Brownfield, Thomas F.

    1994-01-01

    While currently available relational database management systems (RDBMS) allow inclusion of spatial information in a data model, they lack tools for presenting this information in an easily comprehensible form. Computer-aided design (CAD) software packages provide adequate functions to produce drawings, but still require manual placement of symbols and features. This project has demonstrated a bridge between the data model of an RDBMS and the graphic display of a CAD system. It is shown that the CAD system can be used to control the selection of data with spatial components from the database and then quickly plot that data on a map display. It is shown that the CAD system can be used to extract data from a drawing and then control the insertion of that data into the database. These demonstrations were successful in a test environment that incorporated many features of known working environments, suggesting that the techniques developed could be adapted for practical use.
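    A minimal sketch of the database-to-display bridge described above: rows with spatial components are selected from a relational table and turned into placement directives that a CAD macro could consume. SQLite stands in for the RDBMS, and the table layout and directive syntax are hypothetical.

```python
import sqlite3

# Hypothetical facilities table with spatial columns (x, y in map units).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE facility (name TEXT, kind TEXT, x REAL, y REAL)")
con.executemany("INSERT INTO facility VALUES (?, ?, ?, ?)", [
    ("hub-1", "switch", 120.0, 45.5),
    ("hub-2", "switch", 300.2, 80.0),
    ("ant-7", "antenna", 210.7, 10.3),
])

# Select only the features of interest, then emit placement directives
# (illustrative syntax) that a CAD script could read to drop the matching
# symbol at each coordinate on the map display.
for name, x, y in con.execute(
        "SELECT name, x, y FROM facility WHERE kind = ?", ("switch",)):
    print(f"PLACE SYMBOL switch AT ({x:.1f}, {y:.1f})  ; label={name}")
```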

  13. A Hybrid Scheme for Fine-Grained Search and Access Authorization in Fog Computing Environment.

    Science.gov (United States)

    Xiao, Min; Zhou, Jing; Liu, Xuejiao; Jiang, Mingda

    2017-06-17

    In the fog computing environment, the encrypted sensitive data may be transferred to multiple fog nodes on the edge of a network for low latency; thus, fog nodes need to implement a search over encrypted data as a cloud server. Since the fog nodes tend to provide service for IoT applications often running on resource-constrained end devices, it is necessary to design lightweight solutions. At present, there is little research on this issue. In this paper, we propose a fine-grained owner-forced data search and access authorization scheme spanning user-fog-cloud for resource constrained end users. Compared to existing schemes only supporting either index encryption with search ability or data encryption with fine-grained access control ability, the proposed hybrid scheme supports both abilities simultaneously, and index ciphertext and data ciphertext are constructed based on a single ciphertext-policy attribute based encryption (CP-ABE) primitive and share the same key pair, thus the data access efficiency is significantly improved and the cost of key management is greatly reduced. Moreover, in the proposed scheme, the resource constrained end devices are allowed to rapidly assemble ciphertexts online and securely outsource most of decryption task to fog nodes, and mediated encryption mechanism is also adopted to achieve instantaneous user revocation instead of re-encrypting ciphertexts with many copies in many fog nodes. The security and the performance analysis show that our scheme is suitable for a fog computing environment.

  14. A Hybrid Scheme for Fine-Grained Search and Access Authorization in Fog Computing Environment

    Science.gov (United States)

    Xiao, Min; Zhou, Jing; Liu, Xuejiao; Jiang, Mingda

    2017-01-01

    In the fog computing environment, the encrypted sensitive data may be transferred to multiple fog nodes on the edge of a network for low latency; thus, fog nodes need to implement a search over encrypted data as a cloud server. Since the fog nodes tend to provide service for IoT applications often running on resource-constrained end devices, it is necessary to design lightweight solutions. At present, there is little research on this issue. In this paper, we propose a fine-grained owner-forced data search and access authorization scheme spanning user-fog-cloud for resource constrained end users. Compared to existing schemes only supporting either index encryption with search ability or data encryption with fine-grained access control ability, the proposed hybrid scheme supports both abilities simultaneously, and index ciphertext and data ciphertext are constructed based on a single ciphertext-policy attribute based encryption (CP-ABE) primitive and share the same key pair, thus the data access efficiency is significantly improved and the cost of key management is greatly reduced. Moreover, in the proposed scheme, the resource constrained end devices are allowed to rapidly assemble ciphertexts online and securely outsource most of decryption task to fog nodes, and mediated encryption mechanism is also adopted to achieve instantaneous user revocation instead of re-encrypting ciphertexts with many copies in many fog nodes. The security and the performance analysis show that our scheme is suitable for a fog computing environment. PMID:28629131

  15. DIMACS Workshop on Interconnection Networks and Mapping, and Scheduling Parallel Computations

    CERN Document Server

    Rosenberg, Arnold L; Sotteau, Dominique; NSF Science and Technology Center in Discrete Mathematics and Theoretical Computer Science; Interconnection networks and mapping and scheduling parallel computations

    1995-01-01

    The interconnection network is one of the most basic components of a massively parallel computer system. Such systems consist of hundreds or thousands of processors interconnected to work cooperatively on computations. One of the central problems in parallel computing is the task of mapping a collection of processes onto the processors and routing network of a parallel machine. Once this mapping is done, it is critical to schedule computations within and communication among processors so that inputs for a process are available where and when the process is scheduled to be computed. The contributors include researchers from universities and laboratories, as well as practitioners involved in the design, implementation, and application of massively parallel systems. Focusing on interconnection networks of parallel architectures of today and of the near future, the book includes topics such as network topologies, network properties, message routing, network embeddings, network emulation, mappings, and efficient scheduling. This book contains the refereed pro...

  16. The Effects of a Robot Game Environment on Computer Programming Education for Elementary School Students

    Science.gov (United States)

    Shim, Jaekwoun; Kwon, Daiyoung; Lee, Wongyu

    2017-01-01

    In the past, computer programming was perceived as a task only carried out by computer scientists; in the 21st century, however, computer programming is viewed as a critical and necessary skill that everyone should learn. In order to improve teaching of problem-solving abilities in a computing environment, extensive research is being done on…

  17. Students experiences with collaborative learning in asynchronous computer-supported collaborative learning environments.

    NARCIS (Netherlands)

    Dewiyanti, Silvia; Brand-Gruwel, Saskia; Jochems, Wim; Broers, Nick

    2008-01-01

    Dewiyanti, S., Brand-Gruwel, S., Jochems, W., & Broers, N. (2007). Students experiences with collaborative learning in asynchronous computer-supported collaborative learning environments. Computers in Human Behavior, 23, 496-514.

  18. Securing the Data Storage and Processing in Cloud Computing Environment

    Science.gov (United States)

    Owens, Rodney

    2013-01-01

    Organizations increasingly utilize cloud computing architectures to reduce costs and energy consumption both in the data warehouse and on mobile devices by better utilizing the computing resources available. However, the security and privacy issues with publicly available cloud computing infrastructures have not been studied to a sufficient depth…

  19. A FUNCTIONAL MODEL OF COMPUTER-ORIENTED LEARNING ENVIRONMENT OF A POST-DEGREE PEDAGOGICAL EDUCATION

    OpenAIRE

    Kateryna R. Kolos

    2014-01-01

    The study substantiates the need for a systematic study of the functioning of the computer-oriented learning environment of post-degree pedagogical education; it defines the term “functional model of computer-oriented learning environment of a post-degree pedagogical education”; and it builds a functional model of the computer-oriented learning environment of post-degree pedagogical education in accordance with the functions of business, information and communication technology, acad...

  20. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    Energy Technology Data Exchange (ETDEWEB)

    Mike Bockelie; Dave Swensen; Martin Denison; Adel Sarofim; Connie Senior

    2004-12-22

    ... immersive environment. The Virtual Engineering Framework (VEF), in effect a prototype framework, was developed through close collaboration with NETL supported research teams from Iowa State University Virtual Reality Applications Center (ISU-VRAC) and Carnegie Mellon University (CMU). The VEF is open source, compatible across systems ranging from inexpensive desktop PCs to large-scale, immersive facilities and provides support for heterogeneous distributed computing of plant simulations. The ability to compute plant economics through an interface that coupled the CMU IECM tool to the VEF was demonstrated, and the ability to couple the VEF to Aspen Plus, a commercial flowsheet modeling tool, was demonstrated. Models were interfaced to the framework using VES-Open. Tests were performed for interfacing CAPE-Open-compliant models to the framework. Where available, the developed models and plant simulations have been benchmarked against data from the open literature. The VEF has been installed at NETL. The VEF provides simulation capabilities not available in commercial simulation tools. It provides DOE engineers, scientists, and decision makers with a flexible and extensible simulation system that can be used to reduce the time, technical risk, and cost to develop the next generation of advanced, coal-fired power systems that will have low emissions and high efficiency. Furthermore, the VEF provides a common simulation system that NETL can use to help manage Advanced Power Systems Research projects, including both combustion- and gasification-based technologies.

  1. The Integrated Computational Environment for Airbreathing Hypersonic Flight Vehicle Modeling and Design Evaluation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — An integrated computational environment for multidisciplinary, physics-based simulation and analyses of airbreathing hypersonic flight vehicles will be developed....

  2. Long-term changes of information environments and computer anxiety of nurse administrators in Japan.

    Science.gov (United States)

    Majima, Yukie; Izumi, Takako

    2013-01-01

    In Japan, medical information systems, including electronic medical records, are increasingly being introduced in medical and nursing fields. Nurse administrators, who are involved in the introduction of medical information systems and who must make proper judgments, are particularly required to have at least minimal knowledge of computers and networks and the ability to think about easy-to-use medical information systems. However, few of the current generation of nurse administrators studied information science subjects in their basic education curriculum, so information education for nurse administrators has become a pressing issue. Consequently, in this study, we conducted a survey of participants taking the first level program of the education course for Japanese certified nurse administrators to ascertain their actual conditions, such as the information environments that nurse administrators work in and their anxiety toward computers. Comparisons over the seven years since 2004 revealed that although the introduction of electronic medical records in hospitals was progressing, little change was observed in the attributes of participants taking the course, such as computer anxiety.

  3. Dynamic Security Assessment Of Computer Networks In Siem-Systems

    Directory of Open Access Journals (Sweden)

    Elena Vladimirovna Doynikova

    2015-10-01

    Full Text Available The paper suggests an approach to the security assessment of computer networks. The approach is based on attack graphs and is intended for Security Information and Event Management (SIEM) systems. The key feature of the approach is the application of a multilevel taxonomy of security metrics. The taxonomy allows the system profile to be defined according to the input data used for metrics calculation and the techniques of security metrics calculation. This allows the security assessment to be specified in near real time, previous and future attacker steps to be identified, and attackers' goals and characteristics to be determined. A security assessment system prototype is implemented for the suggested approach, and its operation is analysed for several attack scenarios.
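    The abstract does not reproduce the metric taxonomy itself, so the sketch below only illustrates the kind of attack-graph computation such metrics typically rest on: finding the most likely attack path (maximum product of per-step success probabilities) from an attacker's entry point to a critical asset using networkx. The topology and probabilities are invented.

```python
import math
import networkx as nx

# Hypothetical attack graph: nodes are attacker privileges/hosts, edge weights
# are per-step success probabilities of the corresponding exploits.
g = nx.DiGraph()
g.add_weighted_edges_from([
    ("internet", "web-server", 0.8),
    ("web-server", "app-server", 0.5),
    ("internet", "vpn-gateway", 0.3),
    ("vpn-gateway", "app-server", 0.9),
    ("app-server", "database", 0.6),
])

# Maximizing the product of probabilities equals minimizing the sum of -log(p),
# so a shortest-path query yields the most likely attack path.
for u, v, d in g.edges(data=True):
    d["cost"] = -math.log(d["weight"])

path = nx.shortest_path(g, "internet", "database", weight="cost")
prob = math.exp(-nx.shortest_path_length(g, "internet", "database", weight="cost"))
print("most likely attack path:", " -> ".join(path), "p=%.2f" % prob)
```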

  4. Computers and networks in the age of globalization

    DEFF Research Database (Denmark)

    Bloch Rasmussen, Leif; Beardon, Colin; Munari, Silvio

    In modernity, an individual identity was constituted from civil society, while in a globalized network society, human identity, if it develops at all, must grow from communal resistance. A communal resistance to an abstract conceptualized world, where there is no possibility for perception...... and experience of power and therefore no possibility for human choice and action, is of utmost importance for the constituting of human choosers and actors. This book therefore sets focus on those human choosers and actors wishing to read and enjoy the papers as they are actually perceiving and experiencing...... for Information Processing (IFIP) and held in Geneva, Switzerland in August 1998. Since the first HCC conference in 1974, IFIP's Technical Committee 9 has endeavoured to set the agenda for human choices and human actions vis-a-vis computers....

  5. Network Computing Infrastructure to Share Tools and Data in Global Nuclear Energy Partnership

    Science.gov (United States)

    Kim, Guehee; Suzuki, Yoshio; Teshima, Naoya

    CCSE/JAEA (Center for Computational Science and e-Systems/Japan Atomic Energy Agency) integrated a prototype system of a network computing infrastructure for sharing tools and data to support the U.S. and Japan collaboration in GNEP (Global Nuclear Energy Partnership). We focused on three technical issues to apply our information process infrastructure, which are accessibility, security, and usability. In designing the prototype system, we integrated and improved both network and Web technologies. For the accessibility issue, we adopted SSL-VPN (Security Socket Layer-Virtual Private Network) technology for the access beyond firewalls. For the security issue, we developed an authentication gateway based on the PKI (Public Key Infrastructure) authentication mechanism to strengthen the security. Also, we set fine access control policy to shared tools and data and used shared key based encryption method to protect tools and data against leakage to third parties. For the usability issue, we chose Web browsers as user interface and developed Web application to provide functions to support sharing tools and data. By using WebDAV (Web-based Distributed Authoring and Versioning) function, users can manipulate shared tools and data through the Windows-like folder environment. We implemented the prototype system in Grid infrastructure for atomic energy research: AEGIS (Atomic Energy Grid Infrastructure) developed by CCSE/JAEA. The prototype system was applied for the trial use in the first period of GNEP.

  6. IMPLEMENTATION OF WIRED AND WIRELESS NETWORK IN ACADEMIC ENVIRONMENT

    OpenAIRE

    Raman Bhanot*

    2017-01-01

    Wired networks have long proven their capabilities, but nowadays wireless communication has emerged as a robust and highly intelligent communication technique. Both types have their own merits and demerits based on their network characteristics: wired and wireless networking have different hardware requirements, ranges, mobility, reliability and benefits. The aim of the paper is to provide a simulated view of wireless and wired networks covering the whole campus. This simulation has be...

  7. Multi-agent System for Controlling a Cloud Computing Environment

    OpenAIRE

    de la Prieta Pintado, Fernando; Navarro Cáceres, María; García, José A.; González, Roberto; Rodríguez González, Sara

    2013-01-01

    Nowadays, a number of computing paradigms have been proposed, the latest of which is known as Cloud computing. Cloud computing is revolutionizing the services provided through the Internet and is continually adapting itself in order to maintain the quality of its services. This paper proposes a cloud platform for storing information and files by following the cloud paradigm. Moreover, a cloud-based application has been developed to validate the services provided by the platform.

  8. Cloud Computing and Virtual Desktop Infrastructures in Afloat Environments

    Science.gov (United States)

    2012-06-01

    computing is an instance of an architecture, whereas SOA is a pattern of architectures. In sum, “SOA is more holistic and strategic, meaning it deals...the regular day to day operations of a ship that can be processed via the CRUD computer programming functions (Create, Read, Update, and Delete...able to meet the basic CRUD computer programming functions). Furthermore, ships in a strike group should be equipped with the same information systems

  9. Integration of a network aware traffic generation device into a computer network emulation platform

    CSIR Research Space (South Africa)

    Von Solms, S

    2014-07-01

    Full Text Available Flexible, open source network emulation tools can provide network researchers with significant benefits regarding network behaviour and performance. The evaluation of these networks can benefit greatly from the integration of realistic, network...

  10. Dynamic mechanisms of cell rigidity sensing: insights from a computational model of actomyosin networks.

    Directory of Open Access Journals (Sweden)

    Carlos Borau

    Full Text Available Cells modulate themselves in response to properties of the surrounding environment, such as substrate elasticity, exhibiting structural reorganization driven by the contractility of the cytoskeleton. The cytoskeleton is the scaffolding structure of eukaryotic cells, playing a central role in many mechanical and biological functions. It is composed of a network of actins, actin cross-linking proteins (ACPs), and molecular motors. The motors generate contractile forces by sliding pairs of actin filaments in a polar fashion, and the contractile response of the cytoskeleton network is known to be modulated also by external stimuli, such as substrate stiffness. This implies an important role of actomyosin contractility in cell mechano-sensing. However, how cells sense matrix stiffness via this contractility remains an open question. Here, we present a 3-D Brownian dynamics computational model of a cross-linked actin network including the dynamics of molecular motors and ACPs. The mechano-sensing properties of this active network are investigated by evaluating contraction and stress in response to different substrate stiffness. Results demonstrate two mechanisms that act to limit internal stress: (i) in stiff substrates, motors walk until they exert their maximum force, leading to a plateau stress that is independent of substrate stiffness, whereas (ii) in soft substrates, motors walk until they become blocked by other motors or ACPs, leading to submaximal stress levels. Therefore, this study provides new insights into the role of molecular motors in the contraction and rigidity sensing of cells.

  11. Computing environment for the ASSIST data warehouse at Lawrence Livermore National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Shuk, K.

    1995-11-01

    The current computing environment for the ASSIST data warehouse at Lawrence Livermore National Laboratory is that of a central server that is accessed by a terminal or terminal emulator. The initiative to move to a client/server environment is strong, backed by desktop machines becoming more and more powerful. The desktop machines can now take on parts of tasks once run entirely on the central server, making the whole environment computationally more efficient as a result. Services are tasks that are repeated throughout the environment such that it makes sense to share them; tasks such as email, user authentication and file transfer are services. The new client/server environment needs to determine which services must be included in the environment for basic functionality. These services then unify the computing environment, not only for the forthcoming ASSIST+, but for Administrative Information Systems as a whole, joining various server platforms with heterogeneous desktop computing platforms.

  12. Characterization of physiological networks in sleep apnea patients using artificial neural networks for Granger causality computation

    Science.gov (United States)

    Cárdenas, Jhon; Orjuela-Cañón, Alvaro D.; Cerquera, Alexander; Ravelo, Antonio

    2017-11-01

    Different studies have used Transfer Entropy (TE) and Granger Causality (GC) computation to quantify the interconnection between physiological systems. These methods have drawbacks in their parametrization and in the availability of analytic formulas for evaluating the significance of the results. A further inconvenience relates to the assumptions about the distribution of the models generated from the data. In this work, the authors present a way to measure the causality connecting the Central Nervous System (CNS) and the Cardiac System (CS) in people diagnosed with obstructive sleep apnea syndrome (OSA), before and during treatment with continuous positive airway pressure (CPAP). For this purpose, artificial neural networks were used to obtain models for GC computation, based on time series of normalized powers calculated from electrocardiography (EKG) and electroencephalography (EEG) signals recorded in polysomnography (PSG) studies.
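
    The record does not give the network architecture or lag structure used by the authors, so the following is only a generic sketch of NN-based Granger causality on two band-power series: fit a restricted model (past of y only) and a full model (past of y plus past of x) and compare residual variances. The layer sizes and lag below are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def nn_granger_index(x, y, lag=3, seed=0):
    """Return log(var_restricted / var_full); a value > 0 suggests that the
    past of x improves the prediction of y (x 'Granger-causes' y)."""
    rows = range(lag, len(y))
    target = np.array([y[t] for t in rows])
    past_y = np.array([y[t - lag:t] for t in rows])
    past_xy = np.array([np.r_[y[t - lag:t], x[t - lag:t]] for t in rows])

    restricted = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                              random_state=seed).fit(past_y, target)
    full = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                        random_state=seed).fit(past_xy, target)

    var_r = np.var(target - restricted.predict(past_y))
    var_f = np.var(target - full.predict(past_xy))
    return float(np.log(var_r / var_f))
```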

  13. A Semantic Based Policy Management Framework for Cloud Computing Environments

    Science.gov (United States)

    Takabi, Hassan

    2013-01-01

    Cloud computing paradigm has gained tremendous momentum and generated intensive interest. Although security issues are delaying its fast adoption, cloud computing is an unstoppable force and we need to provide security mechanisms to ensure its secure adoption. In this dissertation, we mainly focus on issues related to policy management and access…

  14. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Charles J.; Faraj, Daniel A.; Inglett, Todd A.; Ratterman, Joseph D.

    2018-01-30

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.
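
    The core routing step in the claim is a per-packet link selection toward the destination in the tree. Below is a minimal sketch of that selection over a plain adjacency dictionary; the BFS-built next-hop table and node names are illustrative assumptions, not the patented implementation.

```python
from collections import deque

def next_hop_table(tree, node):
    """For one compute node in a tree (adjacency dict), map every destination
    to the adjacent node whose link is the first step on the unique path."""
    table = {}
    for neighbor in tree[node]:
        queue, seen = deque([neighbor]), {node, neighbor}
        while queue:
            current = queue.popleft()
            table[current] = neighbor      # destinations reached through this link
            for nxt in tree[current]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return table

def select_link(tree, node, destination):
    """Select the link along which to forward a packet toward its destination."""
    return next_hop_table(tree, node)[destination]
```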

  15. Construction of high-dimensional neural network potentials using environment-dependent atom pairs.

    Science.gov (United States)

    Jose, K V Jovan; Artrith, Nongnuch; Behler, Jörg

    2012-05-21

    An accurate determination of the potential energy is the crucial step in computer simulations of chemical processes, but using electronic structure methods on-the-fly in molecular dynamics (MD) is computationally too demanding for many systems. Constructing more efficient interatomic potentials becomes intricate with increasing dimensionality of the potential-energy surface (PES), and for numerous systems the accuracy that can be achieved is still not satisfying and far from the reliability of first-principles calculations. Feed-forward neural networks (NNs) have a very flexible functional form, and in recent years they have been shown to be an accurate tool to construct efficient PESs. High-dimensional NN potentials based on environment-dependent atomic energy contributions have been presented for a number of materials. Still, these potentials may be improved by a more detailed structural description, e.g., in form of atom pairs, which directly reflect the atomic interactions and take the chemical environment into account. We present an implementation of an NN method based on atom pairs, and its accuracy and performance are compared to the atom-based NN approach using two very different systems, the methanol molecule and metallic copper. We find that both types of NN potentials provide an excellent description of both PESs, with the pair-based method yielding a slightly higher accuracy making it a competitive alternative for addressing complex systems in MD simulations.
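
    The construction named above boils down to summing per-atom network outputs evaluated on descriptors of the local environment. A schematic numpy sketch follows, with a single hidden layer and caller-supplied weights standing in for the actual symmetry-function or pair descriptors and trained parameters.

```python
import numpy as np

def atomic_energy(weights, descriptor):
    """One-hidden-layer feed-forward network giving a single atomic energy."""
    w1, b1, w2, b2 = weights
    hidden = np.tanh(descriptor @ w1 + b1)
    return float(hidden @ w2 + b2)

def total_energy(species, descriptors, weights_by_species):
    """High-dimensional NN potential: the total energy is the sum of
    environment-dependent atomic contributions (Behler-Parrinello form)."""
    return sum(atomic_energy(weights_by_species[s], d)
               for s, d in zip(species, descriptors))
```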

  16. Construction of high-dimensional neural network potentials using environment-dependent atom pairs

    Science.gov (United States)

    Jose, K. V. Jovan; Artrith, Nongnuch; Behler, Jörg

    2012-05-01

    An accurate determination of the potential energy is the crucial step in computer simulations of chemical processes, but using electronic structure methods on-the-fly in molecular dynamics (MD) is computationally too demanding for many systems. Constructing more efficient interatomic potentials becomes intricate with increasing dimensionality of the potential-energy surface (PES), and for numerous systems the accuracy that can be achieved is still not satisfying and far from the reliability of first-principles calculations. Feed-forward neural networks (NNs) have a very flexible functional form, and in recent years they have been shown to be an accurate tool to construct efficient PESs. High-dimensional NN potentials based on environment-dependent atomic energy contributions have been presented for a number of materials. Still, these potentials may be improved by a more detailed structural description, e.g., in form of atom pairs, which directly reflect the atomic interactions and take the chemical environment into account. We present an implementation of an NN method based on atom pairs, and its accuracy and performance are compared to the atom-based NN approach using two very different systems, the methanol molecule and metallic copper. We find that both types of NN potentials provide an excellent description of both PESs, with the pair-based method yielding a slightly higher accuracy making it a competitive alternative for addressing complex systems in MD simulations.

  17. Generalized Load Sharing for Homogeneous Networks of Distributed Environment

    Directory of Open Access Journals (Sweden)

    A. Satheesh

    2008-01-01

    Full Text Available We propose job migration policies that consider effective usage of global memory in addition to CPU load sharing in distributed systems. When a node is identified as lacking sufficient memory space to serve jobs, one or more jobs of the node will be migrated to remote nodes with low memory allocations. If the memory space is sufficiently large, the jobs will be scheduled by a CPU-based load sharing policy. Following the principle of sharing both CPU and memory resources, we present several load sharing alternatives. Our objective is to reduce the number of page faults caused by unbalanced memory allocations for jobs among distributed nodes, so that the overall performance of a distributed system can be significantly improved. We have conducted trace-driven simulations to compare CPU-based load sharing policies with our policies. We show that our load sharing policies not only improve the performance of memory-bound jobs, but also maintain the same load sharing quality as the CPU-based policies for CPU-bound jobs. Regarding remote execution and preemptive migration strategies, our experiments indicate that strategy selection in load sharing depends on the memory demand of jobs: remote execution is more effective for memory-bound jobs, and preemptive migration is more effective for CPU-bound jobs. Our CPU-memory-based policy, using either the high-performance or the high-throughput approach together with the remote execution strategy, performs best for both CPU-bound and memory-bound jobs in homogeneous networks of a distributed environment.
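
    A rough illustration of the CPU-memory-based idea described above (not the authors' exact algorithm or thresholds): a node short of memory sends the job to the node with the most free memory, otherwise a CPU-based policy picks the least-loaded node.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_load: float   # e.g. run-queue length
    mem_used: float   # MB currently allocated
    mem_total: float  # MB installed

def choose_target(job_mem, home, nodes, mem_headroom=0.9):
    """Sketch of a CPU-memory-based load sharing decision."""
    if home.mem_used + job_mem > mem_headroom * home.mem_total:
        # Memory pressure: migrate to the node with the most free memory.
        return max(nodes, key=lambda n: n.mem_total - n.mem_used)
    # Otherwise fall back to CPU-based load sharing.
    return min(nodes, key=lambda n: n.cpu_load)
```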

  18. Distributed optimization-based control of multi-agent networks in complex environments

    CERN Document Server

    Zhu, Minghui

    2015-01-01

    This book offers a concise and in-depth exposition of specific algorithmic solutions for distributed optimization based control of multi-agent networks and their performance analysis. It synthesizes and analyzes distributed strategies for three collaborative tasks: distributed cooperative optimization, mobile sensor deployment and multi-vehicle formation control. The book integrates miscellaneous ideas and tools from dynamic systems, control theory, graph theory, optimization, game theory and Markov chains to address the particular challenges introduced by such complexities in the environment as topological dynamics, environmental uncertainties, and potential cyber-attack by human adversaries. The book is written for first- or second-year graduate students in a variety of engineering disciplines, including control, robotics, decision-making, optimization and algorithms and with backgrounds in aerospace engineering, computer science, electrical engineering, mechanical engineering and operations research. Resea...

  19. Comparing Notes: Collaborative Networks, Breeding Environments, and Organized Crime

    Science.gov (United States)

    Hernández, Alejandro

    Collaborative network theory can be useful in refining current understanding of criminal networks and aid in understanding their evolution. Drug trafficking organizations that operate in the region directly north of Colombia’s Valle del Cauca department and the “collection agencies” that operate in the Colombian city of Cali have abandoned hierarchical organizational structures and have become networked-based entities. Through the exposition of Camarinha-Matos and Afsarmanesh’s business networking ideas, this chapter examines the similarities and differences between the application of collaborative networks in licit enterprises, such as small and medium enterprises in Europe, and how the networks might be used by illicit criminal enterprises in Colombia.

  20. A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers

    Energy Technology Data Exchange (ETDEWEB)

    Potok, Thomas E [ORNL; Schuman, Catherine D [ORNL; Young, Steven R [ORNL; Patton, Robert M [ORNL; Spedalieri, Federico [University of Southern California, Information Sciences Institute; Liu, Jeremy [University of Southern California, Information Sciences Institute; Yao, Ke-Thia [University of Southern California, Information Sciences Institute; Rose, Garrett [University of Tennessee (UT); Chakma, Gangotree [University of Tennessee (UT)

    2016-01-01

    Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.

  1. Monte Carlo in radiotherapy: experience in a distributed computational environment

    Science.gov (United States)

    Caccia, B.; Mattia, M.; Amati, G.; Andenna, C.; Benassi, M.; D'Angelo, A.; Frustagli, G.; Iaccarino, G.; Occhigrossi, A.; Valentini, S.

    2007-06-01

    New technologies in cancer radiotherapy need a more accurate computation of the dose delivered in the radiotherapeutical treatment plan, and it is important to integrate sophisticated mathematical models and advanced computing knowledge into the treatment planning (TP) process. We present some results about using Monte Carlo (MC) codes in dose calculation for treatment planning. A distributed computing resource located in the Technologies and Health Department of the Italian National Institute of Health (ISS) along with other computer facilities (CASPUR - Inter-University Consortium for the Application of Super-Computing for Universities and Research) has been used to perform a fully complete MC simulation to compute dose distribution on phantoms irradiated with a radiotherapy accelerator. Using BEAMnrc and GEANT4 MC based codes we calculated dose distributions on a plain water phantom and air/water phantom. Experimental and calculated dose values below ±2% (for depth between 5 mm and 130 mm) were in agreement both in PDD (Percentage Depth Dose) and transversal sections of the phantom. We consider these results a first step towards a system suitable for medical physics departments to simulate a complete treatment plan using remote computing facilities for MC simulations.

  2. Monte Carlo in radiotherapy: experience in a distributed computational environment

    Energy Technology Data Exchange (ETDEWEB)

    Caccia, B [Istituto Superiore di Sanita (ISS) and Istituto Nazionale di Fisica Nucleare (INFN), Rome (Italy); Mattia, M [Istituto Superiore di Sanita (ISS) and Istituto Nazionale di Fisica Nucleare (INFN), Rome (Italy); Amati, G [Inter-University Consortium for the Application of Super-Computing for Universities and Research (CASPUR), Rome (Italy); Andenna, C [Istituto Superiore Prevenzione e Sicurezza del Lavoro (ISPESL), Rome (Italy); Benassi, M [Medical Physics Department, Istituto Regina Elena, Rome (Italy); D' Angelo, A [Medical Physics Department, Istituto Regina Elena, Rome (Italy); Frustagli, G [Istituto Superiore di Sanita (ISS) and Istituto Nazionale di Fisica Nucleare (INFN), Rome (Italy); Iaccarino, G [Medical Physics Department, Istituto Regina Elena, Rome (Italy); Occhigrossi, A [Istituto Superiore di Sanita (ISS) and Istituto Nazionale di Fisica Nucleare (INFN), Rome (Italy); Valentini, S [Istituto Superiore di Sanita (ISS) and Istituto Nazionale di Fisica Nucleare (INFN), Rome (Italy)

    2007-06-15

    New technologies in cancer radiotherapy need a more accurate computation of the dose delivered in the radiotherapeutical treatment plan, and it is important to integrate sophisticated mathematical models and advanced computing knowledge into the treatment planning (TP) process. We present some results about using Monte Carlo (MC) codes in dose calculation for treatment planning. A distributed computing resource located in the Technologies and Health Department of the Italian National Institute of Health (ISS) along with other computer facilities (CASPUR - Inter-University Consortium for the Application of Super-Computing for Universities and Research) has been used to perform a fully complete MC simulation to compute dose distribution on phantoms irradiated with a radiotherapy accelerator. Using BEAMnrc and GEANT4 MC based codes we calculated dose distributions on a plain water phantom and air/water phantom. Experimental and calculated dose values below ±2% (for depth between 5 mm and 130 mm) were in agreement both in PDD (Percentage Depth Dose) and transversal sections of the phantom. We consider these results a first step towards a system suitable for medical physics departments to simulate a complete treatment plan using remote computing facilities for MC simulations.

  3. Construction of a Digital Learning Environment Based on Cloud Computing

    Science.gov (United States)

    Ding, Jihong; Xiong, Caiping; Liu, Huazhong

    2015-01-01

    Constructing digital learning environments for ubiquitous learning and asynchronous distributed learning has opened up an immense amount of concrete research. However, current digital learning environments do not fully meet expectations for supporting interactive group learning, shared understanding and social construction of knowledge.…

  4. Performance Analysis of Routing Protocols in Ad-hoc and Sensor Networking Environments

    Directory of Open Access Journals (Sweden)

    L. Gavrilovska

    2009-06-01

    Full Text Available Ad-hoc and sensor networks have lately become increasingly popular wireless networking concepts. This paper analyzes and compares prominent routing schemes in these networking environments. The knowledge obtained can help users better understand short-range wireless network solutions, leading to options for implementation in various scenarios. In addition, it should aid researchers in developing reliable protocol improvements for the technologies of interest.

  5. Locating hardware faults in a data communications network of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-01-12

    Locating hardware faults in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and a root of a parent test tree, identifying for each child compute node of the parent node a child test tree having the child compute node as root, running the same test suite on the parent test tree and each child test tree, and identifying the parent compute node as having a defective link connected from the parent compute node to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.
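
    The test logic in the claim can be paraphrased in a few lines: a link between the parent and one of its children is suspected exactly when the test suite fails on the parent test tree but passes on every child test tree. The run_suite callable below is a placeholder for the actual diagnostics.

```python
def parent_has_defective_link(parent_tree, child_trees, run_suite):
    """Return True if the fault lies on a link between the parent node and one
    of its children: the same test suite fails on the parent test tree while
    succeeding on all child test trees."""
    return (not run_suite(parent_tree)) and all(run_suite(t) for t in child_trees)
```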

  6. Design and Implement of Astronomical Cloud Computing Environment In China-VO

    Science.gov (United States)

    Li, Changhua; Cui, Chenzhou; Mi, Linying; He, Boliang; Fan, Dongwei; Li, Shanshan; Yang, Sisi; Xu, Yunfei; Han, Jun; Chen, Junyi; Zhang, Hailong; Yu, Ce; Xiao, Jian; Wang, Chuanjun; Cao, Zihuang; Fan, Yufeng; Liu, Liang; Chen, Xiao; Song, Wenming; Du, Kangyu

    2017-06-01

    The astronomy cloud computing environment is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on virtualization technology, the astronomy cloud computing environment was designed and implemented by the China-VO team. It consists of five distributed nodes across the mainland of China. Astronomers can obtain computing and storage resources in this cloud computing environment. Through this environment, astronomers can easily search and analyze astronomical data collected by different telescopes and data centers, and avoid large-scale dataset transfers.

  7. Human face recognition using eigenface in cloud computing environment

    Science.gov (United States)

    Siregar, S. T. M.; Syahputra, M. F.; Rahmat, R. F.

    2018-02-01

    Recognizing a single face does not take long to process, but an attendance or security system in a company with many faces to recognize can take a long time. Cloud computing is a computing service performed not on a local device but on Internet-connected data-center infrastructure. Cloud computing also provides a scalability solution, increasing the resources needed when larger amounts of data are processed. In this research, the eigenface method is applied; the collection of training data is handled through the REST concept to provide resources, and the server then processes the data according to the defined stages. After the research and development of this application, it can be concluded that face recognition can be implemented with eigenfaces, using the REST concept as the endpoint for giving or receiving the information used as a resource for building the model and performing recognition.
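
    The record names the eigenface method without detailing the pipeline, so this is only a conventional sketch of it: project vectorized face images onto their principal components (the 'eigenfaces') and classify a probe by nearest neighbour in that subspace; the REST and cloud layers around it are omitted, and the component count is an assumption.

```python
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def train_eigenfaces(face_vectors, labels, n_components=50):
    """face_vectors: (n_samples, n_pixels) flattened grayscale faces."""
    pca = PCA(n_components=n_components, whiten=True).fit(face_vectors)
    projected = pca.transform(face_vectors)
    clf = KNeighborsClassifier(n_neighbors=1).fit(projected, labels)
    return pca, clf

def recognize(pca, clf, face_vector):
    """Return the predicted identity for one flattened face image."""
    return clf.predict(pca.transform([face_vector]))[0]
```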

  8. Computer Aided Design Tools for Extreme Environment Electronics Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This project aims to provide Computer Aided Design (CAD) tools for radiation-tolerant, wide-temperature-range digital, analog, mixed-signal, and radio-frequency...

  9. Distributed metadata in a high performance computing environment

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua; Liu, Xuezhao; Tang, Haiying

    2017-07-11

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
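
    A minimal sketch of the lookup step in the claim, assuming the metadata key is mapped to a burst buffer by simple hash partitioning (the record does not specify the mapping) and that each burst buffer's key-value store behaves like a dictionary.

```python
import hashlib

def owning_burst_buffer(key, burst_buffers):
    """Map a metadata key to the burst buffer whose key-value partition
    should hold it (plain hash partitioning, purely illustrative)."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return burst_buffers[digest % len(burst_buffers)]

def get_metadata(key, burst_buffers, kv_stores):
    """Route a metadata request to the burst buffer that stores the key-value."""
    owner = owning_burst_buffer(key, burst_buffers)
    return kv_stores[owner].get(key)
```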

  10. Small computer planning and selection in the organized sports environment

    OpenAIRE

    Hylton, Donna G.

    1987-01-01

    Sports organizations, like other businesses, have a need to ingest data and therefrom to produce serviceable information for purposes of communications and decision making. However, sports personnel and general business personnel typically differ in educational background and computer experience, the sports person often unaware of the advantages offered by computers. Consequently, this thesis's purpose is to: 1. Introduce sports organizations to the potential usefulness of small comp...

  11. Abstracting massive data for lightweight intrusion detection in computer networks

    KAUST Repository

    Wang, Wei

    2016-10-15

    Anomaly intrusion detection in big data environments calls for lightweight models that are able to achieve real-time performance during detection. Abstracting audit data provides a solution to improve the efficiency of data processing in intrusion detection. Data abstraction refers to abstracting or extracting the most relevant information from a massive dataset. In this work, we propose three strategies of data abstraction, namely, exemplar extraction, attribute selection and attribute abstraction. We first propose an effective method called exemplar extraction to extract representative subsets from the original massive data prior to building the detection models. Two clustering algorithms, Affinity Propagation (AP) and traditional k-means, are employed to find the exemplars from the audit data. k-Nearest Neighbor (k-NN), Principal Component Analysis (PCA) and one-class Support Vector Machine (SVM) are used for the detection. We then employ another two strategies, attribute selection and attribute extraction, to abstract audit data for anomaly intrusion detection. Two HTTP streams collected from a real computing environment as well as the KDD'99 benchmark data set are used to validate these three strategies of data abstraction. The comprehensive experimental results show that while all three strategies improve the detection efficiency, the AP-based exemplar extraction achieves the best performance of data abstraction.
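
    A compact scikit-learn sketch of the exemplar-extraction strategy: cluster the normal audit records with Affinity Propagation, keep only the exemplars, and train a one-class SVM detector on the reduced set. The parameter values here are illustrative, not those used in the paper.

```python
from sklearn.cluster import AffinityPropagation
from sklearn.svm import OneClassSVM

def build_lightweight_detector(normal_audit_data):
    """normal_audit_data: 2-D numpy array of feature vectors for normal traffic.
    Returns a one-class SVM trained only on the AP exemplars."""
    ap = AffinityPropagation(random_state=0).fit(normal_audit_data)
    exemplars = normal_audit_data[ap.cluster_centers_indices_]
    return OneClassSVM(nu=0.05, gamma="scale").fit(exemplars)

# Usage: detector.predict(new_records) returns +1 (normal) or -1 (anomalous).
```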

  12. Energy Research and Development Administration Ad Hoc Computer Networking Group: experimental program

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, I.

    1975-03-19

    The Ad Hoc Computer Networking Group was established to investigate the potential advantages and costs of newer forms of remote resource sharing and computer networking. The areas of research and investigation that are within the scope of the ERDA CNG are described. (GHT)

  13. Toward a Practical Technique to Halt Multiple Virus Outbreaks on Computer Networks

    OpenAIRE

    Hole, Kjell Jørgen

    2012-01-01

    The author analyzes a technique to prevent multiple simultaneous virus epidemics on any vulnerable computer network with inhomogeneous topology. The technique immunizes a small fraction of the computers and utilizes diverse software platforms to halt the virus outbreaks. The halting technique is of practical interest since a network's detailed topology need not be known.

  14. Bluetooth Roaming for Sensor Network System in Clinical Environment.

    Science.gov (United States)

    Kuroda, Tomohiro; Noma, Haruo; Takase, Kazuhiko; Sasaki, Shigeto; Takemura, Tadamasa

    2015-01-01

    A sensor network is key infrastructure for advancing a hospital information system (HIS). The authors proposed a method to provide roaming functionality for Bluetooth to realize a Bluetooth-based sensor network, which is suitable for connecting clinical devices. The proposed method makes the average response time of a Bluetooth connection less than one second by having the master device repeat the inquiry process continuously and by modifying the parameters of the inquiry process. The authors applied the developed sensor network to daily clinical activities in a university hospital, and confirmed the stability and effectiveness of the sensor network. As Bluetooth has become a quite common wireless interface for medical devices, the proposed protocol realizing a Bluetooth-based sensor network enables the HIS to incorporate various clinical devices and, consequently, lets information and communication technologies advance clinical services.

  15. Relative Panoramic Camera Position Estimation for Image-Based Virtual Reality Networks in Indoor Environments

    Science.gov (United States)

    Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.

    2017-09-01

    Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  16. Finding Multi-step Attacks in Computer Networks using Heuristic Search and Mobile Ambients

    NARCIS (Netherlands)

    Nunes Leal Franqueira, V.

    2009-01-01

    An important aspect of IT security governance is the proactive and continuous identification of possible attacks in computer networks. This is complicated due to the complexity and size of networks, and due to the fact that usually network attacks are performed in several steps. This thesis proposes

  17. Mechanism Aligning (Gateway) Between Operating System Based on Computer Networks in Unocal

    OpenAIRE

    Vera Morina Oktavia Carla; Drs.Ida Ayu Wiastiti, M.KOM Drs.Ida Ayu Wiastiti, M.KOM

    1997-01-01

    A computer network is a system that allows information to be shared among users. The use of operating systems in network settings is an absolute must. Replacement of the network operating system should carefully consider the possible impacts. Aligning is one solution for connecting two different systems.

  18. Instrumentation for Scientific Computing in Neural Networks, Information Science, Artificial Intelligence, and Applied Mathematics.

    Science.gov (United States)

    1987-10-01

    Instrumentation for scientific computing in neural networks, information science, artificial intelligence, and...instrumentation grant to purchase equipment for support of research in neural networks, information science, artificial intelligence, and applied mathematics...in Neural Networks, Information Science, Artificial Intelligence, and Applied Mathematics. Contract AFOSR 86-0282. Principal Investigator: Stephen

  19. Discussion on the Technology and Method of Computer Network Security Management

    Science.gov (United States)

    Zhou, Jianlei

    2017-09-01

    With the rapid development of information technology, computer network technology has penetrated all aspects of society, changing people's way of life and work to a certain extent and bringing great convenience. But computer network technology is not a panacea: it can promote social development, but it can also cause damage to communities and the country. Owing to the openness, ease of sharing and other characteristics of computer networks, these properties have a very negative impact on computer network security; in particular, technical loopholes can lead to damage to network information. On this basis, this paper gives a brief analysis of computer network security management problems and security measures.

  20. Context-aware Cloud Computing for Personal Learning Environment

    OpenAIRE

    Chen, Feng; Al-Bayatti, Ali Hilal; Siewe, Francois

    2016-01-01

    Virtual learning means learning from social interactions on a virtual platform that enables people to study anywhere and at any time. Current Virtual Learning Environments (VLEs) are a range of integrated web-based applications to support and enhance education. Normally, VLEs are institution-centric: they are owned by the institutions and are designed to support formal learning, and they do not support lifelong learning. These limitations led to the research of Personal Learning Environments (PLE...

  1. S-TSP: a novel routing algorithm for In-network processing of recursive computation in wireless sensor networks

    Science.gov (United States)

    Tang, Tingfang; Guo, Peng; Liu, Xuefeng

    2016-10-01

    In-network processing is an efficient way to reduce transmission cost in wireless sensor networks (WSNs). In-network processing of many domain-specific computation tasks usually requires the computation to be distributed losslessly across the sensor nodes, which is often not easy. In this paper we are concerned with tasks whose computation can only be partitioned in a recursive mode. To distribute such recursive computations in a WSN, an appropriate single in-network processing path must be designed, along which the intermediate data are forwarded and updated. We address recursive computations with a constant-size result, e.g., distributed least squares estimation (D-LSE). Finding the optimal in-network processing path that minimizes the total transmission cost in a WSN is a new and seldom-studied problem. To solve it, we propose a novel routing algorithm called S-TSP and compare it with other greedy algorithms. Extensive simulations are conducted, and the results show the good performance of the proposed S-TSP algorithm.
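
    The abstract frames the task as finding one low-cost path through the contributing nodes, ending at the sink. The S-TSP heuristic itself is not described in the record, so the sketch below is only a nearest-neighbour baseline over node coordinates that makes the objective concrete; it is not the authors' algorithm.

```python
import math

def greedy_processing_path(nodes, start, sink):
    """nodes: dict name -> (x, y). Build a path visiting every node once,
    starting at `start` and ending at `sink`, by always hopping to the
    nearest unvisited node; return the path and its total link cost."""
    def dist(a, b):
        return math.dist(nodes[a], nodes[b])

    path, remaining = [start], set(nodes) - {start, sink}
    while remaining:
        nxt = min(remaining, key=lambda n: dist(path[-1], n))
        path.append(nxt)
        remaining.remove(nxt)
    path.append(sink)
    cost = sum(dist(a, b) for a, b in zip(path, path[1:]))
    return path, cost
```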

  2. An IoT-Based Computational Framework for Healthcare Monitoring in Mobile Environments.

    Science.gov (United States)

    Mora, Higinio; Gil, David; Terol, Rafael Muñoz; Azorín, Jorge; Szymanski, Julian

    2017-10-10

    The new Internet of Things paradigm allows for small devices with sensing, processing and communication capabilities to be designed, which enable the development of sensors, embedded devices and other 'things' ready to understand the environment. In this paper, a distributed framework based on the internet of things paradigm is proposed for monitoring human biomedical signals in activities involving physical exertion. The main advantage and novelty of the proposed system is the flexibility of computing the health application by using resources from available devices inside the user's body area network. This proposed framework can be applied to other mobile environments, especially those where intensive data acquisition and high processing needs take place. Finally, we present a case study in order to validate our proposal, which consists in monitoring footballers' heart rates during a football match. The real-time data acquired by these devices present a clear social objective of being able to predict not only situations of sudden death but also possible injuries.

  3. An IoT-Based Computational Framework for Healthcare Monitoring in Mobile Environments

    Science.gov (United States)

    Szymanski, Julian

    2017-01-01

    The new Internet of Things paradigm allows for small devices with sensing, processing and communication capabilities to be designed, which enable the development of sensors, embedded devices and other ‘things’ ready to understand the environment. In this paper, a distributed framework based on the internet of things paradigm is proposed for monitoring human biomedical signals in activities involving physical exertion. The main advantage and novelty of the proposed system is the flexibility of computing the health application by using resources from available devices inside the user's body area network. This proposed framework can be applied to other mobile environments, especially those where intensive data acquisition and high processing needs take place. Finally, we present a case study in order to validate our proposal, which consists in monitoring footballers’ heart rates during a football match. The real-time data acquired by these devices present a clear social objective of being able to predict not only situations of sudden death but also possible injuries. PMID:28994743

  4. An IoT-Based Computational Framework for Healthcare Monitoring in Mobile Environments

    Directory of Open Access Journals (Sweden)

    Higinio Mora

    2017-10-01

    Full Text Available The new Internet of Things paradigm allows for small devices with sensing, processing and communication capabilities to be designed, which enable the development of sensors, embedded devices and other ‘things’ ready to understand the environment. In this paper, a distributed framework based on the internet of things paradigm is proposed for monitoring human biomedical signals in activities involving physical exertion. The main advantage and novelty of the proposed system is the flexibility of computing the health application by using resources from available devices inside the user's body area network. This proposed framework can be applied to other mobile environments, especially those where intensive data acquisition and high processing needs take place. Finally, we present a case study in order to validate our proposal, which consists in monitoring footballers’ heart rates during a football match. The real-time data acquired by these devices present a clear social objective of being able to predict not only situations of sudden death but also possible injuries.

  5. Multi-Interactive Teaching Model of College English in Computer Information Technology Environment

    Directory of Open Access Journals (Sweden)

    Jianlan Wen

    2017-12-01

    Full Text Available The multi-interactive teaching mode of college English takes students as the center and tasks as the link, combining classroom teaching with network-based autonomous learning in an information technology environment. Compared with traditional classroom teaching, this teaching model is characterized by integrated teaching methods, an open teaching environment and hierarchical teaching management. This paper takes the interactive teaching model of college English based on computer information technology as its research object, with "construction", "interaction" and "cooperation" as the core guiding ideas drawn from constructivist learning theory and communicative theory, and uses them to guide the design of listening, speaking and reading experiments in college English teaching under the multi-interactive teaching model. The paper first describes the application and concept of the multi-interactive teaching mode and then introduces the related learning theories. Taking non-English-major college students of the 2015 intake at Hunan Normal University as the subjects of the multi-interactive teaching model, the paper validates the effectiveness of the model for college English teaching through a detailed experimental design and a comparative analysis of teaching achievements. Against the background of the rapid development of information technology, the new multi-interactive teaching mode opens up a multi-dimensional, multi-form language learning process, which is of innovative and practical significance for the teaching of college English.

  6. An operating environment for control systems on transputer networks

    NARCIS (Netherlands)

    Tillema, H.G.; Schoute, Albert L.; Wijbrans, K.C.J.; Wijbrans, K.C.J.

    1991-01-01

    The article describes an operating environment for control systems. The environment contains the basic layers of a distributed operating system. The design of this operating environment is based on the requirements demanded by controllers which can be found in complex control systems. Due to the

  7. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers.This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  8. Honey characterization using computer vision system and artificial neural networks.

    Science.gov (United States)

    Shafiee, Sahameh; Minaei, Saeid; Moghaddam-Charkari, Nasrollah; Barzegar, Mohsen

    2014-09-15

    This paper reports the development of a computer vision system (CVS) for non-destructive characterization of honey based on colour and its correlated chemical attributes including ash content (AC), antioxidant activity (AA), and total phenolic content (TPC). Artificial neural network (ANN) models were applied to transform RGB values of images to CIE L*a*b* colourimetric measurements and to predict AC, TPC and AA from colour features of images. The developed ANN models were able to convert RGB values to CIE L*a*b* colourimetric parameters with a low generalization error of 1.01±0.99. In addition, the developed models for prediction of AC, TPC and AA showed high performance based on colour parameters of honey images, as the R² values for prediction were 0.99, 0.98, and 0.87 for AC, AA and TPC, respectively. The experimental results show the effectiveness and possibility of applying CVS for non-destructive honey characterization by the industry. Copyright © 2014 Elsevier Ltd. All rights reserved.
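
    A schematic of the two regression steps the abstract describes, using small scikit-learn feed-forward networks: one maps RGB features of honey images to CIE L*a*b*, the other maps the colour features to the chemical attributes (AC, AA, TPC). Network sizes and feature choices are assumptions, not the authors' configuration.

```python
from sklearn.neural_network import MLPRegressor

def fit_colour_models(rgb, lab, chem):
    """rgb: (n, 3) mean RGB values; lab: (n, 3) measured CIE L*a*b*;
    chem: (n, 3) ash content, antioxidant activity, total phenolics."""
    to_lab = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000).fit(rgb, lab)
    to_chem = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000).fit(lab, chem)
    return to_lab, to_chem

def predict_from_image(to_lab, to_chem, rgb_features):
    lab = to_lab.predict(rgb_features)    # colourimetric parameters
    return lab, to_chem.predict(lab)      # predicted chemical attributes
```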

  9. Study on an Agricultural Environment Monitoring Server System using Wireless Sensor Networks

    OpenAIRE

    Hwang, Jeonghwan; Shin, Changsun; Yoe, Hyun

    2010-01-01

    This paper proposes an agricultural environment monitoring server system for monitoring information concerning an outdoor agricultural production environment, utilizing Wireless Sensor Network (WSN) technology. The proposed agricultural environment monitoring server system collects environmental and soil information from the field through WSN-based environmental and soil sensors, collects image information through CCTVs, and collects location information using GPS modules. This collected inf...

  10. Next Generation Enterprise Network Business Continuity: Maintaining Operations In A Compromised Environment

    Science.gov (United States)

    2016-03-01

    ENTERPRISE NETWORK BUSINESS CONTINUITY: MAINTAINING OPERATIONS IN A COMPROMISED ENVIRONMENT by Erik C. Hansen March 2016 Thesis Advisor...in mission resilience and mission assurance engineering [2], [24]. This work explores business continuity in a compromised environment ...are commercially available and are developed by mature companies. Cisco offers collaboration tailored to virtual environments as part of Business

  11. The Development and Evaluation of a Computer-Simulated Science Inquiry Environment Using Gamified Elements

    Science.gov (United States)

    Tsai, Fu-Hsing

    2018-01-01

    This study developed a computer-simulated science inquiry environment, called the Science Detective Squad, to engage students in investigating an electricity problem that may happen in daily life. The environment combined the simulation of scientific instruments and a virtual environment, including gamified elements, such as points and a story for…

  12. Bridging context management systems for different types of pervasive computing environments

    NARCIS (Netherlands)

    Hesselman, C.E.W.; Benz, Hartmut; Benz, H.P.; Pawar, P.; Liu, F.; Wegdam, M.; Wibbels, Martin; Broens, T.H.F.; Brok, Jacco

    2008-01-01

    A context management system is a distributed system that enables applications to obtain context information about (mobile) users and forms a key component of any pervasive computing environment. Context management systems are however very environment-specific (e.g., specific for home environments)

  13. Performance Measurements in a High Throughput Computing Environment

    CERN Document Server

    AUTHOR|(CDS)2145966; Gribaudo, Marco

    The IT infrastructures of companies and research centres are implementing new technologies to satisfy the increasing need of computing resources for big data analysis. In this context, resource profiling plays a crucial role in identifying areas where the improvement of the utilisation efficiency is needed. In order to deal with the profiling and optimisation of computing resources, two complementary approaches can be adopted: the measurement-based approach and the model-based approach. The measurement-based approach gathers and analyses performance metrics executing benchmark applications on computing resources. Instead, the model-based approach implies the design and implementation of a model as an abstraction of the real system, selecting only those aspects relevant to the study. This Thesis originates from a project carried out by the author within the CERN IT department. CERN is an international scientific laboratory that conducts fundamental research in the domain of elementary particle physics. The p...

  14. Man/computer communication in a space environment

    Science.gov (United States)

    Hodges, B. C.; Montoya, G.

    1973-01-01

    The present work reports on a study of the technology required to advance the state of the art in man/machine communications. The study involved the development and demonstration of both hardware and software to effectively implement man/computer interactive channels of communication. While tactile and visual man/computer communications equipment are standard methods of interaction with machines, man's speech is a natural media for inquiry and control. As part of this study, a word recognition unit was developed capable of recognizing a minimum of one hundred different words or sentences in any one of the currently used conversational languages. The study has proven that efficiency in communication between man and computer can be achieved when the vocabulary to be used is structured in a manner compatible with the rigid communication requirements of the machine while at the same time responsive to the informational needs of the man.

  15. Team-computer interfaces in complex task environments

    Energy Technology Data Exchange (ETDEWEB)

    Terranova, M.

    1990-09-01

    This research focused on the interfaces (media of information exchange) teams use to interact about the task at hand. This report is among the first to study human-system interfaces in which the human component is a team, and the system functions as part of the team. Two operators dynamically shared a simulated fluid flow process, coordinating control and failure detection responsibilities through computer-mediated communication. Different computer interfaces representing the same system information were used to affect the individual operators' mental models of the process. Communication was identified as the most critical variable, consequently future research is being designed to test effective modes of communication. The results have relevance for the development of team-computer interfaces in complex systems in which responsibility must be shared dynamically among all members of the operation.

  16. Computational approach in estimating the need of ditch network maintenance

    Science.gov (United States)

    Lauren, Ari; Hökkä, Hannu; Launiainen, Samuli; Palviainen, Marjo; Repo, Tapani; Leena, Finer; Piirainen, Sirpa

    2015-04-01

    Ditch network maintenance (DNM), implemented annually on a 70 000 ha area in Finland, is the most controversial of all forest management practices. Nationwide, it is estimated to increase forest growth by 1-3 million m3 per year, but simultaneously to cause the export of 65 000 tons of suspended solids and 71 tons of phosphorus (P) to water courses. A systematic approach that allows simultaneous quantification of the positive and negative effects of DNM is required. Excess water in the rooting zone slows gas exchange and decreases biological activity, interfering with forest growth in boreal forested peatlands. DNM is needed when: 1) the excess water in the rooting zone restricts forest growth before the DNM, and 2) after the DNM the growth restriction ceases or decreases, and 3) the benefits of DNM are greater than the adverse effects caused. Aeration in the rooting zone can be used as a drainage criterion. Aeration is affected by several factors such as meteorological conditions, tree stand properties, hydraulic properties of peat, ditch depth, and ditch spacing. We developed a 2-dimensional DNM simulator that allows the user to adjust these factors and to evaluate their effect on soil aeration at different distances from the drainage ditch. The DNM simulator computes hydrological processes and soil aeration along a water flowpath between two ditches. Applying a daily time step, it calculates evapotranspiration, snow accumulation and melt, infiltration, soil water storage, ground water level, soil water content, air-filled porosity and runoff. The model's hydrological performance has been tested against independent high-frequency field monitoring data. Soil aeration at different distances from the ditch is computed under a steady-state assumption using an empirical oxygen consumption model, simulated air-filled porosity, and the diffusion coefficient at different depths in the soil. Aeration is adequate and forest growth rate is not limited by poor aeration if the
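
    To make the kind of daily computation concrete, here is a deliberately simplified single-bucket water-balance step (snow accumulation and degree-day melt, evapotranspiration, storage and runoff). The actual DNM simulator works in 2-D along the ditch-to-ditch flowpath and is far more detailed; every parameter below is illustrative, not taken from the model.

```python
def daily_step(storage, snowpack, precip, temp, pet,
               capacity=150.0, degree_day=3.0):
    """One day of a toy bucket water balance (all quantities in mm):
    snow accumulation/melt, evapotranspiration, soil storage and runoff."""
    if temp < 0.0:
        snowpack += precip                 # precipitation falls as snow
        water_in = 0.0
    else:
        melt = min(snowpack, degree_day * temp)
        snowpack -= melt
        water_in = precip + melt           # rain plus snowmelt infiltrates
    storage = max(0.0, storage + water_in - pet)   # evapotranspiration loss
    runoff = max(0.0, storage - capacity)          # excess drains toward the ditch
    storage -= runoff
    return storage, snowpack, runoff
```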

  17. An Object-Oriented Network-Centric Software Architecture for Physical Computing

    Science.gov (United States)

    Palmer, Richard

    1997-08-01

    Recent developments in object-oriented computer languages and infrastructure such as the Internet, Web browsers, and the like provide an opportunity to define a more productive computational environment for scientific programming that is based more closely on the underlying mathematics describing physics than traditional programming languages such as FORTRAN or C++. In this talk I describe an object-oriented software architecture for representing physical problems that includes classes for such common mathematical objects as geometry, boundary conditions, partial differential and integral equations, discretization and numerical solution methods, etc. In practice, a scientific program written using this architecture looks remarkably like the mathematics used to understand the problem, is typically an order of magnitude smaller than traditional FORTRAN or C++ codes, and hence easier to understand, debug, describe, etc. All objects in this architecture are ``network-enabled,'' which means that components of a software solution to a physical problem can be transparently loaded from anywhere on the Internet or other global network. The architecture is expressed as an ``API,'' or application programmers interface specification, with reference embeddings in Java, Python, and C++. A C++ class library for an early version of this API has been implemented for machines ranging from PC's to the IBM SP2, meaning that identical codes run on all architectures.

  18. Automatic detection of emerging threats to computer networks

    CSIR Research Space (South Africa)

    McDonald, A

    2015-10-01

    Full Text Available The purpose of intrusion detection technology is to detect threats to networked information systems and networking infrastructure in an automated fashion, thereby providing an opportunity to deploy countermeasures. This presentation showcases the research and development...

  19. On network representations of antennas inside resonating environments

    Directory of Open Access Journals (Sweden)

    F. Gronwald

    2007-06-01

    Full Text Available We discuss network representations of dipole antennas within electromagnetic cavities. It is pointed out that for a given configuration these representations are not unique. For an efficient evaluation a network representation should be chosen such that it involves as few network elements as possible. The field theoretical analogue of this circumstance is the possibility to express electromagnetic cavities' Green's functions by representations which exhibit different convergence properties. An explicit example of a dipole antenna within a rectangular cavity clarifies the corresponding interrelation between network theory and electromagnetic field theory. As an application, current spectra are calculated for the case that the antenna is nonlinearly loaded and subject to a two-tone excitation.

  20. Directional Networking in GPS Denied Environments - Time Synchronization

    Science.gov (United States)

    2016-03-14

    Directional Networking in GPS Denied Environments—Time Synchronization. Derya Cansever and Gilbert Green, Army CERDEC, Aberdeen Proving Ground MA...when GPS is not available. We show that the Fast RTSR algorithm allows the entire network to achieve time synchronization with convergence time of...RF-based measurements to synchronize time and measure node range. Satellite Doppler: Using Doppler measurements from multiple satellites along

  1. Use of medical information by computer networks raises major concerns about privacy.

    OpenAIRE

    OReilly, M.

    1995-01-01

    The development of computer databases and long-distance computer networks is leading to improvements in Canada's health care system. However, these developments come at a cost and require a balancing act between access and confidentiality. Columnist Michael OReilly, who in this article explores the security of computer networks, notes that respect for patients' privacy must be given as high a priority as the ability to see their records in the first place.

  2. Integrating Network Awareness in ATLAS Distributed Computing Using the ANSE Project

    CERN Document Server

    Klimentov, Alexei; The ATLAS collaboration; Petrosyan, Artem; Batista, Jorge Horacio; Mc Kee, Shawn Patrick

    2015-01-01

    A crucial contributor to the success of the massively scaled global computing system that delivers the analysis needs of the LHC experiments is the networking infrastructure upon which the system is built. The experiments have been able to exploit excellent high-bandwidth networking in adapting their computing models for the most efficient utilization of resources. New advanced networking technologies now becoming available, such as software-defined networking, hold the potential to further leverage the network to optimize workflows and dataflows, through proactive control of the network fabric by high-level applications such as experiment workload management and data management systems. End-to-end monitoring of networks using perfSONAR, combined with data-flow performance metrics, further allows applications to adapt based on real-time conditions. We will describe efforts underway in ATLAS on integrating network awareness at the application level, particularly in workload management, building upon ...
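
    As a hypothetical illustration of what application-level network awareness can mean in practice (not code from ATLAS or the ANSE project), the sketch below picks a source site for a transfer using perfSONAR-style throughput, latency and loss figures; the cost model, site names and numbers are made up.

```python
# Hypothetical illustration (not ATLAS or ANSE code): a workload manager picks
# the source site for a transfer from perfSONAR-style link measurements.
# The cost model, site names and numbers are made up.
from dataclasses import dataclass

@dataclass
class LinkMetrics:
    throughput_mbps: float   # recently achieved throughput from this site
    latency_ms: float        # recent one-way latency
    packet_loss: float       # fraction of packets lost

def score(m: LinkMetrics) -> float:
    # Favour bandwidth, penalise loss heavily, discount high-latency paths.
    return m.throughput_mbps * (1.0 - m.packet_loss) ** 10 / (1.0 + m.latency_ms / 100.0)

def choose_source(replicas: dict) -> str:
    return max(replicas, key=lambda site: score(replicas[site]))

replicas = {
    "SITE_A": LinkMetrics(throughput_mbps=8000, latency_ms=35, packet_loss=0.0001),
    "SITE_B": LinkMetrics(throughput_mbps=9500, latency_ms=90, packet_loss=0.002),
    "SITE_C": LinkMetrics(throughput_mbps=4000, latency_ms=20, packet_loss=0.0),
}
print(choose_source(replicas))   # SITE_A under this cost model
```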

  3. Calculating a checksum with inactive networking components in a computing system

    Science.gov (United States)

    Aho, Michael E; Chen, Dong; Eisley, Noel A; Gooding, Thomas M; Heidelberger, Philip; Tauferner, Andrew T

    2015-01-27

    Calculating a checksum utilizing inactive networking components in a computing system, including: identifying, by a checksum distribution manager, an inactive networking component, wherein the inactive networking component includes a checksum calculation engine for computing a checksum; sending, to the inactive networking component by the checksum distribution manager, metadata describing a block of data to be transmitted by an active networking component; calculating, by the inactive networking component, a checksum for the block of data; transmitting, to the checksum distribution manager from the inactive networking component, the checksum for the block of data; and sending, by the active networking component, a data communications message that includes the block of data and the checksum for the block of data.
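
    The sketch below is a toy rendering of the flow described in the abstract: a checksum distribution manager hands metadata for a data block to an idle component, receives the checksum back, and the active component then sends the block together with that checksum. The class and method names are invented for illustration and are not taken from the patent.

```python
# Toy rendering of the flow described in the abstract; class and method names
# are invented for illustration and are not taken from the patent. In hardware
# the inactive component would fetch the block via the metadata (e.g. an
# address) rather than receive it as a Python argument.
import zlib

class InactiveNetworkingComponent:
    """An idle component whose checksum calculation engine can still be used."""
    def calculate_checksum(self, metadata: dict, block: bytes) -> int:
        assert metadata["length"] == len(block)
        return zlib.crc32(block)          # stand-in for the hardware checksum engine

class ActiveNetworkingComponent:
    """The component that actually transmits the message."""
    def send(self, block: bytes, checksum: int) -> dict:
        return {"payload": block, "checksum": checksum}

class ChecksumDistributionManager:
    def __init__(self, inactive, active):
        self.inactive, self.active = inactive, active

    def transmit(self, block: bytes) -> dict:
        metadata = {"length": len(block)}                             # describe the block
        checksum = self.inactive.calculate_checksum(metadata, block)  # offloaded work
        return self.active.send(block, checksum)                      # data + checksum

manager = ChecksumDistributionManager(InactiveNetworkingComponent(),
                                      ActiveNetworkingComponent())
print(manager.transmit(b"block of data"))
```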

  4. Cloud Computing E-Communication Services in the University Environment

    Science.gov (United States)

    Babin, Ron; Halilovic, Branka

    2017-01-01

    The use of cloud computing services has grown dramatically in post-secondary institutions in the last decade. In particular, universities have been attracted to the low cost and flexibility of acquiring cloud software services from Google, Microsoft and others to implement e-mail, calendar, document management and other basic office software.…

  5. Active Databases as a Paradigm for Enhanced Computing Environments.

    Science.gov (United States)

    1983-10-01


  6. User Identification Framework in Social Network Services Environment

    Directory of Open Access Journals (Sweden)

    Brijesh BAKARIYA

    2014-01-01

    Full Text Available A social network service is a service through which people communicate with one another and exchange messages, including audio and video. Such web applications play a dominant role in Internet technology: in these online communities, people share common interests. Facebook, LinkedIn, Orkut and many others are social network services, and they are a good medium for connecting with people who have common or unique interests and goals. Privacy protection, however, is a major issue, because social networking sites allow anonymous users to share information about others, and cybercrime is consequently increasing rapidly. In this article we preprocess the web log data of social network services and assemble that data on the basis of image file formats such as jpg, jpeg, gif, png and bmp, and we propose a framework for victim identification.
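
    A minimal sketch of the preprocessing step the abstract describes, grouping web-log requests by image file format, might look as follows; the log format (Apache combined) and field layout are assumptions.

```python
# Minimal sketch of the preprocessing step described in the abstract:
# grouping web-log requests by image file format. The log format
# (Apache combined) and field layout are assumptions.
import re
from collections import defaultdict

IMAGE_EXTENSIONS = {"jpg", "jpeg", "gif", "png", "bmp"}
REQUEST_RE = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+"')

def group_by_image_format(log_lines):
    groups = defaultdict(list)
    for line in log_lines:
        match = REQUEST_RE.search(line)
        if not match:
            continue
        path = match.group(1).split("?", 1)[0]      # drop any query string
        ext = path.rsplit(".", 1)[-1].lower()
        if ext in IMAGE_EXTENSIONS:
            groups[ext].append(line)
    return groups

sample = [
    '10.0.0.1 - - [01/Jan/2014:10:00:00 +0000] "GET /photos/alice.jpg HTTP/1.1" 200 5120',
    '10.0.0.2 - - [01/Jan/2014:10:00:05 +0000] "GET /index.html HTTP/1.1" 200 1024',
    '10.0.0.3 - - [01/Jan/2014:10:00:09 +0000] "GET /icons/logo.png HTTP/1.1" 200 2048',
]
print({ext: len(lines) for ext, lines in group_by_image_format(sample).items()})
```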

  7. PEDAGOGICAL EXPERIMENT ON FORMATION OF STUDENT MULTICULTURAL COMPETENCY UNDER CONDITIONS OF COMPUTER-ORIENTED LEARNING ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Iryna V. Ivanyuk

    2016-02-01

    Full Text Available The article presents the results of experimental verification of the levels of students' multicultural competency in a computer-oriented learning environment. The pedagogical experiment was carried out by the author within the research project "Development of a computer-oriented learning environment for the multicultural education of students in the European Union." The content of the notion "multicultural competency of students" was determined, and the basic content of the notion "computer-oriented learning environment for the multicultural education of pupils" was clarified. Scientific theories, approaches and principles for forming students' multicultural competency in a computer-oriented learning environment were identified, and criteria for evaluating the levels of students' multicultural competency in such an environment were described. It is concluded that the pedagogical experiment confirmed the working hypothesis of the study: the level of students' multicultural competency can be raised through the pedagogically effective and scientifically justified use of the content, forms and means of a computer-oriented learning environment for multicultural education in the educational process.

  8. Paper, Piles, and Computer Files: Folklore of Information Work Environments.

    Science.gov (United States)

    Neumann, Laura J.

    1999-01-01

    Reviews literature to form a folklore of information workspace and emphasizes the importance of studying folklore of information work environments in the context of the current shift toward removing work from any particular place via information systems, e-mail, and the Web. Discusses trends in workplace design and corporate culture. Contains 84…

  9. Managing computer networks using peer-to-peer technologies

    OpenAIRE

    Granville, Lisandro Zambenedetti; Rosa, Diego Moreira da; Panisson, André; Melchiors, Cristina; Almeida, Maria Janilce Bosquiroli; Tarouco, Liane Margarida Rockenbach

    2005-01-01

    Peer-to-peer systems and network management are usually related to each other because the traffic loads of P2P systems have to be controlled to avoid regular network services becoming unavailable due to network congestion. In this context, from a network operation point of view, P2P systems often mean problems. In this article we take a different perspective and look at P2P technologies as an alternative to improve current network management solutions. We introduce an approach where P2P netwo...

  10. Spacelab data analysis using the space plasma computer analysis network (SCAN) system

    Science.gov (United States)

    Green, J. L.

    1984-01-01

    The Space-plasma Computer Analysis Network (SCAN) currently connects a large number of U.S. Spacelab investigators into a common computer network. Used primarily by plasma physics researchers at present, SCAN provides access to Spacelab investigators in other areas of space science, to Spacelab and non-Spacelab correlative databases, and to large Class VI computational facilities for modeling. SCAN links computers together at remote institutions used by space researchers, utilizing commercially available software for computer-to-computer communications. Started by NASA's Office of Space Science in mid-1980, SCAN presently contains ten system nodes located at major universities and space research laboratories, with fourteen new nodes projected for the near future. The Stanford University computer gateways allow SCAN users to connect to the ARPANET and TELENET overseas networks.

  11. Cooperation in networks where the learning environment differs from the interaction environment.

    Directory of Open Access Journals (Sweden)

    Jianlei Zhang

    Full Text Available We study the evolution of cooperation in a structured population, combining insights from evolutionary game theory and the study of interaction networks. In earlier studies it has been shown that cooperation is difficult to achieve in homogeneous networks, but that cooperation can get established relatively easily when individuals differ largely concerning the number of their interaction partners, such as in scale-free networks. Most of these studies do, however, assume that individuals change their behaviour in response to information they receive on the payoffs of their interaction partners. In real-world situations, subjects do not only learn from their interaction partners, but also from other individuals (e.g. teachers, parents, or friends). Here we investigate the implications of such incongruences between the 'interaction network' and the 'learning network' for the evolution of cooperation in two paradigm examples, the Prisoner's Dilemma game (PDG) and the Snowdrift game (SDG). Individual-based simulations and an analysis based on pair approximation both reveal that cooperation will be severely inhibited if the learning network is very different from the interaction network. If the two networks overlap, however, cooperation can get established even in case of considerable incongruence between the networks. The simulations confirm that cooperation gets established much more easily if the interaction network is scale-free rather than random-regular. The structure of the learning network has a similar but much weaker effect. Overall we conclude that the distinction between interaction and learning networks deserves more attention since incongruences between these networks can strongly affect both the course and outcome of the evolution of cooperation.
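
    An individual-based sketch in the spirit of the study is shown below: agents play a Prisoner's Dilemma with their interaction-network neighbours but imitate strategies observed in a separate learning network using a Fermi rule. The graph sizes, temptation payoff and selection strength are illustrative choices, not the parameters of the paper.

```python
# Individual-based sketch in the spirit of the study: agents play a Prisoner's
# Dilemma with their *interaction* neighbours but imitate strategies observed
# in a different *learning* network. Graph sizes, the temptation payoff b and
# the selection strength beta are illustrative choices, not the paper's values.
import math
import random
import networkx as nx

random.seed(1)
N, b, beta = 500, 1.4, 1.0
interaction = nx.barabasi_albert_graph(N, 2, seed=1)    # scale-free interaction network
learning = nx.random_regular_graph(4, N, seed=2)        # incongruent learning network

strategy = {i: random.random() < 0.5 for i in interaction}   # True = cooperate

def payoff(i):
    """Accumulated PD payoff of node i against its interaction neighbours."""
    total = 0.0
    for j in interaction.neighbors(i):
        if strategy[i] and strategy[j]:
            total += 1.0        # mutual cooperation (R = 1)
        elif not strategy[i] and strategy[j]:
            total += b          # defector exploits a cooperator (T = b)
    return total                # P = S = 0 in this weak PD parameterisation

for _ in range(20 * N):                                  # asynchronous strategy updates
    i = random.randrange(N)
    j = random.choice(list(learning.neighbors(i)))       # role model from the learning net
    prob = 1.0 / (1.0 + math.exp(-beta * (payoff(j) - payoff(i))))   # Fermi imitation rule
    if random.random() < prob:
        strategy[i] = strategy[j]

print("fraction of cooperators:", sum(strategy.values()) / N)
```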

  12. A FUNCTIONAL MODEL OF COMPUTER-ORIENTED LEARNING ENVIRONMENT OF A POST-DEGREE PEDAGOGICAL EDUCATION

    Directory of Open Access Journals (Sweden)

    Kateryna R. Kolos

    2014-06-01

    Full Text Available The study substantiates the need for a systematic study of the functioning of a computer-oriented learning environment in post-degree pedagogical education. It defines the notion of a "functional model of a computer-oriented learning environment of post-degree pedagogical education" and builds such a functional model in accordance with the functions of the business, information and communication technology, academic and administrative staff, and the peculiarities of teachers' training courses.

  13. Sentiment analysis and ontology engineering an environment of computational intelligence

    CERN Document Server

    Chen, Shyi-Ming

    2016-01-01

    This edited volume provides the reader with a fully updated, in-depth treatise on the emerging principles, conceptual underpinnings, algorithms and practice of Computational Intelligence in the realization of concepts and implementation of models of sentiment analysis and ontology-oriented engineering. The volume involves studies devoted to key issues of sentiment analysis, sentiment models, and ontology engineering. The book is structured into three main parts. The first part offers a comprehensive and prudently structured exposure to the fundamentals of sentiment analysis and natural language processing. The second part consists of studies devoted to the concepts, methodologies, and algorithmic developments elaborating on fuzzy linguistic aggregation for emotion analysis, the interpretability of computational sentiment models, emotion classification, sentiment-oriented information retrieval, and a methodology of adaptive dynamics in knowledge acquisition. The third part includes a plethora of applica...

  14. Integrated Analysis of Environment-driven Operational Effects in Sensor Networks

    Energy Technology Data Exchange (ETDEWEB)

    Park, Alfred J [ORNL; Perumalla, Kalyan S [ORNL

    2007-07-01

    There is a rapidly growing need to evaluate sensor network functionality and performance in the context of the larger environment of infrastructure and applications in which the sensor network is organically embedded. This need, which is motivated by complex applications related to national security operations, leads to a paradigm fundamentally different from that of traditional data networks. In the sensor networks of interest to us, the network dynamics depend strongly on sensor activity, which in turn is triggered by events in the environment. Because the behavior of sensor networks is sensitive to these driving phenomena, the integrity of the sensed observations, measurements and resource usage by the network can widely vary. It is therefore imperative to accurately capture the environmental phenomena, and drive the simulation of the sensor network operation by accounting fully for the environment effects. In this paper, we illustrate the strong, intimate coupling between the sensor network operation and the driving phenomena in their applications with an example sensor network designed to detect and track gaseous plumes.
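
    A toy example of the coupling the abstract argues for is sketched below: a moving Gaussian plume drives sensor detections, and the detections in turn determine how many messages the network carries at each step. All numbers are made up for illustration.

```python
# Toy example of the coupling the abstract describes: a moving Gaussian
# "plume" drives sensor detections, and detections generate network traffic.
# All numbers (grid, threshold, plume speed) are made up for illustration.
import math
import random

random.seed(0)
sensors = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(50)]
threshold = 0.2

def concentration(x, y, t):
    cx, cy, sigma = 2.0 * t, 50.0, 12.0          # plume centre drifts along x
    return math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

for t in range(0, 60, 10):
    detections = sum(concentration(x, y, t) > threshold for (x, y) in sensors)
    messages = detections          # one report per triggered sensor per step
    print(f"t={t:2d}s  triggered sensors={detections:2d}  messages sent={messages}")
```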

  15. Collecting data from a sensor network in a single-board computer

    Science.gov (United States)

    Casciati, F.; Casciati, S.; Chen, Z.-C.; Faravelli, L.; Vece, M.

    2015-07-01

    The EU-FP7 project SPARTACUS, currently in progress, sees the international cooperation of several partners toward the design and implementation of a satellite-based asset-tracking system for supporting emergency management in crisis operations. Due to the emergency environment, one has to rely on low-power wireless communication. Therefore, the communication hardware and software must be designed to match requirements which can only be foreseen at the level of more or less likely scenarios. The latter aspect suggests extensive use of a simulator (instead of a real network of sensors) to cover extreme situations. The former power consumption remark suggests the use of a minimal computer (Raspberry Pi) as the data collector. In this paper, the results of a broad simulation campaign are reported in order to investigate the accuracy of the received data and the global power consumption for each of the considered scenarios.

  16. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    Energy Technology Data Exchange (ETDEWEB)

    Mike Bockelie; Dave Swensen; Martin Denison; Zumao Chen; Temi Linjewile; Mike Maguire; Adel Sarofim; Connie Senior; Changguan Yang; Hong-Shig Shim

    2004-04-28

    This is the fourteenth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a Virtual Engineering-based framework for simulating the performance of Advanced Power Systems. Within the last quarter, good progress has been made on all aspects of the project. Software development efforts have focused primarily on completing a prototype detachable user interface for the framework and on integrating Carnegie Mellon University's IECM model core with the computational engine. In addition to this work, progress has been made on several other development and modeling tasks for the program. These include: (1) improvements to the infrastructure code of the computational engine, (2) enhancements to the model interfacing specifications, (3) additional development to increase the robustness of all framework components, (4) enhanced coupling of the computational and visualization engine components, (5) a series of detailed simulations studying the effects of gasifier inlet conditions on the heat flux to the gasifier injector, and (6) creation of detailed plans for implementing models for mercury capture for both warm and cold gas cleanup.

  17. Low Power Multi-Hop Networking Analysis in Intelligent Environments

    Directory of Open Access Journals (Sweden)

    Josu Etxaniz

    2017-05-01

    Full Text Available Intelligent systems are driven by the latest technological advances in many different areas such as sensing, embedded systems, wireless communications or context recognition. This paper focuses on some of those areas. Concretely, the paper deals with wireless communications issues in embedded systems. More precisely, the paper combines the multi-hop networking with Bluetooth technology and a quality of service (QoS) metric, the latency. Bluetooth is a radio license-free worldwide communication standard that makes low power multi-hop wireless networking available. It establishes piconets (point-to-point and point-to-multipoint links) and scatternets (multi-hop networks). As a result, many Bluetooth nodes can be interconnected to set up ambient intelligent networks. Then, this paper presents the results of the investigation on multi-hop latency with park and sniff Bluetooth low power modes conducted over the hardware test bench previously implemented. In addition, the empirical models to estimate the latency of multi-hop communications over Bluetooth Asynchronous Connectionless Links (ACL) in park and sniff mode are given. The designers of devices and networks for intelligent systems will benefit from the estimation of the latency in Bluetooth multi-hop communications that the models provide.

  18. Low Power Multi-Hop Networking Analysis in Intelligent Environments.

    Science.gov (United States)

    Etxaniz, Josu; Aranguren, Gerardo

    2017-05-19

    Intelligent systems are driven by the latest technological advances in many different areas such as sensing, embedded systems, wireless communications or context recognition. This paper focuses on some of those areas. Concretely, the paper deals with wireless communications issues in embedded systems. More precisely, the paper combines the multi-hop networking with Bluetooth technology and a quality of service (QoS) metric, the latency. Bluetooth is a radio license-free worldwide communication standard that makes low power multi-hop wireless networking available. It establishes piconets (point-to-point and point-to-multipoint links) and scatternets (multi-hop networks). As a result, many Bluetooth nodes can be interconnected to set up ambient intelligent networks. Then, this paper presents the results of the investigation on multi-hop latency with park and sniff Bluetooth low power modes conducted over the hardware test bench previously implemented. In addition, the empirical models to estimate the latency of multi-hop communications over Bluetooth Asynchronous Connectionless Links (ACL) in park and sniff mode are given. The designers of devices and networks for intelligent systems will benefit from the estimation of the latency in Bluetooth multi-hop communications that the models provide.
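
    The abstract states that empirical latency models for park and sniff mode are given in the paper. The snippet below only shows the general shape such a per-hop model might take; the linear form and the coefficients are placeholders, not the fitted values reported by the authors.

```python
# Shape of a per-hop latency estimate like the one the abstract mentions.
# The linear form and the coefficients below are placeholders for
# illustration only; the fitted values are in the paper, not reproduced here.
def estimated_latency_ms(hops: int, mode: str) -> float:
    per_hop = {"sniff": 40.0, "park": 120.0}      # placeholder ms-per-hop values
    setup = {"sniff": 15.0, "park": 30.0}         # placeholder fixed overhead
    return setup[mode] + per_hop[mode] * hops

for mode in ("sniff", "park"):
    print(mode, [round(estimated_latency_ms(h, mode)) for h in range(1, 6)])
```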

  19. Syntactic computations in the language network: Characterising dynamic network properties using representational similarity analysis

    Directory of Open Access Journals (Sweden)

    Lorraine Komisarjevsky Tyler

    2013-05-01

    Full Text Available The core human capacity of syntactic analysis involves a left hemisphere network involving left inferior frontal gyrus (LIFG) and posterior middle temporal gyrus (LMTG) and the anatomical connections between them. Here we use MEG to determine the spatio-temporal properties of syntactic computations in this network. Listeners heard spoken sentences containing a local syntactic ambiguity (e.g. …landing planes…), at the offset of which they heard a disambiguating verb and decided whether it was an acceptable/unacceptable continuation of the sentence. We charted the time-course of processing and resolving syntactic ambiguity by measuring MEG responses from the onset of each word in the ambiguous phrase and the disambiguating word. We used representational similarity analysis (RSA) to characterize syntactic information represented in the LIFG and LpMTG over time and to investigate their relationship to each other. Testing a variety of lexico-syntactic and ambiguity models against the MEG data, our results suggest early lexico-syntactic responses in the LpMTG and later effects of ambiguity in the LIFG, pointing to a clear differentiation in the functional roles of these two regions. Our results suggest the LpMTG represents and transmits lexical information to the LIFG, which responds to and resolves the ambiguity.
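
    As a generic illustration of the representational similarity analysis step (not the study's code or data), the sketch below builds a model dissimilarity matrix and a noisy data dissimilarity matrix and asks how well the model explains the data with a rank correlation.

```python
# Generic illustration of the RSA step (not the study's code or data):
# build a model dissimilarity matrix and a data dissimilarity matrix,
# then quantify model-data agreement with a rank correlation.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_items, n_features, n_sensors = 20, 8, 30

model_patterns = rng.normal(size=(n_items, n_features))   # e.g. lexico-syntactic features
data_patterns = model_patterns @ rng.normal(size=(n_features, n_sensors)) \
                + 0.5 * rng.normal(size=(n_items, n_sensors))   # noisy "MEG" patterns

model_rdm = pdist(model_patterns, metric="correlation")   # condensed model RDM
data_rdm = pdist(data_patterns, metric="correlation")     # condensed data RDM

rho, p = spearmanr(model_rdm, data_rdm)
print(f"model-data RDM similarity: rho={rho:.2f}, p={p:.3g}")
```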

  20. Computing of network tenacity based on modified binary particle swarm optimization algorithm

    Science.gov (United States)

    Shen, Maoxing; Sun, Chengyu

    2017-05-01

    For rapid calculation of network node tenacity, which characterizes the invulnerability of a network, this paper designs a computational method based on a modified binary particle swarm optimization (BPSO) algorithm. Firstly, to improve the convergence of the BPSO algorithm, an improved bit transfer probability function and position updating formula are adopted. Secondly, an algorithm for evaluating the BPSO fitness function based on breadth-first search is designed. Thirdly, the computing method for network tenacity based on the modified BPSO algorithm is presented. Results of experiments conducted on the Advanced Research Projects Agency (ARPA) network and the Tactical Support Communication (TCS) network show that the method calculates network tenacity effectively and efficiently.
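
    The abstract does not give the fitness function explicitly, so the sketch below is an assumption about its general shape: for a binary particle encoding a set of removed nodes, breadth-first search finds the surviving components and the standard tenacity ratio (size of the cut set plus the order of the largest remaining component, divided by the number of components) is returned for the BPSO algorithm to minimise.

```python
# Sketch of a BFS-based fitness evaluation for one binary particle, using the
# standard definition of tenacity T(G) = min_S (|S| + tau(G-S)) / omega(G-S),
# where tau is the order of the largest remaining component and omega the
# number of components. The paper's exact fitness function is not given in
# the abstract, so this is an assumption about its general shape.
from collections import deque

def components_after_removal(adj, removed):
    """BFS over the surviving nodes; return the component sizes."""
    seen, sizes = set(removed), []
    for start in range(len(adj)):
        if start in seen:
            continue
        queue, count = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            count += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        sizes.append(count)
    return sizes

def tenacity_fitness(adj, particle):
    """Fitness of one binary particle: the tenacity ratio for its cut set."""
    removed = {i for i, bit in enumerate(particle) if bit == 1}
    sizes = components_after_removal(adj, removed)
    if len(sizes) <= 1:                 # not a disconnecting set: penalise
        return float("inf")
    return (len(removed) + max(sizes)) / len(sizes)

# Six-node example graph (adjacency lists); removing node 2 splits it in two.
adj = [[1, 2], [0, 2], [0, 1, 3], [2, 4, 5], [3, 5], [3, 4]]
print(tenacity_fitness(adj, [0, 0, 1, 0, 0, 0]))   # (1 + 3) / 2 = 2.0
```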