WorldWideScience

Sample records for network computing

  1. Computer networks

    Directory of Open Access Journals (Sweden)

    N. U. Ahmed

    2002-01-01

    Full Text Available In this paper, we construct a new dynamic model for the Token Bucket (TB) algorithm used in computer networks and use a systems approach for its analysis. This model is then augmented by adding a dynamic model for a multiplexor at an access node where the TB exercises a policing function. In the model, traffic policing, multiplexing and network utilization are formally defined. Based on the model, we study such issues as quality of service (QoS), traffic sizing and network dimensioning. We also propose an algorithm using feedback control to improve QoS and network utilization. Applying MPEG video traces as the input traffic to the model, we verify the usefulness and effectiveness of our model.
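
    The token-bucket policing behaviour that the paper models dynamically can be illustrated with a short, self-contained sketch; the refill rate, bucket depth and packet size below are illustrative assumptions rather than values from the paper.

```python
import time

class TokenBucket:
    """Simple token-bucket policer: a packet conforms only if enough tokens are available."""

    def __init__(self, rate, capacity):
        self.rate = rate            # token refill rate (bytes per second)
        self.capacity = capacity    # bucket depth, i.e. maximum burst size (bytes)
        self.tokens = capacity
        self.last = time.monotonic()

    def conforms(self, packet_size):
        now = time.monotonic()
        # Add the tokens accumulated since the last check, capped at the bucket depth.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_size <= self.tokens:
            self.tokens -= packet_size   # admit the packet
            return True
        return False                     # non-conforming: drop or mark the packet

# Example: police back-to-back 1500-byte packets at 1 Mbit/s with an 8 kB bucket.
tb = TokenBucket(rate=125_000, capacity=8_000)
admitted = sum(tb.conforms(1500) for _ in range(20))
print(f"{admitted} of 20 packets conformed")
```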

  2. Introduction to computer networking

    CERN Document Server

    Robertazzi, Thomas G

    2017-01-01

    This book gives a broad look at both fundamental networking technology and the new areas that support and use it. It is a concise introduction to the most prominent recent technological topics in computer networking. Topics include network technology such as wired and wireless networks, enabling technologies such as data centers, software defined networking, cloud and grid computing, and applications such as networks on chips, space networking and network security. The accessible writing style and non-mathematical treatment make this a useful book for the student, network and communications engineer, computer scientist and IT professional. • Features a concise, accessible treatment of computer networking, focusing on new technological topics; • Provides a non-mathematical introduction to networks in their most common forms today; • Includes new developments in switching, optical networks, WiFi, Bluetooth, LTE, 5G, and quantum cryptography.

  3. Complex networks and computing

    Institute of Scientific and Technical Information of China (English)

    Shuigeng ZHOU; Zhongzhi ZHANG

    2009-01-01

    Nowadays complex networks are pervasive in various areas of science and technology. Popular examples of complex networks include the Internet, social networks of collaboration, citation and co-authoring, as well as biological networks such as gene and protein interactions, among others. Complex networks research spans mathematics, computer science, engineering, biology and the social sciences. Even within computer science, an increasing number of problems are either found to be related to complex networks or studied from the perspective of complex networks, such as searching on the Web and P2P networks, routing in sensor networks, language processing and software engineering. The interaction and merging of complex networks and computing is inspiring new opportunities and challenges in computer science.

  4. Basics of Computer Networking

    CERN Document Server

    Robertazzi, Thomas

    2012-01-01

    This Springer Brief, Basics of Computer Networking, provides a non-mathematical introduction to the world of networks, covering the technology of both wired and wireless networks. Coverage includes transmission media, local area networks, wide area networks, and network security. It is written in an accessible style for the interested layman by the author of a widely used textbook, drawing on many years of experience explaining concepts to the beginner.

  5. Computer network defense system

    Science.gov (United States)

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb

    2017-08-22

    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of virtual machines from an operating network in a deception network, forming a group of cloned virtual machines, when the group of virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of cloned virtual machines as if the group of cloned virtual machines were in the operating network. The computer system moves the network connections used by the adversary from the group of virtual machines in the operating network to the group of cloned virtual machines, thereby protecting the group of virtual machines from actions performed by the adversary.

  6. Mobile networks and computing

    CERN Document Server

    Rajasekaran, Sanguthevar; Hsu, D Frank

    2000-01-01

    Advances in the technologies of networking, wireless communications, and miniaturization of computers have led to rapid development of mobile communication infrastructure and have engendered a new paradigm of computing. Users carrying portable devices can now move freely about while remaining connected to the network. This "portability" allows for access to information from anywhere and at any time. This flexibility has resulted in new levels of complexity not encountered previously in software and protocol design for wired networking. New challenges in designing software systems for mobile networks include location and mobility management, channel allocation, power conservation, and more. In this book, renowned researchers in the field address these aspects of mobile networking.

  7. Computer-communication networks

    CERN Document Server

    Meditch, James S

    1983-01-01

    Computer-Communication Networks presents a collection of articles focused on modeling, analysis, design, and performance optimization. It discusses the problem of modeling the performance of local area networks under file transfer and addresses the design of multi-hop, mobile-user radio networks. Some of the topics covered in the book are distributed packet-switching queuing network design, investigations of communication switching techniques in computer networks, and minimum-hop flow assignment and routing subject to an average message delay constraint.

  8. Hyperswitch Communication Network Computer

    Science.gov (United States)

    Peterson, John C.; Chow, Edward T.; Priel, Moshe; Upchurch, Edwin T.

    1993-01-01

    The Hyperswitch Communications Network (HCN) computer is a prototype multiple-processor computer under development. It incorporates an improved version of the hyperswitch communication network described in "Hyperswitch Network For Hypercube Computer" (NPO-16905) and is designed to support high-level software and expansion of itself. The HCN computer is a message-passing, multiple-instruction/multiple-data computer offering significant advantages over older single-processor and bus-based multiple-processor computers with respect to price/performance ratio, reliability, availability, and manufacturing. The design of the HCN operating-system software provides a flexible computing environment accommodating both parallel and distributed processing. It also achieves a balance among the following competing factors: performance in processing and communications, ease of use, and tolerance of (and recovery from) faults.

  9. Computer networks forensics

    Directory of Open Access Journals (Sweden)

    Ratomir Đ. Đokić

    2013-02-01

    Full Text Available Digital forensics is a set of scientific methods and procedures for the collection, analysis and presentation of evidence that can be found on computers, servers, computer networks, databases, mobile devices, and any other devices on which data can be stored. Digital forensics of computer networks is the examination of digital evidence found on servers and user devices that exchange internal or external communication over local or public networks. There is also a need to identify the site and mode of origin of messages, establish user identity, and detect types of manipulation involving account logins. This paper presents the basic elements of computer networks and the software used to communicate, and describes methods for collecting digital evidence and its analysis.

  10. Computers, Networks and Work.

    Science.gov (United States)

    Sproull, Lee; Kiesler, Sara

    1991-01-01

    Discussed is how computer networks can affect the nature of work and the relationships between managers and employees. The differences between face-to-face exchanges and electronic interactions are described. (KR)

  11. Computer network programming

    Energy Technology Data Exchange (ETDEWEB)

    Hsu, J.Y. [California Polytechnic State Univ., San Luis Obispo, CA (United States)

    1996-12-31

    The programs running on a computer network can be divided into two parts: the Network Operating System (NOS) and the user applications. Any high-level language translator, such as C, JAVA, BASIC, FORTRAN, or COBOL, runs under the NOS as a programming tool to produce network application programs or software. Each application program, while running on the network, provides the human user with network application services, such as remote database search, retrieval, etc. The Network Operating System should provide a simple and elegant system interface to all network application programs. This programming interface may request Transport-layer services on behalf of a network application program. The primary goals are to achieve programming convenience and to avoid complexity. In a 5-layer network model, the system interface is comprised of a group of system calls which are collectively known as the session layer, with its own Session Protocol Data Units. This is a position paper discussing the basic system primitives which reside between a network application program and the Transport layer, and a programming example of using such primitives.
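
    The session-layer idea described in the paper, a thin set of system primitives that request Transport-layer services on behalf of an application, can be sketched with Python's standard socket API. The class and primitive names (S_OPEN, S_SEND, etc.) below are hypothetical illustrations, not the paper's interface.

```python
import socket

class Session:
    """Minimal 'session layer' wrapper over TCP transport services (illustrative only)."""

    def __init__(self, host, port):
        # S_OPEN: ask the transport layer for a connection on behalf of the application.
        self.sock = socket.create_connection((host, port))

    def send(self, data: bytes):
        # S_SEND: length-prefix the payload so message boundaries survive the byte stream.
        self.sock.sendall(len(data).to_bytes(4, "big") + data)

    def receive(self) -> bytes:
        # S_RECV: read the 4-byte length header, then exactly that many payload bytes.
        header = self._read_exact(4)
        return self._read_exact(int.from_bytes(header, "big"))

    def _read_exact(self, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = self.sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed the session")
            buf += chunk
        return buf

    def close(self):
        # S_CLOSE: release the transport connection.
        self.sock.close()
```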

  12. Enlightenment on Computer Network Reliability From Transportation Network Reliability

    OpenAIRE

    Hu Wenjun; Zhou Xizhao

    2011-01-01

    Referring to the transportation network reliability problem, five new computer network reliability definitions are proposed and discussed: computer network connectivity reliability, computer network time reliability, computer network capacity reliability, computer network behavior reliability and computer network potential reliability. Finally, strategies are suggested to enhance network reliability.

  13. Computer Networks and Globalization

    Directory of Open Access Journals (Sweden)

    J. Magliaro

    2007-07-01

    Full Text Available Communication and information computer networks connect the world in ways that make globalization more natural and inequity more subtle. As educators, we look at these phenomena holistically, analyzing them from the realist’s view, thus exploring tensions, (in)equity and (in)justice, and from the idealist’s view, thus embracing connectivity, convergence and the development of a collective consciousness. In an increasingly market-driven world we find examples of openness and human generosity that are based on networks, specifically the Internet. After addressing open movements in publishing, the software industry and education, we describe the possibility of a dialectic equilibrium between globalization and indigenousness in view of ecologically designed future smart networks.

  14. Computer Networks and Networking: A Primer.

    Science.gov (United States)

    Collins, Mauri P.

    1993-01-01

    Provides a basic introduction to computer networks and networking terminology. Topics addressed include modems; the Internet; TCP/IP (Transmission Control Protocol/Internet Protocol); transmission lines; Internet Protocol numbers; network traffic; Fidonet; file transfer protocol (FTP); TELNET; electronic mail; discussion groups; LISTSERV; USENET;…

  15. Computing networks from cluster to cloud computing

    CERN Document Server

    Vicat-Blanc, Pascale; Guillier, Romaric; Soudan, Sebastien

    2013-01-01

    "Computing Networks" explores the core of the new distributed computing infrastructures we are using today:  the networking systems of clusters, grids and clouds. It helps network designers and distributed-application developers and users to better understand the technologies, specificities, constraints and benefits of these different infrastructures' communication systems. Cloud Computing will give the possibility for millions of users to process data anytime, anywhere, while being eco-friendly. In order to deliver this emerging traffic in a timely, cost-efficient, energy-efficient, and

  16. INFORMATION SECURITY IN COMPUTER NETWORKS

    OpenAIRE

    Мехед, Д. Б.

    2016-01-01

    The article deals with computer networks and their types of construction, analyzing the advantages and disadvantages of the different network types. It also covers the basic types of information transmission, highlighting their advantages and disadvantages, information loss, and methods of protection.

  17. Eradicating Computer Viruses on Networks

    CERN Document Server

    Huang, Jinyu

    2012-01-01

    The spread of computer viruses can be modeled as SIS (susceptible-infected-susceptible) epidemic propagation. We show that, for random immunization or targeted immunization to effectively prevent computer virus propagation on homogeneous networks, antivirus programs must be installed on every computer node and frequently updated, which may entail large effort and cost. We therefore propose a new policy called "network monitors" to tackle this problem. In this policy, we install and update antivirus programs only for a small number of computer nodes, namely the "network monitors", which can additionally monitor their neighboring nodes' behavior. This mechanism incurs a relatively small cost to install and update antivirus programs. We also show that the "network monitors" policy is efficient in protecting the network's safety. Numerical simulations confirm our analysis.
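
    A toy simulation of these ideas, assuming SIS spreading on a random homogeneous network and a simple reading of the "network monitors" policy in which a monitor cleans itself and its infected neighbours each step; all parameters are illustrative, not taken from the paper.

```python
import random

random.seed(1)
N, p_edge = 200, 0.03            # homogeneous (Erdos-Renyi-like) network
beta, delta = 0.2, 0.1           # per-step infection and recovery probabilities
monitors = set(random.sample(range(N), 10))   # small set of "network monitors"

# Build an undirected random graph as an adjacency list.
adj = {i: set() for i in range(N)}
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < p_edge:
            adj[i].add(j)
            adj[j].add(i)

infected = set(random.sample(range(N), 5))
for step in range(100):
    new_infected = set(infected)
    for u in infected:
        # SIS contagion: each infected node infects each susceptible neighbour w.p. beta.
        for v in adj[u]:
            if v not in infected and random.random() < beta:
                new_infected.add(v)
        if random.random() < delta:          # spontaneous recovery (back to susceptible)
            new_infected.discard(u)
    # Monitor policy: monitors detect and clean themselves and their infected neighbours.
    for m in monitors:
        new_infected.discard(m)
        new_infected -= adj[m]
    infected = new_infected

print(f"infected nodes after 100 steps: {len(infected)}")
```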

  18. Computer Networks A Systems Approach

    CERN Document Server

    Peterson, Larry L

    2011-01-01

    This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big pictur

  19. Understanding and designing computer networks

    CERN Document Server

    King, Graham

    1995-01-01

    Understanding and Designing Computer Networks considers the ubiquitous nature of data networks, with particular reference to internetworking and the efficient management of all aspects of networked integrated data systems. In addition it looks at the next phase of networking developments; efficiency and security are covered in the sections dealing with data compression and data encryption; and future examples of network operations, such as network parallelism, are introduced.A comprehensive case study is used throughout the text to apply and illustrate new techniques and concepts as th

  20. A quantum computer network

    CERN Document Server

    Kesidis, George

    2009-01-01

    Wong's diffusion network is a stochastic, zero-input Hopfield network with a Gibbs stationary distribution over a bounded, connected continuum. Previously, logarithmic thermal annealing was demonstrated for the diffusion network and digital versions of it were studied and applied to imaging. Recently, "quantum" annealed Markov chains have garnered significant attention because of their improved performance over "pure" thermal annealing. In this note, a joint quantum and thermal version of Wong's diffusion network is described and its convergence properties are studied. Different choices for "auxiliary" functions are discussed, including those of the kinetic type previously associated with quantum annealing.
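
    As a loose, purely classical illustration of thermally annealed diffusion dynamics on a bounded state space (not Wong's network itself, nor its quantum variant), the sketch below runs overdamped Langevin updates on a tiny Hopfield-style quadratic energy with a logarithmic cooling schedule; the weights and step size are invented.

```python
import math
import random

random.seed(0)
# Symmetric weights of a tiny Hopfield-style energy E(x) = -0.5 * x^T W x
W = [[0.0, 1.0, -0.5],
     [1.0, 0.0, 0.8],
     [-0.5, 0.8, 0.0]]
x = [random.uniform(-1, 1) for _ in range(3)]

def grad(x):
    # dE/dx_i = -sum_j W_ij x_j
    return [-sum(W[i][j] * x[j] for j in range(3)) for i in range(3)]

eta = 0.01
for t in range(1, 5001):
    T = 1.0 / math.log(1 + t)          # logarithmic cooling schedule
    g = grad(x)
    # Langevin step with noise scaled by temperature, clipped to the bounded state space.
    x = [max(-1.0, min(1.0, xi - eta * gi + math.sqrt(2 * eta * T) * random.gauss(0, 1)))
         for xi, gi in zip(x, g)]

print("annealed state:", [round(v, 2) for v in x])
```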

  1. Computing with Spiking Neuron Networks

    NARCIS (Netherlands)

    Paugam-Moisy, H.; Bohte, S.M.; Rozenberg, G.; Baeck, T.H.W.; Kok, J.N.

    2012-01-01

    Spiking Neuron Networks (SNNs) are often referred to as the 3rd generation of neural networks. Highly inspired by natural computing in the brain and recent advances in neurosciences, they derive their strength and interest from an accurate modeling of synaptic interactions between neu

  2. Analysis of computer networks

    CERN Document Server

    Gebali, Fayez

    2015-01-01

    This textbook presents the mathematical theory and techniques necessary for analyzing and modeling high-performance global networks, such as the Internet. The three main building blocks of high-performance networks are links, switching equipment connecting the links together, and software employed at the end nodes and intermediate switches. This book provides the basic techniques for modeling and analyzing these last two components. Topics covered include, but are not limited to: Markov chains and queuing analysis, traffic modeling, interconnection networks, switch architectures and buffering strategies. · Provides techniques for modeling and analysis of network software and switching equipment; · Discusses design options used to build efficient switching equipment; · Includes many worked examples of the application of discrete-time Markov chains to communication systems; · Covers the mathematical theory and techniques necessary for ana...
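
    To give a flavour of the discrete-time Markov chain analysis applied to communication systems, here is a small worked sketch (assuming NumPy) that computes the stationary occupancy of a two-slot buffer; the transition probabilities are invented for illustration.

```python
import numpy as np

# States = number of packets queued in a 2-slot buffer: 0, 1, 2.
# P[i][j] = probability of moving from i to j packets in one time slot (illustrative values).
P = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.4, 0.6]])

pi = np.array([1.0, 0.0, 0.0])
for _ in range(1000):        # power iteration converges to the stationary distribution
    pi = pi @ P

print("stationary distribution:", pi.round(3))
print("mean queue length:", (pi @ np.arange(3)).round(3))
```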

  3. Computational Social Network Analysis

    CERN Document Server

    Hassanien, Aboul-Ella

    2010-01-01

    Presents insight into the social behaviour of animals (including the study of animal tracks and learning by members of the same species). This book provides web-based evidence of social interaction, perceptual learning, information granulation and the behaviour of humans and affinities between web-based social networks

  4. Computer Network Security- The Challenges of Securing a Computer Network

    Science.gov (United States)

    Scotti, Vincent, Jr.

    2011-01-01

    This article is intended to give the reader an overall perspective on what it takes to design, implement, enforce and secure a computer network in the federal and corporate world to ensure the confidentiality, integrity and availability of information. While we will be giving you an overview of network design and security, this article will concentrate on the technology and human factors of securing a network and the challenges faced by those doing so. It will cover the large number of policies and the limits of technology and physical efforts to enforce such policies.

  5. Collective network for computer structures

    Science.gov (United States)

    Blumrich, Matthias A; Coteus, Paul W; Chen, Dong; Gara, Alan; Giampapa, Mark E; Heidelberger, Philip; Hoenicke, Dirk; Takken, Todd E; Steinmacher-Burow, Burkhard D; Vranas, Pavlos M

    2014-01-07

    A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network. The global collective network may be configured to provide global barrier and interrupt functionality in asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to the needs of a processing algorithm.

  6. Collective network for computer structures

    Energy Technology Data Exchange (ETDEWEB)

    Blumrich, Matthias A. (Ridgefield, CT); Coteus, Paul W. (Yorktown Heights, NY); Chen, Dong (Croton On Hudson, NY); Gara, Alan (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Heidelberger, Philip (Cortlandt Manor, NY); Hoenicke, Dirk (Ossining, NY); Takken, Todd E. (Brewster, NY); Steinmacher-Burow, Burkhard D. (Wernau, DE); Vranas, Pavlos M. (Bedford Hills, NY)

    2011-08-16

    A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network and class structures. The global collective network may be configured to provide global barrier and interrupt functionality in asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to needs of a processing algorithm.

  7. Computing on Anonymous Quantum Network

    CERN Document Server

    Kobayashi, Hirotada; Tani, Seiichiro

    2010-01-01

    This paper considers distributed computing on an anonymous quantum network, a network in which no party has a unique identifier and quantum communication and computation are available. It is proved that the leader election problem can exactly (i.e., without error in bounded time) be solved with at most the same complexity up to a constant factor as that of exactly computing symmetric functions (without intermediate measurements for a distributed and superposed input), if the number of parties is given to every party. A corollary of this result is a more efficient quantum leader election algorithm than existing ones: the new quantum algorithm runs in O(n) rounds with bit complexity O(mn^2), on an anonymous quantum network with n parties and m communication links. Another corollary is the first quantum algorithm that exactly computes any computable Boolean function with round complexity O(n) and with smaller bit complexity than that of existing classical algorithms in the worst case over all (computable) Boolea...

  8. Quantum computing in neural networks

    CERN Document Server

    Gralewicz, P

    2004-01-01

    According to the statistical interpretation of quantum theory, quantum computers form a distinguished class of probabilistic machines (PMs) by encoding n qubits in 2^n pbits. This raises the possibility of large-scale quantum computing using PMs, especially with neural networks, which have an innate capability for probabilistic information processing. Restricting ourselves to a particular model, we construct and numerically examine the performance of neural circuits implementing universal quantum gates. A discussion of the physiological plausibility of the proposed coding scheme is also provided.

  9. Snowmass 2013 Computing Frontier: Networking

    CERN Document Server

    Bell, Gregory

    2013-01-01

    Computing has become a major component of all particle physics experiments and in many areas of theoretical particle physics. Progress in HEP experiment and theory will require significantly more computing, software development, storage, and networking, with different projects stretching future capabilities in different ways. However, there are many common needs among different areas in HEP, so more community planning is advised to increase efficiency. Careful and continuing review of the topics we studied, i.e., user needs and capabilities of current and future technology, is needed.

  10. Markov Networks in Evolutionary Computation

    CERN Document Server

    Shakya, Siddhartha

    2012-01-01

    Markov networks and other probabilistic graphical models have recently received an upsurge in attention from the evolutionary computation community, particularly in the area of estimation of distribution algorithms (EDAs). EDAs have arisen as one of the most successful experiences in the application of machine learning methods in optimization, mainly due to their efficiency in solving complex real-world optimization problems and their suitability for theoretical analysis. This book focuses on the different steps involved in the conception, implementation and application of EDAs that use Markov networks, and undirected models in general. It can serve as a general introduction to EDAs but also covers an important current void in the study of these algorithms by explaining the specificities and benefits of modeling optimization problems by means of undirected probabilistic models. All major developments to date in the progressive introduction of Markov network based EDAs are reviewed in the book. Hot current researc...

  11. Personal computer local networks report

    CERN Document Server

    1991-01-01

    Please note this is a Short Discount publication. Since the first microcomputer local networks of the late 1970s and early 80s, personal computer LANs have expanded in popularity, especially since the introduction of IBM's first PC in 1981. The late 1980s saw a maturing of the industry, with only a few vendors maintaining a large share of the market. This report is intended to give the reader a thorough understanding of the technology used to build these systems, from cable to chips to protocols to servers. The report also fully defines PC LANs and the marketplace, with in-

  12. Delayed Commutation in Quantum Computer Networks

    Science.gov (United States)

    García-Escartín, Juan Carlos; Chamorro-Posada, Pedro

    2006-09-01

    In the same way that classical computer networks connect and enhance the capabilities of classical computers, quantum networks can combine the advantages of quantum information and communication. We propose a nonclassical network element, a delayed commutation switch, that can solve the problem of switching time in packet switching networks. With the help of some local ancillary qubits and superdense codes, we can route a qubit packet after part of it has left the network node.

  13. Delayed commutation in quantum computer networks

    CERN Document Server

    Garcia-Escartin, J C; Chamorro-Posada, Pedro; Garcia-Escartin, Juan Carlos

    2005-01-01

    In the same way that classical computer networks connect and enhance the capabilities of classical computers, quantum networks can combine the advantages of quantum information and communications. We propose a non-classical network element, a delayed commutation switch, that can solve the problem of switching time in packet switching networks. With the help of some local ancillary qubits and superdense codes we can route the information after part of it has left the network node.

  14. Computational social networks security and privacy

    CERN Document Server

    2012-01-01

    Presents the latest advances in security and privacy issues in computational social networks, and illustrates how both organizations and individuals can be protected from real-world threats Discusses the design and use of a wide range of computational tools and software for social network analysis Provides experience reports, survey articles, and intelligence techniques and theories relating to specific problems in network technology

  15. The university computer network security system

    Institute of Scientific and Technical Information of China (English)

    张丁欣

    2012-01-01

    With the development of the times and advances in technology, computer network technology has reached deep into all aspects of people's lives; it plays an increasingly important role and is an important tool for information exchange. Colleges and universities are the cradle in which new technologies are cultivated and nurtured, and so, as institutions of higher learning, they should pay attention to the construction of computer network security systems.

  16. Computer networks ISE a systems approach

    CERN Document Server

    Peterson, Larry L

    2007-01-01

    Computer Networks, 4E is the only introductory computer networking book written by authors who have had first-hand experience with many of the protocols discussed in the book, who have actually designed some of them as well, and who are still actively designing computer networks today. This newly revised edition continues to provide an enduring, practical understanding of networks and their building blocks through rich, example-based instruction. The authors' focus is on the why of network design, not just the specifications comprising today's systems but how key technologies and p

  17. Products and Services for Computer Networks.

    Science.gov (United States)

    Negroponte, Nicholas P.

    1991-01-01

    Creative applications of computer networks are discussed. Products and services of the future that come from imaginative applications of both channel and computing capacity are described. The topics of entertainment, transactions, and electronic personal surrogates are included. (KR)

  18. Adaptive computational resource allocation for sensor networks

    Institute of Scientific and Technical Information of China (English)

    WANG Dian-hong; FEI E; YAN Yu-jie

    2008-01-01

    To efficiently utilize the limited computational resources in real-time sensor networks, this paper focuses on the challenge of computational resource allocation in sensor networks and provides a solution using methods from economics. It designs a microeconomic system in which the applications distribute their computational resource consumption across sensor networks by means of mobile agents. Further, it proposes a market-based computational resource allocation policy named MCRA which satisfies the uniform consumption of computational energy in the network and the optimal division of a single computational capacity among multiple tasks. Simulation in a target-tracing scenario demonstrates that MCRA realizes an efficient allocation of computational resources according to the priority of tasks, achieves superior allocation and equilibrium performance compared to traditional allocation policies, and ultimately prolongs the system lifetime.

  19. Computing preimages of Boolean networks

    Science.gov (United States)

    2013-01-01

    In this paper we present an algorithm based on the sum-product algorithm that finds elements in the preimage of a feed-forward Boolean network given an output of the network. Our probabilistic method runs in linear time with respect to the number of nodes in the network. We evaluated our algorithm for randomly constructed Boolean networks and a regulatory network of Escherichia coli and found that it gives a valid solution in most cases. PMID:24267277
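
    The preimage problem itself is easy to state in code. The brute-force sketch below enumerates preimages of a tiny feed-forward Boolean network for a given output; it illustrates only the problem, not the paper's linear-time sum-product method, and the update rules are invented.

```python
from itertools import product

# A tiny feed-forward Boolean network: inputs (x1, x2, x3) -> outputs (y1, y2).
def network(x1, x2, x3):
    y1 = x1 and not x2          # illustrative update rules
    y2 = x2 or x3
    return (y1, y2)

target = (False, True)
# Enumerate all input assignments and keep those mapping to the target output.
preimage = [bits for bits in product([False, True], repeat=3) if network(*bits) == target]
print("preimages of", target, ":", preimage)
```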

  20. Mobile Agents in Networking and Distributed Computing

    CERN Document Server

    Cao, Jiannong

    2012-01-01

    The book focuses on mobile agents, which are computer programs that can autonomously migrate between network sites. This text introduces the concepts and principles of mobile agents, provides an overview of mobile agent technology, and focuses on applications in networking and distributed computing.

  1. Automated classification of computer network attacks

    CSIR Research Space (South Africa)

    Van Heerden, R

    2013-11-01

    Full Text Available In this paper we demonstrate how an automated reasoner, HermiT, is used to classify instances of computer network-based attacks in conjunction with a network attack ontology. The ontology describes different types of network attacks through classes...

  2. Integrating network awareness in ATLAS distributed computing

    CERN Document Server

    De, K; The ATLAS collaboration; Klimentov, A; Maeno, T; Mckee, S; Nilsson, P; Petrosyan, A; Vukotic, I; Wenaus, T

    2014-01-01

    A crucial contributor to the success of the massively scaled global computing system that delivers the analysis needs of the LHC experiments is the networking infrastructure upon which the system is built. The experiments have been able to exploit excellent high-bandwidth networking in adapting their computing models for the most efficient utilization of resources. New advanced networking technologies now becoming available such as software defined networks hold the potential of further leveraging the network to optimize workflows and dataflows, through proactive control of the network fabric on the part of high level applications such as experiment workload management and data management systems. End to end monitoring of networking and data flow performance further allows applications to adapt based on real time conditions. We will describe efforts underway in ATLAS on integrating network awareness at the application level, particularly in workload management.

  3. Airlines Network Optimization using Evolutionary Computation

    Science.gov (United States)

    Inoue, Hiroki; Kato, Yasuhiko; Sakagami, Tomoya

    In recent years, various networks have come to exist in our surroundings. Not only the internet and airline routes can be thought of as networks: protein interactions are also networks. An “economic network design problem” can be discussed by assuming that a vertex is an economic player and that a link represents some connection between economic players. In this paper, the airline network is taken up as an example of an “economic network design problem”, and the airline network for which the profit of the entire airline industry is maximized is identified. The airline network is modeled based on the connections model proposed by Jackson and Wolinsky, and the utility function of the network is defined. In addition, an optimization simulation using evolutionary computation is shown for a domestic airline in Japan.

  4. Computational network design from functional specifications

    KAUST Repository

    Peng, Chi Han

    2016-07-11

    Connectivity and layout of underlying networks largely determine agent behavior and usage in many environments. For example, transportation networks determine the flow of traffic in a neighborhood, whereas building floorplans determine the flow of people in a workspace. Designing such networks from scratch is challenging as even local network changes can have large global effects. We investigate how to computationally create networks starting from only high-level functional specifications. Such specifications can be in the form of network density, travel time versus network length, traffic type, destination location, etc. We propose an integer programming-based approach that guarantees that the resultant networks are valid by fulfilling all the specified hard constraints and that they score favorably in terms of the objective function. We evaluate our algorithm in two different design settings, street layout and floorplans to demonstrate that diverse networks can emerge purely from high-level functional specifications.

  5. Queuing theory models for computer networks

    Science.gov (United States)

    Galant, David C.

    1989-01-01

    A set of simple queuing theory models which can model the average response of a network of computers to a given traffic load has been implemented using a spreadsheet. The impact of variations in traffic patterns and intensities, channel capacities, and message protocols can be assessed using them because of the lack of fine detail in the network traffic rates, traffic patterns, and the hardware used to implement the networks. A sample use of the models applied to a realistic problem is included in appendix A. Appendix B provides a glossary of terms used in this paper. The Ames Research Center computer communication network is an evolving network of local area networks (LANs) connected via gateways and high-speed backbone communication channels. Intelligent planning of expansion and improvement requires understanding the behavior of the individual LANs as well as the collection of networks as a whole.
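
    The kind of simple queuing model of average response described above can be reproduced with the textbook M/M/1 result T = 1/(mu - lambda); the channel capacity and message size below are illustrative, not taken from the Ames network.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean time in an M/M/1 queue (seconds) for a stable load (arrival < service)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# Example: 1000-byte messages on a 10 Mbit/s channel -> ~1250 messages/s service rate.
service_rate = 10e6 / (1000 * 8)
for load in (0.3, 0.6, 0.9):
    t = mm1_response_time(load * service_rate, service_rate)
    print(f"utilization {load:.0%}: mean response {t * 1000:.2f} ms")
```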

  6. Parallel computing and networking; Heiretsu keisanki to network

    Energy Technology Data Exchange (ETDEWEB)

    Asakawa, E.; Tsuru, T. [Japan National Oil Corp., Tokyo (Japan); Matsuoka, T. [Japan Petroleum Exploration Co. Ltd., Tokyo (Japan)

    1996-05-01

    This paper describes the trend of parallel computers used in geophysical exploration. Around 1993 was the early period when parallel computers began to be used for geophysical exploration. Classification of these computers in those days was mainly into MIMD (multiple instruction stream, multiple data stream), SIMD (single instruction stream, multiple data stream) and the like. Parallel computers were publicized at the 1994 meeting of the Geophysical Exploration Society as a 'high precision imaging technology'. Concerning the libraries for parallel computers, there was a shift to PVM (parallel virtual machine) in 1993 and to MPI (message passing interface) in 1995. In addition, the FORTRAN90 compiler was released with support implemented for data-parallel and vector computers. In 1993, the networks used were Ethernet, FDDI, CDDI and HIPPI. In 1995, OC-3 products under ATM began to propagate. However, ATM remains an interoffice high-speed network because the ATM service has not yet spread to the public network. 1 ref.
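
    The abstract traces the shift from PVM to MPI as the message-passing library of choice. A minimal message-passing example using the mpi4py binding (assumed to be installed; not part of the paper) passes a token around a ring of processes.

```python
# Run with several processes, e.g.: mpirun -n 4 python ring.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Pass a token once around a ring of processes.
if rank == 0:
    comm.send(0, dest=(rank + 1) % size)
    token = comm.recv(source=size - 1)
    print(f"token returned to rank 0 after passing through {size} processes")
else:
    token = comm.recv(source=rank - 1)
    comm.send(token + 1, dest=(rank + 1) % size)
```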

  7. Scaling in Computer Network Traffic

    Science.gov (United States)

    2007-11-02

    [Slide excerpt; figures omitted.] Flows are sets of packets associated with the same data stream. Data sources credited include the Laboratory for Applied Network Research, CAIDA (Cooperative Association for Internet Data Analysis), and WAND (Waikato Applied Network Dynamics).

  8. Computer network environment planning and analysis

    Science.gov (United States)

    Dalphin, John F.

    1989-01-01

    The GSFC Computer Network Environment provides a broadband RF cable between campus buildings and Ethernet spines in buildings for the interlinking of Local Area Networks (LANs). This system provides terminal and computer linkage among host and user systems, thereby providing E-mail services, file exchange capability, and certain distributed computing opportunities. The Environment is designed to be transparent and supports multiple protocols. Networking at Goddard has a short history and has been under coordinated control of a Network Steering Committee for slightly more than two years; network growth has been rapid, with more than 1500 nodes currently addressed and greater expansion expected. A new RF cable system with a different topology is being installed during summer 1989; consideration of a fiber optics system for the future will begin soon. Summer study was directed toward Network Steering Committee operation and planning plus consideration of Center Network Environment analysis and modeling. Biweekly Steering Committee meetings were attended to learn the background of the network and the concerns of those managing it. Suggestions for historical data gathering have been made to support future planning and modeling. Data Systems Dynamic Simulator, a simulation package developed at NASA and maintained at GSFC, was studied as a possible modeling tool for the network environment. A modeling concept based on a hierarchical model was hypothesized for further development. Such a model would allow input of newly updated parameters and would provide an estimation of the behavior of the network.

  9. Computer networking a top-down approach

    CERN Document Server

    Kurose, James

    2017-01-01

    Unique among computer networking texts, the Seventh Edition of the popular Computer Networking: A Top Down Approach builds on the author’s long tradition of teaching this complex subject through a layered approach in a “top-down manner.” The text works its way from the application layer down toward the physical layer, motivating readers by exposing them to important concepts early in their study of networking. Focusing on the Internet and the fundamentally important issues of networking, this text provides an excellent foundation for readers interested in computer science and electrical engineering, without requiring extensive knowledge of programming or mathematics. The Seventh Edition has been updated to reflect the most important and exciting recent advances in networking.

  10. Current Computer Network Security Issues/Threats

    National Research Council Canada - National Science Library

    Ammar Yassir; Alaa A K Ismaeel

    2016-01-01

    Computer network security has been a subject of concern for a long period. Many efforts have been made to address the existing and emerging threats, such as viruses and Trojans, among others, without any significant success...

  11. Low Computational Complexity Network Coding For Mobile Networks

    DEFF Research Database (Denmark)

    Heide, Janus

    2012-01-01

    Network Coding (NC) is a technique that can provide benefits in many types of networks; some examples from wireless networks are: in relay networks, at either the physical or the data link layer, to reduce the number of transmissions; in reliable multicast, to reduce the amount of signaling and enable cooperation among receivers; in meshed networks, to simplify routing schemes and to increase robustness toward node failures. This thesis deals with implementation issues of one NC technique, namely Random Linear Network Coding (RLNC), which can be described as a highly decentralized, non-deterministic intra-flow coding technique. One of the key challenges of this technique is its inherent computational complexity, which can lead to high computational load and energy consumption, in particular on the mobile platforms that are the target platform in this work. To increase the coding throughput several...
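
    The core RLNC operation is simple to sketch. Below, coded packets are formed with random coefficients over GF(2), so combining reduces to XOR; decoding (Gaussian elimination over the received coefficient vectors) is omitted. This is an illustrative toy, not the thesis' optimized implementation, and the packet contents are made up.

```python
import random

random.seed(7)
packets = [b"alpha---", b"bravo---", b"charlie-"]     # equal-length source packets

def encode(packets):
    """Produce one RLNC-coded packet: random GF(2) coefficients plus XOR of the selected packets."""
    coeffs = [random.randint(0, 1) for _ in packets]
    if not any(coeffs):
        coeffs[random.randrange(len(coeffs))] = 1     # avoid the useless all-zero combination
    payload = bytes(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            payload = bytes(a ^ b for a, b in zip(payload, p))
    return coeffs, payload

# A receiver needs at least as many linearly independent coded packets as there are sources.
for coeffs, payload in (encode(packets) for _ in range(3)):
    print(coeffs, payload)
```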

  12. Networked Computing in Wireless Sensor Networks for Structural Health Monitoring

    CERN Document Server

    Jindal, Apoorva

    2010-01-01

    This paper studies the problem of distributed computation over a network of wireless sensors. While this problem applies to many emerging applications, to keep our discussion concrete we will focus on sensor networks used for structural health monitoring. Within this context, the heaviest computation is to determine the singular value decomposition (SVD) to extract mode shapes (eigenvectors) of a structure. Compared to collecting raw vibration data and performing SVD at a central location, computing SVD within the network can result in significantly lower energy consumption and delay. Using recent results on decomposing SVD, a well-known centralized operation, into components, we seek to determine a near-optimal communication structure that enables the distribution of this computation and the reassembly of the final results, with the objective of minimizing energy consumption subject to a computational delay constraint. We show that this reduces to a generalized clustering problem; a cluster forms a unit on w...
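
    The centralized computation that the paper distributes, extracting mode shapes from vibration data via the SVD, looks like this on synthetic data (assuming NumPy); the in-network decomposition is the paper's contribution and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic vibration matrix: rows = time samples, columns = sensor channels.
t = np.linspace(0, 10, 2000)[:, None]
true_modes = rng.standard_normal((2, 8))              # two underlying mode shapes, 8 sensors
data = np.sin(2 * np.pi * 1.5 * t) @ true_modes[:1] \
     + np.sin(2 * np.pi * 4.0 * t) @ true_modes[1:] \
     + 0.05 * rng.standard_normal((2000, 8))          # measurement noise

# Right singular vectors of the data matrix estimate the mode shapes (eigenvectors).
U, s, Vt = np.linalg.svd(data, full_matrices=False)
print("dominant singular values:", s[:3].round(2))
print("estimated first mode shape:", Vt[0].round(2))
```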

  13. Computer Networks and African Studies Centers.

    Science.gov (United States)

    Kuntz, Patricia S.

    The use of electronic communication in the 12 Title VI African Studies Centers is discussed, and the networks available for their use are reviewed. It is argued that the African Studies Centers should be on the cutting edge of contemporary electronic communication and that computer networks should be a fundamental aspect of their programs. An…

  14. Computational social networks tools, perspectives and applications

    CERN Document Server

    Abraham, Ajith

    2012-01-01

    Provides the latest advances in computational social networks, and illustrates how organizations can gain a competitive advantage by applying these ideas in real-world scenarios Presents a specific focus on practical tools and applications Provides experience reports, survey articles, and intelligence techniques and theories relating to specific problems in network technology

  15. Integrating Network Management for Cloud Computing Services

    Science.gov (United States)

    2015-06-01


  16. Autonomic computing enabled cooperative networked design

    CERN Document Server

    Wodczak, Michal

    2014-01-01

    This book introduces the concept of autonomic computing driven cooperative networked system design from an architectural perspective. As such it leverages and capitalises on the relevant advancements in both the realms of autonomic computing and networking by welding them closely together. In particular, a multi-faceted Autonomic Cooperative System Architectural Model is defined which incorporates the notion of Autonomic Cooperative Behaviour being orchestrated by the Autonomic Cooperative Networking Protocol of a cross-layer nature. The overall proposed solution not only advocates for the inc

  17. Architecture Design & Network Application of Cloud Computing

    Directory of Open Access Journals (Sweden)

    Mehzabul Hoque Nahid

    2015-08-01

    Full Text Available “Cloud” computing, a comparatively recent term, builds on decades of research and analysis in virtualization, distributed computing, utility computing, and more recently computer networking, web technology and software services. Cloud computing represents a shift away from computing as a product that is purchased to computing as a service that is delivered to consumers over the internet from large-scale data centers – or “clouds”. While cloud computing is gaining popularity in the IT industry, academia appears to be lagging behind the developments in this field. Cloud computing also implies a service-oriented architecture, reduced information technology overhead for the end-user, good flexibility, reduced total cost of private ownership, on-demand services and many other things. This paper discusses the concept of “cloud” computing, some of the issues it tries to address, related research topics, and a “cloud” implementation available today.

  18. Spontaneous ad hoc mobile cloud computing network.

    Science.gov (United States)

    Lacuesta, Raquel; Lloret, Jaime; Sendra, Sandra; Peñalver, Lourdes

    2014-01-01

    Cloud computing helps users and companies to share computing resources instead of having local servers or personal devices to handle the applications. Smart devices are becoming one of the main information processing devices. Their computing features are reaching levels that let them create a mobile cloud computing network. But sometimes they are not able to create it and collaborate actively in the cloud because it is difficult for them to easily build a spontaneous network and configure its parameters. For this reason, in this paper we present the design and deployment of a spontaneous ad hoc mobile cloud computing network. In order to perform this, we have developed a trusted algorithm that is able to manage the activity of the nodes when they join and leave the network. The paper shows the network procedures and classes that have been designed. Our simulation results using Castalia show that our proposal presents good efficiency and network performance even with a high number of nodes.

  19. Algorithms and networking for computer games

    CERN Document Server

    Smed, Jouni

    2006-01-01

    Algorithms and Networking for Computer Games is an essential guide to solving the algorithmic and networking problems of modern commercial computer games, written from the perspective of a computer scientist. Combining algorithmic knowledge and game-related problems, the authors discuss all the common difficulties encountered in game programming. The first part of the book tackles algorithmic problems by presenting how they can be solved practically. As well as "classical" topics such as random numbers, tournaments and game trees, the authors focus on how to find a path in, create the terrai

  20. Computer network time synchronization the network time protocol

    CERN Document Server

    Mills, David L

    2006-01-01

    What started with the sundial has, thus far, been refined to a level of precision based on atomic resonance: Time. Our obsession with time is evident in this continued scaling down to nanosecond resolution and beyond. But this obsession is not without warrant. Precision and time synchronization are critical in many applications, such as air traffic control and stock trading, and pose complex and important challenges in modern information networks.Penned by David L. Mills, the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol
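
    At the heart of NTP is the four-timestamp offset and delay computation performed for each client/server exchange; the sketch below shows that arithmetic with made-up timestamps.

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Classic NTP calculation from the four timestamps of a request/response exchange.

    t1: client transmit time, t2: server receive time,
    t3: server transmit time, t4: client receive time.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated client clock error vs. the server
    delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
    return offset, delay

# Illustrative timestamps (seconds): the client clock runs ~50 ms behind the server.
offset, delay = ntp_offset_delay(t1=100.000, t2=100.060, t3=100.061, t4=100.021)
print(f"offset = {offset * 1000:.1f} ms, delay = {delay * 1000:.1f} ms")
```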

  1. Social networks a framework of computational intelligence

    CERN Document Server

    Chen, Shyi-Ming

    2014-01-01

    This volume provides the audience with updated, in-depth and highly coherent material on the conceptually appealing and practically sound information technology of Computational Intelligence applied to the analysis, synthesis and evaluation of social networks. The volume involves studies devoted to key issues of social networks including community structure detection in networks, online social networks, knowledge growth and evaluation, and diversity of collaboration mechanisms. The book engages a wealth of methods of Computational Intelligence along with well-known techniques of linear programming, Formal Concept Analysis, machine learning, and agent modeling. Human-centricity is of paramount relevance, and this facet manifests in many ways including personalized semantics, trust metrics, and personal knowledge management, just to highlight a few of these aspects. The contributors to this volume report on various essential applications including cyber attack detection, building enterprise social network...

  2. Requirement emergence computation of networked software

    Institute of Scientific and Technical Information of China (English)

    HE Keqing; LIANG Peng; PENG Rong; LI Bing; LIU Jing

    2007-01-01

    Emergence computation has become a hot topic in the research of complex systems in recent years. With the substantial increase in scale and complexity of network-based information systems, uncertain user requirements from the Internet and personalized application requirements result in frequent changes to software requirements. Meanwhile, software systems built on resources that are not self-possessed become more and more complex. Furthermore, the interaction and cooperation requirements between software units and the running environment in service computing increase the complexity of software systems. Software systems with complex-system characteristics are developing into "networked software" with characteristics of change-on-demand and change-with-cooperation. The concepts of "programming", "compiling" and "running" of software in the common sense are extended from the "desktop" to the "network". The core issue of software engineering is moving to requirements engineering, which is becoming the research focus of complex-system software engineering. In this paper, we present a software network view based on complex system theory, and the concepts of networked software and networked requirements. We propose the challenge problem in the research of emergence computation of networked software requirements. A hierarchical and cooperative unified requirement modeling framework, URF (Unified Requirement Framework), and related RGPS (Role, Goal, Process and Service) meta-models are proposed. Five scales and the evolutionary growth mechanism in requirement emergence computation of networked software are given, with a focus on user-dominant and domain-oriented requirements, and the rules and predictability in requirement emergence computation are analyzed. A case study of a networked e-Business application with evolutionary growth based on the State design pattern is presented at the end.

  3. Evaluation of Network Reliability for Computer Networks with Multiple Sources

    Directory of Open Access Journals (Sweden)

    Yi-Kuei Lin

    2012-01-01

    Full Text Available Evaluating the reliability of a network with multiple sources to multiple sinks is a critical issue from the perspective of quality management. Due to the unrealistic definition of paths in the network models of previous literature, existing models are not appropriate for real-world computer networks such as the Taiwan Advanced Research and Education Network (TWAREN). This paper proposes a modified stochastic-flow network model to evaluate the network reliability of a practical computer network with multiple sources where data is transmitted through several light paths (LPs). Network reliability is defined as the probability of delivering a specified amount of data from the sources to the sink. It is taken as a performance index to measure the service level of TWAREN. This paper studies the network reliability of the international portion of TWAREN from two sources (Taipei and Hsinchu) to one sink (New York) that goes through submarine and land-surface cables between Taiwan and the United States.
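
    A Monte Carlo estimate of this kind of multi-source reliability can be sketched in a few lines, assuming the networkx package and an invented toy topology; the capacities, availabilities and demand below are illustrative, not TWAREN's actual parameters.

```python
import random
import networkx as nx

random.seed(0)
# Toy topology: (endpoint, endpoint, capacity in Gb/s, probability the light path is up).
links = [("Taipei", "Hsinchu", 10.0, 0.99), ("Taipei", "NewYork", 2.5, 0.95),
         ("Hsinchu", "NewYork", 2.5, 0.95), ("Taipei", "LosAngeles", 2.5, 0.96),
         ("LosAngeles", "NewYork", 10.0, 0.97)]
demand = 3.0   # Gb/s that must reach the sink

def trial():
    g = nx.DiGraph()
    for u, v, cap, avail in links:
        if random.random() < avail:          # sample this light path's up/down state
            g.add_edge(u, v, capacity=cap)   # treat each surviving path as bidirectional
            g.add_edge(v, u, capacity=cap)
    for src in ("Taipei", "Hsinchu"):
        g.add_edge("super", src)             # uncapacitated edges from a super-source
    if "NewYork" not in g:
        return False                         # sink disconnected in this sample
    flow_value, _ = nx.maximum_flow(g, "super", "NewYork")
    return flow_value >= demand

trials = 5000
reliability = sum(trial() for _ in range(trials)) / trials
print(f"estimated reliability for a {demand} Gb/s demand: {reliability:.3f}")
```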

  4. Professional networking using computer-mediated communication.

    Science.gov (United States)

    Washer, Peter

    Traditionally, professionals have networked with others in their field through attending conferences, professional organizations, direct mailing, and via the workplace. Recently, there have been new possibilities to network with other professionals using the internet. This article looks at the possibilities that the internet offers for professional networking, particularly e-mailing lists, newsgroups and membership databases, and compares them against more traditional methods of professional networking. The different types of computer-mediated communication are discussed and their relative merits and disadvantages are examined. The benefits and potential pitfalls of internet professional networking, as it relates to the nursing profession, are examined. Practical advice is offered on how the internet can be used as a means to foster professional networks of academic, clinical or research interests.

  5. Network coding for computing: Linear codes

    CERN Document Server

    Appuswamy, Rathinakumar; Karamchandani, Nikhil; Zeger, Kenneth

    2011-01-01

    In network coding it is known that linear codes are sufficient to achieve the coding capacity in multicast networks and that they are not sufficient in general to achieve the coding capacity in non-multicast networks. In network computing, Rai, Dey, and Shenvi have recently shown that linear codes are not sufficient in general for solvability of multi-receiver networks with scalar linear target functions. We study single-receiver networks where the receiver node demands a target function of the source messages. We show that linear codes may provide a computing capacity advantage over routing only when the receiver demands a 'linearly-reducible' target function. Many known target functions including the arithmetic sum, minimum, and maximum are not linearly-reducible. Thus, the use of non-linear codes is essential in order to obtain a computing capacity advantage over routing if the receiver demands a target function that is not linearly-reducible. We also show that if a target function is linearly-reducible,...

  6. International Symposium on Computing and Network Sustainability

    CERN Document Server

    Akashe, Shyam

    2017-01-01

    The book is a compilation of technical papers presented at the International Research Symposium on Computing and Network Sustainability (IRSCNS 2016) held in Goa, India on 1st and 2nd July 2016. The areas covered in the book are sustainable computing and security, sustainable systems and technologies, sustainable methodologies and applications, sustainable network applications and solutions, user-centered services and systems, and mobile data management. The novel and recent technologies presented in the book will be helpful for researchers and industry in their advanced work.

  7. On computer vision in wireless sensor networks.

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Nina M.; Ko, Teresa H.

    2004-09-01

    Wireless sensor networks allow detailed sensing of otherwise unknown and inaccessible environments. While it would be beneficial to include cameras in a wireless sensor network because images are so rich in information, the power cost of transmitting an image across the wireless network can dramatically shorten the lifespan of the sensor nodes. This paper describes a new paradigm for the incorporation of imaging into wireless networks. Rather than focusing on transmitting images across the network, we show how an image can be processed locally for key features using simple detectors. Contrasted with traditional event detection systems that trigger an image capture, this enables a new class of sensors which uses a low power imaging sensor to detect a variety of visual cues. Sharing these features among relevant nodes cues specific actions to better provide information about the environment. We report on various existing techniques developed for traditional computer vision research which can aid in this work.
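
    As a rough illustration of the "process locally, transmit only features" idea, the sketch below uses a simple frame-differencing detector: a node compares consecutive frames and, only when enough pixels change, emits a tiny feature message (change count and centroid) instead of the full image. The frames, threshold and minimum-change count are made-up values, and the detector is a generic stand-in rather than one of the detectors discussed in the paper.

        import numpy as np

        THRESHOLD = 25        # per-pixel intensity change treated as "motion" (assumed)
        MIN_CHANGED = 20      # minimum number of changed pixels before reporting (assumed)

        def detect_features(prev_frame: np.ndarray, frame: np.ndarray):
            """Return a small feature dict if motion is detected, else None."""
            diff = np.abs(frame.astype(int) - prev_frame.astype(int))
            changed = np.argwhere(diff > THRESHOLD)
            if len(changed) < MIN_CHANGED:
                return None                       # nothing worth transmitting
            cy, cx = changed.mean(axis=0)         # centroid of changed pixels
            return {"n_changed": int(len(changed)), "centroid": (float(cx), float(cy))}

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            prev = rng.integers(0, 60, size=(64, 64), dtype=np.uint8)
            cur = prev.copy()
            cur[20:30, 40:50] += 100              # synthetic moving object
            print(detect_features(prev, cur))     # a few bytes instead of a 4 KB image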

  8. Computation, cryptography, and network security

    CERN Document Server

    Rassias, Michael

    2015-01-01

    Analysis, assessment, and data management are core competencies for operation research analysts. This volume addresses a number of issues and presents methods developed for improving those skills. It is an outgrowth of a conference held in April 2013 at the Hellenic Military Academy, and brings together a broad variety of mathematical methods and theories with several applications. It discusses directions and pursuits of scientists that pertain to engineering sciences. It also presents the theoretical background required for algorithms and techniques applied to a large variety of concrete problems. A number of open questions as well as new future areas are also highlighted. This book will appeal to operations research analysts, engineers, community decision makers, academics, the military community, practitioners sharing the current “state-of-the-art,” and analysts from coalition partners. Topics covered include Operations Research, Games and Control Theory, Computational Number Theory and Information Securi...

  9. Traffic Dynamics of Computer Networks

    CERN Document Server

    Fekete, Attila

    2008-01-01

    Two important aspects of the Internet, namely the properties of its topology and the characteristics of its data traffic, have attracted growing attention of the physics community. My thesis has considered problems of both aspects. First I studied the stochastic behavior of TCP, the primary algorithm governing traffic in the current Internet, in an elementary network scenario consisting of a standalone infinite-sized buffer and an access link. The effect of the fast recovery and fast retransmission (FR/FR) algorithms is also considered. I showed that my model can be extended further to involve the effect of link propagation delay, characteristic of WAN. I continued my thesis with the investigation of finite-sized semi-bottleneck buffers, where packets can be dropped not only at the link, but also at the buffer. I demonstrated that the behavior of the system depends only on a certain combination of the parameters. Moreover, an analytic formula was derived that gives the ratio of packet loss rate at the buffer ...
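
    A very small time-stepped sketch of the additive-increase/multiplicative-decrease behaviour underlying the TCP dynamics studied in the thesis is given below. Losses are drawn at random rather than from a buffer model, so this only conveys the saw-tooth congestion-window dynamics; it is not the author's analytical model, and all parameters are illustrative.

        import random

        def aimd_trace(rounds=200, loss_prob=0.02, seed=1):
            """Congestion-window evolution under idealised AIMD (slow start omitted)."""
            random.seed(seed)
            cwnd, trace = 1.0, []
            for _ in range(rounds):
                # each round, every in-flight packet may be lost independently
                lost = any(random.random() < loss_prob for _ in range(int(cwnd)))
                if lost:
                    cwnd = max(1.0, cwnd / 2)   # multiplicative decrease
                else:
                    cwnd += 1.0                 # additive increase (one MSS per RTT)
                trace.append(cwnd)
            return trace

        if __name__ == "__main__":
            t = aimd_trace()
            print("mean cwnd:", sum(t) / len(t), "max cwnd:", max(t))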

  10. Effect of Maintenance on Computer Network Reliability

    Directory of Open Access Journals (Sweden)

    Rima Oudjedi Damerdji

    2014-08-01

    Full Text Available In the era of new information technologies, computer networks are inescapable in any large organization, where they are organized so as to form powerful internal means of communication. In a dependability context, the reliability parameter proves fundamental for evaluating the performance of such systems. In this paper, we study the reliability evaluation of a real computer network through three reliability models. The computer network considered (a set of interconnected PCs and a server) is located in a company established in the west of Algeria and dedicated to the production of ammonia and fertilizers. The results permit a comparison of the three models to determine the reliability model most appropriate to the studied network and thus contribute to improving the quality of the network. In order to anticipate system failures and improve the reliability and availability of the network, a policy of adequate and effective maintenance must be put in place, based on a new model of the most common competing risks in maintenance, the Alert-Delay model. Finally, dependability measures such as MTBF and reliability are calculated to assess the effectiveness of maintenance strategies and thus validate the Alert-Delay model.
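
    For the dependability measures mentioned (MTBF and reliability), a minimal computation under the common assumption of exponentially distributed times between failures looks like the sketch below; the failure timestamps are invented for illustration and are unrelated to the studied plant network or to the Alert-Delay model itself.

        import math

        def mtbf(failure_times_hours):
            """Mean time between failures from consecutive failure timestamps."""
            gaps = [b - a for a, b in zip(failure_times_hours, failure_times_hours[1:])]
            return sum(gaps) / len(gaps)

        def reliability(t_hours, mtbf_hours):
            """R(t) = exp(-t / MTBF), assuming a constant failure rate."""
            return math.exp(-t_hours / mtbf_hours)

        if __name__ == "__main__":
            failures = [120.0, 410.0, 655.0, 990.0, 1220.0]   # hypothetical timestamps (hours)
            m = mtbf(failures)
            print(f"MTBF = {m:.1f} h, R(100 h) = {reliability(100, m):.3f}")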

  11. Student Motivation in Computer Networking Courses

    Directory of Open Access Journals (Sweden)

    Wen-Jung Hsin, PhD

    2007-08-01

    Full Text Available This paper introduces several hands-on projects that have been used to motivate students in learning various computer networking concepts. These projects are shown to be very useful and applicable to the learners’ daily tasks and activities such as emailing, Web browsing, and online shopping and banking, and lead to an unexpected byproduct, self-motivation.

  12. Student Motivation in Computer Networking Courses

    Directory of Open Access Journals (Sweden)

    Wen-Jung Hsin

    2007-01-01

    Full Text Available This paper introduces several hands-on projects that have been used to motivate students in learning various computer networking concepts. These projects are shown to be very useful and applicable to the learners’ daily tasks and activities such as emailing, Web browsing, and online shopping and banking, and lead to an unexpected byproduct, self-motivation.

  13. Student Motivation in Computer Networking Courses

    Science.gov (United States)

    Hsin, Wen-Jung

    2007-01-01

    This paper introduces several hands-on projects that have been used to motivate students in learning various computer networking concepts. These projects are shown to be very useful and applicable to the learners' daily tasks and activities such as emailing, Web browsing, and online shopping and banking, and lead to an unexpected byproduct,…

  14. Non-harmful insertion of data mimicking computer network attacks

    Science.gov (United States)

    Neil, Joshua Charles; Kent, Alexander; Hash, Jr, Curtis Lee

    2016-06-21

    Non-harmful data mimicking computer network attacks may be inserted in a computer network. Anomalous real network connections may be generated between a plurality of computing systems in the network. Data mimicking an attack may also be generated. The generated data may be transmitted between the plurality of computing systems using the real network connections and measured to determine whether an attack is detected.

  15. Advanced Scientific Computing Research Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Bacon, Charles; Bell, Greg; Canon, Shane; Dart, Eli; Dattoria, Vince; Goodwin, Dave; Lee, Jason; Hicks, Susan; Holohan, Ed; Klasky, Scott; Lauzon, Carolyn; Rogers, Jim; Shipman, Galen; Skinner, David; Tierney, Brian

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  16. Fuzzy logic, neural networks, and soft computing

    Science.gov (United States)

    Zadeh, Lofti A.

    1994-01-01

    The past few years have witnessed a rapid growth of interest in a cluster of modes of modeling and computation which may be described collectively as soft computing. The distinguishing characteristic of soft computing is that its primary aims are to achieve tractability, robustness, low cost, and high MIQ (machine intelligence quotient) through an exploitation of the tolerance for imprecision and uncertainty. Thus, in soft computing what is usually sought is an approximate solution to a precisely formulated problem or, more typically, an approximate solution to an imprecisely formulated problem. A simple case in point is the problem of parking a car. Generally, humans can park a car rather easily because the final position of the car is not specified exactly. If it were specified to within, say, a few millimeters and a fraction of a degree, it would take hours or days of maneuvering and precise measurements of distance and angular position to solve the problem. What this simple example points to is the fact that, in general, high precision carries a high cost. The challenge, then, is to exploit the tolerance for imprecision by devising methods of computation which lead to an acceptable solution at low cost. By its nature, soft computing is much closer to human reasoning than the traditional modes of computation. At this juncture, the major components of soft computing are fuzzy logic (FL), neural network theory (NN), and probabilistic reasoning techniques (PR), including genetic algorithms, chaos theory, and part of learning theory. Increasingly, these techniques are used in combination to achieve significant improvement in performance and adaptability. Among the important application areas for soft computing are control systems, expert systems, data compression techniques, image processing, and decision support systems. It may be argued that it is soft computing, rather than the traditional hard computing, that should be viewed as the foundation for artificial
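
    The "tolerance for imprecision" described here can be made concrete with a toy fuzzy controller: triangular membership functions describe an imprecise input (say, distance to the kerb while parking) and a weighted-average defuzzification produces an approximate steering correction. The membership breakpoints and rule outputs below are arbitrary illustrative choices, not taken from the article.

        def tri(x, a, b, c):
            """Triangular membership function with support [a, c] and peak at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def fuzzy_steering(distance_m):
            """Map an imprecise distance reading to an approximate steering correction (degrees)."""
            # fuzzify: how 'near', 'ok' and 'far' the car is from the kerb
            near = tri(distance_m, -0.1, 0.0, 0.5)
            ok   = tri(distance_m,  0.2, 0.6, 1.0)
            far  = tri(distance_m,  0.7, 1.5, 2.5)
            # rules: near -> steer away (+10 deg), ok -> no change, far -> steer towards (-10 deg)
            rules = [(near, +10.0), (ok, 0.0), (far, -10.0)]
            total = sum(w for w, _ in rules)
            return sum(w * out for w, out in rules) / total if total else 0.0

        if __name__ == "__main__":
            for d in (0.1, 0.6, 1.8):
                print(f"distance {d:.1f} m -> steer {fuzzy_steering(d):+.1f} deg")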

  17. Fuzzy logic, neural networks, and soft computing

    Science.gov (United States)

    Zadeh, Lofti A.

    1994-01-01

    The past few years have witnessed a rapid growth of interest in a cluster of modes of modeling and computation which may be described collectively as soft computing. The distinguishing characteristic of soft computing is that its primary aims are to achieve tractability, robustness, low cost, and high MIQ (machine intelligence quotient) through an exploitation of the tolerance for imprecision and uncertainty. Thus, in soft computing what is usually sought is an approximate solution to a precisely formulated problem or, more typically, an approximate solution to an imprecisely formulated problem. A simple case in point is the problem of parking a car. Generally, humans can park a car rather easily because the final position of the car is not specified exactly. If it were specified to within, say, a few millimeters and a fraction of a degree, it would take hours or days of maneuvering and precise measurements of distance and angular position to solve the problem. What this simple example points to is the fact that, in general, high precision carries a high cost. The challenge, then, is to exploit the tolerance for imprecision by devising methods of computation which lead to an acceptable solution at low cost. By its nature, soft computing is much closer to human reasoning than the traditional modes of computation. At this juncture, the major components of soft computing are fuzzy logic (FL), neural network theory (NN), and probabilistic reasoning techniques (PR), including genetic algorithms, chaos theory, and part of learning theory. Increasingly, these techniques are used in combination to achieve significant improvement in performance and adaptability. Among the important application areas for soft computing are control systems, expert systems, data compression techniques, image processing, and decision support systems. It may be argued that it is soft computing, rather than the traditional hard computing, that should be viewed as the foundation for artificial

  18. Spiking network simulation code for petascale computers

    Science.gov (United States)

    Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M.; Plesser, Hans E.; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz

    2014-01-01

    Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today. PMID:25346682
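
    The storage scheme summarised above, where a synapse consumes memory only on the compute node hosting its target neuron, can be sketched in a few lines. The round-robin neuron-to-node mapping and the random wiring below are deliberate simplifications for illustration, not the metaprogrammed data structure described in the paper.

        import random

        N_NODES, N_NEURONS, FAN_OUT = 4, 1000, 100

        def host_node(neuron_id):
            """Round-robin distribution of neurons over compute nodes (simplified)."""
            return neuron_id % N_NODES

        def build_local_synapses(rank, seed=42):
            """Each rank keeps only synapses whose *target* neuron lives on that rank."""
            random.seed(seed)                      # identical wiring drawn on every rank
            local = {}                             # target neuron -> list of source neurons
            for source in range(N_NEURONS):
                for target in random.sample(range(N_NEURONS), FAN_OUT):
                    if host_node(target) == rank:
                        local.setdefault(target, []).append(source)
            return local

        if __name__ == "__main__":
            per_rank = [sum(map(len, build_local_synapses(r).values())) for r in range(N_NODES)]
            print("synapses stored per rank:", per_rank, "total:", sum(per_rank))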

  19. Spiking network simulation code for petascale computers

    Directory of Open Access Journals (Sweden)

    Susanne eKunkel

    2014-10-01

    Full Text Available Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.

  20. International Symposium on Complex Computing-Networks

    CERN Document Server

    Sevgi, L; CCN2005; Complex computing networks: Brain-like and wave-oriented electrodynamic algorithms

    2006-01-01

    This book uniquely combines new advances in the electromagnetic and the circuits&systems theory. It integrates both fields regarding computational aspects of common interest. Emphasized subjects are those methods which mimic brain-like and electrodynamic behaviour; among these are cellular neural networks, chaos and chaotic dynamics, attractor-based computation and stream ciphers. The book contains carefully selected contributions from the Symposium CCN2005. Pictures from the bestowal of Honorary Doctorate degrees to Leon O. Chua and Leopold B. Felsen are included.

  1. Quantum computation over the butterfly network

    CERN Document Server

    Kinjo, Yoshiyuki; Soeda, Akihito; Turner, Peter S

    2010-01-01

    In order to investigate distributed quantum computation under restricted network resources, we introduce a quantum computation task over the butterfly network where both quantum and classical communications are limited. We consider performing a two qubit global unitary operation on two unknown inputs given at different nodes, with outputs at two distinct nodes. By using a particular resource scenario introduced by Hayashi, which is capable of performing a swap operation by adding two maximally entangled qubits (ebits) between the two input nodes, we show that any controlled unitary operation can be performed without adding any entanglement resource. We also construct protocols for performing controlled traceless unitary operations with a 1-ebit resource and for performing global Clifford operations with a 2-ebit resource.

  2. Computational Methods for Modification of Metabolic Networks

    Directory of Open Access Journals (Sweden)

    Takeyuki Tamura

    2015-01-01

    Full Text Available In metabolic engineering, modification of metabolic networks is an important biotechnology and a challenging computational task. In metabolic network modification, metabolic networks are modified by newly adding enzymes and/or knocking out genes to maximize biomass production with minimum side-effects. In this mini-review, we briefly review constraint-based formalizations for the Minimum Reaction Cut (MRC) problem, where the minimum set of reactions is deleted so that the target compound becomes non-producible, from the viewpoint of flux balance analysis (FBA), elementary mode (EM), and Boolean models. The Minimum Reaction Insertion (MRI) problem, where the minimum set of reactions is added so that the target compound newly becomes producible, is also explained with a similar formalization approach. The relation between the accuracy of the models and the risk of overfitting is also discussed.
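
    A minimal flux balance analysis (FBA) set-up, of the kind such MRC/MRI formulations build on, can be written as a linear program: maximise the biomass flux subject to steady-state mass balance S·v = 0 and flux bounds. The three-reaction toy network below is invented for illustration; knocking a reaction out simply pins its bounds to zero, which is the basic move behind a reaction cut. The solver call assumes SciPy is available.

        import numpy as np
        from scipy.optimize import linprog

        # Toy network:  R1: -> A   R2: A -> B   R3: B -> biomass
        # Rows = metabolites (A, B), columns = reactions (R1, R2, R3).
        S = np.array([[ 1, -1,  0],
                      [ 0,  1, -1]], dtype=float)

        def max_biomass(knockouts=()):
            bounds = [(0, 10), (0, 1000), (0, 1000)]          # uptake R1 limited to 10
            for k in knockouts:
                bounds[k] = (0, 0)                            # knocked-out reaction carries no flux
            c = [0, 0, -1]                                    # maximise v3  ==  minimise -v3
            res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
            return -res.fun if res.success else 0.0

        if __name__ == "__main__":
            print("wild type biomass flux :", max_biomass())        # 10.0
            print("after cutting R2       :", max_biomass((1,)))    # 0.0 -> {R2} is a reaction cut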

  3. Computer networks. Citations from the NTIS data base

    Science.gov (United States)

    Jones, J. E.

    1980-08-01

    Research reports on aspects of computer networks, including hardware, software, data transmission, time sharing, and applicable theory to network design are cited. Specific studies on the ARPA networks, and other such systems are listed.

  4. The research of computer network security and protection strategy

    Science.gov (United States)

    He, Jian

    2017-05-01

    With the widespread popularity of computer network applications, network security has also received a high degree of attention. The factors affecting network security are complex, and doing a good job of network security is systematic work that poses a high challenge. Addressing the safety and reliability problems of computer network systems, and drawing on practical work experience, this paper discusses network security threats, security technologies and system design principles, and offers suggestions and measures intended to raise users' security awareness and help them master basic network security technology.

  5. Computer network defense through radial wave functions

    Science.gov (United States)

    Malloy, Ian J.

    The purpose of this research is to synthesize basic and fundamental findings in quantum computing, as applied to the attack and defense of conventional computer networks. The concept focuses on uses of radio waves as a shield for, and an attack against, traditional computers. A logic bomb is analogous to a landmine in a computer network, and if one were to implement it as non-trivial mitigation, it would aid computer network defense. As has been seen in kinetic warfare, the use of landmines has been devastating to geopolitical regions in that they are severely difficult for a civilian to avoid triggering given the unknown position of a landmine. Thus, the importance of understanding a logic bomb is relevant and has corollaries to quantum mechanics as well. The research synthesizes quantum logic phase shifts in certain respects using the Dynamic Data Exchange protocol in software written for this work, as well as a C-NOT gate applied to a virtual quantum circuit environment by implementing a Quantum Fourier Transform. The research focus applies the principles of coherence and entanglement from quantum physics, the concept of expert systems in artificial intelligence, principles of prime number based cryptography with trapdoor functions, and modeling radio wave propagation against an event from unknown parameters. This comes as a program relying on the artificial intelligence concept of an expert system in conjunction with trigger events for a trapdoor function relying on infinite recursion, as well as system mechanics for elliptic curve cryptography along orbital angular momenta. Here trapdoor denotes both the form of cipher and the implied relationship to logic bombs.

  6. Nanoarchitectonic atomic switch networks for unconventional computing

    Science.gov (United States)

    Demis, Eleanor C.; Aguilera, Renato; Scharnhorst, Kelsey; Aono, Masakazu; Stieg, Adam Z.; Gimzewski, James K.

    2016-11-01

    Developments in computing hardware are constrained by the operating principles of complementary metal oxide semiconductor (CMOS) technology, fabrication limits of nanometer scaled features, and difficulties in effective utilization of high density interconnects. This set of obstacles has promulgated a search for alternative, energy efficient approaches to computing inspired by natural systems including the mammalian brain. Atomic switch network (ASN) devices are a unique platform specifically developed to overcome these current barriers to realize adaptive neuromorphic technology. ASNs are composed of a massively interconnected network of atomic switches with a density of ∼109 units/cm2 and are structurally reminiscent of the neocortex of the brain. ASNs possess both the intrinsic capabilities of individual memristive switches, such as memory capacity and multi-state switching, and the characteristics of large-scale complex systems, such as power-law dynamics and non-linear transformations of input signals. Here we describe the successful nanoarchitectonic fabrication of next-generation ASN devices using combined top-down and bottom-up processing and experimentally demonstrate their utility as reservoir computing hardware. Leveraging their intrinsic dynamics and transformative input/output (I/O) behavior enabled waveform regression of periodic signals in the absence of embedded algorithms, further supporting the potential utility of ASN technology as a platform for unconventional approaches to computing.
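
    The reservoir computing use of the ASN hardware can be mimicked in software: drive a fixed random recurrent network with an input waveform, collect its states, and train only a linear readout (here by ridge regression) to regress a target waveform. The echo-state-style reservoir below is an abstract stand-in, with invented sizes and scaling, and does not model the atomic-switch physics.

        import numpy as np

        rng = np.random.default_rng(0)
        N, T = 200, 1000                                  # reservoir size, time steps

        # fixed random reservoir and input weights (echo-state style surrogate)
        W = rng.normal(0, 1, (N, N))
        W *= 0.9 / max(abs(np.linalg.eigvals(W)))         # scale spectral radius below 1
        w_in = rng.normal(0, 1, N)

        t = np.arange(T)
        u = np.sin(2 * np.pi * t / 50)                    # periodic input waveform
        target = np.sin(2 * np.pi * t / 50) ** 3          # nonlinear transform to learn

        # run the reservoir and collect states
        x = np.zeros(N)
        states = np.empty((T, N))
        for k in range(T):
            x = np.tanh(W @ x + w_in * u[k])
            states[k] = x

        # ridge-regression readout trained on the first 800 steps
        X_tr, y_tr = states[:800], target[:800]
        w_out = np.linalg.solve(X_tr.T @ X_tr + 1e-6 * np.eye(N), X_tr.T @ y_tr)

        pred = states[800:] @ w_out
        print("test RMSE:", float(np.sqrt(np.mean((pred - target[800:]) ** 2))))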

  7. Computational fact checking from knowledge networks

    CERN Document Server

    Ciampaglia, Giovanni Luca; Rocha, Luis M; Bollen, Johan; Menczer, Filippo; Flammini, Alessandro

    2015-01-01

    Traditional fact checking by expert journalists cannot keep up with the enormous volume of information that is now generated online. Computational fact checking may significantly enhance our ability to evaluate the veracity of dubious information. Here we show that the complexities of human fact checking can be approximated quite well by finding the shortest path between concept nodes under properly defined semantic proximity metrics on knowledge graphs. Framed as a network problem this approach is feasible with efficient computational techniques. We evaluate this approach by examining tens of thousands of claims related to history, entertainment, geography, and biographical information using a public knowledge graph extracted from Wikipedia. Statements independently known to be true consistently receive higher support via our method than do false ones. These findings represent a significant step toward scalable computational fact-checking methods that may one day mitigate the spread of harmful misinformation...
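
    The core operation described, finding a short path between two concept nodes under a proximity metric that penalises very generic (high-degree) intermediate nodes, can be sketched with a plain Dijkstra search. The tiny knowledge graph and the logarithmic degree penalty below are illustrative stand-ins for the Wikipedia-derived graph and the metric used in the paper.

        import heapq
        import math
        from collections import defaultdict

        edges = [  # hypothetical concept graph (undirected)
            ("Barack Obama", "Honolulu"), ("Honolulu", "Hawaii"),
            ("Barack Obama", "United States"), ("Hawaii", "United States"),
            ("United States", "Canada"), ("Canada", "Ottawa"),
        ]
        graph = defaultdict(set)
        for a, b in edges:
            graph[a].add(b)
            graph[b].add(a)

        def cost(node):
            """Penalty for passing through a node: generic hubs cost more (assumed metric)."""
            return 1.0 + math.log(len(graph[node]))

        def support(src, dst):
            """Dijkstra over node-penalty costs; lower total cost = stronger support for the claim."""
            dist, heap = {src: 0.0}, [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if u == dst:
                    return d
                if d > dist[u]:
                    continue
                for v in graph[u]:
                    nd = d + cost(v)
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(heap, (nd, v))
            return float("inf")

        print(support("Barack Obama", "Hawaii"))   # short, cheap path -> well-supported claim
        print(support("Barack Obama", "Ottawa"))   # longer path through hubs -> weaker support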

  8. Computer network security and cyber ethics

    CERN Document Server

    Kizza, Joseph Migga

    2014-01-01

    In its 4th edition, this book remains focused on increasing public awareness of the nature and motives of cyber vandalism and cybercriminals, the weaknesses inherent in cyberspace infrastructure, and the means available to protect ourselves and our society. This new edition aims to integrate security education and awareness with discussions of morality and ethics. The reader will gain an understanding of how the security of information in general and of computer networks in particular, on which our national critical infrastructure and, indeed, our lives depend, is based squarely on the individ

  9. Social sciences via network analysis and computation

    CERN Document Server

    Kanduc, Tadej

    2015-01-01

    In recent years information and communication technologies have gained significant importance in the social sciences. Because there is such rapid growth of knowledge, methods and computer infrastructure, research can now seamlessly connect interdisciplinary fields such as business process management, data processing and mathematics. This study presents some of the latest results, practices and state-of-the-art approaches in network analysis, machine learning, data mining, data clustering and classifications in the contents of social sciences. It also covers various real-life examples such as t

  10. WEB BASED LEARNING OF COMPUTER NETWORK COURSE

    Directory of Open Access Journals (Sweden)

    Hakan KAPTAN

    2004-04-01

    Full Text Available As a result of developments in the Internet and computer fields, web based education has become one of the areas in which many improvement and research studies are carried out. In this study, web based education materials are described for a multimedia animation and simulation aided Computer Networks course in Technical Education Faculties. Course content is drawn from university course books, web based education materials and the technology web pages of companies, and is presented as texts, pictures and figures to increase student motivation, with the learning of some topics further supported by animations. Furthermore, to illustrate the working principles of routing algorithms and congestion control algorithms, simulators were constructed to support interactive learning.

  11. Fast Distributed Computation of Distances in Networks

    CERN Document Server

    Almeida, Paulo Sérgio; Cunha, Alcino

    2011-01-01

    This paper presents a distributed algorithm to simultaneously compute the diameter, radius and node eccentricity in all nodes of a synchronous network. Such topological information may be useful as input to configure other algorithms. Previous approaches have been modular, progressing in sequential phases using building blocks such as BFS tree construction, thus incurring longer executions than strictly required. We present an algorithm that, by timely propagation of available estimations, achieves a faster convergence to the correct values. We show local criteria for detecting convergence in each node. The algorithm avoids the creation of BFS trees and simply manipulates sets of node ids and hop counts. For the worst scenario of variable start times, each node i with eccentricity ecc(i) can compute: the node eccentricity in diam(G)+ecc(i)+2 rounds; the diameter in 2*diam(G)+ecc(i)+2 rounds; and the radius in diam(G)+ecc(i)+2*radius(G) rounds.
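
    Although the paper's algorithm is distributed, the quantities it computes are easy to state centrally: a node's eccentricity is its greatest hop distance to any other node, the diameter is the largest eccentricity, and the radius the smallest. The BFS-based sketch below, on an invented graph, only fixes those definitions; it does not reproduce the distributed estimation-propagation scheme.

        from collections import deque

        graph = {1: [2], 2: [1, 3, 4], 3: [2, 4], 4: [2, 3, 5], 5: [4]}  # toy topology

        def eccentricity(src):
            """Largest hop count from src to any reachable node (via BFS)."""
            dist, q = {src: 0}, deque([src])
            while q:
                u = q.popleft()
                for v in graph[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            return max(dist.values())

        ecc = {v: eccentricity(v) for v in graph}
        print("eccentricities:", ecc)
        print("diameter:", max(ecc.values()), "radius:", min(ecc.values()))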

  12. A complex network approach to cloud computing

    CERN Document Server

    Travieso, Gonzalo; Bruno, Odemir Martinez; Costa, Luciano da Fontoura

    2015-01-01

    Cloud computing has become an important means to speed up computing. One problem influencing heavily the performance of such systems is the choice of nodes as servers responsible for executing the users' tasks. In this article we report how complex networks can be used to model such a problem. More specifically, we investigate the processing performance of cloud systems underlain by Erdos-Renyi (ER) and Barabasi-Albert (BA) topologies containing two servers. Cloud networks involving two communities not necessarily of the same size are also considered in our analysis. The performance of each configuration is quantified in terms of two indices: the cost of communication between the user and the nearest server, and the balance of the distribution of tasks between the two servers. Regarding the latter index, the ER topology provides better performance than the BA case for smaller average degrees and the opposite behavior for larger average degrees. With respect to the cost, smaller values are found in the BA ...
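
    A rough reconstruction of the kind of experiment described, comparing communication cost and load balance when two servers are placed in ER versus BA topologies, can be put together with networkx. The graph sizes, server placement and metrics below are guesses at the spirit of the set-up, not the paper's exact configuration.

        import networkx as nx

        def evaluate(G, servers=(0, 1)):
            """Mean hop distance to the nearest server and the fraction of nodes served by server 0."""
            dists = {s: nx.shortest_path_length(G, source=s) for s in servers}
            costs, assigned = [], []
            for v in G.nodes:
                s_best = min(servers, key=lambda s: dists[s].get(v, float("inf")))
                if v not in dists[s_best]:
                    continue                                   # node unreachable from both servers
                costs.append(dists[s_best][v])
                assigned.append(s_best)
            return sum(costs) / len(costs), assigned.count(servers[0]) / len(assigned)

        if __name__ == "__main__":
            n = 500
            er = nx.erdos_renyi_graph(n, 8 / n, seed=1)
            ba = nx.barabasi_albert_graph(n, 4, seed=1)
            for name, g in (("ER", er), ("BA", ba)):
                cost, share = evaluate(g)
                print(f"{name}: mean hops to nearest server {cost:.2f}, server-0 load share {share:.2f}")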

  13. A Packet Routing Model for Computer Networks

    Directory of Open Access Journals (Sweden)

    O. Osunade

    2012-05-01

    Full Text Available The quest for reliable data transmission in today’s computer networks and internetworks forms the basis on which routing schemes need to be improved. The persistent increase in the size of internetworks leads to dwindling performance of present routing algorithms, which are meant to provide optimal paths for forwarding packets from one network to another. A mathematical and analytical routing model framework is proposed to address the routing needs to a substantial extent. The model provides schemes for packet sources, the queuing system within a buffer, link and bandwidth allocation, and a time-based bandwidth generator for routing chunks of packets to their destinations. Principal to the choice of link are such design considerations as the least-congested link in a set of links, normalized throughput, mean delay, mean waiting time, and the priority of packets in a set of prioritized packets. These performance metrics were targeted and the resultant outcome is a fair, load-balanced network.
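
    One of the design considerations listed, forwarding over the least-congested link among a set of candidates, reduces to a small selection rule; the sketch below scores each link by its queue backlog relative to allocated bandwidth (its drain time). The link parameters are hypothetical and the rule is only one plausible reading of "least congested".

        def least_congested(links):
            """Pick the link with the smallest queued-bits-to-bandwidth ratio (lowest drain time)."""
            return min(links, key=lambda l: l["queued_bits"] / l["bandwidth_bps"])

        links = [
            {"name": "link-A", "queued_bits": 4_000_000, "bandwidth_bps": 10_000_000},
            {"name": "link-B", "queued_bits":   900_000, "bandwidth_bps":  2_000_000},
            {"name": "link-C", "queued_bits": 1_500_000, "bandwidth_bps": 10_000_000},
        ]
        print("forward next packet on:", least_congested(links)["name"])   # link-C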

  14. Visualization techniques for computer network defense

    Science.gov (United States)

    Beaver, Justin M.; Steed, Chad A.; Patton, Robert M.; Cui, Xiaohui; Schultz, Matthew

    2011-06-01

    Effective visual analysis of computer network defense (CND) information is challenging due to the volume and complexity of both the raw and analyzed network data. A typical CND is comprised of multiple niche intrusion detection tools, each of which performs network data analysis and produces a unique alerting output. The state-of-the-practice in the situational awareness of CND data is the prevalent use of custom-developed scripts by Information Technology (IT) professionals to retrieve, organize, and understand potential threat events. We propose a new visual analytics framework, called the Oak Ridge Cyber Analytics (ORCA) system, for CND data that allows an operator to interact with all detection tool outputs simultaneously. Aggregated alert events are presented in multiple coordinated views with timeline, cluster, and swarm model analysis displays. These displays are complemented with both supervised and semi-supervised machine learning classifiers. The intent of the visual analytics framework is to improve CND situational awareness, to enable an analyst to quickly navigate and analyze thousands of detected events, and to combine sophisticated data analysis techniques with interactive visualization such that patterns of anomalous activities may be more easily identified and investigated.

  15. Visualization Techniques for Computer Network Defense

    Energy Technology Data Exchange (ETDEWEB)

    Beaver, Justin M [ORNL; Steed, Chad A [ORNL; Patton, Robert M [ORNL; Cui, Xiaohui [ORNL; Schultz, Matthew A [ORNL

    2011-01-01

    Effective visual analysis of computer network defense (CND) information is challenging due to the volume and complexity of both the raw and analyzed network data. A typical CND is comprised of multiple niche intrusion detection tools, each of which performs network data analysis and produces a unique alerting output. The state-of-the-practice in the situational awareness of CND data is the prevalent use of custom-developed scripts by Information Technology (IT) professionals to retrieve, organize, and understand potential threat events. We propose a new visual analytics framework, called the Oak Ridge Cyber Analytics (ORCA) system, for CND data that allows an operator to interact with all detection tool outputs simultaneously. Aggregated alert events are presented in multiple coordinated views with timeline, cluster, and swarm model analysis displays. These displays are complemented with both supervised and semi-supervised machine learning classifiers. The intent of the visual analytics framework is to improve CND situational awareness, to enable an analyst to quickly navigate and analyze thousands of detected events, and to combine sophisticated data analysis techniques with interactive visualization such that patterns of anomalous activities may be more easily identified and investigated.

  16. Some Issues on Computer Networks: Architecture and Key Technologies

    Institute of Scientific and Technical Information of China (English)

    Guan-Qun Gu; Jun-Zhou Luo

    2006-01-01

    The evolution of computer networks has gone through several major steps, and the research focus of each step has kept changing and evolving, from ARPANET to OSI/RM, then HSN (high speed network) and HPN (high performance network). During this evolution, computer networks represented by the Internet have made great progress and gained unprecedented success. However, with the appearance and intensification of tussle, along with the three difficult problems of modern networks (service customization, resource control and user management), traditional Internet architecture no longer meets the requirements of the next generation network, to which the current Internet must therefore evolve. With the aim of providing valuable guidance for research on the next generation network, this paper first analyzes some dilemmas facing the current Internet and its architecture, and then surveys recent influential research work and progress in computer networks and related areas, including new generation network architecture, network resource control technologies, network management and security, distributed computing and middleware, wireless/mobile networks, new generation network services and applications, and foundational theories of network modeling. Finally, this paper concludes that within research on the next generation network, more attention should be paid to the high-availability network and its corresponding architecture, key theories and supporting technologies.

  17. Chemical Reaction Networks for Computing Polynomials.

    Science.gov (United States)

    Salehi, Sayed Ahmad; Parhi, Keshab K; Riedel, Marc D

    2017-01-20

    Chemical reaction networks (CRNs) provide a fundamental model in the study of molecular systems. Widely used as formalism for the analysis of chemical and biochemical systems, CRNs have received renewed attention as a model for molecular computation. This paper demonstrates that, with a new encoding, CRNs can compute any set of polynomial functions subject only to the limitation that these functions must map the unit interval to itself. These polynomials can be expressed as linear combinations of Bernstein basis polynomials with positive coefficients less than or equal to 1. In the proposed encoding approach, each variable is represented using two molecular types: a type-0 and a type-1. The value is the ratio of the concentration of type-1 molecules to the sum of the concentrations of type-0 and type-1 molecules. The proposed encoding naturally exploits the expansion of a power-form polynomial into a Bernstein polynomial. Molecular encoders for converting any input in a standard representation to the fractional representation as well as decoders for converting the computed output from the fractional to a standard representation are presented. The method is illustrated first for generic CRNs; then chemical reactions designed for an example are mapped to DNA strand-displacement reactions.
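
    The encoding step the paper relies on, rewriting a power-form polynomial that maps [0,1] into [0,1] as a Bernstein polynomial with coefficients in [0,1], uses the standard basis-conversion formula b_k = sum_{i<=k} [C(k,i)/C(n,i)] a_i. The sketch below applies it to an example polynomial and checks the result numerically; the mapping onto chemical reactions or DNA strand displacement is not reproduced.

        from math import comb

        def power_to_bernstein(a):
            """Power-basis coefficients a[0..n] -> Bernstein coefficients b[0..n] on [0,1]."""
            n = len(a) - 1
            return [sum(comb(k, i) / comb(n, i) * a[i] for i in range(k + 1)) for k in range(n + 1)]

        def bernstein_eval(b, x):
            n = len(b) - 1
            return sum(b[k] * comb(n, k) * x**k * (1 - x)**(n - k) for k in range(n + 1))

        if __name__ == "__main__":
            a = [0.25, 0.5, 0.25]        # p(x) = 0.25 + 0.5 x + 0.25 x^2 maps [0,1] into [0,1]
            b = power_to_bernstein(a)
            print("Bernstein coefficients:", b)     # all lie within [0, 1]
            print("check p(0.3):", 0.25 + 0.5 * 0.3 + 0.25 * 0.3**2, "vs", bernstein_eval(b, 0.3))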

  18. The Service Concept Applied to Computer Networks. Technical Note 880.

    Science.gov (United States)

    Abrams, Marshall D.; Cotton, Ira W.

    The Network Measurement System (NMS) represents the implementation of a new approach to the performance measurement and evaluation of computer network systems and services. By focusing on the service delivered to network customers at their terminals, rather than on the internal mechanics of network operation, measurements can be obtained which are…

  19. Computation of loss allocation in electric power networks using loss ...

    African Journals Online (AJOL)

    Computation of loss allocation in electric power networks using loss vector. ... The losses to be allocated are derived from load flow of a specified power network and operating conditions. Loss vectors associated with demand ... Article Metrics.

  20. Network and computing infrastructure for scientific applications in Georgia

    Science.gov (United States)

    Kvatadze, R.; Modebadze, Z.

    2016-09-01

    Status of network and computing infrastructure and available services for research and education community of Georgia are presented. Research and Educational Networking Association - GRENA provides the following network services: Internet connectivity, network services, cyber security, technical support, etc. Computing resources used by the research teams are located at GRENA and at major state universities. GE-01-GRENA site is included in European Grid infrastructure. Paper also contains information about programs of Learning Center and research and development projects in which GRENA is participating.

  1. 2013 International Conference on Computer Engineering and Network

    CERN Document Server

    Zhu, Tingshao

    2014-01-01

    This book aims to examine innovation in the fields of computer engineering and networking. The book covers important emerging topics in computer engineering and networking, and it will help researchers and engineers improve their knowledge of state-of-art in related areas. The book presents papers from The Proceedings of the 2013 International Conference on Computer Engineering and Network (CENet2013) which was held on July 20-21, in Shanghai, China.

  2. Mechanisms of protection of information in computer networks and systems

    Directory of Open Access Journals (Sweden)

    Sergey Petrovich Evseev

    2011-10-01

    Full Text Available Protocols for information protection in computer networks and systems are investigated. The basic types of security threats arising from the use of computer networks are classified. The basic mechanisms, services and implementation variants of cryptosystems for maintaining the authentication, integrity and confidentiality of transmitted information are examined, and their advantages and drawbacks are described. Promising directions for the development of cryptographic transformations for information protection in computer networks and systems are identified and analyzed.
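
    Of the services listed (authentication, integrity, confidentiality), message integrity with origin authentication is the easiest to show in a few lines using a keyed hash. The sketch below uses Python's standard hmac module with a made-up shared key; it is a generic illustration, not one of the protocols surveyed in the article.

        import hmac
        import hashlib

        SHARED_KEY = b"example-shared-secret"     # hypothetical pre-shared key

        def protect(message: bytes):
            """Attach an HMAC-SHA256 tag so the receiver can verify integrity and origin."""
            tag = hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()
            return message, tag

        def verify(message: bytes, tag: str) -> bool:
            expected = hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()
            return hmac.compare_digest(expected, tag)   # constant-time comparison

        if __name__ == "__main__":
            msg, tag = protect(b"transfer 100 EUR to account 42")
            print(verify(msg, tag))                                  # True
            print(verify(b"transfer 900 EUR to account 66", tag))    # False: tampering detected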

  3. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  4. Classification and Analysis of Computer Network Traffic

    DEFF Research Database (Denmark)

    Bujlow, Tomasz

    2014-01-01

    various classification modes (decision trees, rulesets, boosting, softening thresholds) regarding the classification accuracy and the time required to create the classifier. We showed how to use our VBS tool to obtain per-flow, per-application, and per-content statistics of traffic in computer networks...... classification (as by using transport layer port numbers, Deep Packet Inspection (DPI), statistical classification) and assessed their usefulness in particular areas. We found that the classification techniques based on port numbers are not accurate anymore as most applications use dynamic port numbers, while...... DPI is relatively slow, requires a lot of processing power, and causes a lot of privacy concerns. Statistical classifiers based on Machine Learning Algorithms (MLAs) were shown to be fast and accurate. At the same time, they do not consume a lot of resources and do not cause privacy concerns. However...

  5. AUTOMATIC CONTROL OF INTELLECTUAL RIGHTS IN THE GLOBAL COMPUTER NETWORKS

    Directory of Open Access Journals (Sweden)

    Anatoly P. Yakimaho

    2013-01-01

    Full Text Available The problems of using intellectual property in global computer networks are stated, with the main attention focused on ways of solving the problems that arise when working in computer networks. Legal problems of the information society are considered. Global computer networks are analyzed as venues for organizing the collective management of copyright on a world scale. Issues in creating a system for the automatic control of the property rights of authors and owners in global computer networks are taken up.

  6. Email networks and the spread of computer viruses

    Science.gov (United States)

    Newman, M. E.; Forrest, Stephanie; Balthrop, Justin

    2002-09-01

    Many computer viruses spread via electronic mail, making use of computer users' email address books as a source for email addresses of new victims. These address books form a directed social network of connections between individuals over which the virus spreads. Here we investigate empirically the structure of this network using data drawn from a large computer installation, and discuss the implications of this structure for the understanding and prevention of computer virus epidemics.
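
    A toy version of the spreading process studied, a virus mailing itself to every address in an infected user's address book, can be simulated directly on the directed address-book graph. The graph and the probability that a contact opens the attachment are invented for illustration; the paper's empirical network is not reproduced here.

        import random

        address_books = {            # hypothetical directed network: user -> addresses stored
            "alice": ["bob", "carol"],
            "bob":   ["alice", "dave"],
            "carol": ["dave", "eve"],
            "dave":  ["eve"],
            "eve":   [],
        }

        def spread(patient_zero, p_open=0.5, seed=3):
            """Breadth-first infection: each contact opens the attachment with probability p_open."""
            random.seed(seed)
            infected, frontier = {patient_zero}, [patient_zero]
            while frontier:
                nxt = []
                for user in frontier:
                    for contact in address_books[user]:
                        if contact not in infected and random.random() < p_open:
                            infected.add(contact)
                            nxt.append(contact)
                frontier = nxt
            return infected

        print(spread("alice"))       # set of users reached by the outbreak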

  7. An Overview of Computer Network security and Research Technology

    OpenAIRE

    Rathore, Vandana

    2016-01-01

    The rapid development in the field of computer networks and systems brings both convenience and security threats for users. Security threats include network security and data security. Network security refers to the reliability, confidentiality, integrity and availability of the information in the system. The main objective of network security is to maintain the authenticity, integrity, confidentiality, availability of the network. This paper introduces the details of the technologies used in...

  8. Cloud Computing for Network Security Intrusion Detection System

    Directory of Open Access Journals (Sweden)

    Jin Yang

    2013-01-01

    Full Text Available In recent years, cloud computing, as a new distributed computing model, has developed rapidly and become a focus of academia and industry. However, security has become a critical problem faced by most enterprise customers of cloud computing. In the current network environment, relying on a single terminal to detect Trojans and viruses is increasingly unreliable. This paper analyzes the characteristics of current cloud computing and then proposes a comprehensive real-time network risk evaluation model for cloud computing based on the correspondence between artificial immune system antibodies and pathogen invasion intensity. The paper also combines an asset evaluation system with a network-wide evaluation system, considering factors at the application, host and network layers that affect network risk. The experimental results show that this model improves the ability of intrusion detection and can support the security of current cloud computing.

  9. Computer Network Topology Design in Limelight of Pascal Graph Property

    CERN Document Server

    Pal, Sanjay Kumar; 10.5121/ijngn.2010.2103

    2010-01-01

    The constantly growing demands for productivity and security in computer systems and networks have drawn specialists' interest to the construction of optimal topologies for computing environments. In the earliest phases of design, studying the topological influence of the processes that occur in computer systems and networks yields useful information of significant value for the subsequent design. Different computer network topologies have long been represented using appropriate graph models. Graphs have contributed hugely to the performance improvement of networks; some major contributors are the de-Bruijn, Hypercube, Mesh and Pascal graphs. These have been studied extensively, and new features have continually emerged from that research. By the definition of an interconnection network, a suitable graph can represent the physical and logical layout very efficiently. In the present study the Pascal graph is researched again and...

  10. Computational intelligence synergies of fuzzy logic, neural networks and evolutionary computing

    CERN Document Server

    Siddique, Nazmul

    2013-01-01

    Computational Intelligence: Synergies of Fuzzy Logic, Neural Networks and Evolutionary Computing presents an introduction to some of the cutting edge technological paradigms under the umbrella of computational intelligence. Computational intelligence schemes are investigated with the development of a suitable framework for fuzzy logic, neural networks and evolutionary computing, neuro-fuzzy systems, evolutionary-fuzzy systems and evolutionary neural systems. Applications to linear and non-linear systems are discussed with examples. Key features: Covers all the aspect

  11. Ethical Considerations of Computer Network Attack in Information Warfare

    Science.gov (United States)

    2001-01-16

    ...attack/destruction, and special information operations (SIO). CNA and the other methods of offensive IO represent the incorporation of information...psychological operations, electronic warfare, physical attack and/or destruction, and special information operations, and could include computer network...to computer networks to record information sent over them. Special information operations: information operations that by their sensitive nature...

  12. A simulation model of a star computer network

    CERN Document Server

    Gomaa, H

    1979-01-01

    A simulation model of the CERN (European Organization for Nuclear Research) SPS star computer network is described. The model concentrates on simulating the message handling computer, through which all messages in the network pass. The implementation of the model and its calibration are also described. (6 refs).

  13. Guest Editorial: Special Issue on Wireless Mobile Computing and Networking

    Institute of Scientific and Technical Information of China (English)

    Yu Wang; Yanwei Wu; Fan Li; Bin Xu; Teresa Dahlberg

    2011-01-01

    The recent convergence of information communications technology and computing is creating new demands and opportunities for ubiquitous computing via wireless and mobile equipment. The demanding networking environment of wireless communications and the fast-growing number of mobile users impose several challenges in terms of channel estimation, network protocol design, resource management, systematic design, application development, and security. The objective of this special issue is to gather recent advances addressing networks, systems, algorithms, and applications that support the symbiosis of mobile computers and wireless networks.

  14. Grid Computing based on Game Optimization Theory for Networks Scheduling

    Directory of Open Access Journals (Sweden)

    Peng-fei Zhang

    2014-05-01

    Full Text Available A resource-sharing mechanism is introduced into the grid computing algorithm in order to solve complex computational tasks in heterogeneous network computing. In a grid environment, however, the available network resources must be scheduled and coordinated sensibly to obtain a good workflow together with appropriate network performance and response time. To improve the performance of resource allocation and task scheduling in grid computing, a game model based on non-cooperative games is proposed. Setting the time and cost of each user's resource allocation increases network performance, and network resources are driven by an optimized scheduling algorithm that minimizes the time and cost of resource scheduling. Simulation results show the feasibility and suitability of the model. In addition, the experimental results show that the model-based genetic algorithm is the best of the resource scheduling algorithms considered.

  15. 4th International Conference on Computer Engineering and Networks

    CERN Document Server

    2015-01-01

    This book aims to examine innovation in the fields of computer engineering and networking. The book covers important emerging topics in computer engineering and networking, and it will help researchers and engineers improve their knowledge of state-of-art in related areas. The book presents papers from the 4th International Conference on Computer Engineering and Networks (CENet2014) held July 19-20, 2014 in Shanghai, China.  ·       Covers emerging topics for computer engineering and networking ·       Discusses how to improve productivity by using the latest advanced technologies ·       Examines innovation in the fields of computer engineering and networking  

  16. Security of fixed and wireless computer networks

    NARCIS (Netherlands)

    Verschuren, J.; Degen, A.J.G.; Veugen, P.J.M.

    2003-01-01

    A few decades ago, most computers were stand-alone machines: they were able to process information using their own resources. Later, computer systems were connected to each other enabling a computer system to exchange data with another computer and to use resources of another computer. With the coup

  17. Security of fixed and wireless computer networks

    NARCIS (Netherlands)

    Verschuren, J.; Degen, A.J.G.; Veugen, P.J.M.

    2003-01-01

    A few decades ago, most computers were stand-alone machines: they were able to process information using their own resources. Later, computer systems were connected to each other enabling a computer system to exchange data with another computer and to use resources of another computer. With the coup

  18. Artificial Neural Network Metamodels of Stochastic Computer Simulations

    Science.gov (United States)

    1994-08-10

    Artificial Neural Network Metamodels of Stochastic Computer Simulations, by Robert Allen Kilmer (B.S. in Education Mathematics, Indiana ...). Only the report title, the author, and a dedication to the memory of the author's father, William Ralph Kilmer, are recoverable from the extracted report documentation page; the abstract itself is not preserved.

  19. Second International Conference on Advanced Computing, Networking and Informatics

    CERN Document Server

    Mohapatra, Durga; Konar, Amit; Chakraborty, Aruna

    2014-01-01

    Advanced Computing, Networking and Informatics are three distinct and mutually exclusive disciplines of knowledge with no apparent sharing/overlap among them. However, their convergence is observed in many real world applications, including cyber-security, internet banking, healthcare, sensor networks, cognitive radio, pervasive computing amidst many others. This two-volume proceedings explore the combined use of Advanced Computing and Informatics in the next generation wireless networks and security, signal and image processing, ontology and human-computer interfaces (HCI). The two volumes together include 148 scholarly papers, which have been accepted for presentation from over 640 submissions in the second International Conference on Advanced Computing, Networking and Informatics, 2014, held in Kolkata, India during June 24-26, 2014. The first volume includes innovative computing techniques and relevant research results in informatics with selective applications in pattern recognition, signal/image process...

  20. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    Science.gov (United States)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-01

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.

  1. A Computer Network for Social Scientists.

    Science.gov (United States)

    Gerber, Barry

    1989-01-01

    Describes a microcomputer-based network developed at the University of California Los Angeles to support education in the social sciences. Topics discussed include technological, managerial, and academic considerations of university networking; the use of the network in teaching macroeconomics, social demographics, and symbolic logic; and possible…

  2. LaRC local area networks to support distributed computing

    Science.gov (United States)

    Riddle, E. P.

    1984-01-01

    The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, work stations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there was a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the work load on the central resources increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, work stations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.

  3. Network as a computer: ranking paths to find flows

    CERN Document Server

    Pavlovic, Dusko

    2008-01-01

    We explore a simple mathematical model of network computation, based on Markov chains. Similar models apply to a broad range of computational phenomena, arising in networks of computers, as well as in genetic, and neural nets, in social networks, and so on. The main problem of interaction with such spontaneously evolving computational systems is that the data are not uniformly structured. An interesting approach is to try to extract the semantical content of the data from their distribution among the nodes. A concept is then identified by finding the community of nodes that share it. The task of data structuring is thus reduced to the task of finding the network communities, as groups of nodes that together perform some non-local data processing. Towards this goal, we extend the ranking methods from nodes to paths. This allows us to extract some information about the likely flow biases from the available static information about the network.
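
    The node-ranking step that the paper generalises to paths is, in its simplest Markov-chain form, just PageRank-style power iteration over the transition matrix of a random walk. The compact sketch below runs it on a hypothetical four-node network; it illustrates the general idea, not the path-ranking construction of the paper.

        import numpy as np

        # hypothetical directed network; column-stochastic matrix P[i, j] = P(step from j to i)
        links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
        n = len(links)
        P = np.zeros((n, n))
        for j, outs in links.items():
            for i in outs:
                P[i, j] = 1.0 / len(outs)

        def rank(P, damping=0.85, iters=100):
            """Stationary distribution of the damped random walk (PageRank power iteration)."""
            m = P.shape[0]
            r = np.full(m, 1.0 / m)
            for _ in range(iters):
                r = damping * P @ r + (1 - damping) / m
            return r / r.sum()

        print(rank(P))    # nodes 0 and 2 collect most of the 'flow' in this toy network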

  4. Network selection, Information filtering and Scalable computation

    Science.gov (United States)

    Ye, Changqing

    This dissertation explores two application scenarios of the sparsity pursuit method on large scale data sets. The first scenario is classification and regression in analyzing high dimensional structured data, where predictors correspond to nodes of a given directed graph. This arises in, for instance, identification of disease genes for Parkinson's disease from a network of candidate genes. In such a situation, the directed graph describes dependencies among the genes, where the direction of an edge represents a certain causal effect. Key to high-dimensional structured classification and regression is how to utilize the dependencies among predictors as specified by the directions of the graph. In this dissertation, we develop a novel method that fully takes into account such dependencies, formulated through certain nonlinear constraints. We apply the proposed method to two applications, feature selection in large margin binary classification and in linear regression. We implement the proposed method through difference convex programming for the cost function and constraints. Finally, theoretical and numerical analyses suggest that the proposed method achieves the desired objectives. An application to disease gene identification is presented. The second application scenario is personalized information filtering, which extracts the information specifically relevant to a user, predicting his/her preference over a large number of items based on the opinions of users who think alike or on the items' content. This problem is cast into the framework of regression and classification, where we introduce novel partial latent models to integrate additional user-specific and content-specific predictors for higher predictive accuracy. In particular, we factorize a user-over-item preference matrix into a product of two matrices, each representing a user's preference and an item preference by users. Then we propose a likelihood method to seek a sparsest latent factorization, from a class of over

  5. Computational capacity and energy consumption of complex resistive switch networks

    Directory of Open Access Journals (Sweden)

    Jens Bürger

    2015-12-01

    Full Text Available Resistive switches are a class of emerging nanoelectronics devices that exhibit a wide variety of switching characteristics closely resembling behaviors of biological synapses. Assembled into random networks, such resistive switches produce emerging behaviors far more complex than that of individual devices. This was previously demonstrated in simulations that exploit information processing within these random networks to solve tasks that require nonlinear computation as well as memory. Physical assemblies of such networks manifest complex spatial structures and basic processing capabilities often related to biologically-inspired computing. We model and simulate random resistive switch networks and analyze their computational capacities. We provide a detailed discussion of the relevant design parameters and establish the link to the physical assemblies by relating the modeling parameters to physical parameters. More globally connected networks and an increased network switching activity are means to increase the computational capacity linearly at the expense of exponentially growing energy consumption. We discuss a new modular approach that exhibits higher computational capacities, and energy consumption growing linearly with the number of networks used. The results show how to optimize the trade-off between computational capacity and energy efficiency and are relevant for the design and fabrication of novel computing architectures that harness random assemblies of emerging nanodevices.

  6. Costs evaluation methodic of energy efficient computer network reengineering

    Directory of Open Access Journals (Sweden)

    S.A. Nesterenko

    2016-09-01

    Full Text Available A key direction of modern computer network reengineering is the transfer to the new energy-saving technology IEEE 802.3az. To make a reasoned decision about the transition to the new technology, network engineers need a technique that answers the question of the economic feasibility of a network upgrade. Aim: The aim of this research is the development of a method for calculating the cost-effectiveness of energy-efficient computer network reengineering. Materials and Methods: The method uses analytical models for calculating the power consumption of a computer network port operating in the IEEE 802.3 standard and in the energy-efficient mode of the IEEE 802.3az standard. A queuing model is used to calculate the frame transmission time in the communication channel. To determine the values of the network operation parameters, a multiagent network monitoring method is proposed. Results: The method allows calculating the economic impact of transferring a computer network to the energy-saving technology IEEE 802.3az. To determine the network performance parameters, network SNMP monitoring systems based on RMON MIB agents are proposed.
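
    The kind of comparison such a method implies can be sketched as a back-of-the-envelope calculation. All power figures, the per-port upgrade cost and the electricity tariff below are illustrative assumptions, not values taken from the paper.

        # Rough sketch of the economic comparison implied by the abstract: estimate the
        # annual energy use of a switch port running legacy IEEE 802.3 versus
        # IEEE 802.3az (Energy-Efficient Ethernet). All numbers are illustrative.

        def annual_port_energy_kwh(active_w, idle_w, utilization, hours=8760):
            """Average energy (kWh/year) of one port given its traffic utilization."""
            avg_power_w = active_w * utilization + idle_w * (1.0 - utilization)
            return avg_power_w * hours / 1000.0

        def reengineering_payback(ports, port_upgrade_cost, tariff_per_kwh,
                                  legacy=(0.7, 0.7), eee=(0.7, 0.1), utilization=0.15):
            """Years needed for the energy savings to repay the upgrade cost."""
            legacy_kwh = annual_port_energy_kwh(*legacy, utilization)
            eee_kwh = annual_port_energy_kwh(*eee, utilization)
            annual_savings = ports * (legacy_kwh - eee_kwh) * tariff_per_kwh
            return ports * port_upgrade_cost / annual_savings

        # Example: 480 ports, hypothetical 12 USD per-port upgrade cost, 0.20 USD/kWh.
        print(f"payback: {reengineering_payback(480, 12.0, 0.20):.1f} years")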

  7. The computational power of interactive recurrent neural networks.

    Science.gov (United States)

    Cabessa, Jérémie; Siegelmann, Hava T

    2012-04-01

    In classical computation, rational- and real-weighted recurrent neural networks were shown to be respectively equivalent to and strictly more powerful than the standard Turing machine model. Here, we study the computational power of recurrent neural networks in a more biologically oriented computational framework, capturing the aspects of sequential interactivity and persistence of memory. In this context, we prove that so-called interactive rational- and real-weighted neural networks show the same computational powers as interactive Turing machines and interactive Turing machines with advice, respectively. A mathematical characterization of each of these computational powers is also provided. It follows from these results that interactive real-weighted neural networks can perform uncountably many more translations of information than interactive Turing machines, making them capable of super-Turing capabilities.

  8. 3rd International Conference on Advanced Computing, Networking and Informatics

    CERN Document Server

    Mohapatra, Durga; Chaki, Nabendu

    2016-01-01

    Advanced Computing, Networking and Informatics are three distinct and mutually exclusive disciplines of knowledge with no apparent sharing/overlap among them. However, their convergence is observed in many real world applications, including cyber-security, internet banking, healthcare, sensor networks, cognitive radio, pervasive computing amidst many others. This two volume proceedings explore the combined use of Advanced Computing and Informatics in the next generation wireless networks and security, signal and image processing, ontology and human-computer interfaces (HCI). The two volumes together include 132 scholarly articles, which have been accepted for presentation from over 550 submissions in the Third International Conference on Advanced Computing, Networking and Informatics, 2015, held in Bhubaneswar, India during June 23–25, 2015.

  9. HeNCE: A Heterogeneous Network Computing Environment

    Directory of Open Access Journals (Sweden)

    Adam Beguelin

    1994-01-01

    Full Text Available Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.

  10. Fault Detection of Computer Communication Networks Using an Expert System

    Directory of Open Access Journals (Sweden)

    Ibrahiem M.M. El Emary

    2005-01-01

    Full Text Available The main objective of this study was to build an expert system for assisting the network administrator in the management and administration of a computer communication network. The proposed expert system operates by using a time series model capable of forecasting various performance parameters such as delay, utilization and collision frequency. When the expert system finds a difference (beyond a certain tolerance) between the predicted value and the measured value, it informs the network administrator that there is a problem in the network, either in a switch, a link or a router. We examine two types of network with the proposed expert system: the first is a token bus and the second a token ring. When we run the expert system on these two types of computer networks, it captures the problem whenever there is an excess deviation in the network performance parameters.
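
    As a rough illustration of the checking rule described above, the sketch below compares each measured performance parameter against a forecast and raises an alert when the relative deviation exceeds a tolerance. The paper's time series model is replaced here by a trivial moving-average predictor, and the tolerance value is an assumption.

        # Minimal sketch of the forecast-versus-measurement check described in the
        # abstract. The forecasting model (a time series model in the paper) is
        # replaced by a simple moving average over a recent window.

        from collections import deque

        TOLERANCE = 0.20  # assumed relative tolerance, not a value from the paper

        class ParameterMonitor:
            def __init__(self, window=10):
                self.history = deque(maxlen=window)

            def forecast(self):
                # Stand-in predictor: mean of the recent window.
                return sum(self.history) / len(self.history) if self.history else None

            def check(self, measured):
                predicted = self.forecast()
                self.history.append(measured)
                if predicted is None:
                    return None
                deviation = abs(measured - predicted) / max(predicted, 1e-9)
                return deviation > TOLERANCE

        utilization = ParameterMonitor()
        for sample in [0.31, 0.33, 0.30, 0.32, 0.74]:  # last sample is anomalous
            if utilization.check(sample):
                print(f"alert: utilization {sample} deviates from forecast")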

  11. The one-way quantum computer - a non-network model of quantum computation

    CERN Document Server

    Raussendorf, R; Briegel, H J; Raussendorf, Robert; Browne, Daniel E.; Briegel, Hans J.

    2001-01-01

    A one-way quantum computer works by only performing a sequence of one-qubit measurements on a particular entangled multi-qubit state, the cluster state. No non-local operations are required in the process of computation. Any quantum logic network can be simulated on the one-way quantum computer. On the other hand, the network model of quantum computation cannot explain all ways of processing quantum information possible with the one-way quantum computer. In this paper, two examples of the non-network character of the one-way quantum computer are given. First, circuits in the Clifford group can be performed in a single time step. Second, the realisation of a particular circuit --the bit-reversal gate-- on the one-way quantum computer has no network interpretation. (Submitted to J. Mod. Opt, Gdansk ESF QIT conference issue.)

  12. A programmable interface to neuromolecular computing networks.

    Science.gov (United States)

    Akingbehin, K

    1995-01-01

    A programmable interface is provided to a simulated network of reaction-diffusion neurons. The interface allows special 'learn' and 'decide' syntactic constructs to be intermixed with conventional programming constructs. This hybrid combination allows the power of programmability to be combined with the power of adaptability to provide innovative solutions to complex problems. The network uses reaction-diffusion neurons instead of adaline neurons. A mesh topology is used instead of a feedforward topology. The performance of the mesh reaction-diffusion network compares favorably with that of conventional feedforward adaline networks. Enhancements to incorporate short- and long-term memory are described.

  13. A New Method for Computing Attention Network Scores and Relationships between Attention Networks

    OpenAIRE

    Yi-Feng Wang; Qian Cui; Feng Liu; Ya-Jun Huo; Feng-Mei Lu; Heng Chen; Hua-Fu Chen

    2014-01-01

    The attention network test (ANT) is a reliable tool to detect the efficiency of alerting, orienting, and executive control networks. However, studies using the ANT obtained inconsistent relationships between attention networks due to two reasons: on the one hand, the inter-network relationships of attention subsystems were far from clear; on the other hand, ANT scores in previous studies were disturbed by possible inter-network interactions. Here we proposed a new computing method by dissecti...

  14. Wireless Networks: New Meaning to Ubiquitous Computing.

    Science.gov (United States)

    Drew, Wilfred, Jr.

    2003-01-01

    Discusses the use of wireless technology in academic libraries. Topics include wireless networks; standards (IEEE 802.11); wired versus wireless; why libraries implement wireless technology; wireless local area networks (WLANs); WLAN security; examples of wireless use at Indiana State University and Morrisville College (New York); and useful…

  15. Parallel CFD design on network-based computer

    Science.gov (United States)

    Cheung, Samson

    1995-01-01

    Combining multiple engineering workstations into a network-based heterogeneous parallel computer allows application of aerodynamic optimization with advanced computational fluid dynamics codes, which can be computationally expensive on mainframe supercomputers. This paper introduces a nonlinear quasi-Newton optimizer designed for this network-based heterogeneous parallel computing environment utilizing a software called Parallel Virtual Machine. This paper will introduce the methodology behind coupling a Parabolized Navier-Stokes flow solver to the nonlinear optimizer. This parallel optimization package is applied to reduce the wave drag of a body of revolution and a wing/body configuration with results of 5% to 6% drag reduction.

  16. A Brief Talk on Teaching Reform Program of Computer Network Course System about Computer Related Professional

    Institute of Scientific and Technical Information of China (English)

    Wang Jian-Ping; Huang Yong

    2008-01-01

    The computer network course is a mainstay required course for college computer-related majors. An analysis of current teaching conditions shows that the teaching of this course has not formed a complete system: new knowledge points need to be added in promptly while outdated technology is still present in the teaching. The article describes the current situation and the problems that appear in the teaching of computer network courses for computer-related majors at universities, and presents teaching systems and teaching reform schemes for the computer network course.

  17. Phoebus: Network Middleware for Next-Generation Network Computing

    Energy Technology Data Exchange (ETDEWEB)

    Martin Swany

    2012-06-16

    The Phoebus project investigated algorithms, protocols, and middleware infrastructure to improve end-to-end performance in high speed, dynamic networks. The Phoebus system essentially serves as an adaptation point for networks with disparate capabilities or provisioning. This adaptation can take a variety of forms including acting as a provisioning agent across multiple signaling domains, providing transport protocol adaptation points, and mapping between distributed resource reservation paradigms and the optical network control plane. We have successfully developed the system and demonstrated benefits. The Phoebus system was deployed in Internet2 and in ESnet, as well as in GEANT2, RNP in Brazil and over international links to Korea and Japan. Phoebus is a system that implements a new protocol and associated forwarding infrastructure for improving throughput in high-speed dynamic networks. It was developed to serve the needs of large DOE applications on high-performance networks. The idea underlying the Phoebus model is to embed Phoebus Gateways (PGs) in the network as on-ramps to dynamic circuit networks. The gateways act as protocol translators that allow legacy applications to use dedicated paths with high performance.

  18. Global, Computer-generated Map of Valley Networks on Mars

    Science.gov (United States)

    Luo, W.; Stepinski, T. F.

    2009-03-01

    The new, global map of valley networks on Mars has been created entirely by a computer algorithm parsing topographic data. Dependencies between dissection density and its potential controlling factors are derived and discussed.

  19. Computationally Efficient Neural Network Intrusion Security Awareness

    Energy Technology Data Exchange (ETDEWEB)

    Todd Vollmer; Milos Manic

    2009-08-01

    An enhanced version of an algorithm to provide anomaly based intrusion detection alerts for cyber security state awareness is detailed. A unique aspect is the training of an error back-propagation neural network with intrusion detection rule features to provide a recognition basis. Network packet details are subsequently provided to the trained network to produce a classification. This leverages rule knowledge sets to produce classifications for anomaly based systems. Several test cases executed on ICMP protocol revealed a 60% identification rate of true positives. This rate matched the previous work, but 70% less memory was used and the run time was reduced to less than 1 second from 37 seconds.
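
    A minimal sketch of this kind of classifier, using scikit-learn's MLPClassifier (an error back-propagation network) in place of the authors' implementation; the "rule-derived" packet features and the synthetic data below are purely illustrative assumptions.

        # Sketch: train a small back-propagation network on packet-level features to
        # classify traffic as benign or intrusive. Feature choices and data are synthetic.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Synthetic feature vectors: [mean packet size, packets/s, ICMP ratio, SYN ratio]
        normal = rng.normal(loc=[500, 50, 0.05, 0.10], scale=[100, 10, 0.02, 0.05], size=(500, 4))
        attack = rng.normal(loc=[80, 400, 0.60, 0.50], scale=[30, 80, 0.10, 0.20], size=(500, 4))
        X = np.vstack([normal, attack])
        y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = intrusion

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
        clf.fit(X_train, y_train)
        print(f"test accuracy: {clf.score(X_test, y_test):.2f}")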

  1. Computer Network Security: Best Practices for Alberta School Jurisdictions.

    Science.gov (United States)

    Alberta Dept. of Education, Edmonton.

    This paper provides a snapshot of the computer network security industry and addresses specific issues related to network security in public education. The following topics are covered: (1) security policy, including reasons for establishing a policy, risk assessment, areas to consider, audit tools; (2) workstations, including physical security,…

  2. Evolving ATLAS Computing For Today’s Networks

    CERN Document Server

    Campana, S; The ATLAS collaboration; Jezequel, S; Negri, G; Serfon, C; Ueda, I

    2012-01-01

    The ATLAS computing infrastructure was designed many years ago based on the assumption of rather limited network connectivity between computing centres. ATLAS sites have been organized in a hierarchical model, where only a static subset of all possible network links can be exploited and a static subset of well connected sites (CERN and the T1s) can cover important functional roles such as hosting master copies of the data. The pragmatic adoption of such a simplified approach, compared with a more relaxed scenario interconnecting all sites, was very beneficial during the commissioning of the ATLAS distributed computing system and essential in reducing the operational cost during the first two years of LHC data taking. In the meantime, networks evolved far beyond this initial scenario: while a few countries are still poorly connected with the rest of the WLCG infrastructure, most of the ATLAS computing centres are now efficiently interlinked. Our operational experience in running the computing infrastructure in ...

  3. Virtual Network Computing Testbed for Cybersecurity Research

    Science.gov (United States)

    2015-08-17

    …traffic on the network, either by using mathematical formulas or by replaying packet streams. As a result, simulators depend deeply on the assumptions… traffic, not simulated packet streams, and to enable real attacks to be launched. The need for realism eliminated network simulators from consideration

  4. Networked Computing in the 1990s.

    Science.gov (United States)

    Tesler, Lawrence G.

    1991-01-01

    The changes in the relationship between the computer and user from that of an isolated productivity tool to that of an active collaborator in the acquisition, use, and creation of information, as well as a facilitator of human interaction are discussed. The four paradigms of computing are compared. (KR)

  5. Cloud Computing in Mobile Communication Networks

    Institute of Scientific and Technical Information of China (English)

    Xinzhi Ouyang

    2011-01-01

    Cloud computing makes computing power universally available and provides flexibility in resource acquisition. It allows for scalable provision of services and more reasonable use of resources. This article considers cloud service deployment and virtualization from the perspective of mobile operators. A solution is proposed that allows mobile operators to maximize profits with minimal investment,

  6. Fundamentals of computational intelligence neural networks, fuzzy systems, and evolutionary computation

    CERN Document Server

    Keller, James M; Fogel, David B

    2016-01-01

    This book covers the three fundamental topics that form the basis of computational intelligence: neural networks, fuzzy systems, and evolutionary computation. The text focuses on inspiration, design, theory, and practical aspects of implementing procedures to solve real-world problems. While other books in the three fields that comprise computational intelligence are written by specialists in one discipline, this book is co-written by a current and former Editor-in-Chief of IEEE Transactions on Neural Networks and Learning Systems, a former Editor-in-Chief of IEEE Transactions on Fuzzy Systems, and the founding Editor-in-Chief of IEEE Transactions on Evolutionary Computation. The coverage across the three topics is both uniform and consistent in style and notation. Discusses single-layer and multilayer neural networks, radial-basis function networks, and recurrent neural networks. Covers fuzzy set theory, fuzzy relations, fuzzy logic inference, fuzzy clustering and classification, fuzzy measures and fuzz...

  8. Trajectory Based Optimal Segment Computation in Road Network Databases

    DEFF Research Database (Denmark)

    Li, Xiaohui; Ceikute, Vaida; Jensen, Christian S.

    2013-01-01

    Given a road network, a set of existing facilities, and a collection of customer route traversals, an optimal segment query returns the optimal road network segment(s) for a new facility. We propose a practical framework for computing this query, where each route… that are shown empirically to be scalable… that adopt different approaches to computing the query. Algorithm AUG uses graph augmentation, and ITE uses iterative road-network partitioning. Empirical studies with real data sets demonstrate that the algorithms are capable of offering high performance in realistic settings.

  9. An Experiment in Computer Conferencing Using a Local Area Network.

    Science.gov (United States)

    Baird, Patricia M.; Borer, Beatrice

    1987-01-01

    Describes various computer conferencing systems and discusses their effectiveness in terms of user acceptance and reactions to the technology. The methodology and findings of an experiment in which graduate students conducted a computer conference using a local area network and produced an electronic journal of the conference proceedings are…

  10. BECUN. The Educational Computer User's Network at Battelle.

    Science.gov (United States)

    Battelle Memorial Inst., Columbus, OH.

    The Educational Computer User's Network at Battelle Columbus Laboratories is a cooperative computer center effort between a group of Ohio colleges, secondary schools, and a large research-oriented organization. This description of the program includes the historical background, program concept, data processing development, hardware and software,…

  11. Recurrent Neural Network for Computing Outer Inverse.

    Science.gov (United States)

    Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin

    2016-05-01

    Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on the matrix-valued differential equation, a generalization of dynamic equations proposed earlier for the nonsingular matrix inversion, the Moore-Penrose inversion, as well as the Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural network are presented. Illustrative examples present the results of numerical simulations.
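
    A minimal sketch of the dynamic-equation idea behind such networks, restricted to the Moore-Penrose inverse of a full-column-rank matrix and integrated with explicit Euler steps; this is the classical gradient-type network, not the exact model proposed in the paper.

        # Gradient-type recurrent network for the Moore-Penrose inverse of a
        # full-column-rank matrix A: integrate dV/dt = -gamma * A.T @ (A @ V - I)
        # from the zero initial state. The fixed point is (A.T A)^{-1} A.T = pinv(A).

        import numpy as np

        def gradient_nn_pinv(A, gamma=1.0, dt=1e-3, steps=20000):
            m, n = A.shape
            V = np.zeros((n, m))               # zero initial state, as in the paper
            I = np.eye(m)
            for _ in range(steps):
                V += dt * (-gamma) * A.T @ (A @ V - I)
            return V

        A = np.array([[1.0, 2.0],
                      [0.0, 1.0],
                      [3.0, 1.0]])             # full column rank
        V = gradient_nn_pinv(A)
        print(np.allclose(V, np.linalg.pinv(A), atol=1e-4))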

  12. Active system area networks for data intensive computations. Final report

    Energy Technology Data Exchange (ETDEWEB)

    None

    2002-04-01

    The goal of the Active System Area Networks (ASAN) project is to develop hardware and software technologies for the implementation of active system area networks (ASANs). The use of the term "active" refers to the ability of the network interfaces to perform application-specific as well as system level computations in addition to their traditional role of data transfer. This project adopts the view that the network infrastructure should be an active computational entity capable of supporting certain classes of computations that would otherwise be performed on the host CPUs. The result is a unique network-wide programming model where computations are dynamically placed within the host CPUs or the NIs depending upon the quality of service demands and network/CPU resource availability. The project seeks to demonstrate that such an approach is a better match for data intensive network-based applications and that the advent of low-cost powerful embedded processors and configurable hardware makes such an approach economically viable and desirable.

  13. Optical interconnection networks for high-performance computing systems.

    Science.gov (United States)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  14. Recurrent kernel machines: computing with infinite echo state networks.

    Science.gov (United States)

    Hermans, Michiel; Schrauwen, Benjamin

    2012-01-01

    Echo state networks (ESNs) are large, random recurrent neural networks with a single trained linear readout layer. Despite the untrained nature of the recurrent weights, they are capable of performing universal computations on temporal input data, which makes them interesting for both theoretical research and practical applications. The key to their success lies in the fact that the network computes a broad set of nonlinear, spatiotemporal mappings of the input data, on which linear regression or classification can easily be performed. One could consider the reservoir as a spatiotemporal kernel, in which the mapping to a high-dimensional space is computed explicitly. In this letter, we build on this idea and extend the concept of ESNs to infinite-sized recurrent neural networks, which can be considered recursive kernels that subsequently can be used to create recursive support vector machines. We present the theoretical framework, provide several practical examples of recursive kernels, and apply them to typical temporal tasks.
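
    A compact NumPy sketch of the finite-size echo state network that the letter generalizes: a fixed random reservoir with a tanh nonlinearity and a linear readout fitted by ridge regression. The toy task (recalling the input from 5 steps earlier) and all hyperparameters are illustrative choices.

        # Finite-size ESN: random recurrent reservoir, trained linear readout.

        import numpy as np

        rng = np.random.default_rng(1)
        T, N, delay, washout = 4000, 200, 5, 200

        u = rng.uniform(-1, 1, T)                  # scalar input sequence
        target = np.roll(u, delay)                 # task: recall u(t - delay)

        W_in = rng.uniform(-0.5, 0.5, (N, 1))
        W = rng.normal(0, 1, (N, N))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1

        states = np.zeros((T, N))
        x = np.zeros(N)
        for t in range(T):
            x = np.tanh(W @ x + W_in[:, 0] * u[t])
            states[t] = x

        # Ridge-regression readout on the post-washout states.
        X, y = states[washout:], target[washout:]
        ridge = 1e-6
        W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)

        pred = X @ W_out
        print(f"NRMSE on the training span: {np.sqrt(np.mean((pred - y) ** 2)) / np.std(y):.3f}")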

  15. 1st International Conference on Signal, Networks, Computing, and Systems

    CERN Document Server

    Mohapatra, Durga; Nagar, Atulya; Sahoo, Manmath

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented in the first International Conference on Signal, Networks, Computing, and Systems (ICSNCS 2016) held at Jawaharlal Nehru University, New Delhi, India during February 25–27, 2016. The book is organized in to two volumes and primarily focuses on theory and applications in the broad areas of communication technology, computer science and information security. The book aims to bring together the latest scientific research works of academic scientists, professors, research scholars and students in the areas of signal, networks, computing and systems detailing the practical challenges encountered and the solutions adopted.

  16. Console Networks for Major Computer Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ophir, D; Shepherd, B; Spinrad, R J; Stonehill, D

    1966-07-22

    A concept for interactive time-sharing of a major computer system is developed in which satellite computers mediate between the central computing complex and the various individual user terminals. These techniques allow the development of a satellite system substantially independent of the details of the central computer and its operating system. Although the user terminals' roles may be rich and varied, the demands on the central facility are merely those of a tape drive or similar batched information transfer device. The particular system under development provides service for eleven visual display and communication consoles, sixteen general purpose, low rate data sources, and up to thirty-one typewriters. Each visual display provides a flicker-free image of up to 4000 alphanumeric characters or tens of thousands of points by employing a swept raster picture generating technique directly compatible with that of commercial television. Users communicate either by typewriter or a manually positioned light pointer.

  17. Electromagnetic field computation by network methods

    CERN Document Server

    Felsen, Leopold B; Russer, Peter

    2009-01-01

    This monograph proposes a systematic and rigorous treatment of electromagnetic field representations in complex structures. The book presents new strong models by combining important computational methods. This is the last book of the late Leopold Felsen.

  18. CX: A Scalable, Robust Network for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Peter Cappello

    2002-01-01

    Full Text Available CX, a network-based computational exchange, is presented. The system's design integrates variations of ideas from other researchers, such as work stealing, non-blocking tasks, eager scheduling, and space-based coordination. The object-oriented API is simple, compact, and cleanly separates application logic from the logic that supports interprocess communication and fault tolerance. Computations, of course, run to completion in the presence of computational hosts that join and leave the ongoing computation. Such hosts, or producers, use task caching and prefetching to overlap computation with interprocessor communication. To break a potential task server bottleneck, a network of task servers is presented. Even though task servers are envisioned as reliable, the self-organizing, scalable network of n servers, described as a sibling-connected height-balanced fat tree, tolerates a sequence of n-1 server failures. Tasks are distributed throughout the server network via a simple "diffusion" process. CX is intended as a test bed for research on automated silent auctions, reputation services, authentication services, and bonding services. CX also provides a test bed for algorithm research into network-based parallel computation.

  19. The Role of Networks in Cloud Computing

    Science.gov (United States)

    Lin, Geng; Devine, Mac

    The confluence of technology advancements and business developments in Broadband Internet, Web services, computing systems, and application software over the past decade has created a perfect storm for cloud computing. The "cloud model" of delivering and consuming IT functions as services is poised to fundamentally transform the IT industry and rebalance the inter-relationships among end users, enterprise IT, software companies, and the service providers in the IT ecosystem (Armbrust et al., 2009; Lin, Fu, Zhu, & Dasmalchi, 2009).

  20. Applying Web Services with Mobile Agents for Computer Network Management

    Directory of Open Access Journals (Sweden)

    Mydhili K.Nair

    2011-03-01

    Full Text Available The exponential rise in complexity of the underlying network elements of a computer network makes its management an intricate, multifaceted and complex problem to solve. With every passing decade, new technologies are developed to ease this problem of network management. The last decade of the pre-millennium era saw the peak of CORBA and Mobile Agent based implementations, while the first decade of the post-millennium saw the emergence of Web Services. All of these technologies evolved as independent, self-contained implementation streams. There is a genuine dearth in finding authentic research outcomes where quantifiable, measurable benefits of convergence of these technologies applied to Network Management are put forth. This paper aims to fill this research gap. Here we put forth the experimental results obtained from a framework we developed in-house for Network Management that combined two seemingly divergent distributed computing technologies, namely, Web Services and Mobile Agents.

  1. Coherent Computing with Injection-Locked Laser Network

    Science.gov (United States)

    Utsunomiya, S.; Wen, K.; Takata, K.; Tamate, S.; Yamamoto, Yoshihisa

    Combinatorial optimization problems are ubiquitous in our modern life. The classic examples include the protein folding in biology and medicine, the frequency assignment in wireless communications, traffic control and routing in air and on surface, microprocessor circuit design, computer vision and graph cut in machine learning, and social network control. They often belong to NP, NP-complete and NP-hard classes, for which modern digital computers and future quantum computers cannot find solutions efficiently, i.e. in polynomial time [1].

  2. Predictive Control of Networked Multiagent Systems via Cloud Computing.

    Science.gov (United States)

    Liu, Guo-Ping

    2017-01-18

    This paper studies the design and analysis of networked multiagent predictive control systems via cloud computing. A cloud predictive control scheme for networked multiagent systems (NMASs) is proposed to achieve consensus and stability simultaneously and to compensate for network delays actively. The design of the cloud predictive controller for NMASs is detailed. The analysis of the cloud predictive control scheme gives the necessary and sufficient conditions of stability and consensus of closed-loop networked multiagent control systems. The proposed scheme is verified to characterize the dynamical behavior and control performance of NMASs through simulations. The outcome provides a foundation for the development of cooperative and coordinative control of NMASs and its applications.

  3. State dependent computation using coupled recurrent networks

    CERN Document Server

    Rutishauser, Ueli

    2008-01-01

    Although conditional branching between possible behavioural states is a hallmark of intelligent behavior, very little is known about the neuronal mechanisms that support this processing. In a step toward solving this problem we demonstrate by theoretical analysis and simulation how networks of richly inter-connected neurons, such as those observed in the superficial layers of the neocortex, can embed reliable robust finite state machines. We show how a multi-stable neuronal network containing a number of states can be created very simply, by coupling two recurrent networks whose synaptic weights have been configured for soft winner-take-all (sWTA) performance. These two sWTAs have simple, homogenous locally recurrent connectivity except for a small fraction of recurrent cross-connections between them, which are used to embed the required states. This coupling between the maps allows the network to continue to express the current state even after the input that elicited that state is withdrawn. In addition, a s...

  4. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of the computational physicists utilizing high performance computing are analyzed by bibliometric approaches. This study aims at providing the computational physicists utilizing high-performance computing and policy planners with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers for high-performance computational physics as a case study. For this study, we used journal articles of the Scopus database from Elsevier covering the time period of 2004-2013. We extracted the author rank in the physics field utilizing high-performance computing by the number of papers published during ten years from 2004. Finally, we drew the co-authorship network for 45 top-authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.

  5. Associative Networks on a Massively Parallel Computer.

    Science.gov (United States)

    1985-10-01

    …the definition we give of associative networks eliminates words and meanings altogether in favor of numbers. It is assumed to be a function of a higher… (as a group of numbers, in this case), but this only leads to sensible queries when a statistical function is applied: "What is the largest salary…"

  6. A Novel Trusted Computing Model for Network Security Authentication

    Directory of Open Access Journals (Sweden)

    Ling Xing

    2014-02-01

    Full Text Available Network information poses great threats from malicious attacks due to the openness and virtuality of network structure. Traditional methods to ensure information security may fail when both integrity and source authentication for information are required. Based on the security of the data broadcast channel, a novel Trusted Computing Model (TCM) of network security authentication is proposed to enhance the security of network information. In this model, a method of Uniform content locator security Digital Certificate (UDC), which is capable of fully and uniquely indexing network information, is developed. The standard of MPEG-2 Transport Streams (TS) is adopted to pack UDC data. Additionally, a UDC hashing algorithm (UHA512) is designed to compute the integrity and security of data information. Experimental results show that the proposed model is feasible and effective for network security authentication.
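
    UHA512 is the paper's own construction; the sketch below only illustrates the general integrity-check pattern, using the standard SHA-512 from Python's hashlib as a stand-in, with hypothetical field names for a UDC-like record.

        # Integrity check pattern: a UDC-like record carries a content locator plus a
        # digest, and the receiver recomputes the digest before trusting the payload.

        import hashlib

        def make_udc_record(locator: str, payload: bytes) -> dict:
            return {"locator": locator, "digest": hashlib.sha512(payload).hexdigest()}

        def verify_udc_record(record: dict, payload: bytes) -> bool:
            return hashlib.sha512(payload).hexdigest() == record["digest"]

        record = make_udc_record("udc://example/stream/42", b"broadcast content block")
        print(verify_udc_record(record, b"broadcast content block"))   # True
        print(verify_udc_record(record, b"tampered content block"))    # False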

  7. FY 1999 Blue Book: Computing, Information, and Communications: Networked Computing for the 21st Century

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — U.S.research and development R and D in computing, communications, and information technologies has enabled unprecedented scientific and engineering advances,...

  8. Development of Computer Science Disciplines - A Social Network Analysis Approach

    CERN Document Server

    Pham, Manh Cuong; Jarke, Matthias

    2011-01-01

    In contrast to many other scientific disciplines, computer science considers conference publications. Conferences have the advantage of providing fast publication of papers and of bringing researchers together to present and discuss the paper with peers. Previous work on knowledge mapping focused on the map of all sciences or a particular domain based on ISI published JCR (Journal Citation Report). Although this data covers most of important journals, it lacks computer science conference and workshop proceedings. That results in an imprecise and incomplete analysis of the computer science knowledge. This paper presents an analysis on the computer science knowledge network constructed from all types of publications, aiming at providing a complete view of computer science research. Based on the combination of two important digital libraries (DBLP and CiteSeerX), we study the knowledge network created at journal/conference level using citation linkage, to identify the development of sub-disciplines. We investiga...

  9. Developing Computer Network Based on EIGRP Performance Comparison and OSPF

    Directory of Open Access Journals (Sweden)

    Lalu Zazuli Azhar Mardedi

    2015-09-01

    Full Text Available One of the computer network technologies growing rapidly at this time is the internet. In building networks, a routing mechanism is needed to integrate all computers with a high degree of flexibility. Routing is a major contributor to network performance. With many routing protocols available, network administrators need a reference comparison of the performance of each type of routing protocol. Two such protocols are the Enhanced Interior Gateway Routing Protocol (EIGRP) and Open Shortest Path First (OSPF). This paper focuses only on the performance of these two routing protocols on a hybrid network topology. The existing network service has an average internet access speed of 8.0 KB/sec on 2 MB of bandwidth. A backbone network is shared by two academies, the Academy of Information Management and Computer (AIMC) and the Academy of Secretary and Management (ASM), with 2041 clients, which caused slow internet access. To solve the problem, an analysis and comparison of the performance of EIGRP and OSPF is carried out. The simulation software Cisco Packet Tracer 6.0.1 is used to obtain the values and to verify the results.
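
    The article's comparison is done by simulation in Packet Tracer; as background on what ultimately drives the two protocols' route choices, the sketch below computes their default path metrics (Cisco's classic EIGRP composite metric with K1 = K3 = 1 and the OSPF reference-bandwidth cost). The example path values are invented.

        # Default path metrics for EIGRP and OSPF. Bandwidths in kbps, delays in
        # microseconds, as in Cisco interface defaults.

        def eigrp_metric(link_bandwidths_kbps, link_delays_usec):
            # Default K values (K1 = K3 = 1): 256 * (10^7 / slowest-link bandwidth
            # + cumulative delay expressed in tens of microseconds).
            bw_term = 10**7 // min(link_bandwidths_kbps)
            delay_term = sum(link_delays_usec) // 10
            return 256 * (bw_term + delay_term)

        def ospf_path_cost(link_bandwidths_kbps, reference_bw_bps=10**8):
            # Cost of each link = reference bandwidth / link bandwidth (minimum 1),
            # summed over the path.
            return sum(max(1, reference_bw_bps // (bw * 1000)) for bw in link_bandwidths_kbps)

        # Two-hop path: FastEthernet (100 Mbps, 100 us) then a 2 Mbps serial link (20000 us).
        path_bw = [100_000, 2_000]
        path_delay = [100, 20_000]
        print("EIGRP metric:", eigrp_metric(path_bw, path_delay))
        print("OSPF cost   :", ospf_path_cost(path_bw))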

  10. A computational model for cancer growth by using complex networks

    Science.gov (United States)

    Galvão, Viviane; Miranda, José G. V.

    2008-09-01

    In this work we propose a computational model to investigate the proliferation of cancerous cell by using complex networks. In our model the network represents the structure of available space in the cancer propagation. The computational scheme considers a cancerous cell randomly included in the complex network. When the system evolves the cells can assume three states: proliferative, non-proliferative, and necrotic. Our results were compared with experimental data obtained from three human lung carcinoma cell lines. The computational simulations show that the cancerous cells have a Gompertzian growth. Also, our model simulates the formation of necrosis, increase of density, and resources diffusion to regions of lower nutrient concentration. We obtain that the cancer growth is very similar in random and small-world networks. On the other hand, the topological structure of the small-world network is more affected. The scale-free network has the largest rates of cancer growth due to hub formation. Finally, our results indicate that for different average degrees the rate of cancer growth is related to the available space in the network.
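
    In the spirit of the model described above, a toy re-implementation on a small-world substrate built with networkx is sketched below; the state-update rules and probabilities are illustrative guesses rather than the paper's calibrated parameters.

        # Toy growth model: cells occupy network nodes and are proliferative (P),
        # non-proliferative (N), or necrotic (X). Rules and rates are illustrative.

        import random
        import networkx as nx

        PROLIF, NONPROLIF, NECROTIC = "P", "N", "X"

        def grow(graph, steps=30, p_divide=0.6, p_necrose=0.05, seed=0):
            rng = random.Random(seed)
            state = {rng.choice(list(graph)): PROLIF}      # one initial cancerous cell
            for _ in range(steps):
                for node, s in list(state.items()):
                    if s == PROLIF:
                        free = [n for n in graph[node] if n not in state]
                        if free and rng.random() < p_divide:
                            state[rng.choice(free)] = PROLIF   # divide into free space
                        elif not free:
                            state[node] = NONPROLIF            # no room left
                    elif s == NONPROLIF and rng.random() < p_necrose:
                        state[node] = NECROTIC                 # starved interior cells die
            return state

        G = nx.watts_strogatz_graph(n=500, k=6, p=0.1, seed=1)   # small-world substrate
        final = grow(G)
        counts = {s: sum(1 for v in final.values() if v == s) for s in (PROLIF, NONPROLIF, NECROTIC)}
        print(counts)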

  11. Six networks on a universal neuromorphic computing substrate

    Directory of Open Access Journals (Sweden)

    Thomas ePfeil

    2013-02-01

    Full Text Available In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality.

  12. Self-organized criticality in a computer network model

    Science.gov (United States)

    Yuan; Ren; Shan

    2000-02-01

    We study the collective behavior of computer network nodes by using a cellular automaton model. The results show that when the load of network is constant, the throughputs and buffer contents of nodes are power-law distributed in both space and time. Also the feature of 1/f noise appears in the power spectrum of the change of the number of nodes that bear a fixed part of the system load. It can be seen as yet another example of self-organized criticality. Power-law decay in the distribution of buffer contents implies that heavy network congestion occurs with small probability. The temporal power-law distribution for throughput might be a reasonable explanation for the observed self-similarity in computer network traffic.

  13. Wirelessly powered sensor networks and computational RFID

    CERN Document Server

    2013-01-01

    The Wireless Identification and Sensing Platform (WISP) is the first of a new class of RF-powered sensing and computing systems.  Rather than being powered by batteries, these sensor systems are powered by radio waves that are either deliberately broadcast or ambient.  Enabled by ongoing exponential improvements in the energy efficiency of microelectronics, RF-powered sensing and computing is rapidly moving along a trajectory from impossible (in the recent past), to feasible (today), toward practical and commonplace (in the near future). This book is a collection of key papers on RF-powered sensing and computing systems including the WISP.  Several of the papers grew out of the WISP Challenge, a program in which Intel Corporation donated WISPs to academic applicants who proposed compelling WISP-based projects.  The book also includes papers presented at the first WISP Summit, a workshop held in Berkeley, CA in association with the ACM Sensys conference, as well as other relevant papers. The book provides ...

  14. Computing Path Tables for Quickest Multipaths In Computer Networks

    Energy Technology Data Exchange (ETDEWEB)

    Grimmell, W.C.

    2004-12-21

    We consider the transmission of a message from a source node to a terminal node in a network with n nodes and m links where the message is divided into parts and each part is transmitted over a different path in a set of paths from the source node to the terminal node. Here each link is characterized by a bandwidth and delay. The set of paths together with their transmission rates used for the message is referred to as a multipath. We present two algorithms that produce a minimum-end-to-end message delay multipath path table that, for every message length, specifies a multipath that will achieve the minimum end-to-end delay. The algorithms also generate a function that maps the minimum end-to-end message delay to the message length. The time complexities of the algorithms are O(n^2((n^2/log n) + m) min(D_max, C_max)) and O(nm(C_max + n min(D_max, C_max))) when the link delays and bandwidths are non-negative integers. Here D_max and C_max are respectively the maximum link delay and the maximum link bandwidth, both assumed to be greater than zero.
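
    My reading of the delay model behind such path tables, sketched below and not the paper's algorithm: a path with total link delay d and bottleneck bandwidth b delivers x units of the message in time d + x/b, and for a fixed set of candidate paths the minimum end-to-end delay of a message of length sigma equalizes the finishing times of the paths actually used, which a small water-filling computation finds.

        # Optimal split of a message of length sigma over candidate paths, each
        # characterized by (total delay d, bottleneck bandwidth b).

        def multipath_delay(paths, sigma):
            """paths: list of (delay d, bandwidth b); returns (delay T, per-path split)."""
            paths = sorted(paths)                      # by increasing path delay
            for k in range(1, len(paths) + 1):
                used = paths[:k]
                total_bw = sum(b for _, b in used)
                # Finishing time T such that sum_i b_i * (T - d_i) = sigma over used paths.
                T = (sigma + sum(b * d for d, b in used)) / total_bw
                # Valid iff the next unused path would contribute nothing (T <= its delay).
                if k == len(paths) or T <= paths[k][0]:
                    split = [(d, b, b * (T - d)) for d, b in used]
                    return T, split

        paths = [(2.0, 10.0), (5.0, 20.0), (9.0, 5.0)]   # (delay, bandwidth) per path
        T, split = multipath_delay(paths, sigma=200.0)
        print(f"end-to-end delay: {T:.2f}")
        for d, b, x in split:
            print(f"  path(d={d}, b={b}) carries {x:.1f} units")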

  15. Wireless wearable network and wireless body-centric network for future wearable computer

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The wireless wearable network and wireless body-centric network can assist the user anywhere, at any time, in communicating with wireless components seamlessly. In this paper, the wireless wearable network and wireless body-centric network are discussed, and the frequency bands and the effect of the human body are estimated. Bluetooth and UWB technology can be used to construct the narrowband and broadband wireless wearable network and wireless body-centric network, respectively. Further, a narrowband wireless wearable network and wireless body-centric network based on Bluetooth technology has been constructed with an integrated planar inverted-F antenna, and the communication channel characteristics have been studied by measurement. The results point to the possibility of producing a prototype radio system that can be integrated with wearable computers, using suitable wireless technologies developed and applied to facilitate reliable and continuous connectivity between the system units.

  16. Efficient Capacity Computation and Power Optimization for Relay Networks

    CERN Document Server

    Parvaresh, Farzad

    2011-01-01

    The capacity or approximations to capacity of various single-source single-destination relay network models has been characterized in terms of the cut-set upper bound. In principle, a direct computation of this bound requires evaluating the cut capacity over exponentially many cuts. We show that the minimum cut capacity of a relay network under some special assumptions can be cast as a minimization of a submodular function, and as a result, can be computed efficiently. We use this result to show that the capacity, or an approximation to the capacity within a constant gap for the Gaussian, wireless erasure, and Avestimehr-Diggavi-Tse deterministic relay network models can be computed in polynomial time. We present some empirical results showing that computing constant-gap approximations to the capacity of Gaussian relay networks with around 300 nodes can be done in order of minutes. For Gaussian networks, cut-set capacities are also functions of the powers assigned to the nodes. We consider a family of power o...

  17. Computational analysis of light scattering from collagen fiber networks

    Science.gov (United States)

    Arifler, Dizem; Pavlova, Ina; Gillenwater, Ann; Richards-Kortum, Rebecca

    2007-07-01

    Neoplastic progression in epithelial tissues is accompanied by structural and morphological changes in the stromal collagen matrix. We used the Finite-Difference Time-Domain (FDTD) method, a popular computational technique for full-vector solution of complex problems in electromagnetics, to establish a relationship between structural properties of collagen fiber networks and light scattering, and to analyze how neoplastic changes alter stromal scattering properties. To create realistic collagen network models, we acquired optical sections from the stroma of fresh normal and neoplastic oral cavity biopsies using fluorescence confocal microscopy. These optical sections were then processed to construct three-dimensional collagen networks of different sizes as FDTD model input. Image analysis revealed that volume fraction of collagen fibers in the stroma decreases with neoplastic progression, and statistical texture features computed suggest that fibers tend to be more disconnected in neoplastic stroma. The FDTD modeling results showed that neoplastic fiber networks have smaller scattering cross-sections compared to normal networks of the same size, whereas high-angle scattering probabilities tend to be higher for neoplastic networks. Characterization of stromal scattering is expected to provide a basis to better interpret spectroscopic optical signals and to develop more reliable computational models to describe photon propagation in epithelial tissues.

  18. Energy Aware Computing in Cooperative Wireless Networks

    DEFF Research Database (Denmark)

    Olsen, Anders Brødløs; Fitzek, Frank H. P.; Koch, Peter

    2005-01-01

    In this work the idea of cooperation is applied to wireless communication systems. It is generally accepted that energy consumption is a significant design constraint for mobile handheld systems. We propose a novel method of cooperative task computing by distributing tasks among terminals over...... the unreliable wireless link. Principles of multi–processor energy aware task scheduling are used exploiting performance scalable technologies such as Dynamic Voltage Scaling (DVS). We introduce a novel mechanism referred to as D2VS and here it is shown by means of simulation that savings of 40% can be achieved....

  19. Machine learning based Intelligent cognitive network using fog computing

    Science.gov (United States)

    Lu, Jingyang; Li, Lun; Chen, Genshe; Shen, Dan; Pham, Khanh; Blasch, Erik

    2017-05-01

    In this paper, a Cognitive Radio Network (CRN) based on artificial intelligence is proposed to distribute the limited radio spectrum resources more efficiently. The CRN framework can analyze the time-sensitive signal data close to the signal source using fog computing with different types of machine learning techniques. Depending on the computational capabilities of the fog nodes, different features and machine learning techniques are chosen to optimize spectrum allocation. Also, the computing nodes send the periodic signal summary which is much smaller than the original signal to the cloud so that the overall system spectrum source allocation strategies are dynamically updated. Applying fog computing, the system is more adaptive to the local environment and robust to spectrum changes. As most of the signal data is processed at the fog level, it further strengthens the system security by reducing the communication burden of the communications network.

  20. High-speed packet switching network to link computers

    CERN Document Server

    Gerard, F M

    1980-01-01

    Virtually all of the experiments conducted at CERN use minicomputers today; some simply acquire data and store results on magnetic tape while others actually control experiments and help to process the resulting data. Currently there are more than two hundred minicomputers being used in the laboratory. In order to provide the minicomputer users with access to facilities available on mainframes and also to provide intercommunication between various experimental minicomputers, CERN opted for a packet switching network back in 1975. It was decided to use Modcomp II computers as switching nodes. The only software to be taken was a communications-oriented operating system called Maxcom. Today eight Modcomp II 16-bit computers plus six newer Classic minicomputers from Modular Computer Services have been purchased for the CERNET data communications networks. The current configuration comprises 11 nodes connecting more than 40 user machines to one another and to the laboratory's central computing facility. (0 refs).

  1. Test experience on an ultrareliable computer communication network

    Science.gov (United States)

    Abbott, L. W.

    1984-01-01

    The dispersed sensor processing mesh (DSPM) is an experimental, ultra-reliable, fault-tolerant computer communications network that exhibits an organic-like ability to regenerate itself after suffering damage. The regeneration is accomplished by two routines - grow and repair. This paper discusses the DSPM concept for achieving fault tolerance and provides a brief description of the mechanization of both the experiment and the six-node experimental network. The main topic of this paper is the system performance of the growth algorithm contained in the grow routine. The characteristics imbued to DSPM by the growth algorithm are also discussed. Data from an experimental DSPM network and software simulation of larger DSPM-type networks are used to examine the inherent limitation on growth time by the growth algorithm and the relationship of growth time to network size and topology.

  2. Navigating traditional chinese medicine network pharmacology and computational tools.

    Science.gov (United States)

    Yang, Ming; Chen, Jia-Lei; Xu, Li-Wen; Ji, Guang

    2013-01-01

    The concept of "network target" has ushered in a new era in the field of traditional Chinese medicine (TCM). As a new research approach, network pharmacology is based on the analysis of network models and systems biology. Taking advantage of advancements in systems biology, a high degree of integration data analysis strategy and interpretable visualization provides deeper insights into the underlying mechanisms of TCM theories, including the principles of herb combination, biological foundations of herb or herbal formulae action, and molecular basis of TCM syndromes. In this study, we review several recent developments in TCM network pharmacology research and discuss their potential for bridging the gap between traditional and modern medicine. We briefly summarize the two main functional applications of TCM network models: understanding/uncovering and predicting/discovering. In particular, we focus on how TCM network pharmacology research is conducted and highlight different computational tools, such as network-based and machine learning algorithms, and sources that have been proposed and applied to the different steps involved in the research process. To make network pharmacology research commonplace, some basic network definitions and analysis methods are presented.

  3. Navigating Traditional Chinese Medicine Network Pharmacology and Computational Tools

    Directory of Open Access Journals (Sweden)

    Ming Yang

    2013-01-01

    Full Text Available The concept of “network target” has ushered in a new era in the field of traditional Chinese medicine (TCM). As a new research approach, network pharmacology is based on the analysis of network models and systems biology. Taking advantage of advancements in systems biology, a high degree of integration data analysis strategy and interpretable visualization provides deeper insights into the underlying mechanisms of TCM theories, including the principles of herb combination, biological foundations of herb or herbal formulae action, and molecular basis of TCM syndromes. In this study, we review several recent developments in TCM network pharmacology research and discuss their potential for bridging the gap between traditional and modern medicine. We briefly summarize the two main functional applications of TCM network models: understanding/uncovering and predicting/discovering. In particular, we focus on how TCM network pharmacology research is conducted and highlight different computational tools, such as network-based and machine learning algorithms, and sources that have been proposed and applied to the different steps involved in the research process. To make network pharmacology research commonplace, some basic network definitions and analysis methods are presented.

  4. Computational properties of networks of synchronous groups of spiking neurons.

    Science.gov (United States)

    Dayhoff, Judith E

    2007-09-01

    We demonstrate a model in which synchronously firing ensembles of neurons are networked to produce computational results. Each ensemble is a group of biological integrate-and-fire spiking neurons, with probabilistic interconnections between groups. An analogy is drawn in which each individual processing unit of an artificial neural network corresponds to a neuronal group in a biological model. The activation value of a unit in the artificial neural network corresponds to the fraction of active neurons, synchronously firing, in a biological neuronal group. Weights of the artificial neural network correspond to the product of the interconnection density between groups, the group size of the presynaptic group, and the postsynaptic potential heights in the synchronous group model. All three of these parameters can modulate connection strengths between neuronal groups in the synchronous group models. We give an example of nonlinear classification (XOR) and a function approximation example in which the capability of the artificial neural network can be captured by a neural network model with biological integrate-and-fire neurons configured as a network of synchronously firing ensembles of such neurons. We point out that the general function approximation capability proven for feedforward artificial neural networks appears to be approximated by networks of neuronal groups that fire in synchrony, where the groups comprise integrate-and-fire neurons. We discuss the advantages of this type of model for biological systems, its possible learning mechanisms, and the associated timing relationships.
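
    A minimal numerical sketch of the analogy described above (not the authors' simulation code): the effective artificial-network weight is taken as the product of interconnection density, presynaptic group size and postsynaptic potential height, and a unit's activation as the fraction of neurons in the group firing synchronously. All function names and parameter values below are illustrative assumptions.

        # Illustrative mapping from synchronous-group parameters to ANN quantities.
        # Assumed relationship (from the abstract): weight = density * group_size * psp_height.

        def group_weight(density, group_size, psp_height):
            """Effective connection strength between two neuronal groups."""
            return density * group_size * psp_height

        def unit_activation(n_firing, group_size):
            """Activation of a 'unit' = fraction of neurons firing synchronously."""
            return n_firing / group_size

        # Example: two presynaptic groups driving one postsynaptic unit.
        w1 = group_weight(density=0.10, group_size=100, psp_height=0.5)   # -> 5.0
        w2 = group_weight(density=0.05, group_size=200, psp_height=0.5)   # -> 5.0
        x1 = unit_activation(n_firing=80, group_size=100)                 # -> 0.8
        x2 = unit_activation(n_firing=20, group_size=200)                 # -> 0.1
        print("net input to postsynaptic unit:", w1 * x1 + w2 * x2)       # -> 4.5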

  5. Propagation of computer virus both across the Internet and external computers: A complex-network approach

    Science.gov (United States)

    Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi; Jin, Jian; He, Li

    2014-08-01

    Based on the assumption that external computers (particularly, infected external computers) are connected to the Internet, and by considering the influence of the Internet topology on computer virus spreading, this paper establishes a novel computer virus propagation model with a complex-network approach. This model possesses a unique (viral) equilibrium which is globally attractive. Some numerical simulations are also given to illustrate this result. Further study shows that the computers with higher node degrees are more susceptible to infection than those with lower node degrees. In this regard, some appropriate protective measures are suggested.
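
    The sketch below is not the paper's differential-equation model; it is only a toy discrete-time SIS-style simulation on a scale-free graph, included to make the qualitative finding concrete that high-degree nodes end up infected more often than low-degree ones. The use of networkx and all rates are assumptions.

        import random
        import networkx as nx

        # Toy SIS-style virus spread on a scale-free topology (illustrative only).
        random.seed(1)
        G = nx.barabasi_albert_graph(n=500, m=2, seed=1)
        beta, gamma, steps = 0.05, 0.1, 200          # per-step infection / cure probabilities
        infected = set(random.sample(list(G.nodes()), 10))

        for _ in range(steps):
            new_inf = {v for u in infected for v in G[u] if random.random() < beta}
            cured = {u for u in infected if random.random() < gamma}
            infected = (infected | new_inf) - cured

        deg = dict(G.degree())
        hubs = [n for n in G if deg[n] >= 10]
        leaves = [n for n in G if deg[n] <= 2]
        print("hub infection rate :", sum(n in infected for n in hubs) / max(len(hubs), 1))
        print("leaf infection rate:", sum(n in infected for n in leaves) / max(len(leaves), 1))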

  6. High Energy Physics Experiments In Grid Computing Networks

    Directory of Open Access Journals (Sweden)

    Andrzej Olszewski

    2008-01-01

    Full Text Available The demand for computing resources used for detector simulations and data analysis in High Energy Physics (HEP) experiments is constantly increasing due to the development of studies of rare physics processes in particle interactions. The latest generation of experiments at the newly built LHC accelerator at CERN in Geneva is planning to use computing networks for their data processing needs. A Worldwide LHC Computing Grid (WLCG) organization has been created to develop a Grid with properties matching the needs of these experiments. In this paper we present the use of Grid computing by HEP experiments and describe activities at the participating computing centers with the case of the Academic Computing Center, ACK Cyfronet AGH, Kraków, Poland.

  7. Small-world networks in neuronal populations: a computational perspective.

    Science.gov (United States)

    Zippo, Antonio G; Gelsomino, Giuliana; Van Duin, Pieter; Nencini, Sara; Caramenti, Gian Carlo; Valente, Maurizio; Biella, Gabriele E M

    2013-08-01

    The analysis of the brain in terms of integrated neural networks may offer insights on the reciprocal relation between structure and information processing. Even with inherent technical limits, many studies acknowledge neuron spatial arrangements and communication modes as key factors. In this perspective, we investigated the functional organization of neuronal networks by explicitly assuming a specific functional topology, the small-world network. We developed two different computational approaches. Firstly, we asked whether neuronal populations actually express small-world properties during a definite task, such as a learning task. For this purpose we developed the Inductive Conceptual Network (ICN), which is a hierarchical bio-inspired spiking network, capable of learning invariant patterns by using variable-order Markov models implemented in its nodes. As a result, we actually observed small-world topologies during learning in the ICN. Speculating that the expression of small-world networks is not solely related to learning tasks, we then built a de facto network assuming that the information processing in the brain may occur through functional small-world topologies. In this de facto network, synchronous spikes reflected functional small-world network dependencies. In order to verify the consistency of the assumption, we tested the null-hypothesis by replacing the small-world networks with random networks. As a result, only small world networks exhibited functional biomimetic characteristics such as timing and rate codes, conventional coding strategies and neuronal avalanches, which are cascades of bursting activities with a power-law distribution. Our results suggest that small-world functional configurations are liable to underpin brain information processing at neuronal level.
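
    The two classic small-world signatures the study relies on (clustering far above that of a size-matched random graph, average path length close to it) can be checked with standard tools. The snippet below is only such a sanity check on a Watts-Strogatz graph using networkx; it is not the ICN or the spiking model of the paper.

        import networkx as nx

        def small_world_stats(G):
            """Average clustering and shortest-path length (largest component only)."""
            giant = G.subgraph(max(nx.connected_components(G), key=len))
            return nx.average_clustering(G), nx.average_shortest_path_length(giant)

        n, k, p = 1000, 10, 0.1
        sw = nx.watts_strogatz_graph(n, k, p, seed=0)
        rnd = nx.gnm_random_graph(n, sw.number_of_edges(), seed=0)

        for name, g in [("small-world", sw), ("random", rnd)]:
            c, l = small_world_stats(g)
            print("%-12s clustering=%.3f  avg path length=%.2f" % (name, c, l))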

  8. Identifying failure in a tree network of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Pinnow, Kurt W.; Wallenfelt, Brian P.

    2010-08-24

    Methods, parallel computers, and products are provided for identifying failure in a tree network of a parallel computer. The parallel computer includes one or more processing sets including an I/O node and a plurality of compute nodes. For each processing set embodiments include selecting a set of test compute nodes, the test compute nodes being a subset of the compute nodes of the processing set; measuring the performance of the I/O node of the processing set; measuring the performance of the selected set of test compute nodes; calculating a current test value in dependence upon the measured performance of the I/O node of the processing set, the measured performance of the set of test compute nodes, and a predetermined value for I/O node performance; and comparing the current test value with a predetermined tree performance threshold. If the current test value is below the predetermined tree performance threshold, embodiments include selecting another set of test compute nodes. If the current test value is not below the predetermined tree performance threshold, embodiments include selecting from the test compute nodes one or more potential problem nodes and testing individually potential problem nodes and links to potential problem nodes.
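
    A schematic sketch of the decision loop described in the abstract. The performance probes and the way the test value is combined from them are placeholders (the abstract does not specify them), so this only illustrates the control flow, not the patented method.

        import random

        random.seed(0)

        def measure_io_node():
            return random.uniform(0.8, 1.0)                  # stand-in for an I/O performance probe

        def measure_nodes(nodes):
            return {n: random.uniform(0.5, 1.0) for n in nodes}

        def test_value(io_perf, node_perf, expected_io=1.0):
            # Placeholder formula: observed performance relative to expected I/O performance.
            return (io_perf / expected_io) * (sum(node_perf.values()) / len(node_perf))

        def find_suspects(compute_nodes, threshold=0.7, subset_size=4):
            """Follows the loop in the abstract: while the test value stays below the
            threshold, draw another subset; otherwise single out potential problem nodes."""
            remaining = list(compute_nodes)
            while remaining:
                subset, remaining = remaining[:subset_size], remaining[subset_size:]
                perf = measure_nodes(subset)
                if test_value(measure_io_node(), perf) < threshold:
                    continue                                 # below threshold: try another subset
                return sorted(perf, key=perf.get)[:2]        # flag the slowest nodes for testing
            return []

        print("potential problem nodes:", find_suspects(range(16)))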

  9. A computational study of routing algorithms for realistic transportation networks

    Energy Technology Data Exchange (ETDEWEB)

    Jacob, R.; Marathe, M.V.; Nagel, K.

    1998-12-01

    The authors carry out an experimental analysis of a number of shortest path (routing) algorithms investigated in the context of the TRANSIMS (Transportation Analysis and Simulation System) project. The main focus of the paper is to study how various heuristic and exact solutions and their associated data structures affect the computational performance of software developed specifically for realistic transportation networks. For this purpose the authors used the Dallas Fort-Worth road network with a very high degree of resolution. The following general results are obtained: (1) they discuss and experimentally analyze various one-to-one shortest path algorithms, including classical exact algorithms studied in the literature as well as heuristic solutions designed to take into account the geometric structure of the input instances; (2) they describe a number of extensions to the basic shortest path algorithm, primarily motivated by practical problems arising in TRANSIMS and ITS (Intelligent Transportation Systems) related technologies. The extensions discussed include (i) time-dependent networks, (ii) multi-modal networks, and (iii) networks with public transportation and associated schedules. Computational results are provided to empirically compare the efficiency of the various algorithms. The studies indicate that a modified Dijkstra's algorithm is computationally fast and an excellent candidate for use in various transportation planning applications as well as ITS related technologies.
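
    The abstract singles out a modified Dijkstra's algorithm as the practical winner. The snippet below is only the textbook heap-based version, to make concrete what is being benchmarked; none of the TRANSIMS-specific modifications (time dependence, multiple modes, schedules) are included, and the toy network is invented.

        import heapq

        def dijkstra(adj, source):
            """Textbook one-to-all shortest paths; adj[u] = [(v, weight), ...]."""
            dist = {source: 0.0}
            heap = [(0.0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue                                  # stale heap entry
                for v, w in adj.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(heap, (nd, v))
            return dist

        # Tiny directed example: node -> [(neighbour, travel time), ...]
        road = {"a": [("b", 4), ("c", 2)], "c": [("b", 1), ("d", 7)], "b": [("d", 3)]}
        print(dijkstra(road, "a"))    # {'a': 0.0, 'c': 2.0, 'b': 3.0, 'd': 6.0}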

  10. Computation of gradually varied flow in compound open channel networks

    Indian Academy of Sciences (India)

    H Prashanth Reddy; M Hanif Chaudhry; Jasim Imran

    2014-12-01

    Although natural channels are rarely rectangular or trapezoidal in cross section, these cross sections are assumed for the computation of steady, gradually varied flow in open channel networks. The accuracy of the computed results, therefore, becomes questionable due to differences in the hydraulic and geometric characteristics of the main channel and floodplains. To overcome these limitations, an algorithm is presented in this paper to compute steady, gradually varied flow in an open-channel network with compound cross sections. As compared to the presently available methods, the methodology is more general and suitable for application to compound and trapezoidal channel cross sections in series channels, tree-type or looped networks. In this method, the energy and continuity equations are solved for steady, gradually varied flow by the Newton–Raphson method and the proposed methodology is applied to tree-type and looped-channel networks. An algorithm is presented to determine multiple critical depths in a compound channel. Modifications in channel geometry are presented to avoid the occurrence of multiple critical depths. The occurrence of only one critical depth in a compound cross section with modified geometry is demonstrated for a tree-type channel network.
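
    A toy illustration of the Newton-Raphson building block used by the algorithm: it solves Manning's equation for normal depth in a single trapezoidal channel, which is far simpler than the compound-channel network solver of the paper. All symbols and numerical values are assumptions.

        from math import sqrt

        # Manning: Q = (1/n) * A * R^(2/3) * sqrt(S), with A = (b + m*y)*y,
        # P = b + 2*y*sqrt(1 + m^2) and R = A/P. Solve f(y) = 0 for the depth y.
        b, m, n, S, Q = 5.0, 1.5, 0.03, 0.001, 20.0   # width, side slope, roughness, slope, discharge

        def f(y):
            A = (b + m * y) * y
            P = b + 2.0 * y * sqrt(1.0 + m * m)
            return A ** (5.0 / 3.0) / P ** (2.0 / 3.0) - Q * n / sqrt(S)

        def newton(func, y0, tol=1e-8, h=1e-6):
            y = y0
            for _ in range(100):
                dfdy = (func(y + h) - func(y - h)) / (2.0 * h)   # numerical derivative
                step = func(y) / dfdy
                y -= step
                if abs(step) < tol:
                    break
            return y

        print("normal depth ~ %.3f m" % newton(f, y0=1.0))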

  11. Hybrid computing using a neural network with dynamic external memory.

    Science.gov (United States)

    Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis

    2016-10-27

    Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.

  12. Recurrent Neural Network for Computing the Drazin Inverse.

    Science.gov (United States)

    Stanimirović, Predrag S; Zivković, Ivan S; Wei, Yimin

    2015-11-01

    This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. This recurrent neural network (RNN) is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved. In this way, the computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for an implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for the Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN as well as its convergence toward the Drazin inverse are considered. In addition, illustrative examples and examples of application to the practical engineering problems are discussed to show the efficacy of the proposed neural network.

  13. Using a Local Area Network to Teach Computer Revision Skills.

    Science.gov (United States)

    Thompson, Diane P.

    1989-01-01

    Describes the use of a local area network and video switching equipment in teaching revision skills on computer. Explains that reading stories from texts, rewriting them from differing character viewpoints, and editing them as a group exposed students to a variety of writing problems and stimulated various revision strategies. (SG)

  14. Computational neural networks driving complex analytical problem solving.

    Science.gov (United States)

    Hanrahan, Grady

    2010-06-01

    Neural network computing demonstrates advanced analytical problem solving abilities to meet the demands of modern chemical research. (To listen to a podcast about this article, please go to the Analytical Chemistry multimedia page at pubs.acs.org/page/ancham/audio/index.html.)

  15. Efficient Computation of Distance Sketches in Distributed Networks

    CERN Document Server

    Sarma, Atish Das; Pandurangan, Gopal

    2011-01-01

    Distance computation is one of the most fundamental primitives used in communication networks. The cost of effectively and accurately computing pairwise network distances can become prohibitive in large-scale networks such as the Internet and Peer-to-Peer (P2P) networks. To negotiate the rising need for very efficient distance computation, approximation techniques for numerous variants of this question have recently received significant attention in the literature. The goal is to preprocess the graph and store a small amount of information such that whenever a query for any pairwise distance is issued, the distance can be well approximated (i.e., with small stretch) very quickly in an online fashion. Specifically, the pre-processing (usually) involves storing a small sketch with each node, such that at query time only the sketches of the concerned nodes need to be looked up to compute the approximate distance. In this paper, we present the first theoretical study of distance sketches derived from distance ora...
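
    The paper's sketches come with provable stretch guarantees; the snippet below only illustrates the general landmark idea behind such schemes (each node stores its distance to a few random landmarks, and a query is answered by triangulating through the best common landmark). It is not the construction analysed in the paper, and networkx is an assumed dependency.

        import random
        import networkx as nx

        def build_sketches(G, num_landmarks=5, seed=0):
            """Each node stores its distance to a small set of random landmark nodes."""
            random.seed(seed)
            landmarks = random.sample(list(G.nodes()), num_landmarks)
            dist_from = {l: nx.single_source_shortest_path_length(G, l) for l in landmarks}
            return {v: {l: dist_from[l][v] for l in landmarks if v in dist_from[l]}
                    for v in G.nodes()}

        def query(sketches, u, v):
            """Upper bound on d(u, v) via the best common landmark."""
            common = sketches[u].keys() & sketches[v].keys()
            return min(sketches[u][l] + sketches[v][l] for l in common)

        G = nx.erdos_renyi_graph(200, 0.05, seed=1)
        sk = build_sketches(G)
        print("estimate:", query(sk, 3, 117), " exact:", nx.shortest_path_length(G, 3, 117))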

  16. Fish species recognition using computer vision and a neural network

    NARCIS (Netherlands)

    Storbeck, F.; Daan, B.

    2001-01-01

    A system is described to recognize fish species by computer vision and a neural network program. The vision system measures a number of features of fish as seen by a camera perpendicular to a conveyor belt. The features used here are the widths and heights at various locations along the fish. First

  17. The Poor Man's Guide to Computer Networks and their Applications

    DEFF Research Database (Denmark)

    Sharp, Robin

    2003-01-01

    These notes for DTU course 02220, Concurrent Programming, give an introduction to computer networks, with focus on the modern Internet. Basic Internet protocols such as IP, TCP and UDP are presented, and two Internet application protocols, SMTP and HTTP, are described in some detail. Techniques...

  18. Improving a Computer Networks Course Using the Partov Simulation Engine

    Science.gov (United States)

    Momeni, B.; Kharrazi, M.

    2012-01-01

    Computer networks courses are hard to teach as there are many details in the protocols and techniques involved that are difficult to grasp. Employing programming assignments as part of the course helps students to obtain a better understanding and gain further insight into the theoretical lectures. In this paper, the Partov simulation engine and…

  20. Computing Nash Equilibrium in Wireless Ad Hoc Networks

    DEFF Research Database (Denmark)

    Bulychev, Peter E.; David, Alexandre; Larsen, Kim G.

    2012-01-01

    This paper studies the problem of computing Nash equilibrium in wireless networks modeled by Weighted Timed Automata. Such formalism comes together with a logic that can be used to describe complex features such as timed energy constraints. Our contribution is a method for solving this problem us...

  1. An Analysis of Attitudes toward Computer Networks and Internet Addiction.

    Science.gov (United States)

    Tsai, Chin-Chung; Lin, Sunny S. J.

    The purpose of this study was to explore the interplay between young people's attitudes toward computer networks and Internet addiction. After analyzing questionnaire responses of an initial sample of 615 Taiwanese high school students, 78 subjects, viewed as possible Internet addicts, were selected for further explorations. It was found that…

  2. Computer-Supported Modelling of Multi modal Transportation Networks Rationalization

    Directory of Open Access Journals (Sweden)

    Ratko Zelenika

    2007-09-01

    Full Text Available This paper deals with issues of shaping and functioning of computer programs in the modelling and solving of multimodal transportation network problems. A methodology of an integrated use of a programming language for mathematical modelling is defined, as well as spreadsheets for the solving of complex multimodal transportation network problems. The paper contains a comparison of the partial and integral methods of solving multimodal transportation networks. The basic hypothesis set forth in this paper is that the integral method results in better multimodal transportation network rationalization effects, whereas a multimodal transportation network model based on the integral method, once built, can be used as the basis for all kinds of transportation problems within multimodal transport. As opposed to linear transport problems, a multimodal transport network can assume very complex shapes. This paper contains a comparison of the partial and integral approach to transportation network solving. In the partial approach, a straightforward model of a transportation network, which can be solved through the use of the Solver computer tool within the Excel spreadsheet interface, is quite sufficient. In the solving of a multimodal transportation problem through the integral method, it is necessary to apply sophisticated mathematical modelling programming languages which support the use of complex matrix functions and the processing of a vast amount of variables and limitations. The LINGO programming language is more abstract than the Excel spreadsheet, and it requires a certain programming knowledge. The definition and presentation of a problem logic within Excel, in a manner which is acceptable to computer software, is an ideal basis for modelling in the LINGO programming language, as well as a faster and more effective implementation of the mathematical model. This paper provides proof for the fact that it is more rational to solve the problem of multimodal transportation networks by

  3. Computational neuropsychiatry – schizophrenia as a cognitive brain network disorder

    Directory of Open Access Journals (Sweden)

    Maria R Dauvermann

    2014-03-01

    Full Text Available Computational modelling of functional brain networks has advanced the understanding of higher cognitive function. It is hypothesised that functional networks mediating higher cognitive processes are disrupted in people with schizophrenia. In this article, we review studies that applied measures of functional and effective connectivity to fMRI data during cognitive tasks, in particular working memory fMRI studies. We provide a conceptual summary of the main findings in fMRI data and their relationship with neurotransmitter systems, which are known to be altered in individuals with schizophrenia. We consider possible developments in computational neuropsychiatry, which are likely to further our understanding of how functional networks are altered in schizophrenia.

  4. Computer network time synchronization the network time protocol on earth and in space

    CERN Document Server

    Mills, David L

    2010-01-01

    Carefully coordinated, reliable, and accurate time synchronization is vital to a wide spectrum of fields-from air and ground traffic control, to buying and selling goods and services, to TV network programming. Ill-gotten time could even lead to the unimaginable and cause DNS caches to expire, leaving the entire Internet to implode on the root servers.Written by the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol on Earth and in Space, Second Edition addresses the technological infrastructure of time dissemination, distrib

  5. Optimal computation of symmetric Boolean functions in Tree networks

    CERN Document Server

    Kowshik, Hemant

    2010-01-01

    In this paper, we address the scenario where nodes with sensor data are connected in a tree network, and every node wants to compute a given symmetric Boolean function of the sensor data. We first consider the problem of computing a function of two nodes with integer measurements. We allow for block computation to enhance data fusion efficiency, and determine the minimum worst-case total number of bits to be exchanged to perform the desired computation. We establish lower bounds using fooling sets, and provide a novel scheme which attains the lower bounds, using information theoretic tools. For a class of functions called sum-threshold functions, this scheme is shown to be optimal. We then turn to tree networks and derive a lower bound for the number of bits exchanged on each link by viewing it as a two node problem. We show that the protocol of recursive in-network aggregation achieves this lower bound in the case of sum-threshold functions. Thus we have provided a communication and in-network computation stra...
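
    A minimal sketch of the recursive in-network aggregation the result builds on: for a sum-threshold function it suffices for every node to forward its partial sum clipped at the threshold, and the root then evaluates the function. The tree, the measurements and the threshold below are invented, and the sketch makes no attempt at the paper's bit-level optimality.

        # f(x) = 1 iff the sum of all node measurements reaches the threshold.
        # Each node sends min(partial_sum, threshold) upward, which is all the parent needs.

        def aggregate(tree, values, node, threshold):
            partial = values[node]
            for child in tree.get(node, []):
                partial += aggregate(tree, values, child, threshold)
            return min(partial, threshold)          # clipping keeps messages bounded

        tree = {0: [1, 2], 1: [3, 4], 2: [5]}       # adjacency: parent -> children
        values = {0: 1, 1: 0, 2: 1, 3: 1, 4: 0, 5: 1}
        threshold = 3
        print("f(x) =", int(aggregate(tree, values, 0, threshold) >= threshold))   # sum is 4 -> 1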

  6. Security Technologies of Computer Network (计算机网络安全技术)

    Institute of Scientific and Technical Information of China (English)

    罗明宇; 卢锡城; 卢泽新; 韩亚欣

    2000-01-01

    With the development of computer networks, the requirements for computer network security have become more and more urgent. In this paper, the goals of network security are reviewed. Several network attack methods, such as interruption, interception, modification and fabrication, are studied. Network security technologies, such as security mechanisms, encryption, security detection and firewalls, are also discussed.

  7. Connect the dot: Computing feed-links for network extension

    Directory of Open Access Journals (Sweden)

    Boris Aronov

    2011-12-01

    Full Text Available Road network analysis can require distances from points that are not on the network themselves. We study the algorithmic problem of connecting a point inside a face (region) of the road network to its boundary while minimizing the detour factor of that point to any point on the boundary of the face. We show that the optimal single connection (feed-link) can be computed in O(lambda_7(n) log n) time, where n is the number of vertices that bounds the face and lambda_7(n) is the slightly superlinear maximum length of a Davenport–Schinzel sequence of order 7 on n symbols. We also present approximation results for placing more feed-links, deal with the case that there are obstacles in the face of the road network that contains the point to be connected, and present various related results.
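
    This is not the O(lambda_7(n) log n) algorithm of the paper, only a brute-force check that makes the optimisation target concrete: for a candidate feed-link from the interior point p to a boundary vertex, the detour factor is the worst ratio of the distance via the feed-link and along the boundary to the straight-line distance, here evaluated at polygon vertices only. The polygon and point are invented.

        from math import hypot

        def detour_factor(poly, p, anchor_idx):
            """Worst detour from p, via a feed-link to poly[anchor_idx], to any other vertex."""
            n = len(poly)
            edge = [hypot(poly[(i + 1) % n][0] - poly[i][0],
                          poly[(i + 1) % n][1] - poly[i][1]) for i in range(n)]
            total = sum(edge)
            cum = [0.0]
            for e in edge[:-1]:
                cum.append(cum[-1] + e)              # boundary distance from vertex 0 to vertex i
            link = hypot(p[0] - poly[anchor_idx][0], p[1] - poly[anchor_idx][1])
            worst = 1.0
            for q in range(n):
                if q == anchor_idx:
                    continue
                along = abs(cum[q] - cum[anchor_idx])
                boundary = min(along, total - along)         # walk either way around the face
                euclid = hypot(p[0] - poly[q][0], p[1] - poly[q][1])
                worst = max(worst, (link + boundary) / euclid)
            return worst

        square = [(0, 0), (4, 0), (4, 4), (0, 4)]
        p = (1.0, 1.0)
        best = min(range(len(square)), key=lambda i: detour_factor(square, p, i))
        print("best feed-link anchor:", square[best])        # -> (0, 0) for this example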

  8. Global tree network for computing structures enabling global processing operations

    Science.gov (United States)

    Blumrich; Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.

    2010-01-19

    A system and method for enabling high-speed, low-latency global tree network communications among processing nodes interconnected according to a tree network structure. The global tree network enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the tree via links to facilitate performance of low-latency global processing operations at nodes of the virtual tree and sub-tree structures. The global operations performed include one or more of: broadcast operations downstream from a root node to leaf nodes of a virtual tree, reduction operations upstream from leaf nodes to the root node in the virtual tree, and point-to-point message passing from any node to the root node. The global tree network is configurable to provide global barrier and interrupt functionality in asynchronous or synchronized manner, and, is physically and logically partitionable.
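
    A minimal software analogue of the collective operations listed above: partial results are reduced up the tree toward the root and the result is broadcast back down. In the actual machine these steps are performed by the router hardware along the tree links; the code only shows the data flow on an invented tree.

        def reduce_up(tree, values, node, op):
            acc = values[node]
            for child in tree.get(node, []):
                acc = op(acc, reduce_up(tree, values, child, op))
            return acc

        def broadcast_down(tree, node, result, out):
            out[node] = result
            for child in tree.get(node, []):
                broadcast_down(tree, child, result, out)

        tree = {0: [1, 2], 1: [3, 4], 2: [5, 6]}      # node id -> children
        values = {n: n + 1 for n in range(7)}          # each node's local contribution
        total = reduce_up(tree, values, 0, op=lambda a, b: a + b)
        out = {}
        broadcast_down(tree, 0, total, out)
        print(out)                                     # every node ends up with the global sum 28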

  9. Advances in neural networks computational and theoretical issues

    CERN Document Server

    Esposito, Anna; Morabito, Francesco

    2015-01-01

    This book collects research works that exploit neural networks and machine learning techniques from a multidisciplinary perspective. Subjects covered include theoretical, methodological and computational topics which are grouped together into chapters devoted to the discussion of novelties and innovations related to the field of Artificial Neural Networks as well as the use of neural networks for applications, pattern recognition, signal processing, and special topics such as the detection and recognition of multimodal emotional expressions and daily cognitive functions, and  bio-inspired memristor-based networks.  Providing insights into the latest research interest from a pool of international experts coming from different research fields, the volume becomes valuable to all those with any interest in a holistic approach to implement believable, autonomous, adaptive, and context-aware Information Communication Technologies.

  10. Applying DNA computation to intractable problems in social network analysis.

    Science.gov (United States)

    Chen, Rick C S; Yang, Stephen J H

    2010-09-01

    From ancient times to the present day, social networks have played an important role in the formation of various organizations for a range of social behaviors. As such, social networks inherently describe the complicated relationships between elements around the world. Based on mathematical graph theory, social network analysis (SNA) has been developed in and applied to various fields such as Web 2.0 for Web applications and product developments in industries, etc. However, some definitions of SNA, such as finding a clique, N-clique, N-clan, N-club and K-plex, are NP-complete problems, which are not easily solved via traditional computer architecture. These challenges have restricted the uses of SNA. This paper provides DNA-computing-based approaches with inherently high information density and massive parallelism. Using these approaches, we aim to solve the three primary problems of social networks: N-clique, N-clan, and N-club. Their accuracy and feasible time complexities discussed in the paper will demonstrate that DNA computing can be used to facilitate the development of SNA.

  11. Efficient parameter sensitivity computation for spatially extended reaction networks

    Science.gov (United States)

    Lester, C.; Yates, C. A.; Baker, R. E.

    2017-01-01

    Reaction-diffusion models are widely used to study spatially extended chemical reaction systems. In order to understand how the dynamics of a reaction-diffusion model are affected by changes in its input parameters, efficient methods for computing parametric sensitivities are required. In this work, we focus on the stochastic models of spatially extended chemical reaction systems that involve partitioning the computational domain into voxels. Parametric sensitivities are often calculated using Monte Carlo techniques that are typically computationally expensive; however, variance reduction techniques can decrease the number of Monte Carlo simulations required. By exploiting the characteristic dynamics of spatially extended reaction networks, we are able to adapt existing finite difference schemes to robustly estimate parametric sensitivities in a spatially extended network. We show that algorithmic performance depends on the dynamics of the given network and the choice of summary statistics. We then describe a hybrid technique that dynamically chooses the most appropriate simulation method for the network of interest. Our method is tested for functionality and accuracy in a range of different scenarios.
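
    A much simplified, non-spatial illustration of the coupled finite-difference idea (not the authors' method): the sensitivity of E[X(T)] to a rate constant of a birth-death process is estimated from perturbed and unperturbed Gillespie runs driven by common random seeds, which can reduce the variance of the difference. All rates and the coupling scheme are assumptions.

        import random

        def birth_death(b, k, x0, T, rng):
            """Gillespie SSA with births at rate b and deaths at rate k*x; returns X(T)."""
            t, x = 0.0, x0
            while True:
                a0 = b + k * x
                if a0 == 0.0:
                    return x
                t += rng.expovariate(a0)
                if t > T:
                    return x
                x += 1 if rng.random() * a0 < b else -1

        def sensitivity(b, k, x0, T, dk=0.01, runs=1000):
            """Central finite-difference estimate of dE[X(T)]/dk with common random numbers."""
            diffs = []
            for i in range(runs):
                x_plus = birth_death(b, k + dk, x0, T, random.Random(i))   # same seed i ->
                x_minus = birth_death(b, k - dk, x0, T, random.Random(i))  # correlated paths
                diffs.append((x_plus - x_minus) / (2.0 * dk))
            return sum(diffs) / runs

        # Steady state is E[X] = b/k, so the long-time sensitivity is -b/k**2 = -250 here.
        print("estimated dE[X]/dk ~ %.0f" % sensitivity(b=10.0, k=0.2, x0=50, T=20.0))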

  12. Hand Gesture and Neural Network Based Human Computer Interface

    Directory of Open Access Journals (Sweden)

    Aekta Patel

    2014-06-01

    Full Text Available Computers are used by almost everyone, either at work or at home. Our aim is to make computers that can understand human language and to develop user-friendly human-computer interfaces (HCI). Human gestures are perceived by vision, and this research is concerned with recognizing human gestures in order to create an HCI. Coding these gestures into machine language demands a complex programming algorithm. In this project, we first detect, recognize and pre-process the hand gestures using a general method of recognition. We then extract properties of the recognized image and use them to control mouse movement, mouse clicks and the VLC media player. The same functions are subsequently implemented using a neural network technique and compared with the general recognition method, from which we conclude that the neural network technique performs better. Results based on the neural network technique and a comparison between the neural network and general methods are presented.

  13. A modular architecture for transparent computation in recurrent neural networks.

    Science.gov (United States)

    Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim

    2017-01-01

    Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, to which we would like to refer as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real-time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments.

  14. Review On Applications Of Neural Network To Computer Vision

    Science.gov (United States)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

    Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models and their applications in this field. A variety of neural models, such as associative memory, multilayer back-propagation perceptron, self-stabilized adaptive resonance network, hierarchical structured neocognitron, high order correlator, network with gating control and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking and other vision processes. Most of the algorithms have been simulated on computers. Some have been implemented with special hardware. Some systems use features, such as edges and profiles, of images as the data form for input. Other systems use raw data as input signals to the networks. We will present some novel ideas contained in these approaches and provide a comparison of these methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating those low-level functions into a high-level cognitive system, achieving invariances, and other problems. Perspectives on applications of some human vision models and neural network models are analyzed.

  15. Computing Tutte polynomials of contact networks in classrooms

    Science.gov (United States)

    Hincapié, Doracelly; Ospina, Juan

    2013-05-01

    Objective: The topological complexity of contact networks in classrooms and the potential transmission of an infectious disease were analyzed by sex and age. Methods: The Tutte polynomials, some topological properties and the number of spanning trees were used to algebraically compute the topological complexity. Computations were made with the Maple package GraphTheory. Published data of mutually reported social contacts within a classroom taken from primary school, consisting of children in the age ranges of 4-5, 7-8 and 10-11, were used. Results: The algebraic complexity of the Tutte polynomial and the probability of disease transmission increase with age. The contact networks are not bipartite graphs; gender segregation was observed, especially in younger children. Conclusion: Tutte polynomials are tools to understand the topology of contact networks and to derive numerical indexes of such topologies. It is possible to establish relationships between the Tutte polynomial of a given contact network and the potential transmission of an infectious disease within such a network.
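
    The study computes full Tutte polynomials with Maple's GraphTheory package; the snippet below only reproduces one of the quantities mentioned, the number of spanning trees (the Tutte evaluation T(1,1)), via Kirchhoff's matrix-tree theorem. The toy contact network is invented.

        import numpy as np

        def spanning_trees(nodes, edges):
            """Number of spanning trees = any cofactor of the graph Laplacian (Kirchhoff)."""
            idx = {v: i for i, v in enumerate(nodes)}
            L = np.zeros((len(nodes), len(nodes)))
            for u, v in edges:
                i, j = idx[u], idx[v]
                L[i, i] += 1
                L[j, j] += 1
                L[i, j] -= 1
                L[j, i] -= 1
            minor = np.delete(np.delete(L, 0, axis=0), 0, axis=1)
            return int(round(np.linalg.det(minor)))

        # Toy "classroom contact network": 4 pupils, mutually reported contacts as edges.
        nodes = ["a", "b", "c", "d"]
        edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]
        print(spanning_trees(nodes, edges))    # -> 8 for this graph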

  16. Spatial Analysis Along Networks Statistical and Computational Methods

    CERN Document Server

    Okabe, Atsuyuki

    2012-01-01

    In the real world, there are numerous and various events that occur on and alongside networks, including the occurrence of traffic accidents on highways, the location of stores alongside roads, the incidence of crime on streets and the contamination along rivers. In order to carry out analyses of those events, the researcher needs to be familiar with a range of specific techniques. Spatial Analysis Along Networks provides a practical guide to the necessary statistical techniques and their computational implementation. Each chapter illustrates a specific technique, from Stochastic Point Process

  17. Reducing Computational Overhead of Network Coding with Intrinsic Information Conveying

    DEFF Research Database (Denmark)

    Heide, Janus; Zhang, Qi; Pedersen, Morten V.;

    This paper investigated the possibility of intrinsic information conveying in network coding systems. The information is embedded into the coding vector by constructing the vector based on a set of predefined rules. This information can subsequently be retrieved by any receiver. The starting point is RLNC (Random Linear Network Coding) and the goal is to reduce the amount of coding operations both at the coding and decoding node, and at the same time remove the need for dedicated signaling messages. In a traditional RLNC system, coding operation takes up significant computational resources and adds...

  18. Dynamic Routing Protocol for Computer Networks with Clustering Topology

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    This paper presents a hierarchical dynamic routing protocol (HDRP) based on the discrete dynamic programming principle. The proposed protocol can adapt to dynamic and large computer networks (DLCN) with a clustering topology. The procedures for realizing routing update and routing decision are presented in this paper. A proof of correctness and a complexity analysis of the protocol are also given. The performance measures of the HDRP, including throughput and average message delay, are evaluated by simulation. The study shows that the HDRP provides a new available approach to routing decisions for DLCN or high-speed networks with a clustering topology.
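
    A compact reminder of the discrete dynamic-programming principle the protocol builds on, in the form of a distance-vector (distributed Bellman-Ford) update; the hierarchical, cluster-based organisation of the HDRP itself is not reproduced here, and the topology is invented.

        def distance_vector(links, nodes, rounds=None):
            """links[(u, v)] = cost of the bidirectional link u-v; returns all-pairs costs."""
            INF = float("inf")
            dist = {u: {v: (0 if u == v else INF) for v in nodes} for u in nodes}
            neigh = {u: [v for v in nodes if (u, v) in links or (v, u) in links] for u in nodes}
            cost = lambda u, v: links.get((u, v), links.get((v, u)))
            for _ in range(rounds or len(nodes)):          # |V| exchange rounds suffice here
                for u in nodes:
                    for v in neigh[u]:                     # u folds in neighbour v's vector
                        for dest in nodes:
                            dist[u][dest] = min(dist[u][dest], cost(u, v) + dist[v][dest])
            return dist

        nodes = ["A", "B", "C", "D"]
        links = {("A", "B"): 1, ("B", "C"): 2, ("C", "D"): 1, ("A", "D"): 10}
        print(distance_vector(links, nodes)["A"])          # costs from A: B=1, C=3, D=4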

  19. Smart photonic networks and computer security for image data

    Science.gov (United States)

    Campello, Jorge; Gill, John T.; Morf, Martin; Flynn, Michael J.

    1998-02-01

    Work reported here is part of a larger project on 'Smart Photonic Networks and Computer Security for Image Data', studying the interactions of coding and security, switching architecture simulations, and basic technologies. Coding and security: coding methods that are appropriate for data security in data fusion networks were investigated. These networks have several characteristics that distinguish them from other currently employed networks, such as Ethernet LANs or the Internet. The most significant characteristics are very high maximum data rates; predominance of image data; narrowcasting - transmission of data from one source to a designated set of receivers; data fusion - combining related data from several sources; and simple sensor nodes with limited buffering. These characteristics affect both the lower level network design and the higher level coding methods. Data security encompasses privacy, integrity, reliability, and availability. Privacy, integrity, and reliability can be provided through encryption and coding for error detection and correction. Availability is primarily a network issue; network nodes must be protected against failure or routed around in the case of failure. One of the more promising techniques is the use of 'secret sharing'. We consider this method as a special case of our new space-time code diversity based algorithms for secure communication. These algorithms enable us to exploit parallelism and scalable multiplexing schemes to build photonic network architectures. A number of very high-speed switching and routing architectures and their relationships with very high performance processor architectures were studied. Indications are that routers for very high speed photonic networks can be designed using the very robust and distributed TCP/IP protocol, if suitable processor architecture support is available.

  20. Computer, Network, Software, and Hardware Engineering with Applications

    CERN Document Server

    Schneidewind, Norman F

    2012-01-01

    There are many books on computers, networks, and software engineering but none that integrate the three with applications. Integration is important because, increasingly, software dominates the performance, reliability, maintainability, and availability of complex computer systems. Books on software engineering typically portray software as if it exists in a vacuum with no relationship to the wider system. This is wrong because a system is more than software. It is comprised of people, organizations, processes, hardware, and software. All of these components must be considered in an integr

  1. Assessing Database and Network Threats in Traditional and Cloud Computing

    Directory of Open Access Journals (Sweden)

    Katerina Lourida

    2015-05-01

    Full Text Available Cloud Computing is currently one of the most widely discussed terms in IT. While it offers a range of technological and financial benefits, its acceptance by organizations is not yet widespread. Security concerns are a main reason for this, and this paper studies the data and network threats posed in both traditional and cloud paradigms in an effort to assess in which areas cloud computing addresses security issues and where it introduces new ones. This evaluation is based on Microsoft’s STRIDE threat model and discusses the stakeholders, the impact and recommendations for tackling each threat.

  2. Advances in neural networks computational intelligence for ICT

    CERN Document Server

    Esposito, Anna; Morabito, Francesco; Pasero, Eros

    2016-01-01

    This carefully edited book puts emphasis on computational and artificial intelligence methods for learning and their applications in robotics, embedded systems, and ICT interfaces for psychological and neurological diseases. The book is a follow-up of the scientific workshop on Neural Networks (WIRN 2015) held in Vietri sul Mare, Italy, from the 20th to the 22nd of May 2015. The workshop, at its 27th edition, has become a traditional scientific event that brings together scientists from many countries and several scientific disciplines. Each chapter is an extended version of the original contribution presented at the workshop, and together with the reviewers’ peer revisions it also benefits from the live discussion during the presentation. The content of the book is organized in the following sections: 1. Introduction, 2. Machine Learning, 3. Artificial Neural Networks: Algorithms and models, 4. Intelligent Cyberphysical and Embedded System, 5. Computational Intelligence Methods for Biomedical ICT in...

  3. A Token Based Algorithm to Distributed Computation in Sensor Networks

    CERN Document Server

    Saligrama, Venkatesh

    2011-01-01

    We consider distributed algorithms for data aggregation and function computation in sensor networks. The algorithms perform pairwise computations along edges of an underlying communication graph. A token is associated with each sensor node, which acts as a transmission permit. Nodes with active tokens have transmission permits; they generate messages at a constant rate and send each message to a randomly selected neighbor. By using different strategies to control the transmission permits we can obtain tradeoffs between message and time complexity. Gossip corresponds to the case when all nodes have permits all the time. We study algorithms where permits are revoked after transmission and restored upon reception. Examples of such algorithms include Simple Random Walk (SRW), Coalescent Random Walk (CRW) and Controlled Flooding (CFLD) and their hybrid variants. SRW has a single node permit, which is passed on in the network. CRW initially has a permit for each node, but these permits are revoked gradually....
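
    A toy version of the single-permit (SRW) strategy described above: one token wanders over the communication graph and folds in each node's reading the first time that node is visited, so the permit-passing mechanics become visible. It is only an illustration, not the paper's analysis; the ring topology and readings are invented.

        import random

        def srw_average(adj, readings, start=0, seed=0):
            """Single random-walk token aggregates the network-wide average."""
            rng = random.Random(seed)
            node, visited = start, set()
            total, steps = 0.0, 0
            while len(visited) < len(readings):
                if node not in visited:
                    visited.add(node)
                    total += readings[node]
                node = rng.choice(adj[node])       # pass the transmission permit to a neighbour
                steps += 1
            return total / len(readings), steps

        n = 8                                       # ring of 8 sensor nodes
        adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
        readings = {i: 20.0 + i for i in range(n)}
        avg, steps = srw_average(adj, readings)
        print("average %.2f reached after %d token passes" % (avg, steps))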

  4. Multi-objective optimization in computer networks using metaheuristics

    CERN Document Server

    Donoso, Yezid

    2007-01-01

    Metaheuristics are widely used to solve important practical combinatorial optimization problems. Many new multicast applications emerging from the Internet-such as TV over the Internet, radio over the Internet, and multipoint video streaming-require reduced bandwidth consumption, end-to-end delay, and packet loss ratio. It is necessary to design and to provide for these kinds of applications as well as for those resources necessary for functionality. Multi-Objective Optimization in Computer Networks Using Metaheuristics provides a solution to the multi-objective problem in routing computer networks. It analyzes layer 3 (IP), layer 2 (MPLS), and layer 1 (GMPLS and wireless functions). In particular, it assesses basic optimization concepts, as well as several techniques and algorithms for the search of minimals; examines the basic multi-objective optimization concepts and the way to solve them through traditional techniques and through several metaheuristics; and demonstrates how to analytically model the compu...

  5. Computers and networks in the age of globalization

    DEFF Research Database (Denmark)

    Bloch Rasmussen, Leif; Beardon, Colin; Munari, Silvio

    In modernity, an individual identity was constituted from civil society, while in a globalized network society, human identity, if it develops at all, must grow from communal resistance. A communal resistance to an abstract conceptualized world, where there is no possibility for perception...... in a network society; the individual and knowledge-based organizations; human responsibility and technology; and exclusion and regeneration. This volume contains the edited proceedings of the Fifth World Conference on Human Choice and Computers (HCC-5), which was sponsored by the International Federation...... for Information Processing (IFIP) and held in Geneva, Switzerland in August 1998. Since the first HCC conference in 1974, IFIP's Technical Committee 9 has endeavoured to set the agenda for human choices and human actions vis-a-vis computers....

  6. Computer simulation of randomly cross-linked polymer networks

    CERN Document Server

    Williams, T P

    2002-01-01

    In this work, Monte Carlo and Stochastic Dynamics computer simulations of mesoscale model randomly cross-linked networks were undertaken. Task parallel implementations of the lattice Monte Carlo Bond Fluctuation model and the Kremer-Grest Stochastic Dynamics bead-spring continuum model were designed and used for this purpose. Lattice and continuum precursor melt systems were prepared and then cross-linked to varying degrees. The resultant networks were used to study structural changes during deformation and relaxation dynamics. The effects of a random network topology featuring a polydisperse distribution of strand lengths and an abundance of pendant chain ends were qualitatively compared to recent published work. A preliminary investigation into the effects of temperature on the structural and dynamical properties was also undertaken. Structural changes during isotropic swelling and uniaxial deformation revealed a pronounced non-affine deformation dependent on the degree of cross-linking. Fractal heterogeneiti...

  7. Computational study of noise in a large signal transduction network

    Directory of Open Access Journals (Sweden)

    Ruohonen Keijo

    2011-06-01

    Full Text Available Background: Biochemical systems are inherently noisy due to the discrete reaction events that occur in a random manner. Although noise is often perceived as a disturbing factor, the system might actually benefit from it. In order to understand the role of noise better, its quality must be studied in a quantitative manner. Computational analysis and modeling play an essential role in this demanding endeavor. Results: We implemented a large nonlinear signal transduction network combining protein kinase C, mitogen-activated protein kinase, phospholipase A2, and β isoform of phospholipase C networks. We simulated the network in 300 different cellular volumes using the exact Gillespie stochastic simulation algorithm and analyzed the results in both the time and frequency domain. In order to perform simulations in a reasonable time, we used modern parallel computing techniques. The analysis revealed that time and frequency domain characteristics depend on the system volume. The simulation results also indicated that there are several kinds of noise processes in the network, all of them representing different kinds of low-frequency fluctuations. In the simulations, the power of noise decreased on all frequencies when the system volume was increased. Conclusions: We concluded that basic frequency domain techniques can be applied to the analysis of simulation results produced by the Gillespie stochastic simulation algorithm. This approach is suited not only to the study of fluctuations but also to the study of pure noise processes. Noise seems to have an important role in biochemical systems and its properties can be numerically studied by simulating the reacting system in different cellular volumes. Parallel computing techniques make it possible to run massive simulations in hundreds of volumes and, as a result, accurate statistics can be obtained from computational studies.
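
    A sketch of the frequency-domain step only: given a uniformly resampled trajectory from a stochastic simulation, estimate its power spectrum with a simple periodogram. Here an AR(1) series stands in for the resampled Gillespie output, so every numerical detail is an assumption; in the study the input would be the trajectories simulated at the different cell volumes.

        import numpy as np

        def periodogram(x, dt):
            x = np.asarray(x) - np.mean(x)
            spec = np.abs(np.fft.rfft(x)) ** 2 * dt / len(x)
            freqs = np.fft.rfftfreq(len(x), dt)
            return freqs, spec

        rng = np.random.default_rng(0)
        dt, n, tau = 0.1, 20000, 5.0                   # sample step, series length, correlation time
        x = np.zeros(n)
        for i in range(1, n):                          # AR(1) surrogate for a resampled trajectory
            x[i] = x[i - 1] * np.exp(-dt / tau) + rng.normal()

        freqs, spec = periodogram(x, dt)
        low = spec[(freqs > 0) & (freqs < 0.05)].mean()
        high = spec[freqs > 2.0].mean()
        print("low-frequency / high-frequency power: %.1f" % (low / high))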

  8. Computing Nash Equilibrium in Wireless Ad Hoc Networks

    DEFF Research Database (Denmark)

    Bulychev, Peter E.; David, Alexandre; Larsen, Kim G.

    2012-01-01

    This paper studies the problem of computing Nash equilibrium in wireless networks modeled by Weighted Timed Automata. Such formalism comes together with a logic that can be used to describe complex features such as timed energy constraints. Our contribution is a method for solving this problem...... using Statistical Model Checking. The method has been implemented in UPPAAL model checker and has been applied to the analysis of Aloha CSMA/CD and IEEE 802.15.4 CSMA/CA protocols....

  9. Cellular computational networks--a scalable architecture for learning the dynamics of large networked systems.

    Science.gov (United States)

    Luitel, Bipul; Venayagamoorthy, Ganesh Kumar

    2014-02-01

    Neural networks for implementing large networked systems such as smart electric power grids consist of multiple inputs and outputs. Many outputs lead to a greater number of parameters to be adapted. Each additional variable increases the dimensionality of the problem and hence learning becomes a challenge. Cellular computational networks (CCNs) are a class of sparsely connected dynamic recurrent networks (DRNs). By proper selection of a set of input elements for each output variable in a given application, a DRN can be modified into a CCN which significantly reduces the complexity of the neural network and allows use of simple training methods for independent learning in each cell thus making it scalable. This article demonstrates this concept of developing a CCN using dimensionality reduction in a DRN for scalability and better performance. The concept has been analytically explained and empirically verified through application. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. An Optimal Path Computation Architecture for the Cloud-Network on Software-Defined Networking

    Directory of Open Access Journals (Sweden)

    Hyunhun Cho

    2015-05-01

    Full Text Available Legacy networks do not expose precise information about their network domains, for reasons of scalability, management and commercial confidentiality, so it is very hard to compute an optimal path to a destination. To meet the new network requirements of today's changing ICT environment, the concept of software-defined networking (SDN) has been developed as a technological alternative that overcomes the limitations of the legacy network structure and introduces innovative concepts. The purpose of this paper is to propose an application that calculates the optimal paths for general data transmission and for real-time audio/video transmission, which constitute the major services of the National Research & Education Network (NREN), in the SDN environment. The proposed SDN routing computation (SRC) application is designed and applied in a multi-domain network for the efficient use of resources, selection of the optimal path between the domains and optimal establishment of end-to-end connections.

  11. A computational tool for quantitative analysis of vascular networks.

    Directory of Open Access Journals (Sweden)

    Enrique Zudaire

    Full Text Available Angiogenesis is the generation of mature vascular networks from pre-existing vessels. Angiogenesis is crucial during the organism's development, for wound healing and for the female reproductive cycle. Several murine experimental systems are well suited for studying developmental and pathological angiogenesis. They include the embryonic hindbrain, the post-natal retina and allantois explants. In these systems vascular networks are visualised by appropriate staining procedures followed by microscopic analysis. Nevertheless, quantitative assessment of angiogenesis is hampered by the lack of readily available, standardized metrics and software analysis tools. Non-automated protocols are widely used and they are, in general, time- and labour-intensive, prone to human error and do not permit computation of complex spatial metrics. We have developed a light-weight, user-friendly software tool, AngioTool, which allows quick, hands-off and reproducible quantification of vascular networks in microscopic images. AngioTool computes several morphological and spatial parameters including the area covered by a vascular network, the number of vessels, vessel length, vascular density and lacunarity. In addition, AngioTool calculates the so-called "branching index" (branch points per unit area), providing a measurement of the sprouting activity of a specimen of interest. We have validated AngioTool using images of embryonic murine hindbrains, post-natal retinas and allantois explants. AngioTool is open source and can be downloaded free of charge.
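
    The two headline metrics are simple ratios, so a toy calculation makes them concrete; every count and the field-of-view size below are invented for illustration and are not AngioTool output.

```python
# Toy versions of two AngioTool-style metrics; the pixel counts and the image
# scale are hypothetical.
branch_points = 184
vessel_pixels = 52_300
image_pixels = 1024 * 1024
mm2_per_pixel = (1.2 / 1024) ** 2   # assume a 1.2 mm x 1.2 mm field of view

vascular_density = vessel_pixels / image_pixels                     # fraction of the image covered by vessels
branching_index = branch_points / (image_pixels * mm2_per_pixel)    # branch points per unit area

print(f"vascular density = {vascular_density:.1%}")
print(f"branching index  = {branching_index:.0f} branch points per mm^2")
```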

  12. Convolutional networks for fast, energy-efficient neuromorphic computing.

    Science.gov (United States)

    Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S

    2016-10-11

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

  13. Logical Networks: Towards Foundations for Programmable Overlay Networks and Overlay Computing Systems

    OpenAIRE

    Liquori, Luigi; Cosnard, Michel

    2007-01-01

    International audience; We propose and discuss foundations for programmable overlay networks and overlay computing systems. Such overlays are built over a large number of distributed computational individuals, virtually organized in colonies, and ruled by a leader (broker) who is elected or imposed by system administrators. Every individual asks the broker to log in the colony by declaring the resources that can be offered (with variable guarantees). Once logged in, an individual can ask the ...

  14. Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces

    Science.gov (United States)

    Ellman, Alvin; Carlton, Magdi

    1993-01-01

    The Network Operations Control Center (NOCC) of the DSN is responsible for scheduling the resources of DSN, and monitoring all multi-mission spacecraft tracking activities in real-time. Operations performs this job with computer systems at JPL connected to over 100 computers at Goldstone, Australia and Spain. The old computer system became obsolete, and the first version of the new system was installed in 1991. Significant improvements to the computer-human interfaces became the dominant theme of the replacement project. Major issues required innovative problem solving. Among these issues were: How to present several thousand data elements on displays without overloading the operator? What is the best graphical representation of DSN end-to-end data flow? How to operate the system without memorizing mnemonics of hundreds of operator directives? Which computing environment will meet the competing performance requirements? This paper presents the technical challenges, engineering solutions, and results of the NOCC computer-human interface design.

  15. An Effective Data Representation and Computation Scheme in Computer Simulation for Neural Networks

    Institute of Scientific and Technical Information of China (English)

    CHEN Houjin; YUAN Baozong

    2004-01-01

    A biological neural network (BNN) is composed of a vast number of neurons interconnected by synapses. It has the ability to process information and generate specific patterns of electrical activity. To analyze its interior structure and exterior properties, computational models were combined with experimental data and a computer simulation system was implemented. Because a BNN is a complicated nonlinear system and its simulation involves a great amount of numeric computation, the data representation and computation scheme are critical to the simulation process. In this paper, an object-oriented data representation (OODR) was designed to have sharable and reusable properties, and a novel hybrid computation scheme is presented. With OODR, data sharing and computation sharing are achieved simultaneously. Under the hybrid computation scheme, an individual computation method is applied to each object according to its model characteristics, which clearly increases computation efficiency. Both were adopted in a BNN simulation system implemented in the platform-independent language Java. Because the simulation system takes advantage of this data representation and computation scheme, its performance is greatly improved, and it has found practical applications in many countries.

  16. Directly executable formal models of middleware for MANET and Cloud Networking and Computing

    Science.gov (United States)

    Pashchenko, D. V.; Sadeq Jaafar, Mustafa; Zinkin, S. A.; Trokoz, D. A.; Pashchenko, T. U.; Sinev, M. P.

    2016-04-01

    The article considers some “directly executable” formal models that are suitable for the specification of computing and networking in the cloud environment and in other networks similar to wireless MANETs. These models can be easily programmed and implemented on computer networks.

  17. Report on Computing and Networking in the Space Science Laboratory by the SSL Computer Committee

    Science.gov (United States)

    Gallagher, D. L. (Editor)

    1993-01-01

    The Space Science Laboratory (SSL) at Marshall Space Flight Center is a multiprogram facility. Scientific research is conducted in four discipline areas: earth science and applications, solar-terrestrial physics, astrophysics, and microgravity science and applications. Representatives from each of these discipline areas participate in a Laboratory computer requirements committee, which developed this document. The purpose is to establish and discuss Laboratory objectives for computing and networking in support of science. The purpose is also to lay the foundation for a collective, multiprogram approach to providing these services. Special recognition is given to the importance of the national and international efforts of our research communities toward the development of interoperable, network-based computer applications.

  18. A new method for computing attention network scores and relationships between attention networks.

    Science.gov (United States)

    Wang, Yi-Feng; Cui, Qian; Liu, Feng; Huo, Ya-Jun; Lu, Feng-Mei; Chen, Heng; Chen, Hua-Fu

    2014-01-01

    The attention network test (ANT) is a reliable tool to detect the efficiency of alerting, orienting, and executive control networks. However, studies using the ANT obtained inconsistent relationships between attention networks for two reasons: on the one hand, the inter-network relationships of attention subsystems were far from clear; on the other hand, ANT scores in previous studies were disturbed by possible inter-network interactions. Here we proposed a new computing method that dissects cue-target conditions to estimate ANT scores and relationships between attention networks as purely as possible. The method was tested in 36 participants. Compared to the original method, the new method showed a larger alerting score and a smaller executive control score, and revealed interactions between alerting and executive control and between orienting and executive control. More interestingly, the new method revealed unidirectional influences from alerting to executive control and from executive control to orienting. These findings provided useful information for better understanding attention networks and their relationships in the ANT. Finally, the relationships of attention networks should be considered with more experimental paradigms and techniques.
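
    For context, the classic ANT subtraction scores that the record's modified method refines can be written in a few lines; the mean reaction times below are invented solely for illustration.

```python
# Classic ANT subtraction scores (the record proposes a further dissection of
# cue-target conditions); mean reaction times in milliseconds are hypothetical.
mean_rt = {
    "no_cue": 612.0,
    "double_cue": 571.0,
    "center_cue": 568.0,
    "spatial_cue": 523.0,
    "congruent": 540.0,
    "incongruent": 655.0,
}

alerting = mean_rt["no_cue"] - mean_rt["double_cue"]        # benefit of a temporal warning signal
orienting = mean_rt["center_cue"] - mean_rt["spatial_cue"]  # benefit of spatial information
executive = mean_rt["incongruent"] - mean_rt["congruent"]   # cost of resolving flanker conflict

print(f"alerting={alerting:.0f} ms, orienting={orienting:.0f} ms, executive={executive:.0f} ms")
```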

  19. Corporate Networks: a Proposal for Virtualization with Cloud Computing

    Directory of Open Access Journals (Sweden)

    Chau Sen Shia

    2016-07-01

    Full Text Available Most educational universities encounter difficulties in integration when they need to plan: allocating classes, distributing rooms and establishing better communication between the coordinators of other units belonging to the same institution (geographically distributed). According to Veras (2010), as a way to react to increased competition, many companies have sought to use a more flexible organizational format. Currently, the use of business networking alliances has become an option in the search for this flexibility. The networks that interconnect organizations offer support for processes, in response to the new demands of this competitiveness. It may be noted that inter-organizational networks supported by information technology (IT) allow organizations to act together as a great value system. According to Fusco and Sacomano (2009), alliances may develop in any supply chain, given the environment in which they occur, the operations, tasks and processes to be developed, the qualities required and available, and the objectives to be pursued. The performance of each part is what will make the difference in the results obtained by the companies involved in the business. In order to better support studies related to the behavioural analysis of alliances in enterprise networks that use cloud computing technologies, this project applies virtualization as an assessment tool for analysing the relationships between companies and the strategic alignment within organizations. In this context, the project proposes an architecture that integrates cloud computing technology with SOA (Service-Oriented Architecture) using web services to assist in the execution of strategic business processes within organizations, aimed at universities that have multiple geographically distributed units

  20. Open Problems in Network-aware Data Management in Exa-scale Computing and Terabit Networking Era

    Energy Technology Data Exchange (ETDEWEB)

    Balman, Mehmet; Byna, Surendra

    2011-12-06

    Accessing and managing large amounts of data is a great challenge in collaborative computing environments where resources and users are geographically distributed. Recent advances in network technology led to next-generation high-performance networks, allowing high-bandwidth connectivity. Efficient use of the network infrastructure is necessary in order to address the increasing data and compute requirements of large-scale applications. We discuss several open problems, evaluate emerging trends, and articulate our perspectives in network-aware data management.

  1. Line-plane broadcasting in a data communications network of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Berg, Jeremy E.; Blocksome, Michael A.; Smith, Brian E.

    2010-11-23

    Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network.
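
    A toy simulation of the three phases described in the claim, on a small 3D mesh of compute nodes; the mesh dimensions are assumptions, and real torus links and hardware deposit mechanisms are ignored.

```python
from itertools import product

# Toy line-plane broadcast over a small 3D mesh of compute nodes.
DIMS = (4, 4, 4)
nodes = set(product(range(DIMS[0]), range(DIMS[1]), range(DIMS[2])))

def line_plane_broadcast(root):
    reached = {root}
    # Phase 1: the root sends along its x-axis line.
    phase1 = {(x, root[1], root[2]) for x in range(DIMS[0])}
    reached |= phase1
    # Phase 2: every node on that line sends along its y-axis line (filling a plane).
    phase2 = {(n[0], y, root[2]) for n in phase1 for y in range(DIMS[1])}
    reached |= phase2
    # Phase 3: every node in that plane sends along its z-axis line (filling the volume).
    phase3 = {(n[0], n[1], z) for n in phase2 for z in range(DIMS[2])}
    reached |= phase3
    return reached

assert line_plane_broadcast((1, 2, 3)) == nodes
print("all", len(nodes), "compute nodes reached after three line phases")
```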

  2. Line-plane broadcasting in a data communications network of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Berg, Jeremy E.; Blocksome, Michael A.; Smith, Brian E.

    2010-06-08

    Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network.

  3. Computer-Aided Design of Photocured Polymer Networks

    Science.gov (United States)

    Sarkar, Swarnavo; Lin-Gibson, Sheng; Chiang, Martin

    Light-initiated free radical polymerization is widely used for manufacturing biomaterials and scaffolds for micromolding, and is being developed as a method for fast 3D fabrication. This process has a large set of control parameters in the composition of the photocurable matrix and the photocuring conditions. However, a quantitative map between the choice of parameters and the properties of the resultant polymer is currently unavailable. We present a computational approach to simulate the growth of a polymer network using the stochastic differential equations of reactions and diffusion for a photocuring system. This method allows us to sample trajectories of a growing polymer network in silico. Thus, we provide a computational alternative to synthesize and probe a polymer network for properties like the degree of conversion, structure factor, density of states, and viscosity. We present simulation results that agree with the universal features observed in photopolymerization. Our proposed method enables a thorough and systematic search over the entire parameter space to discover interesting combinations for synthesis.

  4. Advanced Scientific Computing Research Network Requirements: ASCR Network Requirements Review Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Bacon, Charles [Argonne National Lab. (ANL), Argonne, IL (United States); Bell, Greg [ESnet, Berkeley, CA (United States); Canon, Shane [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dart, Eli [ESnet, Berkeley, CA (United States); Dattoria, Vince [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Goodwin, Dave [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Lee, Jason [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hicks, Susan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Holohan, Ed [Argonne National Lab. (ANL), Argonne, IL (United States); Klasky, Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lauzon, Carolyn [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Rogers, Jim [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Skinner, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tierney, Brian [ESnet, Berkeley, CA (United States)

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  5. High-throughput Bayesian Network Learning using Heterogeneous Multicore Computers.

    Science.gov (United States)

    Linderman, Michael D; Athalye, Vivek; Meng, Teresa H; Asadi, Narges Bani; Bruggner, Robert; Nolan, Garry P

    2010-06-01

    Aberrant intracellular signaling plays an important role in many diseases. The causal structure of signal transduction networks can be modeled as Bayesian Networks (BNs), and computationally learned from experimental data. However, learning the structure of Bayesian Networks (BNs) is an NP-hard problem that, even with fast heuristics, is too time consuming for large, clinically important networks (20-50 nodes). In this paper, we present a novel graphics processing unit (GPU)-accelerated implementation of a Monte Carlo Markov Chain-based algorithm for learning BNs that is up to 7.5-fold faster than current general-purpose processor (GPP)-based implementations. The GPU-based implementation is just one of several implementations within the larger application, each optimized for a different input or machine configuration. We describe the methodology we use to build an extensible application, assembled from these variants, that can target a broad range of heterogeneous systems, e.g., GPUs, multicore GPPs. Specifically we show how we use the Merge programming model to efficiently integrate, test and intelligently select among the different potential implementations.
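
    As a rough illustration of the kind of sampler being accelerated (not the paper's GPU implementation), the sketch below runs a plain Metropolis search over single-edge toggles with a linear-Gaussian BIC score on synthetic three-variable data; every modelling choice here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data from a known linear-Gaussian network X0 -> X1 -> X2.
n = 500
x0 = rng.normal(size=n)
x1 = 0.8 * x0 + rng.normal(scale=0.5, size=n)
x2 = -0.6 * x1 + rng.normal(scale=0.5, size=n)
data = np.column_stack([x0, x1, x2])
d = data.shape[1]

def is_acyclic(adj):
    """Repeatedly peel off nodes without incoming edges; a cycle leaves leftovers."""
    adj = adj.copy()
    while adj.any():
        sources = [i for i in range(d) if not adj[:, i].any() and adj[i].any()]
        if not sources:
            return False
        adj[sources, :] = 0
    return True

def bic_score(adj):
    """BIC of a linear-Gaussian network whose parent sets are given by adj."""
    score = 0.0
    for child in range(d):
        parents = np.flatnonzero(adj[:, child])
        X = np.column_stack([data[:, parents], np.ones(n)])
        beta, *_ = np.linalg.lstsq(X, data[:, child], rcond=None)
        sigma2 = (data[:, child] - X @ beta).var() + 1e-12
        score += -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0) - 0.5 * (len(parents) + 2) * np.log(n)
    return score

# Metropolis over single-edge toggles; edge occupancy approximates edge posteriors.
adj = np.zeros((d, d), dtype=int)
current = bic_score(adj)
edge_counts = np.zeros((d, d))
steps = 2000
for _ in range(steps):
    i, j = rng.choice(d, size=2, replace=False)
    proposal = adj.copy()
    proposal[i, j] ^= 1
    if not proposal[i, j] or is_acyclic(proposal):    # removing an edge is always legal
        new = bic_score(proposal)
        if np.log(rng.random()) < new - current:      # Metropolis acceptance
            adj, current = proposal, new
    edge_counts += adj
print("edge occupancy frequencies:\n", np.round(edge_counts / steps, 2))
```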

  6. [Forensic evidence-based medicine in computer communication networks].

    Science.gov (United States)

    Qiu, Yun-Liang; Peng, Ming-Qi

    2013-12-01

    As an important component of judicial expertise, forensic science is broad and highly specialized. With the development of network technology, the increase of information resources, and the improvement of people's legal consciousness, forensic scientists encounter many new problems and are required to meet higher evidentiary standards in litigation. In view of this, an evidence-based concept should be established in forensic medicine. We should find the most suitable methods in the forensic science field and other related areas to solve specific problems in an evidence-based mode. Evidence-based practice can solve problems in the legal medical field, and it will play a great role in promoting the progress and development of forensic science. This article reviews the basic theory of evidence-based medicine and its effect, approach, method and evaluation in forensic medicine, in order to discuss the application value of forensic evidence-based medicine in computer communication networks.

  7. High-performance computing, high-speed networks, and configurable computing environments: progress toward fully distributed computing.

    Science.gov (United States)

    Johnston, W E; Jacobson, V L; Loken, S C; Robertson, D W; Tierney, B L

    1992-01-01

    The next several years will see the maturing of a collection of technologies that will enable fully and transparently distributed computing environments. Networks will be used to configure independent computing, storage, and I/O elements into "virtual systems" that are optimal for solving a particular problem. This environment will make the most powerful computing systems those that are logically assembled from network-based components and will also make those systems available to a widespread audience. Anticipating that the necessary technology and communications infrastructure will be available in the next 3 to 5 years, we are developing and demonstrating prototype applications that test and exercise the currently available elements of this configurable environment. The Lawrence Berkeley Laboratory (LBL) Information and Computing Sciences and Research Medicine Divisions have collaborated with the Pittsburgh Supercomputer Center to demonstrate one distributed application that illuminates the issues and potential of using networks to configure virtual systems. This application allows the interactive visualization of large three-dimensional (3D) scalar fields (voxel data sets) by using a network-based configuration of heterogeneous supercomputers and workstations. The specific test case is visualization of 3D magnetic resonance imaging (MRI) data. The virtual system architecture consists of a Connection Machine-2 (CM-2) that performs surface reconstruction from the voxel data, a Cray Y-MP that renders the resulting geometric data into an image, and a workstation that provides the display of the image and the user interface for specifying the parameters for the geometry generation and 3D viewing. These three elements are configured into a virtual system by using several different network technologies. This paper reviews the current status of the software, hardware, and communications technologies that are needed to enable this configurable environment. These

  8. Analysis of Network Performance for Computer Communication Systems with Benchmark

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper introduces a performance evaluation approach for computer communication systems based on simulation and measurement technology, and discusses its evaluation models. The results of our experiment showed that the outcome of practical measurement on an Ether-LAN fitted well with the theoretical analysis. The approach we present can be used to define various kinds of artificially simulated load models conveniently, build all kinds of network application environments in a flexible way, and fully exploit both the widely used, high-precision features of traditional simulation technology and the reality, reliability and adaptability of measurement technology.

  9. Energy-efficient computing and networking. Revised selected papers

    Energy Technology Data Exchange (ETDEWEB)

    Hatziargyriou, Nikos; Dimeas, Aris [Ethnikon Metsovion Polytechneion, Athens (Greece); Weidlich, Anke (eds.) [SAP Research Center, Karlsruhe (Germany); Tomtsi, Thomai

    2011-07-01

    This book constitutes the postproceedings of the First International Conference on Energy-Efficient Computing and Networking, E-Energy, held in Passau, Germany in April 2010. The 23 revised papers presented were carefully reviewed and selected for inclusion in the post-proceedings. The papers are organized in topical sections on energy market and algorithms, ICT technology for the energy market, implementation of smart grid and smart home technology, microgrids and energy management, and energy efficiency through distributed energy management and buildings. (orig.)

  10. Computational studies of gene regulatory networks: in numero molecular biology.

    Science.gov (United States)

    Hasty, J; McMillen, D; Isaacs, F; Collins, J J

    2001-04-01

    Remarkable progress in genomic research is leading to a complete map of the building blocks of biology. Knowledge of this map is, in turn, setting the stage for a fundamental description of cellular function at the DNA level. Such a description will entail an understanding of gene regulation, in which proteins often regulate their own production or that of other proteins in a complex web of interactions. The implications of the underlying logic of genetic networks are difficult to deduce through experimental techniques alone, and successful approaches will probably involve the union of new experiments and computational modelling techniques.
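
    The sentence about proteins regulating their own production is the kind of statement a small model makes concrete; below is a toy negative-autoregulation ODE with invented rate constants, not a model from the cited work.

```python
import numpy as np
from scipy.integrate import odeint

def negative_autoregulation(p, t, beta=10.0, k=1.0, n=2, gamma=1.0):
    """A protein that represses its own production through a Hill function."""
    production = beta / (1.0 + (p / k) ** n)   # repression by the gene's own product
    return production - gamma * p              # minus degradation/dilution

t = np.linspace(0.0, 10.0, 200)
trajectory = odeint(negative_autoregulation, y0=0.0, t=t)
print(f"steady-state protein level ~= {trajectory[-1, 0]:.2f}")   # settles near 2.0 with these constants
```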

  11. Human -Computer Interface using Gestures based on Neural Network

    Directory of Open Access Journals (Sweden)

    Aarti Malik

    2014-10-01

    Full Text Available Gestures are powerful tools for non-verbal communication. Human-computer interface (HCI) is a growing field that reduces the complexity of interaction between human and machine, and in which gestures are used for conveying information or controlling the machine. In the present paper, static hand gestures are utilized for this purpose. The paper presents a novel technique for recognizing hand gestures, i.e. the A-Z alphabets, the 0-9 numbers and 6 additional control signals (for keyboard and mouse control), by extracting various features of the hand, creating a feature vector table and training a neural network. The proposed work has a recognition rate of 99%.
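
    A compressed stand-in for the described pipeline, training a small feed-forward network on simulated feature vectors for 42 classes (A-Z, 0-9 and 6 control signals); the feature dimension, sample counts and network size are assumptions, and real features would come from the hand-segmentation step the record describes.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)

# Simulated feature vectors: one cluster per gesture class.
n_classes, n_features, samples_per_class = 42, 20, 30
centers = rng.normal(size=(n_classes, n_features))
X = np.repeat(centers, samples_per_class, axis=0) + 0.2 * rng.normal(
    size=(n_classes * samples_per_class, n_features))
y = np.repeat(np.arange(n_classes), samples_per_class)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2%}")
```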

  12. Spiking DNA Computing with applications to BP Neural Networks Classification

    Directory of Open Access Journals (Sweden)

    Wenke Zang

    2012-08-01

    Full Text Available The study uses the idea of extreme parallelism to solve BP neural network classification. The weights are not modified in the traditional way, by repeatedly adjusting the connection weights between neurons, but by searching for a group of weights among all possible weight combinations. The selected groups of weights fit the relationship between the ideal input and the ideal output. Therefore, the model has some advantages in time complexity compared with the traditional serial model. In the actual DNA computing, we also associate the coding problem with the model design. The coding problem is an important issue worth studying in DNA computing. There are many factors affecting the coding; the coding in this study is constructed with certain of these factors overlooked.

  13. A New Stochastic Computing Methodology for Efficient Neural Network Implementation.

    Science.gov (United States)

    Canals, Vincent; Morro, Antoni; Oliver, Antoni; Alomar, Miquel L; Rosselló, Josep L

    2016-03-01

    This paper presents a new methodology for the hardware implementation of neural networks (NNs) based on probabilistic laws. The proposed encoding scheme circumvents the limitations of classical stochastic computing (based on unipolar or bipolar encoding), extending the representation range to any real number using the ratio of two bipolar-encoded pulsed signals. Furthermore, the novel approach is practically immune to noise due to its specific codification. We introduce different designs for building the fundamental blocks needed to implement NNs. The validity of the present approach is demonstrated through a regression task and a pattern recognition task. The low cost of the methodology in terms of hardware, along with its capacity to implement complex mathematical functions (such as the hyperbolic tangent), allows its use for building highly reliable systems and for parallel computing.
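
    The encoding the record extends is classical bipolar stochastic computing, sketched below: a value x in [-1, 1] travels as a bit stream with P(1) = (x + 1)/2, and multiplication collapses to a bitwise XNOR; the stream length and operands are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000   # stream length; longer streams give higher precision

def to_bipolar_stream(x, n=N):
    """Encode x in [-1, 1] as a random bit stream with P(1) = (x + 1) / 2."""
    return rng.random(n) < (x + 1.0) / 2.0

def from_bipolar_stream(bits):
    """Decode a bipolar stream back to a real value."""
    return 2.0 * bits.mean() - 1.0

a, b = 0.6, -0.4
product_stream = ~(to_bipolar_stream(a) ^ to_bipolar_stream(b))   # XNOR multiplies bipolar values
print(f"stochastic product ~= {from_bipolar_stream(product_stream):+.3f} (exact {a * b:+.3f})")
```

    The record's contribution, representing any real number as the ratio of two such bipolar-encoded streams, builds on this primitive.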

  14. Accurate and Precise Computation Using Analog VLSI, with Applications to Computer Graphics and Neural Networks.

    Science.gov (United States)

    Kirk, David Blair

    This thesis develops an engineering practice and design methodology to enable us to use CMOS analog VLSI chips to perform more accurate and precise computation. These techniques form the basis of an approach that permits us to build computer graphics and neural network applications using analog VLSI. The nature of the design methodology focuses on defining goals for circuit behavior to be met as part of the design process. To increase the accuracy of analog computation, we develop techniques for creating compensated circuit building blocks, where compensation implies the cancellation of device variations, offsets, and nonlinearities. These compensated building blocks can be used as components in larger and more complex circuits, which can then also be compensated. To this end, we develop techniques for automatically determining appropriate parameters for circuits, using constrained optimization. We also fabricate circuits that implement multi-dimensional gradient estimation for a gradient descent optimization technique. The parameter-setting and optimization tools allow us to automatically choose values for compensating our circuit building blocks, based on our goals for the circuit performance. We can also use the techniques to optimize parameters for larger systems, applying the goal-based techniques hierarchically. We also describe a set of thought experiments involving circuit techniques for increasing the precision of analog computation. Our engineering design methodology is a step toward easier use of analog VLSI to solve problems in computer graphics and neural networks. We provide data measured from compensated multipliers built using these design techniques. To demonstrate the feasibility of using analog VLSI for more quantitative computation, we develop small applications using the goal-based design approach and compensated components. Finally, we conclude by discussing the expected significance of this work for the wider use of analog VLSI for

  15. Efficient shortest-path-tree computation in network routing based on pulse-coupled neural networks.

    Science.gov (United States)

    Qu, Hong; Yi, Zhang; Yang, Simon X

    2013-06-01

    Shortest path tree (SPT) computation is a critical issue for routers using link-state routing protocols, such as the most commonly used open shortest path first and intermediate system to intermediate system. Each router needs to recompute a new SPT rooted from itself whenever a change happens in the link state. Most commercial routers do this computation by deleting the current SPT and building a new one using static algorithms such as the Dijkstra algorithm at the beginning. Such recomputation of an entire SPT is inefficient, which may consume a considerable amount of CPU time and result in a time delay in the network. Some dynamic updating methods using the information in the updated SPT have been proposed in recent years. However, there are still many limitations in those dynamic algorithms. In this paper, a new modified model of pulse-coupled neural networks (M-PCNNs) is proposed for the SPT computation. It is rigorously proved that the proposed model is capable of solving some optimization problems, such as the SPT. A static algorithm is proposed based on the M-PCNNs to compute the SPT efficiently for large-scale problems. In addition, a dynamic algorithm that makes use of the structure of the previously computed SPT is proposed, which significantly improves the efficiency of the algorithm. Simulation results demonstrate the effective and efficient performance of the proposed approach.
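
    For reference, the static baseline the record mentions, a full Dijkstra rebuild of the SPT from the link-state database, fits in a few lines; the database below is hypothetical and the M-PCNN model itself is not reproduced here.

```python
import heapq

def shortest_path_tree(graph, root):
    """Dijkstra: return {node: (distance, parent)} for the SPT rooted at `root`."""
    tree = {root: (0.0, None)}
    heap = [(0.0, root)]
    while heap:
        dist, u = heapq.heappop(heap)
        if dist > tree[u][0]:
            continue                      # stale heap entry
        for v, w in graph[u].items():
            new_dist = dist + w
            if v not in tree or new_dist < tree[v][0]:
                tree[v] = (new_dist, u)   # relax edge u -> v
                heapq.heappush(heap, (new_dist, v))
    return tree

# Hypothetical link-state database: router -> {neighbour: link cost}.
lsdb = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"A": 1.0, "C": 2.0, "D": 5.0},
    "C": {"A": 4.0, "B": 2.0, "D": 1.0},
    "D": {"B": 5.0, "C": 1.0},
}
for node, (dist, parent) in sorted(shortest_path_tree(lsdb, "A").items()):
    print(f"{node}: cost {dist}, parent {parent}")
```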

  16. Optimization of stochastic discrete systems and control on complex networks: computational networks

    CERN Document Server

    Lozovanu, Dmitrii

    2014-01-01

    This book presents the latest findings on stochastic dynamic programming models and on solving optimal control problems in networks. It includes the authors' new findings on determining the optimal solution of discrete optimal control problems in networks and on solving game variants of Markov decision problems in the context of computational networks. First, the book studies the finite state space of Markov processes and reviews the existing methods and algorithms for determining the main characteristics in Markov chains, before proposing new approaches based on dynamic programming and combinatorial methods. Chapter two is dedicated to infinite horizon stochastic discrete optimal control models and Markov decision problems with average and expected total discounted optimization criteria, while Chapter three develops a special game-theoretical approach to Markov decision processes and stochastic discrete optimal control problems. In closing, the book's final chapter is devoted to finite horizon stochastic con...

  17. Computing motion using analog and binary resistive networks

    Energy Technology Data Exchange (ETDEWEB)

    Hutchinson, J.; Koch, C.; Luo, J.; Mead, C.

    1988-03-01

    To the authors, and other biological organisms, vision seems effortless. The authors open their eyes and they "see" the world in all its color, brightness, and movement. Flies, frogs, cats, and humans can all equally well perceive a rapidly changing environment and act on it. Yet, they have great difficulties when trying to endow machines with similar abilities. In this article, they describe recent developments in the theory of early vision that led from the formulation of the motion problem as an ill-posed one to its solution by minimizing certain "cost" functions. These cost or energy functions can be mapped onto simple analog and digital resistive networks. Thus, they can compute the optical flow by injecting currents into resistive networks and recording the resulting stationary voltage distribution at each node. These networks, which they implemented in complementary metal oxide semiconductor (CMOS) very large scale integrated (VLSI) circuits, represent plausible candidates for biological vision systems.

  18. High-performance computing and networking as tools for accurate emission computed tomography reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Passeri, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Formiconi, A.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); De Cristofaro, M.T.E.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Pupi, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Meldolesi, U. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)

    1997-04-01

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported on the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64 x 64) slices could be reconstructed from a set of 90 (64 x 64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. (orig.). With 4 figs., 1 tab.

  19. Energy-Latency Tradeoff for In-Network Function Computation in Random Networks

    CERN Document Server

    Balister, Paul; Anandkumar, Animashree; Willsky, Alan

    2011-01-01

    The problem of designing policies for in-network function computation with minimum energy consumption subject to a latency constraint is considered. The scaling behavior of the energy consumption under the latency constraint is analyzed for random networks, where the nodes are uniformly placed in growing regions and the number of nodes goes to infinity. The special case of sum function computation and its delivery to a designated root node is considered first. A policy which achieves order-optimal average energy consumption in random networks subject to the given latency constraint is proposed. The scaling behavior of the optimal energy consumption depends on the path-loss exponent of wireless transmissions and the dimension of the Euclidean region where the nodes are placed. The policy is then extended to computation of a general class of functions which decompose according to maximal cliques of a proximity graph such as the $k$-nearest neighbor graph or the geometric random graph. The modified policy achiev...

  20. Computationally efficient measure of topological redundancy of biological and social networks

    Science.gov (United States)

    Albert, Réka; Dasgupta, Bhaskar; Hegde, Rashmi; Sivanathan, Gowri Sangeetha; Gitter, Anthony; Gürsoy, Gamze; Paul, Pradyut; Sontag, Eduardo

    2011-09-01

    It is well known that biological and social interaction networks have a varying degree of redundancy, though a consensus on the precise cause of this is so far lacking. In this paper, we introduce a topological redundancy measure for labeled directed networks that is formal, computationally efficient, and applicable to a variety of directed networks such as cellular signaling, and metabolic and social interaction networks. We demonstrate the computational efficiency of our measure by computing its value and statistical significance on a number of biological and social networks with up to several thousands of nodes and edges. Our results suggest a number of interesting observations: (1) social networks are more redundant than their biological counterparts, (2) transcriptional networks are less redundant than signaling networks, (3) the topological redundancy of the C. elegans metabolic network is largely due to its inclusion of currency metabolites, and (4) the redundancy of signaling networks is highly (negatively) correlated with the monotonicity of their dynamics.
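
    The record does not give the measure's formula, so the sketch below computes only a crude stand-in with networkx: the fraction of edges whose removal leaves every node's reachable set unchanged. It is an illustrative proxy, not the published measure.

```python
import networkx as nx

def redundancy_proxy(g):
    """Fraction of edges whose removal changes no node's set of reachable nodes."""
    baseline = {node: nx.descendants(g, node) for node in g}
    redundant = 0
    for u, v in list(g.edges()):
        g.remove_edge(u, v)
        if all(nx.descendants(g, node) == baseline[node] for node in g):
            redundant += 1
        g.add_edge(u, v)
    return redundant / g.number_of_edges()

g = nx.DiGraph([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")])
print(f"redundancy proxy = {redundancy_proxy(g):.2f}")   # only the shortcut a->c is redundant here
```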

  1. A New Computationally Efficient Measure of Topological Redundancy of Biological and Social Networks

    CERN Document Server

    Albert, Reka; Gitter, Anthony; Gursoy, Gamze; Hegde, Rashmi; Paul, Pradyut; Sivanathan, Gowri Sangeetha; Sontag, Eduardo

    2011-01-01

    It is well-known that biological and social interaction networks have a varying degree of redundancy, though a consensus on the precise cause of this is so far lacking. In this paper, we introduce a topological redundancy measure for labeled directed networks that is formal, computationally efficient and applicable to a variety of directed networks such as cellular signaling, metabolic and social interaction networks. We demonstrate the computational efficiency of our measure by computing its value and statistical significance on a number of biological and social networks with up to several thousands of nodes and edges. Our results suggest a number of interesting observations: (1) social networks are more redundant than their biological counterparts, (2) transcriptional networks are less redundant than signaling networks, (3) the topological redundancy of the C. elegans metabolic network is largely due to its inclusion of currency metabolites, and (4) the redundancy of signaling networks is highly (negatively...

  2. 10 CFR 73.54 - Protection of digital computer and communication systems and networks.

    Science.gov (United States)

    2010-01-01

    Protection of digital computer and communication systems and networks. By November 23, 2009 each licensee currently licensed to... provide high assurance that digital computer and communication systems and networks are...

  3. Defense Data Network/TOPS-20 Tutorial. An Interactive Computer Program.

    Science.gov (United States)

    1985-12-01

    Keywords: Defense Data Network, DDN, TOPS-20, computer networking. ...switching network dedicated to meeting the data communication requirements of the DoD. The network is subdivided into two functional areas: (1) the

  4. A personal computer network system for equitable allocation of cadaver organs.

    Science.gov (United States)

    Shimada, M; Akazawa, K; Moriguchi, S; Odaka, T; Nose, Y

    1991-01-01

    We developed a personal computer network system for the equitable allocation of cadaveric organs. This network consists of a host computer (IBM PS55 model 5570 T) and various kinds of personal computers manufactured by many different computer makers in Japan. The merits of our personal computer network include lower cost and easy access to the host computer from all the centres participating in this network, each using its own favourite personal computer. Among the programs made for allocating cadaveric organs, we present in this paper the program for livers. This program was developed with a modified version of the logic developed by Starzl et al. The grade modification for the United Network for Organ Sharing (UNOS) in the United States was used as the basis for classification of medical urgency. Our program weighed the factors of medical urgency, compatibility of blood group and waiting time. Distance factors were omitted because of the smaller area of the network compared to that of UNOS. This computer network would be linked to other computer networks to create a national organ procurement and transplant network in Japan, in order to help Japan catch up with other advanced transplant countries. Such an equal and objective computer system should allow organ transplantation to become more widely accepted.

  5. Analysis of attitudes toward computer networks and Internet addiction of Taiwanese adolescents.

    Science.gov (United States)

    Tsai, C C; Lin, S S

    2001-06-01

    This study explored the interplay between young people's attitudes toward computer networks and Internet addiction. Ninety possible Internet addicts were selected for examination after analyzing the questionnaire responses of an initial sample of 753 Taiwanese high school adolescents. It was found that the subjects' attitudes toward computer networks could explain many aspects of Internet addiction. However, actual behaviors on Internet usage and perceptions on the usefulness of Internet were more important than affective responses toward computer networks in predicting adolescents' Internet addiction.

  6. Computational models of signalling networks for non-linear control.

    Science.gov (United States)

    Fuente, Luis A; Lones, Michael A; Turner, Alexander P; Stepney, Susan; Caves, Leo S; Tyrrell, Andy M

    2013-05-01

    Artificial signalling networks (ASNs) are a computational approach inspired by the signalling processes inside cells that decode outside environmental information. Using evolutionary algorithms to induce complex behaviours, we show how chaotic dynamics in a conservative dynamical system can be controlled. Such dynamics are of particular interest as they mimic the inherent complexity of non-linear physical systems in the real world. Considering the main biological interpretations of cellular signalling, in which complex behaviours and robust cellular responses emerge from the interaction of multiple pathways, we introduce two ASN representations: a stand-alone ASN and a coupled ASN. In particular we note how sophisticated cellular communication mechanisms can lead to effective controllers, where complicated problems can be divided into smaller and independent tasks.

  7. Computational Genetic Regulatory Networks Evolvable, Self-organizing Systems

    CERN Document Server

    Knabe, Johannes F

    2013-01-01

    Genetic Regulatory Networks (GRNs) in biological organisms are primary engines for cells to enact their engagements with environments, via incessant, continually active coupling. In differentiated multicellular organisms, tremendous complexity has arisen in the course of evolution of life on earth. Engineering and science have so far achieved no working system that can compare with this complexity, depth and scope of organization. Abstracting the dynamics of genetic regulatory control to a computational framework in which artificial GRNs in artificial simulated cells differentiate while connected in a changing topology, it is possible to apply Darwinian evolution in silico to study the capacity of such developmental/differentiated GRNs to evolve. In this volume an evolutionary GRN paradigm is investigated for its evolvability and robustness in models of biological clocks, in simple differentiated multicellularity, and in evolving artificial developing 'organisms' which grow and express an ontogeny starting fr...

  8. Comparison of Lauritzen-Spiegelhalter and successive restrictions algorithms for computing probability distributions in Bayesian networks

    Science.gov (United States)

    Smail, Linda

    2016-06-01

    The basic task of any probabilistic inference system in Bayesian networks is computing the posterior probability distribution for a subset or subsets of random variables, given values or evidence for some other variables from the same Bayesian network. Many methods and algorithms have been developed for exact and approximate inference in Bayesian networks. This work compares two exact inference methods in Bayesian networks, the Lauritzen-Spiegelhalter algorithm and the successive restrictions algorithm, from the perspective of computational efficiency. The two methods were applied for comparison to a Chest Clinic Bayesian Network. Results indicate that the successive restrictions algorithm shows more computational efficiency than the Lauritzen-Spiegelhalter algorithm.
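
    Neither algorithm is reproduced here, but the quantity both of them compute can be shown by brute-force enumeration on a toy three-node chain; the conditional probability tables are invented.

```python
# Posterior of A given evidence on C in the toy chain A -> B -> C,
# obtained by summing the joint distribution over the hidden variable B.
P_A = {True: 0.3, False: 0.7}
P_B_given_A = {True: {True: 0.8, False: 0.2}, False: {True: 0.1, False: 0.9}}
P_C_given_B = {True: {True: 0.6, False: 0.4}, False: {True: 0.05, False: 0.95}}

def joint(a, b, c):
    return P_A[a] * P_B_given_A[a][b] * P_C_given_B[b][c]

def posterior_a_given_c(c_obs):
    unnorm = {a: sum(joint(a, b, c_obs) for b in (True, False)) for a in (True, False)}
    z = sum(unnorm.values())
    return {a: p / z for a, p in unnorm.items()}

print(posterior_a_given_c(True))   # evidence: C observed to be True
```

    Algorithms such as Lauritzen-Spiegelhalter avoid this exponential enumeration by passing messages over a junction tree; the enumeration above only defines the target quantity.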

  9. Computer-generated global map of valley networks on Mars

    Science.gov (United States)

    Luo, Wei; Stepinski, T. F.

    2009-11-01

    The presence of valley networks (VN) on Mars suggests that early Mars was warmer and wetter than at present. However, detailed geomorphic analyses of individual networks have not led to a consensus regarding their origin. An additional line of evidence can be provided by the global pattern of dissection on Mars, but the currently available global map of VN, compiled from Viking images, is incomplete and outdated. We created an updated map of VN by using a computer algorithm that parses topographic data and recognizes valleys by their morphologic signature. This computer-generated map was visually inspected and edited to produce the final updated map of VN. The new map shows an increase in total VN length by a factor of 2.3. A global map of dissection density, D, derived from the new VN map, shows that the most highly dissected region forms a belt located between the equator and mid-southern latitudes. The most prominent regions of high values of D are the northern Terra Cimmeria and the Margaritifer Terra, where D reaches the value of 0.12 km⁻¹ over extended areas. The average value of D is 0.062 km⁻¹, only 2.6 times lower than the terrestrial value of D as measured in the same fashion. These relatively high values of dissection density over extensive regions of the planet point toward precipitation-fed runoff erosion as the primary mechanism of valley formation. Assuming a warm and wet early Mars, the peculiarity of the global pattern of dissection is interpreted in terms of climate-controlling factors influenced by the topographic dichotomy.
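
    Dissection density is simply mapped valley length divided by the area examined; the region size and length below are hypothetical numbers chosen only so that the result reproduces the 0.062 km⁻¹ global average quoted in the record.

```python
# Dissection density D = total valley length / drainage area examined.
valley_length_km = 2_480.0          # hypothetical mapped valley length
region_area_km2 = 200.0 * 200.0     # hypothetical 200 km x 200 km region
dissection_density = valley_length_km / region_area_km2
print(f"D = {dissection_density:.3f} km^-1")   # 0.062 km^-1
```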

  10. Storage Area Network Implementation on an Educational Institute Network Computer Networking and Communication

    CERN Document Server

    Osama, Safarini

    2011-01-01

    The storage infrastructure is the foundation on which information relies and therefore must support a company's business objectives and business model. In this environment, simply deploying more and faster storage devices is not enough; a new kind of infrastructure is needed, one that provides more enhanced network availability, data accessibility, and system manageability than is provided by today's infrastructure. The SAN meets this challenge. The SAN liberates the storage device, so it is not on a particular server bus, and attaches it directly to the network. In other words, storage is externalized and functionally distributed across the organization. The SAN also enables the centralizing of storage devices and the clustering of servers, which makes for easier and less expensive administration. The idea, then, is to create an intelligent SAN infrastructure that stretches to meet increased demands and allows highly available, heterogeneous access to expanding information.

  11. WaveJava: Wavelet-based network computing

    Science.gov (United States)

    Ma, Kun; Jiao, Licheng; Shi, Zhuoer

    1997-04-01

    Wavelet theory is powerful, but its successful application still needs suitable programming tools. Java is a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multi-threaded, dynamic language. This paper addresses the design and development of a cross-platform software environment for experimenting with and applying wavelet theory. WaveJava, a wavelet class library designed with object-oriented programming, is developed to take advantage of wavelet features such as multi-resolution analysis and parallel processing in network computing. A new application architecture is designed for the net-wide distributed client-server environment. The data are transmitted as multi-resolution packets. At the distributed sites around the net, these data packets undergo matching or recognition processing in parallel. The results are fed back to determine the next operation. In this way, more robust results can be obtained quickly. WaveJava is easy to use and extend for special applications. This paper gives a solution for a distributed fingerprint information processing system. It is also suited to other network-based multimedia information processing, such as network libraries, remote teaching and filmless picture archiving and communications.

  12. Computing Posterior Probabilities of Structural Features in Bayesian Networks

    CERN Document Server

    Tian, Jin

    2012-01-01

    We study the problem of learning Bayesian network structures from data. Koivisto and Sood (2004) and Koivisto (2006) presented algorithms that can compute the exact marginal posterior probability of a subnetwork, e.g., a single edge, in O(n2^n) time and the posterior probabilities for all n(n-1) potential edges in O(n2^n) total time, assuming that the number of parents per node or the indegree is bounded by a constant. One main drawback of their algorithms is the requirement of a special structure prior that is non-uniform and does not respect Markov equivalence. In this paper, we develop an algorithm that can compute the exact posterior probability of a subnetwork in O(3^n) time and the posterior probabilities for all n(n-1) potential edges in O(n3^n) total time. Our algorithm also assumes a bounded indegree but allows general structure priors. We demonstrate the applicability of the algorithm on several data sets with up to 20 variables.

  13. Artificial neural networks: fundamentals, computing, design, and application.

    Science.gov (United States)

    Basheer, I A; Hajmeer, M

    2000-12-01

    Artificial neural networks (ANNs) are relatively new computational tools that have found extensive utilization in solving many complex real-world problems. The attractiveness of ANNs comes from their remarkable information processing characteristics pertinent mainly to nonlinearity, high parallelism, fault and noise tolerance, and learning and generalization capabilities. This paper aims to familiarize the reader with ANN-based computing (neurocomputing) and to serve as a useful companion practical guide and toolkit for the ANNs modeler along the course of ANN project development. The history of the evolution of neurocomputing and its relation to the field of neurobiology is briefly discussed. ANNs are compared to both expert systems and statistical regression and their advantages and limitations are outlined. A bird's eye review of the various types of ANNs and the related learning rules is presented, with special emphasis on backpropagation (BP) ANNs theory and design. A generalized methodology for developing successful ANNs projects from conceptualization, to design, to implementation, is described. The most common problems that BPANNs developers face during training are summarized in conjunction with possible causes and remedies. Finally, as a practical application, BPANNs were used to model the microbial growth curves of S. flexneri. The developed model was reasonably accurate in simulating both training and test time-dependent growth curves as affected by temperature and pH.

  14. Analytical Investigation on Computer Network Security System of Colleges and Universities

    Institute of Scientific and Technical Information of China (English)

    徐悦

    2013-01-01

    With the development of network technology, the computer systems of colleges and universities increasingly rely on network-based management and services, which provide comprehensive and convenient information access and management. However, in the network environment the security of these systems faces threats such as viruses, malicious software and human attack, which may damage or tamper with the network data of the computer system, or even lead to network system paralysis, breakdown of management and payment systems, and the loss or theft of confidential documents. Therefore, promoting the security of the computer network systems of colleges and universities is of important practical significance. This paper conducts a comprehensive analysis of the security systems of computer networks in colleges and universities, elaborates their R&D and application status, and puts forward specific prevention schemes and solutions, providing suggestions and reference for their construction.

  15. Distribution Network Fault Diagnosis Method Based on Granular Computing-BP

    Directory of Open Access Journals (Sweden)

    CHEN Zhong-xiao

    2013-01-01

    Full Text Available To deal with the complexity and uncertainty of distribution network fault information, a fault diagnosis method based on granular computing and a BP neural network is proposed. The method combines the attribute-reduction advantages of granular computing theory with the self-learning and knowledge-acquisition ability of the BP neural network. Granular computing is used as a front-end processor for the BP neural network: primitive information is simplified through granular-computing reduction, and the concepts of relative granularity and attribute significance based on binary granular computing are used to select the inputs of the BP network, thereby reducing the problem scale. A neural network is then constructed on the minimum attribute set, and the BP neural network is used for modelling and parameter identification, which reduces BP training time and improves the accuracy of fault diagnosis. A distribution network example verifies the rationality and effectiveness of the proposed method.

  16. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    Science.gov (United States)

    Archer, Charles J; Faraj, Ahmad A; Inglett, Todd A; Ratterman, Joseph D

    2013-04-16

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.
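
    The core routing step described above, selecting the link that leads toward the destination, is easy to picture on a tree. The sketch below is a generic illustration of next-hop selection in a tree-shaped network, not the patented method; the topology and node names are made up.

```python
# Generic sketch of forwarding along a tree: from the current node, send the
# packet over the unique link on the path to the destination. Topology and node
# names are illustrative assumptions.
import networkx as nx

tree = nx.Graph([("root", "a"), ("root", "b"), ("a", "a1"), ("a", "a2"), ("b", "b1")])

def next_hop(current, destination):
    # In a tree there is exactly one simple path, so the next hop is its second node.
    path = nx.shortest_path(tree, current, destination)
    return path[1] if len(path) > 1 else current

def forward(source, destination):
    node, hops = source, [source]
    while node != destination:
        node = next_hop(node, destination)
        hops.append(node)
    return hops

print(forward("a1", "b1"))   # ['a1', 'a', 'root', 'b', 'b1']
```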

  17. A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers

    Energy Technology Data Exchange (ETDEWEB)

    Potok, Thomas E [ORNL; Schuman, Catherine D [ORNL; Young, Steven R [ORNL; Patton, Robert M [ORNL; Spedalieri, Federico [University of Southern California, Information Sciences Institute; Liu, Jeremy [University of Southern California, Information Sciences Institute; Yao, Ke-Thia [University of Southern California, Information Sciences Institute; Rose, Garrett [University of Tennessee (UT); Chakma, Gangotree [University of Tennessee (UT)

    2016-01-01

    Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.

  18. Locating hardware faults in a data communications network of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-01-12

    Locating hardware faults in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and a root of a parent test tree, identifying for each child compute node of the parent node a child test tree having the child compute node as root, running a same test suite on the parent test tree and each child test tree, and identifying the parent compute node as having a defective link connected from the parent compute node to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.
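
    The parent/child test-tree logic can be simulated on a toy example: a test suite "fails" on any subtree containing the faulty link, so a parent whose own test fails while every child subtree passes must own the defective link. The tree shape, the injected fault and the test-suite stand-in below are all illustrative assumptions.

```python
# Toy simulation of fault location via parent and child test trees. The tree,
# the injected faulty link and the stand-in test suite are assumptions made for
# illustration only.
import networkx as nx

tree = nx.DiGraph([("r", "a"), ("r", "b"), ("a", "c"), ("a", "d"), ("b", "e")])
FAULTY_LINK = ("a", "d")                  # injected fault

def test_suite(root):
    # Stand-in for running the real test suite on the subtree rooted at `root`:
    # it fails iff the subtree contains the faulty link.
    return FAULTY_LINK not in set(nx.dfs_edges(tree, root))

def locate_faulty_parent():
    for parent in tree.nodes:
        children = list(tree.successors(parent))
        if not children:
            continue
        if not test_suite(parent) and all(test_suite(c) for c in children):
            return parent                 # defective link joins `parent` to a child
    return None

print(locate_faulty_parent())             # 'a'
```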

  19. Fluid Centrality: A Social Network Analysis of Social-Technical Relations in Computer-Mediated Communication

    Science.gov (United States)

    Enriquez, Judith Guevarra

    2010-01-01

    In this article, centrality is explored as a measure of computer-mediated communication (CMC) in networked learning. Centrality measure is quite common in performing social network analysis (SNA) and in analysing social cohesion, strength of ties and influence in CMC, and computer-supported collaborative learning research. It argues that measuring…

  20. Energy Research and Development Administration Ad Hoc Computer Networking Group: experimental program

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, I.

    1975-03-19

    The Ad Hoc Computer Networking Group was established to investigate the potential advantages and costs of newer forms of remote resource sharing and computer networking. The areas of research and investigation that are within the scope of the ERDA CNG are described. (GHT)

  1. PCs for Families: A Study of Early Intervention Using Networked Computing in Education.

    Science.gov (United States)

    Reaux, Ray A.; Ehrich, Roger W.; McCreary, Faith; Rowland, Keith; Hood, Susan

    1998-01-01

    Discusses the PCs for Families experiment, a longitudinal quantitative and ethnographic study of networked computing in the fifth-grade classroom examining how networked computing affects students' educational achievements, attitude and professional development of teachers and support instructors, and how families support students and react to the…

  2. The Effectiveness of Using Virtual Laboratories to Teach Computer Networking Skills in Zambia

    Science.gov (United States)

    Lampi, Evans

    2013-01-01

    The effectiveness of using virtual labs to train students in computer networking skills, when real equipment is limited or unavailable, is uncertain. The purpose of this study was to determine the effectiveness of using virtual labs to train students in the acquisition of computer network configuration and troubleshooting skills. The study was…

  3. Network Intelligence Based on Network State Information for Connected Vehicles Utilizing Fog Computing

    Directory of Open Access Journals (Sweden)

    Seongjin Park

    2017-01-01

    Full Text Available This paper proposes a method to take advantage of fog computing and SDN in the connected-vehicle environment, where communication channels are unstable and the topology changes frequently. A controller knows the current state of the network by maintaining the most recent network topology. Of all the information collected by the controller in the mobile environment, node mobility information is particularly important. Thus, we divide nodes into three classes according to their mobility types and use their related attributes to efficiently manage mobile connections. Our approach utilizes mobility information to reduce control message overhead, by adjusting the period of beacon messages, and to support efficient failure recovery: connection failures are recovered using only mobility information, and a real-time scheduling algorithm recovers services for vehicles that lost their connection in the case of a fog server failure. The real-time scheduling method is first described and then evaluated; the results show that our scheme is effective in the connected-vehicle environment. We then demonstrate the reduction of control overhead and the connection recovery using a network simulator. The simulation results show that control message overhead and failure recovery time are decreased by approximately 55% and 5%, respectively.

  4. Compute-and-forward: multiple bi-directional sessions on the line network

    NARCIS (Netherlands)

    Ren, Zhijie; Goseling, Jasper; Weber, Jos H.; Gastpar, Michael

    2013-01-01

    Signal superposition and broadcast are important features of the wireless medium. Compute-and-Forward, also known as Physical Layer Network Coding, is a technique exploiting these features in order to improve performance of wireless networks. In this paper, the possible benefits for the line network

  5. Finding Multi-step Attacks in Computer Networks using Heuristic Search and Mobile Ambients

    NARCIS (Netherlands)

    Nunes Leal Franqueira, V.

    2009-01-01

    An important aspect of IT security governance is the proactive and continuous identification of possible attacks in computer networks. This is complicated due to the complexity and size of networks, and due to the fact that usually network attacks are performed in several steps. This thesis proposes

  6. Finding multi-step attacks in computer networks using heuristic search and mobile ambients

    NARCIS (Netherlands)

    Franqueira, Virginia Nunes Leal

    2009-01-01

    An important aspect of IT security governance is the proactive and continuous identification of possible attacks in computer networks. This is complicated due to the complexity and size of networks, and due to the fact that usually network attacks are performed in several steps. This thesis proposes

  7. Applying Intelligent Computing Techniques to Modeling Biological Networks from Expression Data

    Institute of Scientific and Technical Information of China (English)

    Wei-Po Lee; Kung-Cheng Yang

    2008-01-01

    Constructing biological networks is one of the most important issues in systems biology. However, constructing a network from data manually takes a considerable amount of time; therefore an automated procedure is advocated. To automate the procedure of network construction, in this work we use two intelligent computing techniques, genetic programming and neural computation, to infer two kinds of network models that use continuous variables. To verify the presented approaches, experiments have been conducted and the preliminary results show that both approaches can be used to infer networks successfully.

  8. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers.This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  9. The Effectiveness of Using Virtual Laboratories to Teach Computer Networking Skills  in Zambia

    OpenAIRE

    Lampi, Evans

    2013-01-01

    The effectiveness of using virtual labs to train students in computer networking skills, when real equipment is limited or unavailable, is uncertain. The purpose of this study was to determine the effectiveness of using virtual labs to train students in the acquisition of computer network configuration and troubleshooting skills. The study was conducted in the developing country of Zambia, where there is an acute shortage of network lab equipment. Effectiveness was determined by the transfer ...

  10. Hardware Neural Networks Modeling for Computing Different Performance Parameters of Rectangular, Circular, and Triangular Microstrip Antennas

    OpenAIRE

    2014-01-01

    In the last one decade, neural networks-based modeling has been used for computing different performance parameters of microstrip antennas because of learning and generalization features. Most of the created neural models are based on software simulation. As the neural networks show massive parallelism inherently, a parallel hardware needs to be created for creating faster computing machine by taking the advantages of the parallelism of the neural networks. This paper demonstrates a generaliz...

  11. Novel photonic bandgap based architectures for quantum computers and networks

    Science.gov (United States)

    Guney, Durdu

    All of the approaches for quantum information processing have their own advantages, but unfortunately also their own drawbacks. Ideally, one would merge the most attractive features of those different approaches in a single technology. We envision that large-scale photonic crystal (PC) integrated circuits and fibers could be the basis for robust and compact quantum circuits and processors of the next generation quantum computers and networking devices. Cavity QED, solid-state, and (non)linear optical models for computing, and optical fiber approach for communications are the most promising candidates to be improved through this novel technology. In our work, we consider both digital and analog quantum computing. In the digital domain, we first perform gate-level analysis. To achieve this task, we solve the Jaynes-Cummings Hamiltonian with time-dependent coupling parameters under the dipole and rotating-wave approximations for a 3D PC single-mode cavity with a sufficiently high Q-factor. We then exploit the results to show how to create a maximally entangled state of two atoms and how to implement several quantum logic gates: a dual-rail Hadamard gate, a dual-rail NOT gate, and a SWAP gate. In all of these operations, we synchronize atoms, as opposed to previous studies with PCs. The method has the potential for extension to N-atom entanglement, universal quantum logic operations, and the implementation of other useful, cavity QED-based quantum information processing tasks. In the next part of the digital domain, we study circuit-level implementations. We design and simulate an integrated teleportation and readout circuit on a single PC chip. The readout part of our device can not only be used on its own but can also be integrated with other compatible optical circuits to achieve atomic state detection. Further improvement of the device in terms of compactness and robustness is possible by integrating with sources and detectors in the optical regime. In the analog

  12. Network Entropy Based on Topology Configuration and Its Computation to Random Networks

    Institute of Scientific and Technical Information of China (English)

    LI Ji; WANG Bing-Hong; WANG Wen-Xu; ZHOU Tao

    2008-01-01

    A definition of network entropy is presented, and, as an example, the relationship between the network entropy of the ER network model and the connection probability p as well as the total number of nodes N is discussed. The theoretical result and the simulation result based on the network entropy of the ER network agree well with each other. The results indicate that, unlike other network entropies reported before, the network entropy defined here differs markedly between different types of random networks and between networks with different numbers of nodes. Thus, this network entropy may portray the characteristics of complex networks better. It is also pointed out that, with the aid of the network entropy defined here, the concepts of equilibrium networks and non-equilibrium networks may be introduced, and a quantitative measurement describing the deviation of a complex network from the equilibrium state is carried out.
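
    The paper's exact entropy definition is not reproduced here, but a common topology-configuration entropy based on the degree distribution, H = -sum_k p_k ln p_k with p_k the fraction of nodes of degree k, can be computed for ER graphs in a few lines and shows how the value changes with the connection probability p.

```python
# Degree-distribution entropy of ER random graphs (one common topology-based
# definition; the paper's exact definition may differ). Sizes and probabilities
# are illustrative.
import math
from collections import Counter
import networkx as nx

def degree_entropy(g):
    counts = Counter(d for _, d in g.degree())
    n = g.number_of_nodes()
    return -sum((c / n) * math.log(c / n) for c in counts.values())

for p in (0.01, 0.05, 0.2):
    g = nx.erdos_renyi_graph(n=1000, p=p, seed=42)
    print(f"p = {p:4.2f}   entropy = {degree_entropy(g):.3f}")
```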

  13. Integrating Network Awareness in ATLAS Distributed Computing Using the ANSE Project

    CERN Document Server

    Klimentov, Alexei; The ATLAS collaboration; Petrosyan, Artem; Batista, Jorge Horacio; Mc Kee, Shawn Patrick

    2015-01-01

    A crucial contributor to the success of the massively scaled global computing system that delivers the analysis needs of the LHC experiments is the networking infrastructure upon which the system is built. The experiments have been able to exploit excellent high-bandwidth networking in adapting their computing models for the most efficient utilization of resources. New advanced networking technologies now becoming available such as software defined networking hold the potential of further leveraging the network to optimize workflows and dataflows, through proactive control of the network fabric on the part of high level applications such as experiment workload management and data management systems. End to end monitoring of networks using perfSONAR combined with data flow performance metrics further allows applications to adapt based on real time conditions. We will describe efforts underway in ATLAS on integrating network awareness at the application level, particularly in workload management, building upon ...

  14. Connect the Dot: Computing Feed-links for Network Extension

    NARCIS (Netherlands)

    Aronov, Boris; Buchin, Kevin; Buchin, Maike; Jansen, Bart; Jong, Tom de; Kreveld, Marc van; Löffler, Maarten; Luo, Jun; Silveira, Rodrigo I.; Speckmann, Bettina

    2011-01-01

    Road network analysis can require distance from points that are not on the network themselves. We study the algorithmic problem of connecting a point inside a face (region) of the road network to its boundary while minimizing the detour factor of that point to any point on the boundary of the face.

  15. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network

    OpenAIRE

    Lukas Falat; Dusan Marcek; Maria Durisova

    2016-01-01

    This paper deals with application of quantitative soft computing prediction models into financial area as reliable and accurate prediction models can be very helpful in management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. Authors test the sug...

  16. Partial order approach to compute shortest paths in multimodal networks

    CERN Document Server

    Ensor, Andrew

    2011-01-01

    Many networked systems involve multiple modes of transport. Such systems are called multimodal, and examples include logistic networks, biomedical phenomena, manufacturing processes and telecommunication networks. Existing techniques for determining optimal paths in multimodal networks have required either heuristics or application-specific constraints to obtain tractable problems, removing the multimodal traits of the network during analysis. In this paper weighted coloured-edge graphs are introduced to model multimodal networks, where colours represent the modes of transportation. Optimal paths are selected using a partial order that compares the weights in each colour, resulting in a Pareto optimal set of shortest paths. This approach is shown to be tractable through experimental analyses for random and real multimodal networks without the need to apply heuristics or constraints.
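
    The partial order can be made concrete with a small label-correcting search: each path is summarised by its total weight per colour, and a label is kept at a node only if no other known label is at least as good in every colour. The graph, colours and weights below are made-up illustrative values, and the code is a generic sketch in the spirit of the paper rather than its exact algorithm.

```python
# Pareto-optimal path search on a coloured-edge graph (generic sketch; example
# graph, colours and weights are illustrative assumptions).
from collections import defaultdict, deque

edges = defaultdict(list)                 # node -> list of (neighbour, colour, weight)
for u, v, colour, w in [("s", "a", "road", 2), ("s", "a", "rail", 1),
                        ("a", "t", "road", 1), ("a", "t", "rail", 3),
                        ("s", "t", "road", 5)]:
    edges[u].append((v, colour, w))

COLOURS = ("road", "rail")

def dominates(x, y):
    return all(x[c] <= y[c] for c in COLOURS) and x != y

def pareto_paths(source, target):
    start = {c: 0 for c in COLOURS}
    labels = defaultdict(list)            # node -> non-dominated per-colour cost vectors
    labels[source].append(start)
    queue = deque([(source, start)])
    while queue:
        node, cost = queue.popleft()
        if cost not in labels[node]:      # label was pruned after being queued
            continue
        for nbr, colour, w in edges[node]:
            new = dict(cost)
            new[colour] += w
            if any(dominates(old, new) or old == new for old in labels[nbr]):
                continue
            labels[nbr] = [old for old in labels[nbr] if not dominates(new, old)]
            labels[nbr].append(new)
            queue.append((nbr, new))
    return labels[target]

print(pareto_paths("s", "t"))             # the Pareto set of per-colour path costs
```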

  17. Computing of network tenacity based on modified binary particle swarm optimization algorithm

    Science.gov (United States)

    Shen, Maoxing; Sun, Chengyu

    2017-05-01

    For the rapid calculation of network node tenacity, which characterizes the invulnerability of a network, this paper designs a computational method based on a modified binary particle swarm optimization (BPSO) algorithm. First, to improve the convergence of the BPSO algorithm, an improved bit transfer probability function and location updating formula are adopted. Second, an algorithm for computing the BPSO fitness function value based on breadth-first search is designed. Third, the method for computing network tenacity based on the modified BPSO algorithm is presented. Results of experiments conducted on the Advanced Research Project Agency (ARPA) network and the Tactical Support Communication (TCS) network illustrate that the method is effective and efficient for calculating network tenacity.
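
    For very small graphs, the tenacity that the BPSO method approximates can also be computed exactly by brute force, which is useful as a reference. The sketch below assumes the usual definition T(G) = min over vertex cut-sets S of (|S| + tau(G-S)) / omega(G-S), with tau the order of the largest remaining component and omega the number of components; the 6-node cycle is an illustrative example.

```python
# Brute-force reference computation of network tenacity (not the paper's BPSO
# method). Assumes the usual definition; exponential in |V|, so only for tiny graphs.
from itertools import combinations
import networkx as nx

def tenacity(g):
    best = float("inf")
    nodes = list(g.nodes)
    for k in range(1, len(nodes)):
        for s in combinations(nodes, k):
            h = g.copy()
            h.remove_nodes_from(s)
            if nx.is_connected(h):
                continue                          # S must disconnect the graph
            comps = list(nx.connected_components(h))
            tau = max(len(c) for c in comps)
            best = min(best, (len(s) + tau) / len(comps))
    return best

print(tenacity(nx.cycle_graph(6)))
```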

  18. Syntactic computations in the language network: Characterising dynamic network properties using representational similarity analysis

    Directory of Open Access Journals (Sweden)

    Lorraine Komisarjevsky Tyler

    2013-05-01

    Full Text Available The core human capacity of syntactic analysis involves a left hemisphere network involving the left inferior frontal gyrus (LIFG) and the left posterior middle temporal gyrus (LpMTG) and the anatomical connections between them. Here we use MEG to determine the spatio-temporal properties of syntactic computations in this network. Listeners heard spoken sentences containing a local syntactic ambiguity (e.g. …landing planes…), at the offset of which they heard a disambiguating verb and decided whether it was an acceptable/unacceptable continuation of the sentence. We charted the time-course of processing and resolving syntactic ambiguity by measuring MEG responses from the onset of each word in the ambiguous phrase and the disambiguating word. We used representational similarity analysis (RSA) to characterize the syntactic information represented in the LIFG and LpMTG over time and to investigate their relationship to each other. Testing a variety of lexico-syntactic and ambiguity models against the MEG data, our results suggest early lexico-syntactic responses in the LpMTG and later effects of ambiguity in the LIFG, pointing to a clear differentiation in the functional roles of these two regions. Our results suggest that the LpMTG represents and transmits lexical information to the LIFG, which responds to and resolves the ambiguity.

  19. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment.

    Science.gov (United States)

    Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che

    2014-01-16

    To improve the tedious task of reconstructing gene networks through testing experimentally the possible interactions between genes, it becomes a trend to adopt the automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by the evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel model evolutionary algorithms. To overcome the latter and to speed up the computation, it is advocated to adopt the mechanism of cloud computing as a promising solution: most popular is the method of MapReduce programming model, a fault-tolerant framework to implement parallel algorithms for inferring large gene networks. This work presents a practical framework to infer large gene networks, by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use a well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results have been analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and the computation time can be largely reduced. Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely-used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel computational framework, high

  20. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment

    Science.gov (United States)

    2014-01-01

    Background To improve the tedious task of reconstructing gene networks through testing experimentally the possible interactions between genes, it becomes a trend to adopt the automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by the evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel model evolutionary algorithms. To overcome the latter and to speed up the computation, it is advocated to adopt the mechanism of cloud computing as a promising solution: most popular is the method of MapReduce programming model, a fault-tolerant framework to implement parallel algorithms for inferring large gene networks. Results This work presents a practical framework to infer large gene networks, by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use a well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results have been analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and the computation time can be largely reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely-used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel

  1. ARACHNE: A neural-neuroglial network builder with remotely controlled parallel computing.

    Science.gov (United States)

    Aleksin, Sergey G; Zheng, Kaiyu; Rusakov, Dmitri A; Savtchenko, Leonid P

    2017-03-31

    Creating and running realistic models of neural networks has hitherto been a task for computing professionals rather than experimental neuroscientists. This is mainly because such networks usually engage substantial computational resources, the handling of which requires specific programming skills. Here we put forward a newly developed simulation environment, ARACHNE: it enables an investigator to build and explore cellular networks of arbitrary biophysical and architectural complexity using the logic of NEURON and a simple interface on a local computer or a mobile device. The interface can control, through the internet, an optimized computational kernel installed on a remote computer cluster. ARACHNE can combine neuronal (wired) and astroglial (extracellular volume-transmission driven) network types and adopt realistic cell models from the NEURON library. The program and documentation (current version) are available at the GitHub repository https://github.com/LeonidSavtchenko/Arachne under the MIT License (MIT).

  2. Reviews of computing technology: Securing network applications, Kerberos and RSA

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, S.M.

    1992-06-01

    This paper will focus on the first step in establishing network security, authentication, and describe the basic function of both RSA and Kerberos as used to provide authentication and confidential data transfer services. It will also discuss the Digital Signature Standard and the market acceptance of each. Proper identification of the principals involved in a network dialog is a necessary first step in providing network-wide security comparable to that of stand-alone systems.

  3. Condor-COPASI: high-throughput computing for biochemical networks

    OpenAIRE

    Kent Edward; Hoops Stefan; Mendes Pedro

    2012-01-01

    Background: Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary experti...

  4. Fair Secure Computation with Reputation Assumptions in the Mobile Social Networks

    Directory of Open Access Journals (Sweden)

    Yilei Wang

    2015-01-01

    Full Text Available With the rapid development of mobile devices and wireless technologies, mobile social networks have become increasingly available. People can implement many applications on the basis of mobile social networks, and secure computation, like exchanging information and file sharing, is one such application. Fairness in secure computation, which means that either all parties implement the application or none of them does, is deemed an impossible task in traditional secure computation without mobile social networks. Here we regard the applications in mobile social networks as specific functions and focus on achieving fairness for these functions within mobile social networks in the presence of two rational parties. Rational parties value their utilities when they participate in a secure computation protocol in mobile social networks. Therefore, we introduce reputation derived from mobile social networks into the utility definition so that rational parties have incentives to implement the applications for a higher utility. To the best of our knowledge, the protocol is the first fair secure computation in mobile social networks. Furthermore, it finishes within constant rounds and allows both parties to know the terminal round.

  5. Cloud computing and its applications in the world of networking

    Directory of Open Access Journals (Sweden)

    Puja Dhar

    2012-01-01

    Full Text Available The paper discusses one of today's most discussed topics: cloud computing, which is fast becoming a buzzword. The paper explains the concept, the services provided by cloud computing, and the different service providers. It also examines how this technology can be harnessed to address business challenges, reducing costs and maintaining competitiveness.

  6. Experimental realization of an entanglement access network and secure multi-party computation

    Science.gov (United States)

    Chang, X.-Y.; Deng, D.-L.; Yuan, X.-X.; Hou, P.-Y.; Huang, Y.-Y.; Duan, L.-M.

    2016-07-01

    To construct a quantum network with many end users, it is critical to have a cost-efficient way to distribute entanglement over different network ends. We demonstrate an entanglement access network, where the expensive resource, the entangled photon source at the telecom wavelength and the core communication channel, is shared by many end users. Using this cost-efficient entanglement access network, we report experimental demonstration of a secure multiparty computation protocol, the privacy-preserving secure sum problem, based on the network quantum cryptography.

  7. Experimental realization of secure multi-party computation in an entanglement access to network

    CERN Document Server

    Chang, X Y; Yuan, X X; Hou, P Y; Huang, Y Y; Duan, L M

    2015-01-01

    To construct a quantum network with many end users, it is critical to have a cost-efficient way to distribute entanglement over different network ends. We demonstrate an entanglement access network, where the expensive resource, the entangled photon source at the telecom wavelength and the core communication channel, is shared by many end users. Using this cost-efficient entanglement access network, we report experimental demonstration of a secure multiparty computation protocol, the privacy-preserving secure sum problem, based on the network quantum cryptography.

  8. Experimental realization of an entanglement access network and secure multi-party computation

    Science.gov (United States)

    Chang, X.-Y.; Deng, D.-L.; Yuan, X.-X.; Hou, P.-Y.; Huang, Y.-Y.; Duan, L.-M.

    2016-01-01

    To construct a quantum network with many end users, it is critical to have a cost-efficient way to distribute entanglement over different network ends. We demonstrate an entanglement access network, where the expensive resource, the entangled photon source at the telecom wavelength and the core communication channel, is shared by many end users. Using this cost-efficient entanglement access network, we report experimental demonstration of a secure multiparty computation protocol, the privacy-preserving secure sum problem, based on the network quantum cryptography. PMID:27404561

  9. Hardware Neural Networks Modeling for Computing Different Performance Parameters of Rectangular, Circular, and Triangular Microstrip Antennas

    Directory of Open Access Journals (Sweden)

    Taimoor Khan

    2014-01-01

    Full Text Available In the last decade, neural network-based modeling has been used for computing different performance parameters of microstrip antennas because of its learning and generalization features. Most of the created neural models are based on software simulation. Since neural networks are inherently massively parallel, parallel hardware needs to be created to build faster computing machines that take advantage of this parallelism. This paper demonstrates a generalized neural network model implemented on a field programmable gate array (FPGA)-based reconfigurable hardware platform for computing different performance parameters of microstrip antennas. The proposed approach thus provides a platform for developing low-cost neural network-based FPGA simulators for microwave applications. The results obtained by this approach are also in very good agreement with the measured results available in the literature.

  10. Main control computer security model of closed network systems protection against cyber attacks

    Science.gov (United States)

    Seymen, Bilal

    2014-06-01

    The model brings data input/output under control in closed network systems, maintains the system securely, and controls the flow of information through the Main Control Computer, which also protects network traffic against cyber-attacks. The network can be controlled single-handedly thanks to a system designed to enable network users to enter data into, or extract data from, the system securely, with the aim of minimizing security gaps. Moreover, data input/output records can be kept by means of the user account assigned to each user, and retroactive tracking can be carried out if requested. Because the measures that would need to be taken for each computer on the network to ensure cyber security are costly, this model is intended to provide a cost-effective working environment, provided that the Main Control Computer has up-to-date hardware.

  11. Biological modelling of a computational spiking neural network with neuronal avalanches

    Science.gov (United States)

    Li, Xiumin; Chen, Qing; Xue, Fangzheng

    2017-05-01

    In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue `Mathematical methods in medicine: neuroscience, cardiology and pathology'.

  12. Distributed computing methodology for training neural networks in an image-guided diagnostic application.

    Science.gov (United States)

    Plagianakos, V P; Magoulas, G D; Vrahatis, M N

    2006-03-01

    Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, training neural networks utilizing various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
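
    The data-partitioning idea can be sketched with a plain linear model standing in for the colonoscopy network and Python's multiprocessing standing in for the parallel virtual machine: each worker computes the error and gradient on its shard, and the master sums the partial results before the update. All sizes and the learning rate are illustrative assumptions.

```python
# Sketch of distributed error/gradient evaluation over a partitioned training set.
# A linear least-squares model and multiprocessing stand in for the paper's neural
# network and parallel virtual machine (illustrative assumptions).
import numpy as np
from multiprocessing import Pool

def partial_grad(args):
    w, X, y = args
    residual = X @ w - y                              # shard-local error
    return X.T @ residual, 0.5 * np.sum(residual ** 2)

def parallel_step(w, shards, lr=1e-3):
    with Pool(len(shards)) as pool:
        results = pool.map(partial_grad, [(w, X, y) for X, y in shards])
    grad = sum(g for g, _ in results)                 # sum partial gradients
    loss = sum(l for _, l in results)
    return w - lr * grad, loss

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    w_true = np.arange(1.0, 6.0)
    y = X @ w_true + 0.01 * rng.normal(size=1000)
    shards = [(X[i::4], y[i::4]) for i in range(4)]   # partition across 4 workers
    w = np.zeros(5)
    for _ in range(50):
        w, loss = parallel_step(w, shards)
    print(np.round(w, 2))                             # approaches w_true
```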

  13. Assessing the efficiency of information protection systems in the computer systems and networks

    OpenAIRE

    Nachev, Atanas; Zhelezov, Stanimir

    2015-01-01

    The specific features of information protection systems in computer systems and networks require the development of non-trivial methods for their analysis and assessment. This paper presents attempts at solutions in this area.

  14. Energy-Efficient Caching for Mobile Edge Computing in 5G Networks

    National Research Council Canada - National Science Library

    Zhaohui Luo; Minghui LiWang; Zhijian Lin; Lianfen Huang; Xiaojiang Du; Mohsen Guizani

    2017-01-01

    Mobile Edge Computing (MEC), which is considered a promising and emerging paradigm to provide caching capabilities in proximity to mobile devices in 5G networks, enables fast, popular content delivery of delay-sensitive...

  15. Computational Data Modeling for Network-Constrained Moving Objects

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Speicys, L.; Kligys, A.

    2003-01-01

    Advances in wireless communications, positioning technology, and other hardware technologies combine to enable a range of applications that use a mobile user’s geo-spatial data to deliver online, location-enhanced services, often referred to as location-based services. Assuming that the service...... users are constrained to a transportation network, this paper develops data structures that model road networks, the mobile users, and stationary objects of interest. The proposed framework encompasses two supplementary road network representations, namely a two-dimensional representation and a graph...

  16. The Watts-Strogatz network model developed by including degree distribution: theory and computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y W [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Zhang, L F [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Huang, J P [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China)

    2007-07-20

    By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property.
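
    For comparison, the small-world quantities discussed above are easy to compute for the original Watts-Strogatz model with networkx; the paper's degree-distribution extension is not reproduced here, and the parameter values are illustrative.

```python
# Characteristic path length L and clustering coefficient C of the original
# Watts-Strogatz model as the rewiring probability varies (illustrative sizes;
# the paper's degree-distribution extension is not implemented here).
import networkx as nx

n, k = 1000, 10
for p in (0.0, 0.01, 0.1, 1.0):
    g = nx.connected_watts_strogatz_graph(n, k, p, seed=1)
    L = nx.average_shortest_path_length(g)
    C = nx.average_clustering(g)
    print(f"p = {p:5.2f}   L = {L:6.2f}   C = {C:.3f}")
```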

  17. Computationally efficient locally-recurrent neural networks for online signal processing

    CERN Document Server

    Hussain, A; Shim, I

    1999-01-01

    A general class of computationally efficient locally recurrent networks (CERN) is described for real-time adaptive signal processing. The structure of the CERN is based on linear-in-the-parameters single-hidden-layered feedforward neural networks such as the radial basis function (RBF) network, the Volterra neural network (VNN) and the functionally expanded neural network (FENN), adapted to employ local output feedback. The corresponding learning algorithms are derived and key structural and computational complexity comparisons are made between the CERN and conventional recurrent neural networks. Two case studies are performed involving the real-time adaptive nonlinear prediction of real-world chaotic, highly non-stationary laser time series and an actual speech signal, which show that a recurrent FENN based adaptive CERN predictor can significantly outperform the corresponding feedforward FENN and conventionally employed linear adaptive filtering models. (13 refs).

  18. Towards a Queueing-Based Framework for In-Network Function Computation

    CERN Document Server

    Banerjee, Siddhartha; Shakkottai, Sanjay

    2011-01-01

    We seek to develop network algorithms for function computation in sensor networks. Specifically, we want dynamic joint aggregation, routing, and scheduling algorithms that have analytically provable performance benefits due to in-network computation as compared to simple data forwarding. To this end, we define a class of functions, the Fully-Multiplexible functions, which includes several functions such as parity, MAX, and kth-order statistics. For such functions we exactly characterize the maximum achievable refresh rate of the network in terms of an underlying graph primitive, the min-mincut. In wireline networks, we show that the maximum refresh rate is achievable by a simple algorithm that is dynamic, distributed, and only dependent on local information. In the case of wireless networks, we provide a MaxWeight-like algorithm with dynamic flow splitting, which is shown to be throughput-optimal.

  19. A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications

    OpenAIRE

    2016-01-01

    In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, Neural Fuzzy Inference System (ANFIS) and Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the...

  20. Investigate the Computer Information Network Security Technology and the Development Direction

    OpenAIRE

    Ping Teng

    2017-01-01

    After China’s accession to the WTO, the country’s computer information network security technology has developed rapidly, bringing many conveniences to people’s lives and work and indirectly changing their daily life and working patterns. For the country as a whole, informatization is the inevitable direction of development, while the using process of computer information network security technology in the society still exist ma...

  1. Design and Implementation of Online Experimental Platform for Computer Networks Course

    Institute of Scientific and Technical Information of China (English)

    WANG Ben; ZHANG Tao

    2012-01-01

    Practical training is very important for students learning computer networks, but building a real laboratory is costly and resource-constrained. In this paper, we present an online experimental platform for a computer networks course based on the Dynamips simulator. Instructors and students can access the platform through the IE browser to manage and take router experiments. Deployment and testing show that the platform is effective and flexible.

  2. Requirements analysis and design for implementation of a satellite link for a local area computer network

    OpenAIRE

    Lorentzen, Richard B.

    1991-01-01

    Approved for public release; distribution is unlimited The purpose of this thesis is to provide naval computer students with a basic knowledge on Very Small Aperture Terminal (VSAT) satellite technology and to define the hardware and software requirements at the interface between a VSAT and a Local Area Network (LAN). By restricting a computer network to terrestrial links, a vast amount of knowledge is not accessed because either the terrestrial links can't access the information or the...

  3. From devil to angel, transmission lines boost parallel computing of linear resistor networks

    CERN Document Server

    Wei, Fei

    2009-01-01

    Transmission lines are always big trouble for integrated circuit designers; however, they can be of great help for the parallel computation of extremely large linear resistor networks. In this paper, we introduce the virtual transmission method (VTM), which brings virtual transmission lines into linear resistor networks to achieve distributed and asynchronous parallel computing in the virtual time domain. Numerical experiments show that VTM can run efficiently on 2D or 3D microprocessors with an arbitrary number of cores.

  4. The design and calibration of a simulation model of a star computer network

    CERN Document Server

    Gomaa, H

    1982-01-01

    A simulation model of the CERN (European Organization for Nuclear Research) SPS star computer network is described. The model concentrates on simulating the message handling computer, through which all messages in the network pass. The paper describes the main features of the model, the transfer time parameters in the model and how performance measurements were used to assist in the calibration of the model.

  5. Defining and Computing Alternative Routes in Road Networks

    CERN Document Server

    Dees, Jonathan; Sanders, Peter; Bader, Roland

    2010-01-01

    Every human likes choices. But today's fast route planning algorithms usually compute just a single route between source and target. There are beginnings to compute alternative routes, but this topic has not been studied thoroughly. Often, the aspect of meaningful alternative routes is neglected from a human point of view. We fill in this gap by suggesting mathematical definitions for such routes. As a second contribution we propose heuristics to compute them, as this is NP-hard in general.

  6. Using new edges for anomaly detection in computer networks

    Energy Technology Data Exchange (ETDEWEB)

    Neil, Joshua Charles

    2017-07-04

    Creation of new edges in a network may be used as an indication of a potential attack on the network. Historical data of a frequency with which nodes in a network create and receive new edges may be analyzed. Baseline models of behavior among the edges in the network may be established based on the analysis of the historical data. A new edge that deviates from a respective baseline model by more than a predetermined threshold during a time window may be detected. The new edge may be flagged as potentially anomalous when the deviation from the respective baseline model is detected. Probabilities for both new and existing edges may be obtained for all edges in a path or other subgraph. The probabilities may then be combined to obtain a score for the path or other subgraph. A threshold may be obtained by calculating an empirical distribution of the scores under historical conditions.
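
    The scoring idea generalises naturally to a small sketch: estimate per-edge probabilities from historical counts (new edges get a small smoothed probability), score a path by summing -log(probability) over its edges, and flag it when the score exceeds a threshold drawn from the empirical distribution of historical path scores. The counts, paths and smoothing constant below are made-up illustrative values, not the patented method itself.

```python
# Toy sketch of new-edge/path anomaly scoring (illustrative counts, paths and
# smoothing; not the patented method).
import math
import numpy as np

history = {("a", "b"): 120, ("b", "c"): 80, ("c", "d"): 60, ("a", "c"): 5}
total = sum(history.values())

def edge_score(u, v, smoothing=0.5):
    p = (history.get((u, v), 0) + smoothing) / (total + smoothing)
    return -math.log(p)                   # rare or never-seen edges score high

def path_score(path):
    return sum(edge_score(u, v) for u, v in zip(path, path[1:]))

# Threshold from the empirical distribution of scores on historical paths.
normal_paths = [["a", "b", "c"], ["b", "c", "d"], ["a", "c", "d"]]
threshold = np.percentile([path_score(p) for p in normal_paths], 99)

candidate = ["a", "b", "x", "d"]          # contains the never-seen edges b->x, x->d
score = path_score(candidate)
print(score, "anomalous" if score > threshold else "ok")
```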

  7. Computing autocatalytic sets to unravel inconsistencies in metabolic network reconstructions

    DEFF Research Database (Denmark)

    Schmidt, R.; Waschina, S.; Boettger-Schmidt, D.

    2015-01-01

    MOTIVATION: Genome-scale metabolic network reconstructions have been established as a powerful tool for the prediction of cellular phenotypes and metabolic capabilities of organisms. In recent years, the number of network reconstructions has been constantly increasing, mostly because of the availability of novel (semi-)automated procedures, which enabled the reconstruction of metabolic models based on individual genomes and their annotation. The resulting models are widely used in numerous applications. However, the accuracy and predictive power of network reconstructions are commonly limited by inherent inconsistencies and gaps. RESULTS: Here we present a novel method to validate metabolic network reconstructions based on the concept of autocatalytic sets. Autocatalytic sets correspond to collections of metabolites that, besides enzymes and a growth medium, are required to produce all biomass...

  8. Compute-and-forward on a line network with random access

    NARCIS (Netherlands)

    Ren, Zhijie; Goseling, Jasper; Weber, Jos; Gastpar, Michael

    2013-01-01

    Signal superposition and broadcast are important features of the wireless medium. Compute-and-Forward, also known as Physical Layer Network Coding (PLNC), is a technique exploiting these features in order to improve performance of wireless networks. More precisely, it allows wireless terminals to re

  9. Application Research of Computer Network Technology in Mining Railway Transport Management System

    Institute of Scientific and Technical Information of China (English)

    余静; 王振军; 才庆祥

    2002-01-01

    This paper discusses the necessity of establishing a computer network in a mining railway transport management system. A network structure and a system security design, suited to the actual development conditions of a mining area, are put forward, and an evaluation of the system is given.

  10. Structural Reproduction of Social Networks in Computer-Mediated Communication Forums

    Science.gov (United States)

    Stefanone, M. A.; Gay, G.

    2008-01-01

    This study explores the relationship between the structure of an existing social network and the structure of an emergent discussion-board network in an undergraduate university class. Thirty-one students were issued with laptop computers that remained in their possession for the duration of the semester. While using these machines, participants'…

  11. A Comparison of the Educational Research Forum and Other Computer Networks.

    Science.gov (United States)

    Pierce, Jean W.

    Designed to assist educators in selecting a computer network, this paper contains a listing and description of seven networks that currently exist specifically for educators, and compares the quality of their services in the areas of accessibility, responsiveness, cost, text editing, humanization, guidance and documentation, control, forgiveness…

  12. Factors Impacting Adult Learner Achievement in a Technology Certificate Program on Computer Networks

    Science.gov (United States)

    Delialioglu, Omer; Cakir, Hasan; Bichelmeyer, Barbara A.; Dennis, Alan R.; Duffy, Thomas M.

    2010-01-01

    This study investigates the factors impacting the achievement of adult learners in a technology certificate program on computer networks. We studied 2442 participants in 256 institutions. The participants were older than age 18 and were enrolled in the Cisco Certified Network Associate (CCNA) technology training program as "non-degree" or…

  13. Can artificial neural networks provide an "expert's" view of medical students performances on computer based simulations?

    OpenAIRE

    Stevens, R. H.; K. Najafi

    1992-01-01

    Artificial neural networks were trained to recognize the test selection patterns of students' successful solutions to seven immunology computer based simulations. When new student's test selections were presented to the trained neural network, their problem solutions were correctly classified as successful or non-successful > 90% of the time. Examination of the neural networks output weights after each test selection revealed a progressive increase for the relevant problem suggesting that a s...

  14. A Social Network Approach to Provisioning and Management of Cloud Computing Services for Enterprises

    DEFF Research Database (Denmark)

    Kuada, Eric; Olesen, Henning

    2011-01-01

    This paper proposes a social network approach to the provisioning and management of cloud computing services, termed Opportunistic Cloud Computing Services (OCCS), for enterprises, and presents the research issues that need to be addressed for its implementation. We hypothesise that OCCS will facilitate the adoption process of cloud computing services by enterprises. OCCS deals with the concept of enterprises taking advantage of cloud computing services to meet their business needs without having to pay or paying a minimal fee for the services. The OCCS network will be modelled and implemented as a social network of enterprises collaborating strategically for the provisioning and consumption of cloud computing services without entering into any business agreements. We conclude that it is possible to configure current cloud service technologies and management tools for OCCS but there is a need...

  15. Computing the SKT Reliability of Acyclic Directed Networks Using Factoring Method

    Institute of Scientific and Technical Information of China (English)

    KONG Fanjia; WANG Guangxing

    1999-01-01

    This paper presents a factoring algorithm for computing source-to-K-terminal (SKT) reliability, the probability that a source s can send a message to a specified set of terminals K, in acyclic directed networks (AD-networks) in which both nodes and edges can fail. Based on the pivotal decomposition theorem, a new formula is derived for computing the SKT reliability of AD-networks. By establishing a topological property of AD-networks, it is shown that the SKT reliability of AD-networks can be computed by recursively applying this formula. Two new reliability-preserving reductions are also introduced. The recursion tree generated by the presented algorithm has at most 2(|V| - |K| - |C|) leaf nodes, where |V| and |K| are the numbers of nodes and terminals, respectively, while |C| is the number of nodes satisfying some specified conditions. The computational complexity of the new algorithm is O(|E||V|^2(|V| - |K| - |C|)) in the worst case, where |E| is the number of edges. For source-to-all-terminal (SAT) reliability, the computational complexity is O(|E|). Comparison of the new algorithm with existing ones indicates that the new algorithm is more efficient for computing the SKT reliability of AD-networks.
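    The factoring idea behind this record can be illustrated on a much simpler case. The sketch below applies pivotal decomposition to two-terminal (s-t) reliability of a tiny directed graph in which only edges fail; it is a generic illustration of the factoring theorem, not the paper's AD-network algorithm with node failures, and the toy graph and probabilities are invented.

    ```python
    def reachable(edges, s, t):
        """Depth-first check that t is reachable from s over the given directed edges."""
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            if u == t:
                return True
            for a, b in edges:
                if a == u and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return False

    def st_reliability(edges, probs, s, t):
        """Pivotal decomposition (factoring) on edges:
        R = p_e * R(e certainly works) + (1 - p_e) * R(e removed)."""
        if not reachable(edges, s, t):
            return 0.0  # even with every surviving edge working, t is unreachable
        uncertain = [e for e in edges if probs[e] < 1.0]
        if not uncertain:
            return 1.0  # all remaining edges certainly work and t is reachable
        e = uncertain[0]
        works = dict(probs)
        works[e] = 1.0                               # branch 1: edge e works
        fails_edges = [x for x in edges if x != e]   # branch 2: edge e fails
        fails = {x: probs[x] for x in fails_edges}
        return (probs[e] * st_reliability(edges, works, s, t)
                + (1 - probs[e]) * st_reliability(fails_edges, fails, s, t))

    # toy acyclic directed network: s -> t directly, plus a detour s -> a -> t
    edges = [("s", "t"), ("s", "a"), ("a", "t")]
    probs = {e: 0.9 for e in edges}
    print(st_reliability(edges, probs, "s", "t"))  # 0.9 + 0.1 * 0.9 * 0.9 = 0.981
    ```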

  16. A Topology Designing System for a Computer Network

    Institute of Scientific and Technical Information of China (English)

    候正风

    1998-01-01

    In this paper, some problems in the topology design of networks are discussed. An exact formula to calculate the delay of a line is provided. In the design, the key problem is how to find efficient heuristic algorithms. To solve this problem, a nonlinear discrete-capacity assignment heuristic and a hybrid perturbation heuristic are suggested. Then a practical CAD system that helps design the topology of a network is introduced.

  17. Linear approximation model network and its formation via evolutionary computation

    Indian Academy of Sciences (India)

    Yun Li; Kay Chen Tan

    2000-04-01

    To overcome the deficiency of 'local model network' (LMN) techniques, an alternative 'linear approximation model' (LAM) network approach is proposed. Such a network models a nonlinear or practical system with multiple linear models fitted along operating trajectories, where individual models are simply networked through output or parameter interpolation. The linear models are valid for the entire operating trajectory and hence overcome the local validity of LMN models, which impose the predetermination of a scheduling variable that predicts characteristic changes of the nonlinear system. LAMs can be evolved from sampled step response data directly, eliminating the need for local linearisation upon a pre-model using derivatives of the nonlinear system. The structural difference between a LAM network and an LMN is that the overall model of the latter is a parameter-varying system and hence nonlinear, while the former remains linear time-invariant (LTI). Hence, existing LTI and transfer function theory applies to a LAM network, which is therefore easy to use for control system design. Validation results show that the proposed method offers a simple, transparent and accurate multivariable modelling technique for nonlinear systems.

  18. Using Network Calculus to compute end-to-end delays in SpaceWire networks

    OpenAIRE

    Ferrandiz, Thomas; Frances, Fabrice; Fraboul, Christian

    2011-01-01

    The SpaceWire network standard is promoted by the ESA and is scheduled to be used as the sole on-board network for future satellites. This network uses a wormhole routing mechanism that can lead to packet blocking in routers and consequently to variable end-to-end delays. As the network will be shared by real-time and non real- time traffic, network designers require a tool to check that temporal constraints are verified for all the critical messages. Network Calculus can be used for evaluati...

  19. THE IMPROVEMENT OF COMPUTER NETWORK PERFORMANCE WITH BANDWIDTH MANAGEMENT IN KEMURNIAN II SENIOR HIGH SCHOOL

    Directory of Open Access Journals (Sweden)

    Bayu Kanigoro

    2012-05-01

    Full Text Available This research describes the improvement of computer network performance with bandwidth management at Kemurnian II Senior High School. The main issue addressed is the absence of bandwidth division among computers, so that a user who is downloading data absorbs all of the available bandwidth and other users get none. In addition, IP addresses had been assigned per room (computer, teacher and administration rooms) to support the learning process at Kemurnian II Senior High School, so a wireless network was needed. The method consists of on-site observation and interviews with the related parties at Kemurnian II Senior High School, analysis of the existing network, and the design of a new topology that includes the wireless network, its configuration, and bandwidth separation and limitation on a MikroTik router. The result is that network traffic at Kemurnian II Senior High School can be shared evenly among users; IX and IIX traffic are separated, which improves network access speed at the school, together with the implementation of the wireless network. Keywords: Bandwidth Management; Wireless Network

  20. High performance computing network for cloud environment using simulators

    CERN Document Server

    Singh, N Ajith

    2012-01-01

    Cloud computing is the next generation of computing. Adopting cloud computing is like signing up for a new kind of website: the GUI that controls the cloud directly controls the hardware resources and the application. The difficult part of cloud computing is deploying in a real environment. It is hard to know the exact cost and requirements until the service is actually bought, or whether it will support an existing application from a traditional data center or require a new application designed for the cloud computing environment. Security, latency and fault tolerance are some of the parameters that need careful attention before deployment, yet they normally become known only after deploying; with simulation, however, the experiment can be carried out before deployment to the real environment. Through simulation we can understand the real cloud computing environment and, after successful results, start deploying the application in it. By using the simulator it...

  1. Uncovering the 'Spy' Network: Is Spyware Watching Your Library Computers?

    Science.gov (United States)

    Ferrer, Daniel Fidel; Mead, Mary

    2003-01-01

    Describes spyware, discusses how it gets on a computer. Explains how spyware can be useful for parents, employers, and libraries. Discusses how spyware is more often used for others' gain or for surveillance without notification, how it can go undetected, and how libraries can help keep computers and patrons protected from remote installation of…

  2. Allocation Strategies of Virtual Resources in Cloud-Computing Networks

    Directory of Open Access Journals (Sweden)

    D.Giridhar Kumar

    2014-11-01

    Full Text Available In distributed computing, cloud computing facilitates a pay-per-use model according to user demand and requirements. A cloud is formed from a collection of virtual machines, including both computational and storage resources. The main objective of cloud computing is to provide efficient access to remote and geographically distributed resources. Cloud computing faces many challenges; one of them is the scheduling/allocation problem. Scheduling refers to a set of policies that control the order of work to be performed by a computer system. A good scheduler adapts its allocation strategy according to the changing environment and the type of task. In this paper we examine FCFS and Round Robin scheduling, in addition to Linear Integer Programming, as approaches to resource allocation.
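    For readers unfamiliar with the two baseline policies compared in this record, the sketch below simulates FCFS and Round Robin completion times for a handful of requests on a single resource; the request names, burst lengths and time quantum are invented for illustration and this is not the paper's allocation model.

    ```python
    from collections import deque

    def fcfs(requests):
        """First-Come-First-Served: each request runs to completion in arrival order."""
        clock, completion = 0, {}
        for name, burst in requests:
            clock += burst
            completion[name] = clock
        return completion

    def round_robin(requests, quantum):
        """Round Robin: each request gets at most `quantum` time units per turn."""
        clock, completion = 0, {}
        queue = deque(requests)
        while queue:
            name, remaining = queue.popleft()
            run = min(quantum, remaining)
            clock += run
            if remaining > run:
                queue.append((name, remaining - run))   # not finished: go to back of queue
            else:
                completion[name] = clock
        return completion

    requests = [("vm-req-1", 5), ("vm-req-2", 3), ("vm-req-3", 8)]  # hypothetical bursts
    print(fcfs(requests))            # {'vm-req-1': 5, 'vm-req-2': 8, 'vm-req-3': 16}
    print(round_robin(requests, 2))  # {'vm-req-2': 9, 'vm-req-1': 12, 'vm-req-3': 16}
    ```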

  3. A comparison of phylogenetic network methods using computer simulation.

    Directory of Open Access Journals (Sweden)

    Steven M Woolley

    Full Text Available BACKGROUND: We present a series of simulation studies that explore the relative performance of several phylogenetic network approaches (statistical parsimony, split decomposition, union of maximum parsimony trees, neighbor-net, simulated history recombination upper bound, median-joining, reduced median-joining and minimum spanning network) compared to standard tree approaches (neighbor-joining and maximum parsimony) in the presence and absence of recombination. PRINCIPAL FINDINGS: In the absence of recombination, all methods recovered the correct topology and branch lengths nearly all of the time when the substitution rate was low, except for minimum spanning networks, which did considerably worse. At a higher substitution rate, maximum parsimony and union of maximum parsimony trees were the most accurate. With recombination, the ability to infer the correct topology was halved for all methods and no method could accurately estimate branch lengths. CONCLUSIONS: Our results highlight the need for more accurate phylogenetic network methods and the importance of detecting and accounting for recombination in phylogenetic studies. Furthermore, we provide useful information for choosing a network algorithm and a framework in which to evaluate improvements to existing methods and novel algorithms developed in the future.

  4. Cloud Computing Application of Personal Information's Security in Network Sales-channels

    Directory of Open Access Journals (Sweden)

    Sun Qiong

    2013-07-01

    Full Text Available With the growth of Internet sales, network users are increasingly demanding about the security of their personal information. Existing network sales channels carry personal-information security risks and are vulnerable to hacker attacks. Taking full advantage of cloud security management strategies, a cloud computing security management model is introduced for protecting personal information in network sales applications, in order to solve the problem of information leakage. We then propose a membership-based policy for selecting cloud service providers. By exploring the prospects of cloud computing in Internet sales, we aim to solve the problem of personal information security in this channel.

  5. A critical role for network structure in seizure onset: a computational modelling approach

    Directory of Open Access Journals (Sweden)

    George ePetkov

    2014-12-01

    Full Text Available Recent clinical work has implicated network structure as critically important in the initiation of seizures in people with idiopathic generalized epilepsies. In line with this idea, functional networks derived from the electroencephalogram (EEG) at rest have been shown to be significantly different in people with generalized epilepsy compared to controls. In particular, the mean node degree of networks from the epilepsy cohort was found to be statistically significantly higher than those of controls. However, the mechanisms by which these network differences can support recurrent transitions into seizures remain unclear. In this study we use a computational model of the transition into seizure dynamics to explore the dynamic consequences of these differences in functional networks. We demonstrate that networks with higher mean node degree are more prone to generating seizure dynamics in the model and therefore suggest a mechanism by which increased mean node degree of brain networks can cause heightened ictogenicity.
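    The network statistic at the centre of this record, mean node degree, is simple to compute once a functional network has been thresholded into a graph. The sketch below uses NetworkX on random graphs that merely stand in for EEG-derived functional networks; the node count and edge probabilities are illustrative assumptions, not data from the study.

    ```python
    import networkx as nx

    def mean_node_degree(graph):
        """Average number of connections per node in an undirected functional network."""
        degrees = [degree for _, degree in graph.degree()]
        return sum(degrees) / len(degrees)

    # stand-ins for thresholded EEG functional networks (19 channels is a common montage)
    control_like = nx.erdos_renyi_graph(n=19, p=0.20, seed=1)
    patient_like = nx.erdos_renyi_graph(n=19, p=0.35, seed=1)

    print(mean_node_degree(control_like), mean_node_degree(patient_like))
    ```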

  6. Correlation between Academic and Skills-Based Tests in Computer Networks

    Science.gov (United States)

    Buchanan, William

    2006-01-01

    Computing-related programmes and modules have many problems, especially related to large class sizes, large-scale plagiarism, module franchising, and an increased requirement from students for increased amounts of hands-on, practical work. This paper presents a practical computer networks module which uses a mixture of online examinations and a…

  7. Cloud and fog computing in 5G mobile networks emerging advances and applications

    CERN Document Server

    Markakis, Evangelos; Mavromoustakis, Constandinos X; Pallis, Evangelos

    2017-01-01

    This book focuses on the challenges and solutions related to cloud and fog computing for 5G mobile networks, and presents novel approaches to the frameworks and schemes that carry out storage, communication, computation and control in the fog/cloud paradigm.

  8. On Computing Compression Trees for Data Collection in Sensor Networks

    CERN Document Server

    Li, Jian; Khuller, Samir

    2009-01-01

    We address the problem of efficiently gathering correlated data from a wired or a wireless sensor network, with the aim of designing algorithms with provable optimality guarantees, and understanding how close we can get to the known theoretical lower bounds. Our proposed approach is based on finding an optimal or a near-optimal compression tree for a given sensor network: a compression tree is a directed tree over the sensor network nodes such that the value of a node is compressed using the value of its parent. We consider this problem under different communication models, including the broadcast communication model that enables many new opportunities for energy-efficient data collection. We draw connections between the data collection problem and a previously studied graph concept, called weakly connected dominating sets, and we use this to develop novel approximation algorithms for the problem. We present comparative results on several synthetic and real-world datasets showing that our al...
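    To make the compression-tree idea concrete, the sketch below builds a BFS tree over a toy sensor graph and encodes each node's reading as a difference from its parent's reading, so correlated readings produce small residuals. This is only an illustration of the concept; it is not the approximation algorithm of the paper, and the topology and readings are invented.

    ```python
    from collections import deque

    def bfs_tree(adjacency, root):
        """Return parent pointers of a BFS tree rooted at `root`."""
        parent, queue = {root: None}, deque([root])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in parent:
                    parent[v] = u
                    queue.append(v)
        return parent

    def delta_encode(readings, parent):
        """Encode each node's reading as a difference from its parent's reading;
        correlated readings then yield small residuals to transmit."""
        return {node: readings[node] if p is None else readings[node] - readings[p]
                for node, p in parent.items()}

    adjacency = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}   # toy sensor network
    readings = {0: 21.4, 1: 21.6, 2: 21.3, 3: 21.7}      # correlated temperatures
    parent = bfs_tree(adjacency, root=0)
    print(delta_encode(readings, parent))
    ```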

  9. Computational Data Modeling for Network-Constrained Moving Objects

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Speicys, L.; Kligys, A.

    2003-01-01

    Advances in wireless communications, positioning technology, and other hardware technologies combine to enable a range of applications that use a mobile user's geo-spatial data to deliver online, location-enhanced services, often referred to as location-based services. Assuming that the service users are constrained to a transportation network, this paper develops data structures that model road networks, the mobile users, and stationary objects of interest. The proposed framework encompasses two supplementary road network representations, namely a two-dimensional representation and a graph representation. These capture aspects of the problem domain that are required in order to support the querying that underlies the envisioned location-based services.

  10. An efficient algorithm for computing attractors of synchronous and asynchronous Boolean networks.

    Directory of Open Access Journals (Sweden)

    Desheng Zheng

    Full Text Available Biological networks, such as genetic regulatory networks, often contain positive and negative feedback loops that settle down to dynamically stable patterns. Identifying these patterns, the so-called attractors, can provide important insights for biologists to understand the molecular mechanisms underlying many coordinated cellular processes such as cellular division, differentiation, and homeostasis. Both synchronous and asynchronous Boolean networks have been used to simulate genetic regulatory networks and identify their attractors. Common methods of computing attractors start with a randomly selected initial state and finish with an exhaustive search of the state space of a network. However, the time complexity of these methods grows exponentially with respect to the number and length of attractors. Here, we build two algorithms to achieve the computation of attractors in synchronous and asynchronous Boolean networks. For the synchronous scenario, combining iterative methods and reduced ordered binary decision diagrams (ROBDD), we propose an improved algorithm to compute attractors. In the second algorithm, the attractors of synchronous Boolean networks are used with asynchronous Boolean translation functions to derive the attractors of the asynchronous scenario. The proposed algorithms are implemented in a procedure called geneFAtt. Compared to existing tools such as genYsis, geneFAtt is significantly [Formula: see text] faster in computing attractors for empirical experimental systems. The software package is available at https://sites.google.com/site/desheng619/download.
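    As a point of reference for what such tools compute, the sketch below finds the attractors of a tiny synchronous Boolean network by following every one of the 2^n states until a cycle repeats. This brute-force enumeration is the exponential baseline the record argues against, not the ROBDD-based algorithm of geneFAtt, and the three-gene update rules are invented.

    ```python
    from itertools import product

    def update(state):
        """Invented three-gene synchronous update rule; state is a tuple (x0, x1, x2)."""
        x0, x1, x2 = state
        return (x1, int(x0 and not x2), int(x0 or x1))

    def attractors(update_fn, n_genes):
        """Follow every initial state until a state repeats; collect the distinct cycles."""
        found = set()
        for start in product((0, 1), repeat=n_genes):
            seen, state = {}, start
            while state not in seen:
                seen[state] = len(seen)
                state = update_fn(state)
            first = seen[state]                      # index where the cycle begins
            cycle = [s for s, i in seen.items() if i >= first]
            found.add(tuple(sorted(cycle)))          # canonical form of the attractor
        return found

    for attractor in attractors(update, 3):
        print(attractor)
    ```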

  11. An Efficient Algorithm for Computing Attractors of Synchronous And Asynchronous Boolean Networks

    Science.gov (United States)

    Zheng, Desheng; Yang, Guowu; Li, Xiaoyu; Wang, Zhicai; Liu, Feng; He, Lei

    2013-01-01

    Biological networks, such as genetic regulatory networks, often contain positive and negative feedback loops that settle down to dynamically stable patterns. Identifying these patterns, the so-called attractors, can provide important insights for biologists to understand the molecular mechanisms underlying many coordinated cellular processes such as cellular division, differentiation, and homeostasis. Both synchronous and asynchronous Boolean networks have been used to simulate genetic regulatory networks and identify their attractors. Common methods of computing attractors start with a randomly selected initial state and finish with an exhaustive search of the state space of a network. However, the time complexity of these methods grows exponentially with respect to the number and length of attractors. Here, we build two algorithms to achieve the computation of attractors in synchronous and asynchronous Boolean networks. For the synchronous scenario, combining iterative methods and reduced ordered binary decision diagrams (ROBDD), we propose an improved algorithm to compute attractors. In the second algorithm, the attractors of synchronous Boolean networks are used with asynchronous Boolean translation functions to derive the attractors of the asynchronous scenario. The proposed algorithms are implemented in a procedure called geneFAtt. Compared to existing tools such as genYsis, geneFAtt is significantly faster in computing attractors for empirical experimental systems. Availability: The software package is available at https://sites.google.com/site/desheng619/download. PMID:23585840

  12. Predictive Behavior of a Computational Foot/Ankle Model through Artificial Neural Networks

    Science.gov (United States)

    Hargraves, Rosalyn Hobson

    2017-01-01

    Computational models are useful tools to study the biomechanics of human joints. Their predictive performance is heavily dependent on bony anatomy and soft tissue properties. Imaging data provides anatomical requirements while approximate tissue properties are implemented from literature data, when available. We sought to improve the predictive capability of a computational foot/ankle model by optimizing its ligament stiffness inputs using feedforward and radial basis function neural networks. While the former demonstrated better performance than the latter per mean square error, both networks provided reasonable stiffness predictions for implementation into the computational model. PMID:28250804

  13. Predictive Behavior of a Computational Foot/Ankle Model through Artificial Neural Networks.

    Science.gov (United States)

    Chande, Ruchi D; Hargraves, Rosalyn Hobson; Ortiz-Robinson, Norma; Wayne, Jennifer S

    2017-01-01

    Computational models are useful tools to study the biomechanics of human joints. Their predictive performance is heavily dependent on bony anatomy and soft tissue properties. Imaging data provides anatomical requirements while approximate tissue properties are implemented from literature data, when available. We sought to improve the predictive capability of a computational foot/ankle model by optimizing its ligament stiffness inputs using feedforward and radial basis function neural networks. While the former demonstrated better performance than the latter per mean square error, both networks provided reasonable stiffness predictions for implementation into the computational model.

  14. Predictive Behavior of a Computational Foot/Ankle Model through Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Ruchi D. Chande

    2017-01-01

    Full Text Available Computational models are useful tools to study the biomechanics of human joints. Their predictive performance is heavily dependent on bony anatomy and soft tissue properties. Imaging data provides anatomical requirements while approximate tissue properties are implemented from literature data, when available. We sought to improve the predictive capability of a computational foot/ankle model by optimizing its ligament stiffness inputs using feedforward and radial basis function neural networks. While the former demonstrated better performance than the latter per mean square error, both networks provided reasonable stiffness predictions for implementation into the computational model.
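    The modelling step shared by these three records, fitting a feedforward network that maps model inputs to ligament stiffness values, can be sketched with scikit-learn. The data below are random placeholders rather than the study's kinematic or stiffness data, and the layer size and other settings are arbitrary assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # placeholder data: rows are model inputs (e.g. kinematic measures), and the
    # target is the ligament stiffness value the network should learn to predict
    X = rng.normal(size=(200, 6))
    y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=200)

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X, y)

    unseen_case = rng.normal(size=(1, 6))
    print(model.predict(unseen_case))   # predicted stiffness for an unseen input
    ```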

  15. Diamond NV centers for quantum computing and quantum networks

    NARCIS (Netherlands)

    Childress, L.; Hanson, R.

    2013-01-01

    The exotic features of quantum mechanics have the potential to revolutionize information technologies. Using superposition and entanglement, a quantum processor could efficiently tackle problems inaccessible to current-day computers. Nonlocal correlations may be exploited for intrinsically secure co

  16. EVALUATION & TRENDS OF SURVEILLANCE SYSTEM NETWORK IN UBIQUITOUS COMPUTING ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Sunil Kr Singh

    2015-03-01

    Full Text Available With the emergence of ubiquitous computing, the whole scenario of computing has changed, affecting many interdisciplinary fields. This paper envisions the impact of ubiquitous computing on video surveillance systems. With increasing population and highly security-sensitive areas, intelligent monitoring is a major requirement of the modern world. The paper describes the evolution of surveillance systems from analog to multi-sensor ubiquitous systems and notes the demand for context-based architectures. It outlines the benefit of merging cloud computing to boost surveillance systems while reducing cost and maintenance, analyzes some surveillance system architectures designed for ubiquitous deployment, and identifies major challenges and opportunities for researchers to make surveillance systems highly efficient and seamlessly embedded in our environments.

  17. AN EVALUATION AND IMPLEMENTATION OF COLLABORATIVE AND SOCIAL NETWORKING TECHNOLOGIES FOR COMPUTER EDUCATION

    Directory of Open Access Journals (Sweden)

    Ronnie Cheung

    2011-06-01

    Full Text Available We have developed a collaborative and social networking environment that integrates the knowledge and skills in communication and computing studies with a multimedia development project. The outcomes of the students’ projects show that computer literacy can be enhanced through a cluster of communication, social, and digital skills. Experience in implementing a web-based social networking environment shows that the new media is an effective means of enriching knowledge by sharing in computer literacy projects. The completed assignments, projects, and self-reflection reports demonstrate that the students were able to achieve the learning outcomes of a computer literacy course in multimedia development. The students were able to assess the effectiveness of a variety of media through the development of media presentations in a web-based, social-networking environment. In the collaborative and social-networking environment, students were able to collaborate and communicate with their team members to solve problems, resolve conflicts, make decisions, and work as a team to complete tasks. Our experience has shown that social networking environments are effective for computer literacy education, and the development of the new media is emerging as the core knowledge for computer literacy education.

  18. The Implications of Pervasive Computing on Network Design

    Science.gov (United States)

    Briscoe, R.

    Mark Weiser's late-1980s vision of an age of calm technology with pervasive computing disappearing into the fabric of the world [1] has been tempered by an industry-driven vision with more of a feel of conspicuous consumption. In the modified version, everyone carries around consumer electronics to provide natural, seamless interactions both with other people and with the information world, particularly for eCommerce, but still through a pervasive computing fabric.

  19. Multiscale approach for bone remodeling simulation based on finite element and neural network computation

    CERN Document Server

    Hambli, Ridha

    2011-01-01

    The aim of this paper is to develop a multiscale hierarchical hybrid model based on finite element analysis and neural network computation to link the mesoscopic scale (trabecular network level) and the macroscopic scale (whole bone level) in order to simulate the bone remodelling process. Because whole-bone simulation considering the 3D trabecular level is time consuming, the finite element calculation is performed at the macroscopic level and a trained neural network is employed as a numerical device substituting for the finite element code needed for the mesoscale prediction. The bone mechanical properties are updated at the macroscopic scale depending on the morphological organization at the mesoscopic scale computed by the trained neural network. A digital image-based modelling technique using micro-CT and a voxel finite element mesh is used to capture 2 mm3 representative volume elements at the mesoscale in a femur head. The input data for the artificial neural network are a set of bone material parameters, boundary conditions and the applied str...

  20. Computer Network Security Technology (浅谈计算机网络安全技术)

    Institute of Scientific and Technical Information of China (English)

    梁其烺

    2011-01-01

    Starting from the current state of computer network security, this paper discusses the main network security threats and then analyses the types of computer network security technologies, aiming to give network designers and users a comprehensive understanding of network security so that they can adopt successful countermeasures.

  1. New Model of Network- a Future Aspect of the Computer Networks

    CERN Document Server

    Singh, Ram Kumar

    2009-01-01

    As the number and size of networks increase, deficiencies persist, including network security problems. Yet there is no shortage of technologies offered as a universal remedy: EIGRP, BGP, OSPF, VoIP, IPv6, IPTV, MPLS, WiFi, to name a few. There are multiple factors behind the current situation. What sufficed during the emergent and blossoming stages of network development is no longer sufficient now that networks are mature and have become an everyday tool for social and business interactions. A new model of network is necessary to find solutions to today's pressing problems, especially those related to network security. This paper points out the factors leading to the current stagnation and discusses critical assumptions behind current networks, how many of them are no longer valid, and how they have become barriers to implementing real solutions. The paper concludes by offering new directions for future needs and for solving current challenges.

  2. Condor-COPASI: high-throughput computing for biochemical networks

    Directory of Open Access Journals (Sweden)

    Kent Edward

    2012-07-01

    Full Text Available Abstract Background Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage.

  3. Neural networks and neuroscience-inspired computer vision.

    Science.gov (United States)

    Cox, David Daniel; Dean, Thomas

    2014-09-22

    Brains are, at a fundamental level, biological computing machines. They transform a torrent of complex and ambiguous sensory information into coherent thought and action, allowing an organism to perceive and model its environment, synthesize and make decisions from disparate streams of information, and adapt to a changing environment. Against this backdrop, it is perhaps not surprising that computer science, the science of building artificial computational systems, has long looked to biology for inspiration. However, while the opportunities for cross-pollination between neuroscience and computer science are great, the road to achieving brain-like algorithms has been long and rocky. Here, we review the historical connections between neuroscience and computer science, and we look forward to a new era of potential collaboration, enabled by recent rapid advances in both biologically-inspired computer vision and in experimental neuroscience methods. In particular, we explore where neuroscience-inspired algorithms have succeeded, where they still fail, and we identify areas where deeper connections are likely to be fruitful.

  4. LAN Ho! A Guide to Networking Personal Computers.

    Science.gov (United States)

    Daly, Kevin F.

    1993-01-01

    Provides examples of common administrative tasks in school district offices that can be expedited using an administrative local area network (LAN). Explains how districts should develop a master plan for installing a LAN. Figures display the LAN components, planning sheets, and an electrical requirement calculator chart. Discusses site preparation and…

  5. Theoretical Investigation of Optical Computing Based on Neural Network Models.

    Science.gov (United States)

    1987-09-29

    associated output vectors ym. Alternatively, error-driven algorithms such as the perceptron or Adaline can be used to iteratively train the memory by...from which the state of the entire network can be calculated). The perceptron [21] and Adaline [22] algorithms are examples of error-driven learning
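    The error-driven perceptron rule mentioned in this record updates the weights only on misclassified samples, w ← w + η(y − ŷ)x. A minimal NumPy version on an invented, linearly separable toy problem is sketched below; it illustrates the learning rule itself, not the optical implementation investigated in the report.

    ```python
    import numpy as np

    def train_perceptron(X, y, lr=0.1, epochs=20):
        """Error-driven perceptron: update weights only when a sample is misclassified."""
        w = np.zeros(X.shape[1] + 1)               # last entry is the bias weight
        Xb = np.hstack([X, np.ones((len(X), 1))])  # append a constant bias input of 1
        for _ in range(epochs):
            for x, target in zip(Xb, y):
                prediction = 1 if x @ w > 0 else 0
                w += lr * (target - prediction) * x
        return w

    # invented, linearly separable toy data (logical OR)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 1, 1, 1])
    w = train_perceptron(X, y)
    print(w, [(1 if np.append(x, 1) @ w > 0 else 0) for x in X])
    ```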

  6. Computational analyses of synergism in small molecular network motifs.

    Directory of Open Access Journals (Sweden)

    Yili Zhang

    2014-03-01

    Full Text Available Cellular functions and responses to stimuli are controlled by complex regulatory networks that comprise a large diversity of molecular components and their interactions. However, achieving an intuitive understanding of the dynamical properties and responses to stimuli of these networks is hampered by their large scale and complexity. To address this issue, analyses of regulatory networks often focus on reduced models that depict distinct, reoccurring connectivity patterns referred to as motifs. Previous modeling studies have begun to characterize the dynamics of small motifs, and to describe ways in which variations in parameters affect their responses to stimuli. The present study investigates how variations in pairs of parameters affect responses in a series of ten common network motifs, identifying concurrent variations that act synergistically (or antagonistically) to alter the responses of the motifs to stimuli. Synergism (or antagonism) was quantified using degrees of nonlinear blending and additive synergism. Simulations identified concurrent variations that maximized synergism, and examined the ways in which it was affected by stimulus protocols and the architecture of a motif. Only a subset of architectures exhibited synergism following paired changes in parameters. The approach was then applied to a model describing interlocked feedback loops governing the synthesis of the CREB1 and CREB2 transcription factors. The effects of motifs on synergism for this biologically realistic model were consistent with those for the abstract models of single motifs. These results have implications for the rational design of combination drug therapies with the potential for synergistic interactions.

  7. Online Social Networks and Computer Skills of University Students

    Science.gov (United States)

    Barbas, Maria Potes; Valerio, Gabriel; Rodríguez-Martínez, María del Carmen; Herrera-Murillo, Dagoberto José; Belmonte-Jiménez, Ana María

    2014-01-01

    Currently a large number of college students belong to social networks and spend several hours a week on them. Some sectors of society, like parents and teachers, are concerned about the negative impact on their academic work and in their personal lives. However, because the potential positive impacts have not been explored enough, this research…

  8. Computers and networks in the age of globalization

    DEFF Research Database (Denmark)

    Bloch Rasmussen, Leif; Beardon, Colin; Munari, Silvio

    In modernity, an individual identity was constituted from civil society, while in a globalized network society, human identity, if it develops at all, must grow from communal resistance. A communal resistance to an abstract conceptualized world, where there is no possibility for perception and ex...

  9. Human Inspired Self-developmental Model of Neural Network (HIM): Introducing Content/Form Computing

    Science.gov (United States)

    Krajíček, Jiří

    This paper presents cross-disciplinary research between medical/psychological evidence on human abilities and the need in informatics to update current models in computer science to support alternative methods of computation and communication. In [10] we have already proposed a hypothesis introducing the concept of a human information model (HIM) as a cooperative system. Here we continue with the HIM design in detail. In our design, we first introduce the Content/Form computing system, which is a new principle relative to present methods in evolutionary computing (genetic algorithms, genetic programming). We then apply this system to the HIM (a type of artificial neural network) model as a basic network self-developmental paradigm. The main inspiration for our natural/human design comes from the well-known concept of artificial neural networks, medical/psychological evidence and Sheldrake's theory of "Nature as Alive" [22].

  10. APINetworks: A general API for the treatment of complex networks in arbitrary computational environments

    Science.gov (United States)

    Niño, Alfonso; Muñoz-Caro, Camelia; Reyes, Sebastián

    2015-11-01

    The last decade witnessed a great development of the structural and dynamic study of complex systems described as a network of elements. Therefore, systems can be described as a set of, possibly, heterogeneous entities or agents (the network nodes) interacting in, possibly, different ways (defining the network edges). In this context, it is of practical interest to model and handle not only static and homogeneous networks but also dynamic, heterogeneous ones. Depending on the size and type of the problem, these networks may require different computational approaches involving sequential, parallel or distributed systems with or without the use of disk-based data structures. In this work, we develop an Application Programming Interface (APINetworks) for the modeling and treatment of general networks in arbitrary computational environments. To minimize dependency between components, we decouple the network structure from its function using different packages for grouping sets of related tasks. The structural package, the one in charge of building and handling the network structure, is the core element of the system. In this work, we focus on this API structural component. We apply an object-oriented approach that makes use of inheritance and polymorphism. In this way, we can model static and dynamic networks with heterogeneous elements in the nodes and heterogeneous interactions in the edges. In addition, this approach permits a unified treatment of different computational environments. Tests performed on a C++11 version of the structural package show that, on current standard computers, the system can handle, in main memory, directed and undirected linear networks formed by tens of millions of nodes and edges. Our results compare favorably to those of existing tools.
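    To illustrate the inheritance/polymorphism point in this record, the sketch below mimics the structural idea (heterogeneous node types stored behind one interface in an adjacency structure) in Python rather than the C++11 of APINetworks itself; the class and method names are illustrative and are not the actual API.

    ```python
    from abc import ABC, abstractmethod

    class Node(ABC):
        """Common interface for heterogeneous network elements."""
        def __init__(self, node_id):
            self.node_id = node_id

        @abstractmethod
        def describe(self):
            ...

    class PersonNode(Node):
        def describe(self):
            return f"person:{self.node_id}"

    class RouterNode(Node):
        def describe(self):
            return f"router:{self.node_id}"

    class Network:
        """Edge-list structure that stores any Node subclass polymorphically."""
        def __init__(self):
            self.nodes, self.edges = {}, []

        def add_node(self, node):
            self.nodes[node.node_id] = node

        def add_edge(self, a, b, weight=1.0):
            self.edges.append((a, b, weight))

    net = Network()
    net.add_node(PersonNode("alice"))
    net.add_node(RouterNode("r1"))
    net.add_edge("alice", "r1", weight=0.5)
    print([n.describe() for n in net.nodes.values()])
    ```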

  11. Designing a Versatile Dedicated Computing Lab to Support Computer Network Courses: Insights from a Case Study

    Science.gov (United States)

    Gercek, Gokhan; Saleem, Naveed

    2006-01-01

    Providing adequate computing lab support for Management Information Systems (MIS) and Computer Science (CS) programs is a perennial challenge for most academic institutions in the US and abroad. Factors, such as lack of physical space, budgetary constraints, conflicting needs of different courses, and rapid obsolescence of computing technology,…

  12. Synthetic tetracycline-inducible regulatory networks: computer-aided design of dynamic phenotypes

    Directory of Open Access Journals (Sweden)

    Kaznessis Yiannis N

    2007-01-01

    Full Text Available Abstract Background Tightly regulated gene networks, precisely controlling the expression of protein molecules, have received considerable interest by the biomedical community due to their promising applications. Among the most well studied inducible transcription systems are the tetracycline regulatory expression systems based on the tetracycline resistance operon of Escherichia coli, Tet-Off (tTA) and Tet-On (rtTA). Despite their initial success and improved designs, limitations still persist, such as low inducer sensitivity. Instead of looking at these networks statically, and simply changing or mutating the promoter and operator regions with trial and error, a systematic investigation of the dynamic behavior of the network can result in rational design of regulatory gene expression systems. Sophisticated algorithms can accurately capture the dynamical behavior of gene networks. With computer aided design, we aim to improve the synthesis of regulatory networks and propose new designs that enable tighter control of expression. Results In this paper we engineer novel networks by recombining existing genes or part of genes. We synthesize four novel regulatory networks based on the Tet-Off and Tet-On systems. We model all the known individual biomolecular interactions involved in transcription, translation, regulation and induction. With multiple time-scale stochastic-discrete and stochastic-continuous models we accurately capture the transient and steady state dynamics of these networks. Important biomolecular interactions are identified and the strength of the interactions engineered to satisfy design criteria. A set of clear design rules is developed and appropriate mutants of regulatory proteins and operator sites are proposed. Conclusion The complexity of biomolecular interactions is accurately captured through computer simulations. Computer simulations allow us to look into the molecular level, portray the dynamic behavior of gene regulatory

  13. Computational methods to dissect cis-regulatory transcriptional networks

    Indian Academy of Sciences (India)

    Vibha Rani

    2007-12-01

    The formation of diverse cell types from an invariant set of genes is governed by biochemical and molecular processes that regulate gene activity. A complete understanding of the regulatory mechanisms of gene expression is the major function of genomics. Computational genomics is a rapidly emerging area for deciphering the regulation of metazoan genes as well as interpreting the results of high-throughput screening. The integration of computer science with biology has expedited molecular modelling and processing of large-scale data inputs such as microarrays, analysis of genomes, transcriptomes and proteomes. Many bioinformaticians have developed various algorithms for predicting transcriptional regulatory mechanisms from the sequence, gene expression and interaction data. This review contains compiled information of various computational methods adopted to dissect gene expression pathways.

  14. Computation and evaluation of scheduled waiting time for railway networks

    DEFF Research Database (Denmark)

    Landex, Alex

    2010-01-01

    Timetables are affected by scheduled waiting time (SWT), which prolongs the travel times for trains and thereby passengers. SWT occurs when a train hinders another train from running at the wanted speed. SWT affects both the trains and the passengers in the trains. The passengers may be further affected due to longer transfer times to other trains. SWT can be estimated analytically for a given timetable or by simulation of timetables and/or plans of operation. The simulation of SWT has the benefit that it is possible to examine the entire network, which makes it possible to improve the future timetable by analysing different timetables and/or plans of operation. This article presents methods to examine SWT by simulation for both trains and passengers in entire railway networks.
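    A simplified reading of SWT for a single train is the difference between its timetabled running time and its undisturbed minimum running time; the sketch below applies that simplification to a few invented timetable entries and is not the simulation method of the article.

    ```python
    def scheduled_waiting_time(timetable):
        """Simplified per-train SWT: scheduled running time minus the minimum
        (undisturbed) running time; positive values mean the timetable stretches
        the run to resolve conflicts with other trains."""
        return {train: scheduled - minimum
                for train, (scheduled, minimum) in timetable.items()}

    # invented entries: (scheduled running time, minimum running time) in minutes
    timetable = {"IC-101": (42, 38), "RE-205": (55, 55), "freight-9": (70, 61)}
    swt = scheduled_waiting_time(timetable)
    print(swt, "network total:", sum(swt.values()))
    ```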

  15. Research on Computer English Teaching based on WEB Network

    Institute of Scientific and Technical Information of China (English)

    Liwen WANG

    2015-01-01

    With the continuing integration of information technology into English teaching under the network environment, the traditional teaching mode cannot meet the requirement for listening teaching resources; a Web listening corpus is the main way to solve this problem. Building on an introduction to Web corpora, this paper designs a small Web corpus for the development of university English listening teaching; the corpus implements listening material retrieval, browsing, online listening and update functions.

  16. Computational intelligent methods for trusting in social networks

    OpenAIRE

    Nuñez González, José David

    2016-01-01

    104 p. This Thesis covers three research lines of Social Networks. The first proposed research line is related to Trust. Different ways of feature extraction are proposed for Trust Prediction, comparing results with classic methods. The problem of badly balanced datasets is covered in this work. The second proposed research line is related to Recommendation Systems. Two experiments are proposed in this work. The first experiment is about recipe generation with a bread machine. The second ex...

  17. Computational Analysis of Optical Neural Network Models to Weather Forecasting

    OpenAIRE

    A. C. Subhajini; V. Joseph Raj

    2010-01-01

    Neural networks have been used in numerous meteorological applications, including weather forecasting. They are found to be more powerful than traditional expert systems in the classification of meteorological patterns: they perform pattern classification tasks by learning from examples without the rules being stated explicitly, and, being non-linear, they solve complex problems better than linear techniques. A weather forecasting problem - rainfall estimation - has been experimented using differ...

  18. Data identification for improving gene network inference using computational algebra.

    Science.gov (United States)

    Dimitrova, Elena; Stigler, Brandilyn

    2014-11-01

    Identification of models of gene regulatory networks is sensitive to the amount of data used as input. Considering the substantial costs in conducting experiments, it is of value to have an estimate of the amount of data required to infer the network structure. To minimize wasted resources, it is also beneficial to know which data are necessary to identify the network. Knowledge of the data and knowledge of the terms in polynomial models are often required a priori in model identification. In applications, it is unlikely that the structure of a polynomial model will be known, which may force data sets to be unnecessarily large in order to identify a model. Furthermore, none of the known results provides any strategy for constructing data sets to uniquely identify a model. We provide a specialization of an existing criterion for deciding when a set of data points identifies a minimal polynomial model when its monomial terms have been specified. Then, we relax the requirement of the knowledge of the monomials and present results for model identification given only the data. Finally, we present a method for constructing data sets that identify minimal polynomial models.

  19. Sandia's network for SC '97: Supporting visualization, distributed cluster computing, and production data networking with a wide area high performance parallel asynchronous transfer mode (ATM) network

    Energy Technology Data Exchange (ETDEWEB)

    Pratt, T.J.; Martinez, L.G.; Vahle, M.O.; Archuleta, T.V.; Williams, V.K.

    1998-05-01

    The advanced networking department at Sandia National Laboratories has used the annual Supercomputing conference sponsored by the IEEE and ACM for the past several years as a forum to demonstrate and focus communication and networking developments. At SC '97, Sandia National Laboratories (SNL), Los Alamos National Laboratory (LANL), and Lawrence Livermore National Laboratory (LLNL) combined their SC '97 activities within a single research booth under the Accelerated Strategic Computing Initiative (ASCI) banner. For the second year in a row, Sandia provided the network design and coordinated the networking activities within the booth. At SC '97, Sandia elected to demonstrate the capability of the Computation Plant, the visualization of scientific data, scalable ATM encryption, and ATM video and telephony capabilities. At SC '97, LLNL demonstrated an application, called RIPTIDE, that also required significant networking resources. The RIPTIDE application had computational visualization and steering capabilities. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ATM networking.

  20. Computational network pharmacological research of Chinese medicinal plants for chronic kidney disease

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The interaction between drug molecules and target proteins is the basis of pharmacological action. The pharmacodynamic mechanism of Chinese medicinal plants for chronic kidney disease (CKD) was studied by molecular docking and complex network analysis. It was found that the component-protein interaction network of Chinese medicinal plants is different from the component-protein interaction network of drugs, and so the action mechanism of Chinese medicinal plants differs from that of drugs. Using a complex network research approach, we also found that the component-protein interaction network of tonifying herbs is different from that of evil-expelling herbs, which illuminates the ancient classification theory of Chinese medicinal plants. This computational approach could rapidly identify the pivotal components of Chinese medicinal plants and their key target proteins. The results provide data for the development of multi-component Chinese medicine.

  1. Advanced neural network-based computational schemes for robust fault diagnosis

    CERN Document Server

    Mrugalski, Marcin

    2014-01-01

    The present book is devoted to problems of adapting artificial neural networks to robust fault diagnosis schemes. It presents neural network-based modelling and estimation techniques used for designing robust fault diagnosis schemes for non-linear dynamic systems. Part of the book focuses on fundamental issues such as architectures of dynamic neural networks, methods for designing neural networks and fault diagnosis schemes, and the importance of robustness. The book has tutorial value and can be perceived as a good starting point for newcomers to this field. The book is also devoted to advanced schemes for describing neural model uncertainty. In particular, methods for computing neural network uncertainty with robust parameter estimation are presented. Moreover, a novel approach for system identification with the state-space GMDH neural network is delivered. All the concepts described in this book are illustrated by both simple academic illustrative examples and practica...

  2. Can artificial neural networks provide an "expert's" view of medical students performances on computer based simulations?

    Science.gov (United States)

    Stevens, R H; Najafi, K

    1992-01-01

    Artificial neural networks were trained to recognize the test selection patterns of students' successful solutions to seven immunology computer based simulations. When new students' test selections were presented to the trained neural network, their problem solutions were correctly classified as successful or non-successful > 90% of the time. Examination of the neural network's output weights after each test selection revealed a progressive increase for the relevant problem, suggesting that a successful solution was represented by the neural network as the accumulation of relevant tests. Unsuccessful problem solutions revealed two patterns of student performance. The first pattern was characterized by low neural network output weights for all seven problems, reflecting extensive searching and lack of recognition of relevant information. In the second pattern, the output weights from the neural network were biased towards one of the remaining six incorrect problems, suggesting that the student misrepresented the current problem as an instance of a previous problem.

  3. Critical phenomena in communication/computation networks with various topologies and suboptimal to optimal resource allocation

    Science.gov (United States)

    Cogoni, Marco; Busonera, Giovanni; Anedda, Paolo; Zanetti, Gianluigi

    2015-01-01

    We generalize previous studies on critical phenomena in communication/computation networks [1,2] by adding computational capabilities to the nodes. In our model, a set of tasks with random origin, destination and computational structure is distributed on a computational network, modeled as a graph. By varying the temperature of a Metropolis Monte Carlo procedure, we explore the global latency for an optimal to suboptimal resource assignment at a given time instant. By computing the two-point correlation function for the local overload, we study the behavior of the correlation distance (both for links and nodes) while approaching the congested phase: a transition from peaked to spread g(r) is seen above a critical (Monte Carlo) temperature Tc. The average latency trend of the system is predicted by averaging over several network traffic realizations while maintaining spatially detailed information for each node: a sharp decrease of performance is found over Tc independently of the workload. The globally optimized computational resource allocation and network routing define a baseline for a future comparison of the transition behavior with respect to existing routing strategies [3,4] for different network topologies.
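    The Metropolis Monte Carlo move used in this record to sweep between suboptimal and optimal assignments can be sketched generically: propose a random reassignment of one task and accept it with probability min(1, exp(-ΔE/T)). The cost function, task loads and node capacities below are placeholders, not the latency model of the paper.

    ```python
    import math
    import random

    random.seed(0)

    def overload_cost(assignment, task_load, node_capacity):
        """Placeholder global cost: sum of squared node overloads."""
        load = {n: 0.0 for n in node_capacity}
        for task, node in assignment.items():
            load[node] += task_load[task]
        return sum(max(0.0, load[n] - node_capacity[n]) ** 2 for n in node_capacity)

    def metropolis_step(assignment, task_load, node_capacity, temperature):
        """Propose moving one random task to a random node; accept per Metropolis rule."""
        task = random.choice(list(assignment))
        proposal = dict(assignment)
        proposal[task] = random.choice(list(node_capacity))
        delta = (overload_cost(proposal, task_load, node_capacity)
                 - overload_cost(assignment, task_load, node_capacity))
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            return proposal
        return assignment

    task_load = {f"t{i}": random.uniform(1, 5) for i in range(20)}
    node_capacity = {f"n{j}": 15.0 for j in range(4)}
    assignment = {t: random.choice(list(node_capacity)) for t in task_load}
    for _ in range(2000):
        assignment = metropolis_step(assignment, task_load, node_capacity, temperature=0.5)
    print(overload_cost(assignment, task_load, node_capacity))
    ```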

  4. Simulation and Noise Analysis of Multimedia Transmission in Optical CDMA Computer Networks

    Directory of Open Access Journals (Sweden)

    Nasaruddin

    2009-11-01

    Full Text Available This paper simulates and analyzes noise in multimedia transmission over a flexible optical code division multiple access (OCDMA) computer network with different quality of service (QoS) requirements. To achieve multimedia transmission in OCDMA, we have proposed strict variable-weight optical orthogonal codes (VW-OOCs), which can guarantee the smallest correlation value of one by the optimal design. In developing multimedia transmission for computer networks, a simulation tool is essential for analyzing the effectiveness of various transmissions of services. In this paper, implementation models are proposed to analyze multimedia transmission in representative OCDMA computer networks by using MATLAB Simulink tools. Simulation results of the models are discussed, including spectra of transmitted signals, superimposed signals, received signals, and eye diagrams with and without noise. Using the proposed models, a multimedia OCDMA computer network using the strict VW-OOC is practically evaluated. Furthermore, system performance is also evaluated by considering avalanche photodiode (APD) noise and thermal noise. The results show that the system performance depends on code weight, received laser power, APD noise, and thermal noise, which should be considered as important parameters for designing and implementing multimedia transmission in OCDMA computer networks.
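    The code property that matters in these records is correlation: for variable-weight optical orthogonal codes, any shifted overlap between two distinct codewords (and any non-zero cyclic shift of a codeword with itself) should stay at or below one. The sketch below checks that property by brute force for a pair of invented 0/1 sequences; the sequences are for illustration only and are not a real VW-OOC construction.

    ```python
    def cyclic_correlation(a, b, shift):
        """Number of positions where both sequences carry a pulse after a cyclic shift."""
        n = len(a)
        return sum(a[i] & b[(i + shift) % n] for i in range(n))

    def max_cross_and_auto(codewords):
        """Peak off-peak autocorrelation and peak cross-correlation over all shifts."""
        n = len(codewords[0])
        auto = max(cyclic_correlation(c, c, s) for c in codewords for s in range(1, n))
        cross = max(cyclic_correlation(a, b, s)
                    for i, a in enumerate(codewords)
                    for b in codewords[i + 1:] for s in range(n))
        return auto, cross

    # invented example codewords of length 13 with weights 3 and 2
    c1 = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
    c2 = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
    print(max_cross_and_auto([c1, c2]))   # ideally both values are <= 1
    ```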

  5. Simulation and Noise Analysis of Multimedia Transmission in Optical CDMA Computer Networks

    Directory of Open Access Journals (Sweden)

    Nasaruddin Nasaruddin

    2013-09-01

    Full Text Available This paper simulates and analyzes noise in multimedia transmission over a flexible optical code division multiple access (OCDMA) computer network with different quality of service (QoS) requirements. To achieve multimedia transmission in OCDMA, we have proposed strict variable-weight optical orthogonal codes (VW-OOCs), which can guarantee the smallest correlation value of one by the optimal design. In developing multimedia transmission for computer networks, a simulation tool is essential for analyzing the effectiveness of various transmissions of services. In this paper, implementation models are proposed to analyze multimedia transmission in representative OCDMA computer networks by using MATLAB Simulink tools. Simulation results of the models are discussed, including spectra of transmitted signals, superimposed signals, received signals, and eye diagrams with and without noise. Using the proposed models, a multimedia OCDMA computer network using the strict VW-OOC is practically evaluated. Furthermore, system performance is also evaluated by considering avalanche photodiode (APD) noise and thermal noise. The results show that the system performance depends on code weight, received laser power, APD noise, and thermal noise, which should be considered as important parameters for designing and implementing multimedia transmission in OCDMA computer networks.

  6. Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network.

    Science.gov (United States)

    Goto, Hayato

    2016-02-22

    The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence.

  7. Innovations and advances in computing, informatics, systems sciences, networking and engineering

    CERN Document Server

    Elleithy, Khaled

    2015-01-01

    Innovations and Advances in Computing, Informatics, Systems Sciences, Networking and Engineering  This book includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of Computer Science, Informatics, and Systems Sciences, and Engineering. It includes selected papers from the conference proceedings of the Eighth and some selected papers of the Ninth International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2012 & CISSE 2013). Coverage includes topics in: Industrial Electronics, Technology & Automation, Telecommunications and Networking, Systems, Computing Sciences and Software Engineering, Engineering Education, Instructional Technology, Assessment, and E-learning.  ·       Provides the latest in a series of books growing out of the International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering; ·       Includes chapters in the most a...

  8. Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network

    Science.gov (United States)

    Goto, Hayato

    2016-02-01

    The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence.

  9. Delays and user performance in human-computer-network interaction tasks.

    Science.gov (United States)

    Caldwell, Barrett S; Wang, Enlie

    2009-12-01

    This article describes a series of studies conducted to examine factors affecting user perceptions, responses, and tolerance for network-based computer delays affecting distributed human-computer-network interaction (HCNI) tasks. HCNI tasks, even with increasing computing and network bandwidth capabilities, are still affected by human perceptions of delay and appropriate waiting times for information flow latencies. Conducted were 6 laboratory studies with university participants in China (Preliminary Experiments 1 through 3) and the United States (Experiments 4 through 6) to examine users' perceptions of elapsed time, effect of perceived network task performance partners on delay tolerance, and expectations of appropriate delays based on task, situation, and network conditions. Results across the six experiments indicate that users' delay tolerance and estimated delay were affected by multiple task and expectation factors, including task complexity and importance, situation urgency and time availability, file size, and network bandwidth capacity. Results also suggest a range of user strategies for incorporating delay tolerance in task planning and performance. HCNI user experience is influenced by combinations of task requirements, constraints, and understandings of system performance; tolerance is a nonlinear function of time constraint ratios or decay. Appropriate user interface tools providing delay feedback information can help modify user expectations and delay tolerance. These tools are especially valuable when delay conditions exceed a few seconds or when task constraints and system demands are high. Interface designs for HCNI tasks should consider assistant-style presentations of delay feedback, information freshness, and network characteristics. Assistants should also gather awareness of user time constraints.

  10. Modelling and optimization of computer network traffic controllers

    Directory of Open Access Journals (Sweden)

    N. U. Ahmed

    2005-01-01

    operation of the controller and evaluate the benefits of using a genetic algorithm approach to speed up the optimization process. Our results show that the use of the genetic algorithm proves particularly useful in reducing the computation time required to optimize the operation of a system consisting of multiple token-bucket-regulated sources.
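
    Since this record optimizes token-bucket-regulated sources, a minimal sketch of the token-bucket policing rule itself may help; the rate, depth, and arrival trace below are hypothetical values, not parameters from the paper:

        # Sketch of a token-bucket regulator: a packet conforms if enough tokens
        # have accumulated; tokens refill at a constant rate up to the bucket depth.

        class TokenBucket:
            def __init__(self, rate, depth):
                self.rate = rate          # tokens added per second
                self.depth = depth        # maximum number of tokens held
                self.tokens = depth       # start with a full bucket
                self.last = 0.0           # time of the last update (seconds)

            def allow(self, now, packet_size):
                """Return True if a packet needing 'packet_size' tokens conforms at time 'now'."""
                self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= packet_size:
                    self.tokens -= packet_size
                    return True
                return False

        tb = TokenBucket(rate=1000.0, depth=1500.0)
        arrivals = [(0.001, 500), (0.002, 1200), (0.010, 800)]  # (time, size) pairs, hypothetical
        for t, size in arrivals:
            print(t, size, "conforms" if tb.allow(t, size) else "dropped/marked")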

  11. Contention Bounds for Combinations of Computation Graphs and Network Topologies

    Science.gov (United States)

    2014-08-08

    Google, Nokia , NVIDIA, Oracle, MathWorks and Samsung. Also funded by U.S. DOE Office of Science, Office of Advanced Scientific Computing Research...program sponsored by MARCO and DARPA, and ASPIRE Lab industrial sponsors and affiliates Intel, Google, Nokia , NVIDIA, Oracle, MathWorks and Samsung

  12. Computation in Networks of Passively Mobile Finite-State Sensors

    Science.gov (United States)

    2004-02-23

    in the case of conjugating automata. The Chemical Abstract Machine of Berry and Boudol [2] is an abstract machine designed to model a situation in...2003. [2] G. Berry and G. Boudol. The Chemical Abstract Machine. Theoretical Computer Science, 96:217–248, 1992. [3] D. Brand and P. Zafiropulo. On

  13. State of the Art of Network Security Perspectives in Cloud Computing

    Science.gov (United States)

    Oh, Tae Hwan; Lim, Shinyoung; Choi, Young B.; Park, Kwang-Roh; Lee, Heejo; Choi, Hyunsang

    Cloud computing is now regarded as a social phenomenon that satisfies customers' needs. It may be that customers' needs and the primary principle of economy - gaining maximum benefit from minimum investment - are reflected in the realization of cloud computing. We live in a connected society with a flood of information, and without computers connected to the Internet our daily activities and work would be impossible. Cloud computing is able to provide customers with custom-tailored application software features and user environments based on the customer's needs by adopting on-demand outsourcing of computing resources through the Internet. It also provides cloud computing users with high-end computing power and expensive application software packages, and accordingly users access their data and the application software from wherever they are located, at the remote system. As the cloud computing system is connected to the Internet, network security issues of cloud computing must be addressed prior to real-world service. In this paper, a survey and issues on network security in cloud computing are discussed from the perspective of real-world service environments.

  14. The super-Turing computational power of plastic recurrent neural networks.

    Science.gov (United States)

    Cabessa, Jérémie; Siegelmann, Hava T

    2014-12-01

    We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model so that the nature of the updates is assumed to be not constrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of the precise super-Turing computational power--as the static analog neural networks--irrespective of whether their synaptic weights are modeled by rational or real numbers, and moreover, irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.

  15. A comparative analysis on computational methods for fitting an ERGM to biological network data

    Directory of Open Access Journals (Sweden)

    Sudipta Saha

    2015-03-01

    Full Text Available Exponential random graph models (ERGM) based on graph theory are useful in studying global biological network structure using its local properties. However, computational methods for fitting such models are sensitive to the type, structure and number of the local features of a network under study. In this paper, we compared computational methods for fitting an ERGM with local features of different types and structures. Two commonly used methods, Markov Chain Monte Carlo Maximum Likelihood Estimation and Maximum Pseudo Likelihood Estimation, are considered for estimating the coefficients of network attributes. We compared the estimates of the observed network to our randomly simulated network using both methods under the ERGM. The motivation was to ascertain the extent to which an observed network would deviate from a randomly simulated network if the physical numbers of attributes were approximately the same. Cut-off points of some common attributes of interest for different orders of nodes were determined through simulations. We implemented our method on a known regulatory network database of Escherichia coli (E. coli).

  16. Computer networking for solar energy uses; Computer-Vernetzung im Dienste der Sonnenenergie

    Energy Technology Data Exchange (ETDEWEB)

    Horn, A. [Ing.-Buero Solar, Energie, Information, Sauerlach (Germany)

    1995-12-31

    Modern personal computers are more than sophisticated typewriters or pocket calculators. Their inherent information exchange possibilities, for example, can be made use of by simply connecting computers and telephones by means of interfaces, i.e. by modems. Modems transform the computer interface signals into sounds which are transported via the telephone line and which are reconverted into interface signals by the receiving modem. Given suitable software, computers with 24-hour standby modems can serve as mailboxes. (orig./HW)

  17. High Performance Commodity Networking in a 512-CPU Teraflop Beowulf Cluster for Computational Astrophysics

    CERN Document Server

    Dubinski, J; Pen, U L; Loken, C; Martin, P; Dubinski, John; Humble, Robin; Loken, Chris; Martin, Peter; Pen, Ue-Li

    2003-01-01

    We describe a new 512-CPU Beowulf cluster with Teraflop performance dedicated to problems in computational astrophysics. The cluster incorporates a cubic network topology based on inexpensive commodity 24-port gigabit switches and point-to-point connections through the second gigabit port on each Linux server. This configuration has network performance competitive with more expensive cluster configurations and is scalable to much larger systems using other network topologies. Networking represents only about 9% of our total system cost of USD$561K. The standard Top 500 HPL Linpack benchmark rating is 1.202 Teraflops on 512 CPUs so computing costs by this measure are $0.47/Megaflop. We also describe 4 different astrophysical applications using complex parallel algorithms for studying large-scale structure formation, galaxy dynamics, magnetohydrodynamic flows onto black holes and planet formation currently running on the cluster and achieving high parallel performance. The MHD code achieved a sustained speed of...

  18. A new framework to integrate wireless sensor networks with cloud computing

    Science.gov (United States)

    Shah, Sajjad Hussain; Khan, Fazle Kabeer; Ali, Wajid; Khan, Jamshed

    Wireless sensor networks have several applications of their own. These applications can be further enhanced by integrating a local wireless sensor network with the Internet, enabling real-time applications in which sensor results are stored in the cloud. We propose an architecture that integrates a wireless sensor network with the Internet using cloud technology. The resulting system is shown to be reliable, available and extensible. In this paper a new framework is proposed for WSN integration with the cloud computing model; existing WSNs will be connected to the proposed framework. Three deployment layers (IaaS, PaaS, SaaS) are used to serve user requests from the library, which is built from data collected periodically from the data-centric DC by the WSN. The integration controller unit of the proposed framework integrates the sensor network and cloud computing technology, which offers reliability, availability and extensibility.

  19. Quantum perceptron over a field and neural network architecture selection in a quantum computer.

    Science.gov (United States)

    da Silva, Adenilton José; Ludermir, Teresa Bernarda; de Oliveira, Wilson Rosa

    2016-04-01

    In this work, we propose a quantum neural network named quantum perceptron over a field (QPF). Quantum computers are not yet a reality and the models and algorithms proposed in this work cannot be simulated in actual (or classical) computers. QPF is a direct generalization of a classical perceptron and solves some drawbacks found in previous models of quantum perceptrons. We also present a learning algorithm named Superposition based Architecture Learning algorithm (SAL) that optimizes the neural network weights and architectures. SAL searches for the best architecture in a finite set of neural network architectures with linear time over the number of patterns in the training set. SAL is the first learning algorithm to determine neural network architectures in polynomial time. This speedup is obtained by the use of quantum parallelism and a non-linear quantum operator.

  20. TECHNOLOGY ENSURING PROTECTION OF COMPUTER NETWORKS USING THE MEANS VARIABLE IN COMPOSITION

    Directory of Open Access Journals (Sweden)

    Nadezhda Evgenyevna GALASHINA

    2015-01-01

    Full Text Available The article describes how the stable operation of a computer network depends on deliberate changes in the operating modes of its software and hardware. The authors consider technologies that ensure computer network security using means that are variable in time and means that are variable in nomenclature (composition). The structure of a technology for protecting computer networks using software variable in nomenclature has been worked out. To verify the practical implementation of the nomenclature method with two programs, BitLocker and TrueCrypt, the virtualization program VMware Workstation 11 was used with the operating system Microsoft Windows 7 Enterprise without a TPM.

  1. Analysis of Various Computer System Monitoring and LCD Projector through the Network TCP/IP

    Directory of Open Access Journals (Sweden)

    Santoso Budijono

    2015-09-01

    Full Text Available Many electronic devices have a network connection facility. Projectors today have network facilities to bolster customer satisfaction in everyday use. By using devices that can be controlled, the availability and reliability of the presentation system (computer and projector) can be maintained so that it remains ready for presentation. Nevertheless, there are still projector devices without network facilities, which require additional equipment at a high price. Besides, controlling equipment in large quantities poses problems with timing and with the number of technicians needed to perform the controls. This study began with a literature study, searching for projectors that have LAN and control software and finding a number of computer control software packages, with the focus on ease of use and affordability. The result of this research is a system that contains suggestions for the procurement of computer hardware, and of projector hardware and software, each of which can be controlled centrally from a distance.

  2. Computer Network Security Research%计算机网络安全研究

    Institute of Scientific and Technical Information of China (English)

    李小瓦

    2012-01-01

    Starting from the characteristics of computer network security, this paper analyzes the structure of computer network security and the ways in which viruses spread, in order to identify the problems existing in current computer networks, and proposes effective countermeasures based on modern cryptographic techniques, firewall technology and other methods.

  3. Analysis of Various Computer System Monitoring and LCD Projector through the Network TCP/IP

    Directory of Open Access Journals (Sweden)

    Santoso Budijono

    2015-12-01

    Full Text Available Many electronic devices have a network connection facility. Projectors today have network facilities to bolster customer satisfaction in everyday use. By using devices that can be controlled, the availability and reliability of the presentation system (computer and projector) can be maintained so that it remains ready for presentation. Nevertheless, there are still projector devices without network facilities, which require additional equipment at a high price. Besides, controlling equipment in large quantities poses problems with timing and with the number of technicians needed to perform the controls. This study began with a literature study, searching for projectors that have LAN and control software and finding a number of computer control software packages, with the focus on ease of use and affordability. The result of this research is a system that contains suggestions for the procurement of computer hardware, and of projector hardware and software, each of which can be controlled centrally from a distance.

  4. Optimum feedback strategy for access control mechanism modelled as stochastic differential equation in computer network

    Directory of Open Access Journals (Sweden)

    Ahmed N. U.

    2004-01-01

    Full Text Available We consider optimum feedback control strategy for computer communication network, in particular, the access control mechanism. The dynamic model representing the source and the access control system is described by a system of stochastic differential equations developed in our previous works. Simulated annealing (SA) was used to optimize the parameters of the control law based on a neural network. This technique was found to be computationally intensive. In this paper, we have proposed to use a more powerful algorithm known as recursive random search (RRS). By using this technique, we have been able to reduce the computation time by a factor of five without compromising the optimality. This is very important for optimization of high-dimensional systems serving a large number of aggregate users. The results show that the proposed control law can improve the network performance by improving throughput, reducing multiplexor and TB losses, and relaxing, not avoiding, congestion.
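
    The record above replaces simulated annealing with recursive random search (RRS) for tuning the control-law parameters. A minimal sketch of the generic RRS idea - uniform sampling followed by repeated shrinking of the search box around the best point - applied to a stand-in quadratic objective; the objective, box sizes, and sample counts are hypothetical, not the network model of the paper:

        import random

        # Sketch of recursive random search: sample uniformly in a box, then
        # repeatedly shrink the box around the best point found so far.
        # The quadratic objective is a cheap stand-in for the (expensive)
        # network-performance cost optimized in the paper.

        def objective(x):
            return sum((xi - 0.3) ** 2 for xi in x)       # hypothetical cost

        def rrs(bounds, samples_per_level=50, levels=6, shrink=0.5, seed=1):
            random.seed(seed)
            best_x, best_f = None, float("inf")
            for _ in range(levels):
                for _ in range(samples_per_level):
                    x = [random.uniform(lo, hi) for lo, hi in bounds]
                    f = objective(x)
                    if f < best_f:
                        best_x, best_f = x, f
                # shrink every dimension of the box around the current best point
                bounds = [(max(lo, bx - shrink * (hi - lo) / 2),
                           min(hi, bx + shrink * (hi - lo) / 2))
                          for (lo, hi), bx in zip(bounds, best_x)]
            return best_x, best_f

        best_x, best_f = rrs([(0.0, 1.0)] * 4)
        print("best parameters:", best_x, "cost:", best_f)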

  5. Cost Optimization of Cloud Computing Services in a Networked Environment

    Directory of Open Access Journals (Sweden)

    Eli WEINTRAUB

    2015-04-01

    Full Text Available Cloud computing service providers offer their customers services so as to maximize their revenues, whereas customers wish to minimize their costs. In this paper we concentrate on the consumers' point of view. Cloud computing services are organized according to a hierarchy: software application services, beneath them platform services, which in turn use infrastructure services. Providers currently offer software services as bundles which include the software, platform and infrastructure services. Providers also offer platform services bundled with infrastructure services. Bundling services prevents customers from splitting their service purchases between a provider of software and a different provider of the underlying platform or infrastructure. This bundling policy is likely to change in the long run since it contradicts economic competition theory, causing an unfair pricing model and locking in consumers to specific service providers. In this paper we assume the existence of a free competitive market, in which consumers are free to switch their services among providers. We assume that free market competition will force vendors to adopt open standards, improve the quality of their services and offer a large variety of cloud services in all layers. Our model is aimed at the potential customer who wishes to find the optimal combination of service providers which minimizes his costs. We propose three possible strategies for implementation of the model in organizations. We formulate the mathematical model and illustrate its advantages compared to existing pricing practices used by cloud computing consumers.
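
    The optimization described in this record amounts to choosing one provider per service layer so that the total cost is minimal. A minimal brute-force sketch over hypothetical provider names and prices (a real model would add compatibility, QoS, and switching constraints):

        from itertools import product

        # Sketch: pick one provider per service layer so that total cost is minimal.
        # Provider names and prices are hypothetical illustrations only.

        prices = {
            "software":       {"S1": 120.0, "S2": 150.0, "S3": 110.0},
            "platform":       {"P1": 80.0,  "P2": 60.0},
            "infrastructure": {"I1": 200.0, "I2": 180.0, "I3": 210.0},
        }

        layers = list(prices)
        best_combo, best_cost = None, float("inf")
        for combo in product(*(prices[layer] for layer in layers)):
            cost = sum(prices[layer][name] for layer, name in zip(layers, combo))
            if cost < best_cost:
                best_combo, best_cost = combo, cost

        print(dict(zip(layers, best_combo)), "total cost:", best_cost)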

  6. Conceptual Considerations for Reducing the Computational Complexity in Software Defined Radio using Cooperative Wireless Networks

    DEFF Research Database (Denmark)

    Kristensen, Jesper Michael; Fitzek, Frank H. P.; Koch, Peter

    2005-01-01

    This paper motivates the application of Software defined radio as the enabling technology in the implementation of future wireless terminals for 4G. It outlines the advantages and disadvantages of SDR in terms of Flexibility and reconfigurability versus computational complexity. To mitigate...... the expected increase in complexity leading to a decrease in energy efficiency, cooperative wireless networks are introduced. Cooperative wireless networks enables the concept of resource sharing. Resource sharing is interpreted as collaborative signal processing. This interpretation leads to the concept...

  7. Measuring human emotions with modular neural networks and computer vision based applications

    Directory of Open Access Journals (Sweden)

    Veaceslav Albu

    2015-05-01

    Full Text Available This paper describes a neural network architecture for emotion recognition for human-computer interfaces and applied systems. In the current research, we propose a combination of the most recent biometric techniques with the neural networks (NN approach for real-time emotion and behavioral analysis. The system will be tested in real-time applications of customers' behavior for distributed on-land systems, such as kiosks and ATMs.

  8. Computation and Communication Evaluation of an Authentication Mechanism for Time-Triggered Networked Control Systems

    Science.gov (United States)

    Martins, Goncalo; Moondra, Arul; Dubey, Abhishek; Bhattacharjee, Anirban; Koutsoukos, Xenofon D.

    2016-01-01

    In modern networked control applications, confidentiality and integrity are important features to address in order to prevent against attacks. Moreover, network control systems are a fundamental part of the communication components of current cyber-physical systems (e.g., automotive communications). Many networked control systems employ Time-Triggered (TT) architectures that provide mechanisms enabling the exchange of precise and synchronous messages. TT systems have computation and communication constraints, and with the aim to enable secure communications in the network, it is important to evaluate the computational and communication overhead of implementing secure communication mechanisms. This paper presents a comprehensive analysis and evaluation of the effects of adding a Hash-based Message Authentication (HMAC) to TT networked control systems. The contributions of the paper include (1) the analysis and experimental validation of the communication overhead, as well as a scalability analysis that utilizes the experimental result for both wired and wireless platforms and (2) an experimental evaluation of the computational overhead of HMAC based on a kernel-level Linux implementation. An automotive application is used as an example, and the results show that it is feasible to implement a secure communication mechanism without interfering with the existing automotive controller execution times. The methods and results of the paper can be used for evaluating the performance impact of security mechanisms and, thus, for the design of secure wired and wireless TT networked control systems. PMID:27463718
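
    The record above evaluates the overhead of adding HMAC to time-triggered messages. A minimal sketch with Python's standard hmac and hashlib modules that authenticates a message and times the per-message cost; the key and message sizes are hypothetical, not the automotive frames of the paper:

        import hmac, hashlib, os, time

        # Sketch: compute an HMAC-SHA256 tag per message and measure the average
        # authentication time, in the spirit of the overhead evaluation above.

        key = os.urandom(32)
        message = os.urandom(64)          # e.g. one time-triggered frame payload (hypothetical size)

        def authenticate(msg):
            return hmac.new(key, msg, hashlib.sha256).digest()

        def verify(msg, tag):
            return hmac.compare_digest(authenticate(msg), tag)

        tag = authenticate(message)
        assert verify(message, tag)

        N = 10000
        start = time.perf_counter()
        for _ in range(N):
            authenticate(message)
        elapsed = time.perf_counter() - start
        print("average HMAC time per message: %.2f microseconds" % (1e6 * elapsed / N))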

  9. Computation and Communication Evaluation of an Authentication Mechanism for Time-Triggered Networked Control Systems.

    Science.gov (United States)

    Martins, Goncalo; Moondra, Arul; Dubey, Abhishek; Bhattacharjee, Anirban; Koutsoukos, Xenofon D

    2016-07-25

    In modern networked control applications, confidentiality and integrity are important features to address in order to prevent against attacks. Moreover, network control systems are a fundamental part of the communication components of current cyber-physical systems (e.g., automotive communications). Many networked control systems employ Time-Triggered (TT) architectures that provide mechanisms enabling the exchange of precise and synchronous messages. TT systems have computation and communication constraints, and with the aim to enable secure communications in the network, it is important to evaluate the computational and communication overhead of implementing secure communication mechanisms. This paper presents a comprehensive analysis and evaluation of the effects of adding a Hash-based Message Authentication (HMAC) to TT networked control systems. The contributions of the paper include (1) the analysis and experimental validation of the communication overhead, as well as a scalability analysis that utilizes the experimental result for both wired and wireless platforms and (2) an experimental evaluation of the computational overhead of HMAC based on a kernel-level Linux implementation. An automotive application is used as an example, and the results show that it is feasible to implement a secure communication mechanism without interfering with the existing automotive controller execution times. The methods and results of the paper can be used for evaluating the performance impact of security mechanisms and, thus, for the design of secure wired and wireless TT networked control systems.

  10. FALCON or how to compute measures time efficiently on dynamically evolving dense complex networks?

    Science.gov (United States)

    Franke, R; Ivanova, G

    2014-02-01

    A large number of topics in biology, medicine, neuroscience, psychology and sociology can be generally described via complex networks in order to investigate fundamental questions of structure, connectivity, information exchange and causality. Especially, research on biological networks like functional spatiotemporal brain activations and changes, caused by neuropsychiatric pathologies, is promising. Analyzing those so-called complex networks, the calculation of meaningful measures can be very long-winded depending on their size and structure. Even worse, in many labs only standard desktop computers are accessible to perform those calculations. Numerous investigations on complex networks regard huge but sparsely connected network structures, where most network nodes are connected to only a few others. Currently, there are several libraries available to tackle this kind of network. A problem arises when not only a few big and sparse networks have to be analyzed, but hundreds or thousands of smaller and conceivably dense networks (e.g. in measuring brain activation over time). Then every minute per network is crucial. For these cases there are several possibilities to use standard hardware more efficiently. It is not sufficient to apply just standard algorithms for dense graph characteristics. This article introduces the new library FALCON developed especially for the exploration of dense complex networks. Currently, it offers 12 different measures (like clustering coefficients), each for undirected-unweighted, undirected-weighted and directed-unweighted networks. It uses a multi-core approach in combination with comprehensive code and hardware optimizations. There is an alternative massively parallel GPU implementation for the most time-consuming measures, too. Finally, a comparison benchmark is integrated to support the choice of the most suitable library for a particular network issue. Copyright © 2013 Elsevier Inc. All rights reserved.
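
    For dense networks like those targeted by FALCON, matrix-based formulas are often a better fit than per-node adjacency-list algorithms. A minimal numpy sketch (the random dense graph is a hypothetical stand-in, and this is not FALCON's implementation) of the local clustering coefficient computed directly from the adjacency matrix:

        import numpy as np

        # Sketch: local clustering coefficients of a dense undirected graph from its
        # adjacency matrix, using the identity diag(A^3) = 2 * (triangles per node).

        rng = np.random.default_rng(0)
        n, p = 200, 0.4
        A = (rng.random((n, n)) < p).astype(float)
        A = np.triu(A, 1)
        A = A + A.T                        # symmetric 0/1 matrix, zero diagonal

        deg = A.sum(axis=1)
        triangles = np.diag(A @ A @ A) / 2.0
        possible = deg * (deg - 1) / 2.0
        with np.errstate(divide="ignore", invalid="ignore"):
            clustering = np.where(possible > 0, triangles / possible, 0.0)

        print("mean clustering coefficient:", clustering.mean())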

  11. Integrative analysis of many weighted co-expression networks using tensor computation.

    Directory of Open Access Journals (Sweden)

    Wenyuan Li

    2011-06-01

    Full Text Available The rapid accumulation of biological networks poses new challenges and calls for powerful integrative analysis tools. Most existing methods capable of simultaneously analyzing a large number of networks were primarily designed for unweighted networks, and cannot easily be extended to weighted networks. However, it is known that transforming weighted into unweighted networks by dichotomizing the edges of weighted networks with a threshold generally leads to information loss. We have developed a novel, tensor-based computational framework for mining recurrent heavy subgraphs in a large set of massive weighted networks. Specifically, we formulate the recurrent heavy subgraph identification problem as a heavy 3D subtensor discovery problem with sparse constraints. We describe an effective approach to solving this problem by designing a multi-stage, convex relaxation protocol, and a non-uniform edge sampling technique. We applied our method to 130 co-expression networks, and identified 11,394 recurrent heavy subgraphs, grouped into 2,810 families. We demonstrated that the identified subgraphs represent meaningful biological modules by validating against a large set of compiled biological knowledge bases. We also showed that the likelihood for a heavy subgraph to be meaningful increases significantly with its recurrence in multiple networks, highlighting the importance of the integrative approach to biological network analysis. Moreover, our approach based on weighted graphs detects many patterns that would be overlooked using unweighted graphs. In addition, we identified a large number of modules that occur predominately under specific phenotypes. This analysis resulted in a genome-wide mapping of gene network modules onto the phenome. Finally, by comparing module activities across many datasets, we discovered high-order dynamic cooperativeness in protein complex networks and transcriptional regulatory networks.

  12. Ptychographic X-ray computed tomography of extended colloidal networks in food emulsions

    DEFF Research Database (Denmark)

    Schou Nielsen, Mikkel; Bøgelund Munk, Merete; Diaz, Ana

    2016-01-01

    of suitable non-destructive 3D imaging techniques with submicron resolution. We present results of quantitative ptychographic X-ray computed tomography applied to a palm kernel oil based oil-in-water emulsion. The measurements were carried out at ambient pressure and temperature. The 3D structure...... of the extended colloidal network of fat globules was obtained with a resolution of around 300 nm. Through image analysis of the network structure, the fat globule size distribution was computed and compared to previous findings. In further support, the reconstructed electron density values were within 4...

  13. Simulation of worms transmission in computer network based on SIRS fuzzy epidemic model

    Science.gov (United States)

    Darti, I.; Suryanto, A.; Yustianingsih, M.

    2015-03-01

    In this paper we study numerically the behavior of worm transmission in a computer network. The model of worm transmission is derived by modifying a SIRS epidemic model. In this case, we consider that the transmission rate, recovery rate and rate of return to susceptibility after recovery follow fuzzy membership functions, rather than being constants. To study the transmission of worms in a computer network, we solve the model using the fourth-order Runge-Kutta method. Our numerical results show that the fuzzy transmission rate and fuzzy recovery rate may lead to a change of the basic reproduction number, which therefore also changes the stability properties of the equilibrium points.
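
    The worm-transmission model above is a modified SIRS system integrated with the classical fourth-order Runge-Kutta scheme. A minimal sketch with crisp (non-fuzzy) rates standing in for the fuzzy membership values of the paper; all parameter values and initial conditions are hypothetical:

        # Sketch: SIRS worm-transmission dynamics integrated with 4th-order Runge-Kutta.

        BETA, GAMMA, XI = 0.5, 0.2, 0.05   # infection, recovery, loss-of-immunity rates (hypothetical)

        def deriv(state):
            s, i, r = state
            ds = -BETA * s * i + XI * r
            di = BETA * s * i - GAMMA * i
            dr = GAMMA * i - XI * r
            return (ds, di, dr)

        def rk4_step(state, h):
            k1 = deriv(state)
            k2 = deriv(tuple(x + h / 2 * k for x, k in zip(state, k1)))
            k3 = deriv(tuple(x + h / 2 * k for x, k in zip(state, k2)))
            k4 = deriv(tuple(x + h * k for x, k in zip(state, k3)))
            return tuple(x + h / 6 * (a + 2 * b + 2 * c + d)
                         for x, a, b, c, d in zip(state, k1, k2, k3, k4))

        state, h = (0.99, 0.01, 0.0), 0.1   # initial S, I, R fractions (hypothetical)
        for step in range(500):
            state = rk4_step(state, h)
        print("S, I, R after 50 time units:", state)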

  14. Reducing Computational Overhead of Network Coding with Intrinsic Information Conveying

    DEFF Research Database (Denmark)

    Heide, Janus; Zhang, Qi; Pedersen, Morten V.

    This paper investigates the possibility of intrinsic information conveying in network coding systems. The information is embedded into the coding vector by constructing the vector based on a set of predefined rules. This information can subsequently be retrieved by any receiver. The starting point...... to the overall energy consumption, which is particularly problematic for mobile battery-driven devices. In RLNC, coding is performed over a FF (Finite Field). We propose to divide this field into sub fields, and let each sub field signify some information or state. In order to embed the information correctly...... the coding operations must be performed in a particular way, which we introduce. Finally we evaluate the suggested system and find that the amount of coding can be significantly reduced both at nodes that recode and decode....

  15. A Reconfigurable and Biologically Inspired Paradigm for Computation Using Network-On-Chip and Spiking Neural Networks

    Directory of Open Access Journals (Sweden)

    Jim Harkin

    2009-01-01

    Full Text Available FPGA devices have emerged as a popular platform for the rapid prototyping of biological Spiking Neural Network (SNN) applications, offering the key requirement of reconfigurability. However, FPGAs do not efficiently realise the biologically plausible neuron and synaptic models of SNNs, and current FPGA routing structures cannot accommodate the high levels of interneuron connectivity inherent in complex SNNs. This paper highlights and discusses the current challenges of implementing scalable SNNs on reconfigurable FPGAs. The paper proposes a novel field programmable neural network architecture (EMBRACE), incorporating low-power analogue spiking neurons, interconnected using a Network-on-Chip architecture. Results on the evaluation of the EMBRACE architecture using the XOR benchmark problem are presented, and the performance of the architecture is discussed. The paper also discusses the adaptability of the EMBRACE architecture in supporting fault tolerant computing.

  16. Integration of a network aware traffic generation device into a computer network emulation platform

    CSIR Research Space (South Africa)

    Von Solms, S

    2014-07-01

    Full Text Available aware traffic into the network emulation platform. Traffic generators are often systems that replay captured traffic packet-by-packet or generate traffic according to a specified model or preconfigured sequence. Many of these traffic generators can...

  17. Services Recommendation System based on Heterogeneous Network Analysis in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Junping Dong

    2014-04-01

    Full Text Available In cloud computing, resources are provided mainly in the form of services. In the distributed environment of cloud computing, how to find the needed services efficiently and accurately is the most urgent problem. Services are the intermediary of the cloud platform; they connect many service providers and requesters and together form a complex heterogeneous network. Traditional recommendation systems only consider the functional and non-functional requirements of services but ignore the links between providers and requesters of services, which results in inaccurate service positioning. Focusing on these problems, this study models the relationships of the cloud service participants as a heterogeneous information network, in order to mine the hidden relationships between service participants in the cloud computing environment. In the theoretical research, we propose a cloud service heterogeneous network extraction and automatic maintenance model, and a new service recommendation system based on heterogeneous service network ranking and clustering.

  18. Automation of multi-agent control for complex dynamic systems in heterogeneous computational network

    Science.gov (United States)

    Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan

    2017-01-01

    The rapid progress of high-performance computing entails new challenges related to solving large scientific problems in various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in the heterogeneous distributed computing environment is still a non-trivial issue. Control systems that operate in networks especially relate to this issue. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automation of the problem solving. Advantages of the proposed approach are demonstrated on the example of parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automation of multi-agent control for the systems in a parallel mode with various degrees of detailed elaboration.

  19. Learning to design synergetic computers with an extended symmetric diffusion network.

    Science.gov (United States)

    Okuhara, K; Osaki, S; Kijima, M

    1999-08-15

    This article proposes an extended symmetric diffusion network that is applied to the design of synergetic computers. The state of a synergetic computer is translated to that of order parameters whose dynamics is described by a stochastic differential equation. The order parameter converges to the Boltzmann distribution, under some condition on the drift term, derived by the Fokker-Planck equation. The network can learn the dynamics of the order parameters from a nonlinear potential. This property is necessary to design the coefficient values of the synergetic computer. We propose a searching function for the image processing executed by the synergetic computer. It is shown that the image processing with the searching function is superior to the usual image-associative function of synergetic computation. The proposed network can be related, as a special case, to the discrete-state Boltzmann machine by some transformation. Finally, the extended symmetric diffusion network is applied to the estimation problem of an entire density function, as well as the proposed searching function for the image processing.

  20. FASIMU: flexible software for flux-balance computation series in large metabolic networks

    Directory of Open Access Journals (Sweden)

    Gille Christoph

    2011-01-01

    Full Text Available Abstract Background Flux-balance analysis based on linear optimization is widely used to compute metabolic fluxes in large metabolic networks and gains increasing importance in network curation and structural analysis. Thus, a computational tool flexible enough to realize a wide variety of FBA algorithms and able to handle batch series of flux-balance optimizations is of great benefit. Results We present FASIMU, a command-line-oriented software for the computation of flux distributions using a variety of the most common FBA algorithms, including the first available implementation of (i) weighted flux minimization, (ii) fitness maximization for partially inhibited enzymes, and (iii) the concentration-based thermodynamic feasibility constraint. It allows batch computation with varying objectives and constraints suited for network pruning, leak analysis, flux-variability analysis, and systematic probing of metabolic objectives for network curation. Input and output support SBML. FASIMU can work with free (lp_solve and GLPK) or commercial (CPLEX, LINDO) solvers. A new plugin (faBiNA) for BiNA allows calculated flux distributions to be conveniently visualized. The platform-independent program is an open-source project, freely available under the GNU public license at http://www.bioinformatics.org/fasimu, including a manual, tutorial, and plugins. Conclusions We present a flux-balance optimization program whose main merits are the implementation of thermodynamics as a constraint, batch series of computations, free availability of sources, the choice of various external solvers, and flexibility regarding metabolic objectives and constraints.
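
    Flux-balance analysis of the kind FASIMU automates reduces to linear programming: maximize an objective flux subject to the steady-state constraint S·v = 0 and flux bounds. A minimal sketch with scipy on a tiny hypothetical two-metabolite, four-reaction network (the stoichiometry and bounds are illustrative, not a FASIMU model):

        import numpy as np
        from scipy.optimize import linprog

        # Sketch: flux-balance analysis as a linear program, maximizing a biomass
        # flux subject to S v = 0 and per-reaction bounds.
        # Metabolite A: produced by uptake (v0), consumed by v1 and v2.
        # Metabolite B: produced by v1, consumed by the biomass reaction v3.
        S = np.array([
            [ 1, -1, -1,  0],   # A
            [ 0,  1,  0, -1],   # B
        ])
        bounds = [(0, 10), (0, 10), (0, 10), (0, None)]   # flux bounds per reaction (hypothetical)
        c = [0, 0, 0, -1]                                 # minimize -v3, i.e. maximize biomass

        res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
        print("optimal fluxes:", res.x)
        print("maximal biomass flux:", -res.fun)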

  1. On the relevance of efficient, integrated computer and network monitoring in HEP distributed online environment

    CERN Document Server

    Carvalho, D F; Delgado, V; Albert, J N; Bellas, N; Javello, J; Miere, Y; Ruffinoni, D; Smith, G

    1996-01-01

    Large Scientific Equipments are controlled by Computer Systems whose complexity is growing driven, on the one hand, by the volume and variety of the information, its distributed nature, the sophistication of its treatment and, on the other hand, by the fast evolution of the computer and network market. Some people call them generically Large-Scale Distributed Data Intensive Information Systems or Distributed Computer Control Systems (DCCS) for those systems dealing more with real time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as Client-Server applications. In this framework the monitoring of the computer nodes, the communications network and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC in view, it is proposed to integrate the various functions of DCCS monitoring into one general purpose Multi-layer ...

  2. On the Relevancy of Efficient, Integrated Computer and Network Monitoring in HEP Distributed Online Environment

    Science.gov (United States)

    Carvalho, D.; Gavillet, Ph.; Delgado, V.; Albert, J. N.; Bellas, N.; Javello, J.; Miere, Y.; Ruffinoni, D.; Smith, G.

    Large Scientific Equipments are controlled by Computer Systems whose complexity is growing driven, on the one hand by the volume and variety of the information, its distributed nature, the sophistication of its treatment and, on the other hand by the fast evolution of the computer and network market. Some people call them generically Large-Scale Distributed Data Intensive Information Systems or Distributed Computer Control Systems (DCCS) for those systems dealing more with real time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as Client-Server applications. In this framework the monitoring of the computer nodes, the communications network and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC in view, it is proposed to integrate the various functions of DCCS monitoring into one general purpose Multi-layer System.

  3. A Comprehensive Review on Adaptability of Network Forensics Frameworks for Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Suleman Khan

    2014-01-01

    Full Text Available Network forensics enables investigation and identification of network attacks through the retrieved digital content. The proliferation of smartphones and the cost-effective universal data access through cloud has made Mobile Cloud Computing (MCC a congenital target for network attacks. However, confines in carrying out forensics in MCC is interrelated with the autonomous cloud hosting companies and their policies for restricted access to the digital content in the back-end cloud platforms. It implies that existing Network Forensic Frameworks (NFFs have limited impact in the MCC paradigm. To this end, we qualitatively analyze the adaptability of existing NFFs when applied to the MCC. Explicitly, the fundamental mechanisms of NFFs are highlighted and then analyzed using the most relevant parameters. A classification is proposed to help understand the anatomy of existing NFFs. Subsequently, a comparison is given that explores the functional similarities and deviations among NFFs. The paper concludes by discussing research challenges for progressive network forensics in MCC.

  4. Evolutionary Game Analysis of Competitive Information Dissemination on Social Networks: An Agent-Based Computational Approach

    Directory of Open Access Journals (Sweden)

    Qing Sun

    2015-01-01

    Full Text Available Social networks are formed by individuals, in which personalities, utility functions, and interaction rules are made as close to reality as possible. Taking competitive product-related information as a case, we propose a game-theoretic model for competitive information dissemination in social networks. The model explains how human factors impact competitive information dissemination, which is described as the dynamics of a coordination game in which players' payoffs are defined by a utility function. We then design a computational system that integrates the agent, the evolutionary game, and the social network. The approach can help to visualize the evolution of the share of competitive information adoption and diffusion, grasp the dynamic evolution features of the information adoption game over time, and explore microlevel interactions among users in different network structures under various scenarios. We discuss several scenarios to analyze the influence of several factors on the dissemination of competitive information, ranging from the personality of individuals to the structure of networks.
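
    The agent-based model above describes competitive information adoption as a coordination game played on a social network. A minimal sketch in which agents on a small-world graph best-respond to their neighbours' choices; the payoff matrix, network parameters, and synchronous update rule are hypothetical simplifications, not the paper's model:

        import random
        import networkx as nx

        # Sketch: two competing pieces of information, A and B, spread on a
        # small-world network via a coordination game; agents best-respond to
        # their neighbours' current choices.

        PAYOFF = {("A", "A"): 3, ("B", "B"): 2, ("A", "B"): 0, ("B", "A"): 0}

        random.seed(42)
        G = nx.watts_strogatz_graph(n=200, k=6, p=0.1, seed=42)
        choice = {v: random.choice(["A", "B"]) for v in G}   # initial adoption

        def best_response(v):
            gains = {s: sum(PAYOFF[(s, choice[u])] for u in G.neighbors(v)) for s in ("A", "B")}
            return max(gains, key=gains.get)

        for step in range(20):                               # synchronous updates
            choice = {v: best_response(v) for v in G}
            share_a = sum(1 for s in choice.values() if s == "A") / G.number_of_nodes()
            print("step", step, "share adopting A: %.2f" % share_a)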

  5. Computational modeling of the dependence of kindling rate on network properties

    Science.gov (United States)

    Biswal, B.; Niranjan, B. R.; Ullal, G.; Dasgupta, C.

    2006-05-01

    The dependence of the rate of kindling on network properties, such as the number of neurons, number of stored memories, and the number of neurons used to store each memory, is studied through computer simulations of an appropriate neural network model for kindling of focal epilepsy. Simulations are performed for models of both chemical and electrical kindling. Larger and more complex networks are found to take longer time to kindle, as observed in experiments. The nature of the dependence of the kindling rate on network properties is somewhat different between the two types of kindling. A simple analysis of the process of chemical kindling is presented, which provides a semi-quantitative explanation of the behavior observed in our simulations. This analysis also shows that our main conclusions about the dependence of the kindling rate on the size and complexity of the network are independent of some of the assumptions made in our modeling.

  6. Single-Board-Computer-Based Traffic Generator for a Heterogeneous and Hybrid Smart Grid Communication Network

    Directory of Open Access Journals (Sweden)

    Do Nguyet Quang

    2014-02-01

    Full Text Available In smart grid communication implementation, the network traffic pattern is one of the main factors that affect the system's performance. Examining different traffic patterns in the smart grid is therefore crucial when analyzing the network performance. Due to the heterogeneous and hybrid nature of the smart grid, the type of traffic distribution in the network is still unknown. The traffic popularly used for simulation and analysis no longer reflects the real traffic in a multi-technology and bi-directional communication system. Hence, in this study, a single-board computer is implemented as a traffic generator which can generate network traffic similar to that generated by various applications in the fully operational smart grid. Placed in strategic and appropriate positions, a collection of traffic generators allows network administrators to investigate and test the effect of heavy traffic on the performance of the smart grid communication system.
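
    As a minimal illustration of the kind of traffic generator a single-board computer could run, the sketch below sends UDP packets with configurable size and rate; the destination address, port, and traffic parameters are hypothetical, and real smart-grid traffic models would shape these values:

        import socket
        import time
        import os

        # Sketch: a tiny UDP traffic generator with configurable packet size,
        # rate and duration, as might run on a single-board computer.

        DEST = ("192.0.2.10", 9000)     # documentation-range address, hypothetical target
        PACKET_SIZE = 512               # bytes per packet
        PACKETS_PER_SECOND = 200
        DURATION = 5                    # seconds

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        payload = os.urandom(PACKET_SIZE)
        interval = 1.0 / PACKETS_PER_SECOND

        sent = 0
        end = time.time() + DURATION
        while time.time() < end:
            sock.sendto(payload, DEST)
            sent += 1
            time.sleep(interval)

        print("sent %d packets (~%.1f kB/s)" % (sent, sent * PACKET_SIZE / DURATION / 1000))
        sock.close()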

  7. Computing and Routing for Trust in Structured P2P Network

    Directory of Open Access Journals (Sweden)

    Biao Cai

    2009-09-01

    Full Text Available The study of trust in P2P networks currently focuses on how to defend effectively against various malicious behaviors, such as providing fake or misleading feedback about other peers, and on the management of trust in a P2P environment. However, the issue of portability - that a trusted peer can join (or leave) a certain P2P network at any time and anywhere - is seldom discussed. In this paper, a structured topology for trust management in portable P2P networks based on a DHT (distributed hash table) is proposed first, which includes trust management strategies and peer operations on a certain DHT circle. After that, a novel trust-computing model for the structured P2P network and the main trust decisions in the structured network are introduced. The effectiveness and practicality of the proposed trust management are shown in simulation experiments at the end.
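
    The structured trust topology above places peers and trust records on a DHT circle. A minimal sketch of consistent-hashing-style placement on such a ring, where each record is stored at the first peer whose identifier follows the key clockwise; the peer names, key names, and identifier-space size are hypothetical simplifications, not the paper's scheme:

        import bisect
        import hashlib

        # Sketch: placing trust records on a DHT ring. Each peer and key is hashed
        # onto a circular identifier space; a key is stored at the first peer whose
        # identifier is >= the key identifier (wrapping around the ring).

        RING_BITS = 32

        def ring_id(name):
            digest = hashlib.sha1(name.encode()).hexdigest()
            return int(digest, 16) % (2 ** RING_BITS)

        peers = ["peer-a", "peer-b", "peer-c", "peer-d"]
        ring = sorted((ring_id(p), p) for p in peers)
        positions = [pos for pos, _ in ring]

        def responsible_peer(key):
            idx = bisect.bisect_left(positions, ring_id(key)) % len(ring)
            return ring[idx][1]

        for key in ["trust:peer-x", "trust:peer-y", "trust:peer-z"]:
            print(key, "->", responsible_peer(key))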

  8. Dispatching packets on a global combining network of a parallel computer

    Science.gov (United States)

    Almasi, Gheorghe; Archer, Charles J.

    2011-07-19

    Methods, apparatus, and products are disclosed for dispatching packets on a global combining network of a parallel computer comprising a plurality of nodes connected for data communications using the network capable of performing collective operations and point to point operations that include: receiving, by an origin system messaging module on an origin node from an origin application messaging module on the origin node, a storage identifier and an operation identifier, the storage identifier specifying storage containing an application message for transmission to a target node, and the operation identifier specifying a message passing operation; packetizing, by the origin system messaging module, the application message into network packets for transmission to the target node, each network packet specifying the operation identifier and an operation type for the message passing operation specified by the operation identifier; and transmitting, by the origin system messaging module, the network packets to the target node.

  9. A comprehensive review on adaptability of network forensics frameworks for mobile cloud computing.

    Science.gov (United States)

    Khan, Suleman; Shiraz, Muhammad; Wahab, Ainuddin Wahid Abdul; Gani, Abdullah; Han, Qi; Rahman, Zulkanain Bin Abdul

    2014-01-01

    Network forensics enables investigation and identification of network attacks through the retrieved digital content. The proliferation of smartphones and the cost-effective universal data access through cloud has made Mobile Cloud Computing (MCC) a congenital target for network attacks. However, confines in carrying out forensics in MCC is interrelated with the autonomous cloud hosting companies and their policies for restricted access to the digital content in the back-end cloud platforms. It implies that existing Network Forensic Frameworks (NFFs) have limited impact in the MCC paradigm. To this end, we qualitatively analyze the adaptability of existing NFFs when applied to the MCC. Explicitly, the fundamental mechanisms of NFFs are highlighted and then analyzed using the most relevant parameters. A classification is proposed to help understand the anatomy of existing NFFs. Subsequently, a comparison is given that explores the functional similarities and deviations among NFFs. The paper concludes by discussing research challenges for progressive network forensics in MCC.

  10. A Newly Developed Method for Computing Reliability Measures in a Water Supply Network

    Directory of Open Access Journals (Sweden)

    Jacek Malinowski

    2016-01-01

    Full Text Available A reliability model of a water supply network has been examined. Its main features are: a topology that can be decomposed by the so-called state factorization into a (relatively small) number of derivative networks, each having a series-parallel structure (1); binary-state components (either operative or failed) with given flow capacities (2); a multi-state character of the whole network and its sub-networks - a network state is defined as the maximal flow between a source (sources) and a sink (sinks) (3); all capacities (component, network, and sub-network) have integer values (4). As the network operates, its state changes due to component failures, repairs, and replacements. A newly developed method of computing the inter-state transition intensities is presented. It is based on the so-called state factorization and series-parallel aggregation. The analysis of these intensities shows that the failure-repair process of the considered system is an asymptotically homogeneous Markov process. It is also demonstrated how certain reliability parameters useful for network maintenance planning can be determined on the basis of the asymptotic intensities. For better understanding of the presented method, an illustrative example is given. (original abstract)

  11. Network-based drug discovery by integrating systems biology and computational technologies.

    Science.gov (United States)

    Leung, Elaine L; Cao, Zhi-Wei; Jiang, Zhi-Hong; Zhou, Hua; Liu, Liang

    2013-07-01

    Network-based intervention has been a trend in curing systemic diseases, but it relies on regimen optimization and valid multi-target actions of the drugs. The complex multi-component nature of medicinal herbs may serve as a valuable resource for network-based multi-target drug discovery due to its potential treatment effects by synergy. Recently, robust multiple systems biology platforms have proved powerful in uncovering molecular mechanisms and connections between drugs and their targeted dynamic networks. However, optimization methods for drug combinations are insufficient, owing to the lack of tighter integration across multiple '-omics' databases. Newly developed algorithm- or network-based computational models can tightly integrate '-omics' databases and optimize combinational regimens of drug development, which encourages the use of medicinal herbs to develop a new wave of network-based multi-target drugs. However, challenges to further integration of medicinal herb databases with multiple systems biology platforms for multi-target drug optimization remain, owing to the uncertain reliability of individual data sets and the width, depth and degree of standardization of herbal medicine. Standardization of the methodology and terminology of multiple systems biology and herbal databases would facilitate this integration. Enhancing publicly accessible databases and increasing the number of studies using systems biology platforms on herbal medicine would also be helpful. Further integration across various '-omics' platforms and computational tools would accelerate the development of network-based drug discovery and network medicine.

  12. Hybrid Computation Model for Intelligent System Design by Synergism of Modified EFC with Neural Network

    OpenAIRE

    2015-01-01

    In the recent past, it has been seen in many applications that the synergism of computational intelligence techniques outperforms an individual technique. This paper proposes a new hybrid computation model which is a novel synergism of modified evolutionary fuzzy clustering with associated neural networks. It consists of two modules: fuzzy distribution and neural classifier. In the first module, mean patterns are distributed into the number of clusters based on the modified evolutionary fuzzy cluste...

  13. A computational-grid based system for continental drainage network extraction using SRTM digital elevation models

    Science.gov (United States)

    Curkendall, David W.; Fielding, Eric J.; Pohl, Josef M.; Cheng, Tsan-Huei

    2003-01-01

    We describe a new effort for the computation of elevation derivatives using the Shuttle Radar Topography Mission (SRTM) results. Jet Propulsion Laboratory's (JPL) SRTM has produced a near global database of highly accurate elevation data. The scope of this database enables computing precise stream drainage maps and other derivatives on Continental scales. We describe a computing architecture for this computationally very complex task based on NASA's Information Power Grid (IPG), a distributed high performance computing network based on the GLOBUS infrastructure. The SRTM data characteristics and unique problems they present are discussed. A new algorithm for organizing the conventional extraction algorithms [1] into a cooperating parallel grid is presented as an essential component to adapt to the IPG computing structure. Preliminary results are presented for a Southern California test area, established for comparing SRTM and its results against those produced using the USGS National Elevation Data (NED) model.

  14. Steady state analysis of Boolean molecular network models via model reduction and computational algebra

    Science.gov (United States)

    2014-01-01

    Background: A key problem in the analysis of mathematical models of molecular networks is the determination of their steady states. The present paper addresses this problem for Boolean network models, an increasingly popular modeling paradigm for networks lacking detailed kinetic information. For small models, the problem can be solved by exhaustive enumeration of all state transitions. But for larger models this is not feasible, since the size of the phase space grows exponentially with the dimension of the network. The dimension of published models is growing to over 100, so that efficient methods for steady state determination are essential. Several methods have been proposed for large networks, some of them heuristic. While these methods represent a substantial improvement in scalability over exhaustive enumeration, the problem for large networks is still unsolved in general. Results: This paper presents an algorithm that consists of two main parts. The first is a graph theoretic reduction of the wiring diagram of the network, while preserving all information about steady states. The second part formulates the determination of all steady states of a Boolean network as a problem of finding all solutions to a system of polynomial equations over the finite number system with two elements. This problem can be solved with existing computer algebra software. This algorithm compares favorably with several existing algorithms for steady state determination. One advantage is that it is not heuristic or reliant on sampling, but rather determines algorithmically and exactly all steady states of a Boolean network. The code for the algorithm, as well as the test suite of benchmark networks, is available upon request from the corresponding author. Conclusions: The algorithm presented in this paper reliably determines all steady states of sparse Boolean networks with up to 1000 nodes. The algorithm is effective at analyzing virtually all published models, even those of moderate connectivity.

  15. Steady state analysis of Boolean molecular network models via model reduction and computational algebra.

    Science.gov (United States)

    Veliz-Cuba, Alan; Aguilar, Boris; Hinkelmann, Franziska; Laubenbacher, Reinhard

    2014-06-26

    A key problem in the analysis of mathematical models of molecular networks is the determination of their steady states. The present paper addresses this problem for Boolean network models, an increasingly popular modeling paradigm for networks lacking detailed kinetic information. For small models, the problem can be solved by exhaustive enumeration of all state transitions. But for larger models this is not feasible, since the size of the phase space grows exponentially with the dimension of the network. The dimension of published models is growing to over 100, so that efficient methods for steady state determination are essential. Several methods have been proposed for large networks, some of them heuristic. While these methods represent a substantial improvement in scalability over exhaustive enumeration, the problem for large networks is still unsolved in general. This paper presents an algorithm that consists of two main parts. The first is a graph theoretic reduction of the wiring diagram of the network, while preserving all information about steady states. The second part formulates the determination of all steady states of a Boolean network as a problem of finding all solutions to a system of polynomial equations over the finite number system with two elements. This problem can be solved with existing computer algebra software. This algorithm compares favorably with several existing algorithms for steady state determination. One advantage is that it is not heuristic or reliant on sampling, but rather determines algorithmically and exactly all steady states of a Boolean network. The code for the algorithm, as well as the test suite of benchmark networks, is available upon request from the corresponding author. The algorithm presented in this paper reliably determines all steady states of sparse Boolean networks with up to 1000 nodes. The algorithm is effective at analyzing virtually all published models even those of moderate connectivity. The problem for
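
    For intuition only, the defining condition f(x) = x can be checked by brute force on a toy network, which is exactly the enumeration the paper's reduction and polynomial-system solve over the two-element field are designed to avoid. The three-node update rules below are invented.

      from itertools import product

      # Toy Boolean network: invented update rules for nodes (x1, x2, x3).
      def f(x):
          x1, x2, x3 = x
          return (
              x2 & x3,   # f1: AND
              x1 | x3,   # f2: OR
              x1 ^ x2,   # f3: XOR, i.e. addition over the two-element field
          )

      # A state is steady exactly when f_i(x) = x_i for every node i.
      steady = [x for x in product([0, 1], repeat=3) if f(x) == x]
      print("Steady states:", steady)   # -> [(0, 0, 0)] for these rules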

  16. Representing spatial information in a computational model for network management

    Science.gov (United States)

    Blaisdell, James H.; Brownfield, Thomas F.

    1994-01-01

    While currently available relational database management systems (RDBMS) allow inclusion of spatial information in a data model, they lack tools for presenting this information in an easily comprehensible form. Computer-aided design (CAD) software packages provide adequate functions to produce drawings, but still require manual placement of symbols and features. This project has demonstrated a bridge between the data model of an RDBMS and the graphic display of a CAD system. It is shown that the CAD system can be used to control the selection of data with spatial components from the database and then quickly plot that data on a map display. It is shown that the CAD system can be used to extract data from a drawing and then control the insertion of that data into the database. These demonstrations were successful in a test environment that incorporated many features of known working environments, suggesting that the techniques developed could be adapted for practical use.

  17. Configuring compute nodes of a parallel computer in an operational group into a plurality of independent non-overlapping collective networks

    Science.gov (United States)

    Archer, Charles J.; Inglett, Todd A.; Ratterman, Joseph D.; Smith, Brian E.

    2010-03-02

    Methods, apparatus, and products are disclosed for configuring compute nodes of a parallel computer in an operational group into a plurality of independent non-overlapping collective networks, the compute nodes in the operational group connected together for data communications through a global combining network, that include: partitioning the compute nodes in the operational group into a plurality of non-overlapping subgroups; designating one compute node from each of the non-overlapping subgroups as a master node; and assigning, to the compute nodes in each of the non-overlapping subgroups, class routing instructions that organize the compute nodes in that non-overlapping subgroup as a collective network such that the master node is a physical root.

  18. Computing Smallest Intervention Strategies for Multiple Metabolic Networks in a Boolean Model

    Science.gov (United States)

    Lu, Wei; Song, Jiangning; Akutsu, Tatsuya

    2015-01-01

    This article considers the problem whereby, given two metabolic networks N1 and N2, a set of source compounds, and a set of target compounds, we must find the minimum set of reactions whose removal (knockout) ensures that the target compounds are not producible in N1 but are producible in N2. Similar studies exist for the problem of finding the minimum knockout with the smallest side effect for a single network. However, if technologies of external perturbations are advanced in the near future, it may be important to develop methods of computing the minimum knockout for multiple networks (MKMN). Flux balance analysis (FBA) is efficient if a well-polished model is available. However, that is not always the case. Therefore, in this article, we study MKMN in Boolean models and an elementary mode (EM)-based model. Integer linear programming (ILP)-based methods are developed for these models, since MKMN is NP-complete for both the Boolean model and the EM-based model. Computer experiments are conducted with metabolic networks of Clostridium perfringens SM101 and Bifidobacterium longum DJO10A, known as bad and good bacteria for the human intestine, respectively. The results show that larger networks are more likely to have MKMN solutions. However, solving for these larger networks takes a very long time, and often the computation cannot be completed. This is reasonable, because small networks do not have many alternative pathways, making it difficult to satisfy the MKMN condition, whereas in large networks the number of candidate solutions explodes. Our developed software minFvskO is available online. PMID:25684199

  19. Computing smallest intervention strategies for multiple metabolic networks in a boolean model.

    Science.gov (United States)

    Lu, Wei; Tamura, Takeyuki; Song, Jiangning; Akutsu, Tatsuya

    2015-02-01

    This article considers the problem whereby, given two metabolic networks N1 and N2, a set of source compounds, and a set of target compounds, we must find the minimum set of reactions whose removal (knockout) ensures that the target compounds are not producible in N1 but are producible in N2. Similar studies exist for the problem of finding the minimum knockout with the smallest side effect for a single network. However, if technologies of external perturbations are advanced in the near future, it may be important to develop methods of computing the minimum knockout for multiple networks (MKMN). Flux balance analysis (FBA) is efficient if a well-polished model is available. However, that is not always the case. Therefore, in this article, we study MKMN in Boolean models and an elementary mode (EM)-based model. Integer linear programming (ILP)-based methods are developed for these models, since MKMN is NP-complete for both the Boolean model and the EM-based model. Computer experiments are conducted with metabolic networks of Clostridium perfringens SM101 and Bifidobacterium longum DJO10A, known as bad and good bacteria for the human intestine, respectively. The results show that larger networks are more likely to have MKMN solutions. However, solving for these larger networks takes a very long time, and often the computation cannot be completed. This is reasonable, because small networks do not have many alternative pathways, making it difficult to satisfy the MKMN condition, whereas in large networks the number of candidate solutions explodes. Our developed software minFvskO is available online.
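
    A brute-force Python sketch of the MKMN condition on invented toy data (the paper instead solves integer linear programs): producibility is computed as a Boolean fixpoint over reactions, and removal sets of increasing size are tested until the target is no longer producible in N1 yet still producible in N2.

      from itertools import combinations

      # Invented toy networks: reaction name -> (substrate set, product set).
      N1 = {"r1": ({"s"}, {"a"}), "r2": ({"a"}, {"t"}), "r3": ({"s"}, {"t"})}
      N2 = {"r1": ({"s"}, {"a"}), "r2": ({"a"}, {"t"}),
            "r4": ({"s"}, {"b"}), "r5": ({"b"}, {"t"})}
      sources, target = {"s"}, "t"

      def producible(network, removed):
          # Boolean-model producibility: iterate to a fixpoint of reachable compounds.
          have, changed = set(sources), True
          while changed:
              changed = False
              for name, (subs, prods) in network.items():
                  if name not in removed and subs <= have and not prods <= have:
                      have |= prods
                      changed = True
          return target in have

      reactions = sorted(set(N1) | set(N2))
      for k in range(len(reactions) + 1):
          hits = [set(c) for c in combinations(reactions, k)
                  if not producible(N1, set(c)) and producible(N2, set(c))]
          if hits:
              print("smallest knockout sets:", hits)
              break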

  20. A network-based multi-target computational estimation scheme for anticoagulant activities of compounds.

    Directory of Open Access Journals (Sweden)

    Qian Li

    Full Text Available BACKGROUND: Traditional virtual screening pays more attention to the predicted binding affinity between a drug molecule and a target related to a certain disease than to phenotypic data of the drug molecule against the disease system, and is therefore often less effective in discovering drugs for treating many types of complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods of computational estimation of the whole efficacy of a compound in a complex disease system are needed, given the distinct weights of the different targets in a biological process and the standpoint that partial inhibition of several targets can be more efficient than the complete inhibition of a single target. METHODOLOGY: We developed a novel approach that integrates the affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From the results of network efficiency calculations for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by the complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological processes in the human clotting cascade system. Furthermore, the method, which combines network efficiency with molecular docking scores, was applied to estimate the anticoagulant activities of a series of argatroban intermediates and of eight natural products, respectively. The better correlation (r = 0.671) between the experimental data and the decrease of the network deficiency suggests that the approach could be a promising computational systems biology tool to aid identification of anticoagulant activities of compounds in drug discovery. CONCLUSIONS: This article proposes a network-based multi-target computational estimation

  1. A network-based multi-target computational estimation scheme for anticoagulant activities of compounds.

    Science.gov (United States)

    Li, Qian; Li, Xudong; Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie

    2011-03-22

    Traditional virtual screening pays more attention to the predicted binding affinity between a drug molecule and a target related to a certain disease than to phenotypic data of the drug molecule against the disease system, and is therefore often less effective in discovering drugs for treating many types of complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods of computational estimation of the whole efficacy of a compound in a complex disease system are needed, given the distinct weights of the different targets in a biological process and the standpoint that partial inhibition of several targets can be more efficient than the complete inhibition of a single target. We developed a novel approach that integrates the affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From the results of network efficiency calculations for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by the complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological processes in the human clotting cascade system. Furthermore, the method, which combines network efficiency with molecular docking scores, was applied to estimate the anticoagulant activities of a series of argatroban intermediates and of eight natural products, respectively. The better correlation (r = 0.671) between the experimental data and the decrease of the network deficiency suggests that the approach could be a promising computational systems biology tool to aid identification of anticoagulant activities of compounds in drug discovery. This article proposes a network-based multi-target computational estimation method for anticoagulant activities of compounds by
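
    The network-efficiency half of the scheme can be sketched in a few lines: each node of a small invented cascade-like graph is removed in turn and nodes are ranked by the resulting drop in global efficiency; in the full method this topological fragility ranking is combined with multi-target docking scores. The graph below only loosely mimics a clotting cascade and is not the authors' model.

      import networkx as nx

      # Invented miniature cascade; the paper works on the full human clotting cascade.
      G = nx.Graph([("XII", "XI"), ("XI", "IX"), ("IX", "X"), ("VIII", "X"),
                    ("VII", "X"), ("X", "II"), ("II", "I")])

      base = nx.global_efficiency(G)
      drops = {}
      for node in list(G.nodes):
          H = G.copy()
          H.remove_node(node)
          drops[node] = base - nx.global_efficiency(H)

      # Nodes whose removal degrades efficiency the most are the "fragile" targets.
      for node, drop in sorted(drops.items(), key=lambda kv: -kv[1]):
          print(f"{node}: efficiency drop = {drop:.3f}")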

  2. Advancing the boundaries of high-connectivity network simulation with distributed computing.

    Science.gov (United States)

    Morrison, Abigail; Mehring, Carsten; Geisel, Theo; Aertsen, A D; Diesmann, Markus

    2005-08-01

    The availability of efficient and reliable simulation tools is one of the mission-critical technologies in the fast-moving field of computational neuroscience. Research indicates that higher brain functions emerge from large and complex cortical networks and their interactions. The large number of elements (neurons) combined with the high connectivity (synapses) of the biological network and the specific type of interactions impose severe constraints on the explorable system size that previously have been hard to overcome. Here we present a collection of new techniques combined into a coherent simulation tool removing the fundamental obstacle in the computational study of biological neural networks: the enormous number of synaptic contacts per neuron. Distributing an individual simulation over multiple computers enables the investigation of networks orders of magnitude larger than previously possible. The software scales excellently on a wide range of tested hardware, so it can be used in an interactive and iterative fashion for the development of ideas, and results can be produced quickly even for very large networks. In contrast to earlier approaches, a wide class of neuron models and synaptic dynamics can be represented.

  3. Prediction and Assessment of Student Behaviour in Open and Distance Education in Computers Using Bayesian Networks

    Science.gov (United States)

    Xenos, Michalis

    2004-01-01

    This paper presents a methodological approach based on Bayesian Networks for modelling the behaviour of the students of a bachelor course in computers in an Open University that deploys distance educational methods. It describes the structure of the model, its application for modelling the behaviour of student groups in the Informatics Course of…

  4. POSTER: Privacy-Preserving Profile Similarity Computation in Online Social Networks

    NARCIS (Netherlands)

    Jeckmans, Arjan; Tang, Qiang; Hartel, Pieter

    2011-01-01

    Currently, none of the existing online social networks (OSNs) enables its users to make new friends without revealing their private information. This leaves the users in a vulnerable position when searching for new friends. We propose a solution which enables a user to compute her profile similarity

  5. Data systems and computer science space data systems: Onboard networking and testbeds

    Science.gov (United States)

    Dalton, Dan

    1991-01-01

    The technical objectives are to develop high-performance, space-qualifiable, onboard computing, storage, and networking technologies. The topics are presented in viewgraph form and include the following: justification; technology challenges; program description; and state-of-the-art assessment.

  6. Stochastic data-flow graph models for the reliability analysis of communication networks and computer systems

    Energy Technology Data Exchange (ETDEWEB)

    Chen, D.J.

    1988-01-01

    The literature is abundant with combinatorial reliability analysis of communication networks and fault-tolerant computer systems. However, it is very difficult to formulate reliability indexes using combinatorial methods. These limitations have led to the development of time-dependent reliability analysis using stochastic processes. In this research, time-dependent reliability-analysis techniques using Data Flow Graphs (DFG) are developed. The chief advantages of DFG models over other models are their compactness, structural correspondence with the systems, and general amenability to direct interpretation. This makes the verification of the correspondence of the data-flow graph representation to the actual system possible. Several DFG models are developed and used to analyze the reliability of communication networks and computer systems. Specifically, Stochastic Data Flow Graphs (SDFG), both discrete-time and continuous-time models, are developed and used to compute the time-dependent reliability of communication networks and computer systems. The repair and coverage phenomena of communication networks are also analyzed using SDFG models.

  7. Computer Model of a "Sense of Humour". II. Realization in Neural Networks

    CERN Document Server

    Suslov, I M

    1992-01-01

    The computer realization of a "sense of humour" requires the creation of an algorithm for solving the "linguistic problem", i.e. the problem of recognizing a continuous sequence of polysemantic images. Such an algorithm may be realized in the Hopfield model of a neural network after its proper modification.

  8. Characterization of computer network events through simultaneous feature selection and clustering of intrusion alerts

    Science.gov (United States)

    Chen, Siyue; Leung, Henry; Dondo, Maxwell

    2014-05-01

    As computer network security threats increase, many organizations implement multiple Network Intrusion Detection Systems (NIDS) to maximize the likelihood of intrusion detection and provide a comprehensive understanding of intrusion activities. However, NIDS trigger a massive number of alerts on a daily basis. This can be overwhelming for computer network security analysts since it is a slow and tedious process to manually analyse each alert produced. Thus, automated and intelligent clustering of alerts is important to reveal the structural correlation of events by grouping alerts with common features. As the nature of computer network attacks, and therefore alerts, is not known in advance, unsupervised alert clustering is a promising approach to achieve this goal. We propose a joint optimization technique for feature selection and clustering to aggregate similar alerts and to reduce the number of alerts that analysts have to handle individually. More precisely, each identified feature is assigned a binary value, which reflects the feature's saliency. This value is treated as a hidden variable and incorporated into a likelihood function for clustering. Since computing the optimal solution of the likelihood function directly is analytically intractable, we use the Expectation-Maximisation (EM) algorithm to iteratively update the hidden variable and use it to maximize the expected likelihood. Our empirical results, using a labelled Defense Advanced Research Projects Agency (DARPA) 2000 reference dataset, show that the proposed method gives better results than the EM clustering without feature selection in terms of the clustering accuracy.
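
    A reduced sketch of the clustering step: it drops the per-feature saliency variable and runs plain EM (a Gaussian mixture) on a few invented numeric alert features, whereas the proposed method treats feature saliency as an additional hidden variable updated inside the same E/M iterations.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      # Synthetic alerts: columns stand in for (destination port, packet count, severity).
      scan_like = rng.normal([80, 5, 2], [3, 2, 0.5], size=(100, 3))
      flood_like = rng.normal([443, 200, 4], [3, 20, 0.5], size=(100, 3))
      X = np.vstack([scan_like, flood_like])

      # Plain EM clustering; the paper folds feature selection into this E/M loop.
      gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
      print("cluster sizes:", np.bincount(gmm.predict(X)))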

  9. Back-of-the-Envelope Computation of Throughput Distributions in CSMA Wireless Networks

    CERN Document Server

    Liew, S C; Leung, J; Wong, B

    2007-01-01

    This work started out with our accidental discovery of a pattern of throughput distributions among links in IEEE 802.11 networks from experimental results. This pattern gives rise to an easy computation method, which we term back-of-the-envelope (BoE) computation, because for many network configurations, very accurate results can be obtained within minutes, if not seconds, by simple hand computation. BoE beats prior methods in terms of both speed and accuracy. While the computation procedure of BoE is simple, explaining why it works is by no means trivial. Indeed, the majority of our investigative efforts have been devoted to the construction of a theory to explain BoE. This paper models an ideal CSMA network as a set of interacting on-off telegraph processes. In developing the theory, we discovered a number of analytical techniques and observations that have eluded prior research, such as that the carrier-sensing interactions among links in an ideal CSMA network result in a system state evolution that is time-...
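
    The object underlying BoE can be sketched for a toy conflict graph: in an idealized CSMA network the stationary distribution is a product form over the independent sets of the conflict graph, and a link's normalized throughput is the probability that it is active. The three-link chain and the unit access intensities below are illustrative.

      from itertools import combinations

      # Conflict graph of a 3-link chain: the middle link conflicts with both ends.
      links = [0, 1, 2]
      conflicts = {(0, 1), (1, 2)}                      # illustrative topology
      rho = {0: 1.0, 1: 1.0, 2: 1.0}                    # illustrative access intensities

      def independent(subset):
          return all((a, b) not in conflicts and (b, a) not in conflicts
                     for a, b in combinations(subset, 2))

      # Product-form stationary distribution over independent sets of active links.
      states = [s for r in range(len(links) + 1)
                for s in combinations(links, r) if independent(s)]
      weight = {}
      for s in states:
          w = 1.0
          for l in s:
              w *= rho[l]
          weight[s] = w
      Z = sum(weight.values())

      for l in links:
          throughput = sum(w for s, w in weight.items() if l in s) / Z
          print(f"link {l}: normalized throughput = {throughput:.3f}")

    With unit intensities the outer links each get 0.4 while the middle link gets only 0.2, the kind of uneven throughput distribution that BoE predicts by hand.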

  10. 2nd FTRA International Conference on Ubiquitous Computing Application and Wireless Sensor Network

    CERN Document Server

    Pan, Yi; Chao, Han-Chieh; Yi, Gangman

    2015-01-01

    IT changes everyday life, especially in education and medicine. The goal of ITME 2014 is to further explore the theoretical and practical issues of Ubiquitous Computing Application and Wireless Sensor Network. It also aims to foster new ideas and collaboration between researchers and practitioners. The organizing committee is soliciting unpublished papers for the main conference and its special tracks.

  11. Systematic Approach to Computational Design of Gene Regulatory Networks with Information Processing Capabilities.

    Science.gov (United States)

    Moskon, Miha; Mraz, Miha

    2014-01-01

    We present several measures that can be used in de novo computational design of biological systems with information processing capabilities. Their main purpose is to objectively evaluate the behavior and identify the biological information processing structures with the best dynamical properties. They can be used to define constraints that allow one to simplify the design of more complex biological systems. These measures can be applied to existent computational design approaches in synthetic biology, i.e., rational and automatic design approaches. We demonstrate their use on (a) computational models of several basic information processing structures implemented with gene regulatory networks and (b) a modular design of a synchronous toggle switch.

  12. Vascular Dynamics Aid a Coupled Neurovascular Network Learn Sparse Independent Features: A Computational Model.

    Science.gov (United States)

    Philips, Ryan T; Chhabria, Karishma; Chakravarthy, V Srinivasa

    2016-01-01

    Cerebral vascular dynamics are generally thought to be controlled by neural activity in a unidirectional fashion. However, both computational modeling and experimental evidence point to the feedback effects of vascular dynamics on neural activity. Vascular feedback in the form of glucose and oxygen controls neuronal ATP, either directly or via the agency of astrocytes, which in turn modulates neural firing. Recently, a detailed model of the neuron-astrocyte-vessel system has shown how vasomotion can modulate neural firing. Similarly, arguing from known cerebrovascular physiology, an approach known as "hemoneural hypothesis" postulates functional modulation of neural activity by vascular feedback. To instantiate this perspective, we present a computational model in which a network of "vascular units" supplies energy to a neural network. The complex dynamics of the vascular network, modeled by a network of oscillators, turns neurons ON and OFF randomly. The informational consequence of such dynamics is explored in the context of an auto-encoder network. In the proposed model, each vascular unit supplies energy to a subset of hidden neurons of an autoencoder network, which constitutes its "projective field." Neurons that receive adequate energy in a given trial have reduced threshold, and thus are prone to fire. Dynamics of the vascular network are governed by changes in the reconstruction error of the auto-encoder network, interpreted as the neuronal demand. Vascular feedback causes random inactivation of a subset of hidden neurons in every trial. We observe that, under conditions of desynchronized vascular dynamics, the output reconstruction error is low and the feature vectors learnt are sparse and independent. Our earlier modeling study highlighted the link between desynchronized vascular dynamics and efficient energy delivery in skeletal muscle. We now show that desynchronized vascular dynamics leads to efficient training in an auto-encoder neural network.

  13. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models.

    Directory of Open Access Journals (Sweden)

    Alberto Mazzoni

    2015-12-01

    Full Text Available Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best "LFP proxy", we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with "ground-truth" LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphology, spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.

  14. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models.

    Science.gov (United States)

    Mazzoni, Alberto; Lindén, Henrik; Cuntz, Hermann; Lansner, Anders; Panzeri, Stefano; Einevoll, Gaute T

    2015-12-01

    Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best "LFP proxy", we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with "ground-truth" LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphology, spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.
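
    A minimal sketch of what a "fixed linear combination of the LIF synaptic currents" looks like in code: the inhibitory current is delayed and re-weighted before being summed with the excitatory current. The stand-in current traces, the weight and the delay below are illustrative placeholders, not the coefficients fitted in the paper.

      import numpy as np

      dt_ms = 0.1
      rng = np.random.default_rng(1)
      # Stand-ins for population-summed synaptic current magnitudes from a LIF network.
      ampa = np.abs(rng.normal(size=10000))   # excitatory currents
      gaba = np.abs(rng.normal(size=10000))   # inhibitory currents

      # Proxy: excitatory current plus a delayed, re-weighted inhibitory current.
      weight_gaba = 1.65                      # illustrative weight
      delay_steps = int(6.0 / dt_ms)          # illustrative 6 ms delay
      gaba_delayed = np.concatenate([np.zeros(delay_steps), gaba[:-delay_steps]])
      lfp_proxy = ampa + weight_gaba * gaba_delayed
      print("proxy mean/std:", lfp_proxy.mean(), lfp_proxy.std())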

  15. Representing and computing regular languages on massively parallel networks

    Energy Technology Data Exchange (ETDEWEB)

    Miller, M.I.; O' Sullivan, J.A. (Electronic Systems and Research Lab., of Electrical Engineering, Washington Univ., St. Louis, MO (US)); Boysam, B. (Dept. of Electrical, Computer and Systems Engineering, Rensselaer Polytechnic Inst., Troy, NY (US)); Smith, K.R. (Dept. of Electrical Engineering, Southern Illinois Univ., Edwardsville, IL (US))

    1991-01-01

    This paper proposes a general method for incorporating rule-based constraints corresponding to regular languages into stochastic inference problems, thereby allowing for a unified representation of stochastic and syntactic pattern constraints. The authors' approach first establishes the formal connection of rules to Chomsky grammars, and generalizes the original work of Shannon on the encoding of rule-based channel sequences to Markov chains of maximum entropy. This maximum entropy probabilistic view leads to Gibbs representations with potentials whose number of minima grows at precisely the exponential rate at which the language of deterministically constrained sequences grows. These representations are coupled to stochastic diffusion algorithms, which sample the language-constrained sequences by visiting the energy minima according to the underlying Gibbs probability law. The coupling to stochastic search methods yields the all-important practical result that fully parallel stochastic cellular automata may be derived to generate samples from the rule-based constraint sets. The production rules and neighborhood state structure of the language of sequences directly determine the necessary connection structures of the required parallel computing surface. Representations of this type have been mapped to the DAP-510 massively-parallel processor consisting of 1024 mesh-connected bit-serial processing elements for performing automated segmentation of electron-micrograph images.

  16. Abstracting massive data for lightweight intrusion detection in computer networks

    KAUST Repository

    Wang, Wei

    2016-10-15

    Anomaly intrusion detection in big data environments calls for lightweight models that are able to achieve real-time performance during detection. Abstracting audit data provides a solution to improve the efficiency of data processing in intrusion detection. Data abstraction refers to abstracting or extracting the most relevant information from the massive dataset. In this work, we propose three strategies of data abstraction, namely, exemplar extraction, attribute selection and attribute abstraction. We first propose an effective method called exemplar extraction to extract representative subsets from the original massive data prior to building the detection models. Two clustering algorithms, Affinity Propagation (AP) and traditional k-means, are employed to find the exemplars from the audit data. k-Nearest Neighbor (k-NN), Principal Component Analysis (PCA) and one-class Support Vector Machine (SVM) are used for the detection. We then employ another two strategies, attribute selection and attribute extraction, to abstract audit data for anomaly intrusion detection. Two HTTP streams collected from a real computing environment as well as the KDD'99 benchmark data set are used to validate these three strategies of data abstraction. The comprehensive experimental results show that while all the three strategies improve the detection efficiency, the AP-based exemplar extraction achieves the best performance of data abstraction.
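
    A compact sketch of the exemplar-extraction strategy on invented feature vectors: Affinity Propagation selects a small set of exemplars from the "normal" audit records, and a lightweight one-class SVM detector is then trained on the exemplars only. All parameter values are illustrative.

      import numpy as np
      from sklearn.cluster import AffinityPropagation
      from sklearn.svm import OneClassSVM

      rng = np.random.default_rng(0)
      # Invented "normal traffic" feature vectors standing in for audit records.
      normal = np.vstack([rng.normal(0.0, 0.5, size=(250, 8)),
                          rng.normal(3.0, 0.5, size=(250, 8))])

      # Exemplar extraction: keep only the cluster centers found by Affinity Propagation.
      ap = AffinityPropagation(damping=0.9, max_iter=500, random_state=0).fit(normal)
      exemplars = ap.cluster_centers_
      print("kept", len(exemplars), "exemplars out of", len(normal), "records")

      # Train the lightweight detector on the exemplars instead of the full data.
      detector = OneClassSVM(nu=0.1).fit(exemplars)
      suspicious = rng.normal(8.0, 0.5, size=(5, 8))
      print("predictions on unseen traffic:", detector.predict(suspicious))  # -1 = anomaly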

  17. A Study of Quality of Service Communication for High-Speed Packet-Switching Computer Sub-Networks

    Science.gov (United States)

    Cui, Zhenqian

    1999-01-01

    With the development of high-speed networking technology, computer networks, including local-area networks (LANs), wide-area networks (WANs) and the Internet, are extending their traditional roles of carrying computer data. They are being used for Internet telephony, multimedia applications such as conferencing and video on demand, distributed simulations, and other real-time applications. LANs are even used for distributed real-time process control and computing as a cost-effective approach. Differing from traditional data transfer, these new classes of high-speed network applications (video, audio, real-time process control, and others) are delay sensitive. The usefulness of data depends not only on the correctness of received data, but also the time that data are received. In other words, these new classes of applications require networks to provide guaranteed services or quality of service (QoS). Quality of service can be defined by a set of parameters and reflects a user's expectation about the underlying network's behavior. Traditionally, distinct services are provided by different kinds of networks. Voice services are provided by telephone networks, video services are provided by cable networks, and data transfer services are provided by computer networks. A single network providing different services is called an integrated-services network.

  18. The ASCI Network for SC '98: Dense Wave Division Multiplexing for Distributed and Distance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Adams, R.L.; Butman, W.; Martinez, L.G.; Pratt, T.J.; Vahle, M.O.

    1999-06-01

    This document highlights the activities of DISCOM's distance computing and communication team at the 1998 Supercomputing conference in Orlando, Florida. This conference is sponsored by the IEEE and ACM. Sandia National Laboratories, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory have participated in this conference for ten years. For the last three years, the three laboratories have had a joint booth at the conference under the DOE's ASCI, the Accelerated Strategic Computing Initiative. The DISCOM communication team uses the forum to demonstrate and focus communications and networking developments. At SC '98, DISCOM demonstrated the capabilities of Dense Wave Division Multiplexing. We exhibited an OC48 ATM encryptor. We also coordinated the other networking activities within the booth. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support overall strategies in ATM networking.

  19. Optimal control strategy for a novel computer virus propagation model on scale-free networks

    Science.gov (United States)

    Zhang, Chunming; Huang, Haitao

    2016-06-01

    This paper aims to study the combined impact of system reinstallation and network topology on the spread of computer viruses over the Internet. Based on a scale-free network, this paper proposes a novel computer virus propagation model, the SLBOS model. A systematic analysis of this new model shows that the virus-free equilibrium is globally asymptotically stable when its spreading threshold is less than one; nevertheless, it is proved that the viral equilibrium is permanent if the spreading threshold is greater than one. Then, the impacts of different model parameters on the spreading threshold are analyzed. Next, an optimally controlled SLBOS epidemic model on complex networks is also studied. We prove that an optimal control exists for the control problem. Some numerical simulations are finally given to illustrate the main results.
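
    For a feel of how topology enters such models, the sketch below collapses the SLBOS compartments into a plain susceptible/infected simulation on a Barabási-Albert (scale-free) contact graph; all parameters are illustrative and the code reproduces neither the SLBOS dynamics nor the optimal-control analysis.

      import random
      import networkx as nx

      random.seed(0)
      G = nx.barabasi_albert_graph(n=2000, m=3, seed=0)   # scale-free contact graph

      beta, gamma, steps = 0.05, 0.02, 200                # infection / cure rates (illustrative)
      infected = set(random.sample(list(G.nodes), 10))

      for _ in range(steps):
          new_inf, cured = set(), set()
          for u in infected:
              new_inf.update(v for v in G.neighbors(u)
                             if v not in infected and random.random() < beta)
              if random.random() < gamma:
                  cured.add(u)
          infected = (infected | new_inf) - cured

      print("infected fraction:", len(infected) / G.number_of_nodes())

    Varying beta or the attachment parameter m gives a quick sense of how hub-rich topologies affect prevalence.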

  20. Spike-timing computation properties of a feed-forward neural network model

    Directory of Open Access Journals (Sweden)

    Drew Benjamin Sinha

    2014-01-01

    Full Text Available Brain function is characterized by dynamical interactions among networks of neurons. These interactions are mediated by network topology at many scales ranging from microcircuits to brain areas. Understanding how networks operate can be aided by understanding how the transformation of inputs depends upon network connectivity patterns, e.g. serial and parallel pathways. To tractably determine how single synapses or groups of synapses in such pathways shape transformations, we modeled feed-forward networks of 7-22 neurons in which synaptic strength changed according to a spike-timing dependent plasticity rule. We investigated how activity varied when dynamics were perturbed by an activity-dependent electrical stimulation protocol (spike-triggered stimulation; STS) in networks of different topologies and background input correlations. STS can successfully reorganize functional brain networks in vivo, but with a variability in effectiveness that may derive partially from the underlying network topology. In a simulated network with a single disynaptic pathway driven by uncorrelated background activity, structured spike-timing relationships between polysynaptically connected neurons were not observed. When background activity was correlated or parallel disynaptic pathways were added, however, robust polysynaptic spike timing relationships were observed, and application of STS yielded predictable changes in synaptic strengths and spike-timing relationships. These observations suggest that precise input-related or topologically induced temporal relationships in network activity are necessary for polysynaptic signal propagation. Such constraints for polysynaptic computation suggest potential roles for higher-order topological structure in network organization, such as maintaining polysynaptic correlation in the face of relatively weak synapses.
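
    A minimal sketch of a pair-based spike-timing-dependent plasticity rule of the kind assumed here: a synapse is potentiated when the presynaptic spike precedes the postsynaptic spike and depressed otherwise, with exponentially decaying magnitude. Amplitudes, time constant and spike times are illustrative, not taken from the paper.

      import numpy as np

      A_plus, A_minus, tau_ms = 0.01, 0.012, 20.0   # illustrative STDP parameters

      def stdp_dw(t_pre, t_post):
          # Positive dt (pre before post) -> potentiation, negative dt -> depression.
          dt = t_post - t_pre
          return A_plus * np.exp(-dt / tau_ms) if dt >= 0 else -A_minus * np.exp(dt / tau_ms)

      w = 0.5   # initial synaptic weight, kept in [0, 1]
      for t_pre, t_post in [(10.0, 15.0), (60.0, 55.0), (120.0, 130.0)]:
          w = float(np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0))
          print(f"pre {t_pre} ms, post {t_post} ms -> w = {w:.4f}")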

  1. Network Pharmacology Strategies Toward Multi-Target Anticancer Therapies: From Computational Models to Experimental Design Principles

    Science.gov (United States)

    Tang, Jing; Aittokallio, Tero

    2014-01-01

    Polypharmacology has emerged as a novel means in drug discovery for improving treatment response in clinical use. However, to really capitalize on the polypharmacological effects of drugs, there is a critical need to better model and understand how the complex interactions between drugs and their cellular targets contribute to drug efficacy and possible side effects. Network graphs provide a convenient modeling framework for dealing with the fact that most drugs act on cellular systems through targeting multiple proteins both through on-target and off-target binding. Network pharmacology models aim at addressing questions such as how and where in the disease network should one target to inhibit disease phenotypes, such as cancer growth, ideally leading to therapies that are less vulnerable to drug resistance and side effects by means of attacking the disease network at the systems level through synergistic and synthetic lethal interactions. Since the exponentially increasing number of potential drug target combinations makes a purely experimental approach quickly infeasible, this review depicts a number of computational models and algorithms that can effectively reduce the search space for determining the most promising combinations for experimental evaluation. Such computational-experimental strategies are geared toward realizing the full potential of multi-target treatments in different disease phenotypes. Our specific focus is on system-level network approaches to polypharmacology designs in anticancer drug discovery, where we give representative examples of how network-centric modeling may offer systematic strategies toward better understanding and even predicting the phenotypic responses to multi-target therapies.

  2. Computational Analysis of Topological Survivability of Large-Scale Engineering Networks with Heterogeneous Nodes

    Science.gov (United States)

    Poroseva, Svetlana

    2012-02-01

    The scale and complexity of modern networks, their integration, and the size of the population and businesses they have an impact on make their massive damage catastrophic for society and the economy. Such damage is usually caused by adverse events and is not considered by traditional design practices. In modern society, the likelihood of adverse events has substantially increased. Therefore, there is a need to evaluate the ability of a network to survive such damage. As the network topology is a key factor to consider, the goal of our research is to develop computational tools for quantifying its effect on network survivability. A "Selfish" algorithm will be presented that addresses the exponential-time complexity associated with the problem of generating and analyzing all fault combinations possible in a given network. The reduction of computational complexity is achieved by mapping an initial network topology with multiple sources and sinks onto a set of simpler, smaller topologies with multiple sources and a single sink. Application to the Texas power grid will be considered.

  3. The spread of computer viruses over a reduced scale-free network

    Science.gov (United States)

    Yang, Lu-Xing; Yang, Xiaofan

    2014-02-01

    Due to the high dimensionality of an epidemic model of computer viruses over a general scale-free network, it is difficult to make a close study of its dynamics. In particular, it is extremely difficult, if not impossible, to prove the global stability of its viral equilibrium, if any. To overcome this difficulty, we suggest simplifying a general scale-free network by partitioning all of its nodes into two classes: higher-degree nodes and lower-degree nodes, and then equating the degrees of all higher-degree nodes and all lower-degree nodes, respectively, yielding a reduced scale-free network. We then propose an epidemic model of computer viruses over a reduced scale-free network. A theoretical analysis reveals that the proposed model is bound to have a globally stable viral equilibrium, implying that any attempt to eradicate network viruses would prove unavailing. As a result, the next best thing we can do is to restrain virus prevalence. Based on an analysis of the impact of different model parameters on virus prevalence, some practicable measures are recommended to contain virus spreading. The work in this paper adequately justifies the idea of reduced scale-free networks.

  4. Parallel computation with molecular-motor-propelled agents in nanofabricated networks.

    Science.gov (United States)

    Nicolau, Dan V; Lard, Mercy; Korten, Till; van Delft, Falco C M J M; Persson, Malin; Bengtsson, Elina; Månsson, Alf; Diez, Stefan; Linke, Heiner; Nicolau, Dan V

    2016-03-01

    The combinatorial nature of many important mathematical problems, including nondeterministic-polynomial-time (NP)-complete problems, places a severe limitation on the problem size that can be solved with conventional, sequentially operating electronic computers. There have been significant efforts in conceiving parallel-computation approaches in the past, for example: DNA computation, quantum computation, and microfluidics-based computation. However, these approaches have not proven, so far, to be scalable and practical from a fabrication and operational perspective. Here, we report the foundations of an alternative parallel-computation system in which a given combinatorial problem is encoded into a graphical, modular network that is embedded in a nanofabricated planar device. Exploring the network in a parallel fashion using a large number of independent, molecular-motor-propelled agents then solves the mathematical problem. This approach uses orders of magnitude less energy than conventional computers, thus addressing issues related to power consumption and heat dissipation. We provide a proof-of-concept demonstration of such a device by solving, in a parallel fashion, the small instance {2, 5, 9} of the subset sum problem, which is a benchmark NP-complete problem. Finally, we discuss the technical advances necessary to make our system scalable with presently available technology.
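
    A software analogue of the device's encoding for the benchmark instance {2, 5, 9}: at each split junction an agent either adds the corresponding element or skips it, so the set of exits reached equals the set of achievable subset sums. In the physical device this enumeration is carried out in parallel by many motor-propelled agents rather than by the loop below.

      from itertools import product

      elements = [2, 5, 9]   # the small benchmark instance solved in the paper

      # Every include/skip decision sequence corresponds to one agent path.
      sums_reached = sorted({sum(e for e, take in zip(elements, path) if take)
                             for path in product([0, 1], repeat=len(elements))})
      print("exit positions (achievable subset sums):", sums_reached)
      # -> [0, 2, 5, 7, 9, 11, 14, 16]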

  5. MAGMA: A Liquid Software Approach to Fault Tolerance, Computer Network Security, and Survivable Networking

    Science.gov (United States)

    2001-12-01

    page traffic), and self-similar (which provides a more realistic packet model for total traffic generation and loads of a network segment) [an... A SAAM Region is designed to contain up to 40 nodes. With each of these nodes spread out over a vast geographical area, there needs to be a centralized

  6. Mobile cloud networking: mobile network, compute, and storage as one service on-demand

    NARCIS (Netherlands)

    Jamakovic, Almerima; Bohnert, Thomas Michael; Karagiannis, Georgios; Galis, A.; Gavras, A.

    2013-01-01

    The Future Communication Architecture for Mobile Cloud Services: Mobile Cloud Networking (MCN) is an EU FP7 Large-scale Integrating Project (IP) funded by the European Commission. The MCN project was launched in November 2012 for a period of 36 months. In total, 19 top-tier partners from industry and academia

  7. Zaštita računarskih mreža / Protection of computer networks

    Directory of Open Access Journals (Sweden)

    Milojko Jevtović

    2005-09-01

    Full Text Available This paper describes the methods of attack, the forms of endangerment, and the types of threats to which computer networks are exposed, as well as possible methods and technical solutions for network protection. The effects of the threats to which computer networks, and the information transmitted over them, may be exposed are analyzed. Certain technical solutions that provide the necessary level of protection for computer networks are described, as well as measures for protecting the information transmitted over them. The standards concerning methods and procedures for the cryptographic protection of information in computer networks are listed. An example of protecting a local area network is also given.

  8. Model of Quantum Computing in the Cloud: The Relativistic Vision Applied in Corporate Networks

    Directory of Open Access Journals (Sweden)

    Chau Sen Shia

    2016-08-01

    Full Text Available Cloud computing is one of the subjects of interest to information technology professionals and to organizations, especially where the subject covers financial economics and return on investment for companies. This work aims to contribute a proposed model of quantum computing in the cloud, using concepts of relativistic physics and the foundations of quantum mechanics to propose a new vision of the use of virtualization environments in corporate networks. The model was based on simulation and connection testing with providers in virtualization environments with datacenters, and on applying the basics of relativity and quantum mechanics to communication with company networks, in order to establish alliances and resource sharing between organizations. Data were collected and calculations were performed that demonstrate and identify connections and integrations relating cloud computing to the relativistic vision, in such a way as to complement the approaches of physics and computing with the theories of the magnetic field and the propagation of light. The research is characterized as exploratory, because it examines physical connections with cloud computing, the network of companies, and the adherence of the proposed model. The relationship between the proposal and its practical application is presented, making it possible to describe the main features of the results, demonstrating the integration of the relativistic model with new datacenter virtualization technologies, and the optimization of resources with the propagation of light, electromagnetic waves, simultaneity, length contraction and time dilation.

  9. The significance and modalities of internet abuse as the primary global communication computer networks in cyberspace

    Directory of Open Access Journals (Sweden)

    Matijašević-Obradović Jelena

    2014-01-01

    Full Text Available Along with the rapid development of computers, computer networks have also developed. Computer crimes are carried out in a specific environment - cyberspace, whose important characteristic is its transnational scope, which goes beyond the control of territorial nation-states. There is no doubt that computer networks can be the subject of various abuses. They appear in a triple role: as a target or object of the attack, as a means or a tool, or as the framework of the offense. This kind of crime rapidly changes its forms of manifestation, crossing the borders between states and affecting injured parties elsewhere. The biggest problem in this area is the misuse of the Internet, as the main global communication computer network, which has fundamentally revolutionized many areas of human life and work. The subject of this paper is the analysis of the most important modalities of Internet abuse, as well as the scope and importance of using the Internet in modern society. A separate section is dedicated to the case law of the judicial authorities of the Republic of Serbia.

  10. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    Science.gov (United States)

    Battiti, Roberto

    1990-01-01

    This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from

  11. Neural network-based computer-aided diagnosis in distinguishing malignant from benign solitary pulmonary nodules by computed tomography

    Institute of Scientific and Technical Information of China (English)

    CHEN Hui; WANG Xiao-hua; MA Da-qing; MA Bin-rong

    2007-01-01

    Background: Computer-aided diagnosis (CAD) of lung cancer is the subject of much current research. Statistical methods and artificial neural networks have been applied to more quantitatively characterize solitary pulmonary nodules (SPNs). In this study, we developed a CAD scheme based on an artificial neural network to distinguish malignant from benign SPNs on thin-section computed tomography (CT) images, and investigated how the CAD scheme can help radiologists with different levels of experience make diagnostic decisions. Methods: Two hundred thin-section CT images of SPNs with proven diagnoses (135 small peripheral lung cancers and 65 benign nodules) were analyzed. Three clinical features and nine CT signs of each case were studied by radiologists, and the indices of qualitative diagnosis were quantified. One hundred and forty nodules were selected randomly to form training samples, on which the neural network model was built. The remaining 60 nodules, forming test samples, were presented to 9 radiologists with 3-20 years of clinical experience, accompanied by standard reference images. The radiologists were asked to determine whether a nodule was malignant or benign, first without and then with CAD output. Diagnostic performance was evaluated by receiver operating characteristic (ROC) analysis. Results: CAD outputs on test samples had higher agreement with pathological diagnoses (Kappa=0.841, P<0.001). Compared with diagnostic results without CAD output, the average area under the ROC curve with CAD output was 0.96 (P<0.001) for junior radiologists, 0.94 (P=0.014) for secondary radiologists and 0.96 (P=0.221) for senior radiologists, respectively. The differences in diagnostic performance with CAD output among the three levels of radiologists were not statistically significant (P=0.584, 0.920 and 0.707, respectively). Conclusions: This CAD scheme based on an artificial neural network could improve diagnostic performance and assist radiologists in distinguishing
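
    An illustrative sketch of the modelling-and-evaluation step only: a small feed-forward network is trained on synthetic stand-ins for the quantified nodule features and scored by the area under the ROC curve, as in the study; the data, network size and train/test split below are invented.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      # Synthetic stand-ins for 12 quantified clinical/CT features per nodule.
      X = rng.normal(size=(200, 12))
      y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=60, random_state=0)
      net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
      net.fit(X_tr, y_tr)
      auc = roc_auc_score(y_te, net.predict_proba(X_te)[:, 1])
      print(f"area under the ROC curve: {auc:.2f}")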

  12. New trends in networking, computing, e-learning, systems sciences, and engineering

    CERN Document Server

    Sobh, Tarek

    2015-01-01

    This book includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of Computer Science, Informatics, and Systems Sciences, and Engineering. It includes selected papers from the conference proceedings of the Ninth International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2013). Coverage includes topics in: Industrial Electronics, Technology & Automation, Telecommunications and Networking, Systems, Computing Sciences and Software Engineering, Engineering Education, Instructional Technology, Assessment, and E-learning.  • Provides the latest in a series of books growing out of the International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering; • Includes chapters in the most advanced areas of Computing, Informatics, Systems Sciences, and Engineering; • Accessible to a wide range of readership, including professors, researchers, practitioners and...

  13. MEDUSA - An overset grid flow solver for network-based parallel computer systems

    Science.gov (United States)

    Smith, Merritt H.; Pallis, Jani M.

    1993-01-01

    Continuing improvement in processing speed has made it feasible to solve the Reynolds-Averaged Navier-Stokes equations for simple three-dimensional flows on advanced workstations. Combining multiple workstations into a network-based heterogeneous parallel computer allows the application of programming principles learned on MIMD (Multiple Instruction Multiple Data) distributed memory parallel computers to the solution of larger problems. An overset-grid flow solution code has been developed which uses a cluster of workstations as a network-based parallel computer. Inter-process communication is provided by the Parallel Virtual Machine (PVM) software. Solution speed equivalent to one-third of a Cray-YMP processor has been achieved from a cluster of nine commonly used engineering workstation processors. Load imbalance and communication overhead are the principal impediments to parallel efficiency in this application.

  14. Computational intelligence in wireless sensor networks recent advances and future challenges

    CERN Document Server

    Falcon, Rafael; Koeppen, Mario

    2017-01-01

    This book emphasizes the increasingly important role that Computational Intelligence (CI) methods are playing in solving a myriad of entangled Wireless Sensor Networks (WSN) related problems. The book serves as a guide for surveying several state-of-the-art WSN scenarios in which CI approaches have been employed. The reader finds in this book how CI has contributed to solve a wide range of challenging problems, ranging from balancing the cost and accuracy of heterogeneous sensor deployments to recovering from real-time sensor failures to detecting attacks launched by malicious sensor nodes and enacting CI-based security schemes. Network managers, industry experts, academicians and practitioners alike (mostly in computer engineering, computer science or applied mathematics) benefit from the spectrum of successful applications reported in this book. Senior undergraduate or graduate students may discover in this book some problems well suited for their own research endeavors. USP: Presents recent advances and fu...

  15. The ACOnet (Austrian Academic Computer Network) as a data carrier for teleradiological consultations; Das ACOnet (Austrian Academic Computer Network) als Datentraeger fuer teleradiologische Konsultationen

    Energy Technology Data Exchange (ETDEWEB)

    Giacomuzzi, S.M. [Universitaetsklinik fuer Radiodiagnostik, Innsbruck (Austria); Universitaetsklinik Innsbruck (Austria). Inst. fuer Medizinische Physik]; Springer, P.; Dessl, A.; Waldenberger, P.; Buchberger, W.; Bodner, G.; Bale, R.; Jaschke, W. [Universitaetsklinik fuer Radiodiagnostik, Innsbruck (Austria)]; Stoeger, A. [Universitaetsklinik Innsbruck (Austria). Inst. fuer MRI]; Schreder, J.G. [Universitaetsklinik Innsbruck (Austria). Inst. fuer Medizinische Physik]; Gell, G. [Universitaetsklinik Innsbruck (Austria). Inst. fuer Medizinische Informatik]

    1998-04-01

    Purpose: To assess the feasibility of image transfer for teleradiologic consultations using the Austrian Academic Computer Network (ACOnet). Between the main universities, the ACOnet corresponds to a MAN (Metropolitan Area Network) with a transfer rate of 4 Mbps, and its use is free of charge for university institutions. Materials and methods: 1740 test image data sets and 620 image data sets belonging to 12 teleradiological consultations were exchanged without annotations between the Departments of Diagnostic Radiology of the universities of Innsbruck and Graz, using the ACOnet. Results: Data transmission was reliable and fast, with an average transfer capacity of 170.2 kBytes/s (94-341 kBytes/s). There were no major problems with image transfer during the test phase. Conclusion: Due to its high transfer capacity, the ACOnet is considered a reasonable alternative to the ISDN service. (orig.)

  16. Access to the energy system network simulator (ESNS) via remote computer terminals. [BNL CDC 7600/6600 computer facility]

    Energy Technology Data Exchange (ETDEWEB)

    Reisman, A W

    1976-08-15

    The Energy System Network Simulator (ESNS) flow model is installed on the Brookhaven National Laboratory (BNL) CDC 7600/6600 computer facility for access by off-site users. The method of access available to outside users is through a system called CDC-INTERCOM, which allows communication between the BNL machines and remote teletype terminals. This write-up gives a brief description of INTERCOM for users unfamiliar with this system and a step-by-step guide to using INTERCOM in order to access ESNS.

  17. Systems approach to modeling the Token Bucket algorithm in computer networks

    Directory of Open Access Journals (Sweden)

    Ahmed N. U.

    2002-01-01

    Full Text Available In this paper, we construct a new dynamic model for the Token Bucket (TB) algorithm used in computer networks and use a systems approach for its analysis. This model is then augmented by adding a dynamic model for a multiplexor at an access node where the TB exercises a policing function. In the model, traffic policing, multiplexing and network utilization are formally defined. Based on the model, we study such issues as quality of service (QoS), traffic sizing and network dimensioning. We also propose an algorithm using feedback control to improve QoS and network utilization. Applying MPEG video traces as the input traffic to the model, we verify the usefulness and effectiveness of our model.
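
    The record above treats the Token Bucket at the level of a dynamic system model; as a purely illustrative companion, the following minimal Python sketch shows the classical TB policing rule itself (the fill rate, bucket depth and conformance test are generic assumptions, not parameters taken from the paper's model):

    class TokenBucket:
        """Minimal token-bucket policer: tokens accrue at `rate` per second up to
        `capacity`; a packet conforms only if enough tokens are available."""

        def __init__(self, rate: float, capacity: float):
            self.rate = rate            # token fill rate (tokens per second)
            self.capacity = capacity    # bucket depth (tokens)
            self.tokens = capacity      # start with a full bucket
            self.last = 0.0             # time of the last update (seconds)

        def conforms(self, packet_size: float, now: float) -> bool:
            # Refill tokens for the elapsed interval, capped at the bucket depth.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_size <= self.tokens:
                self.tokens -= packet_size   # admit the packet and consume tokens
                return True
            return False                     # non-conforming: drop or mark

    if __name__ == "__main__":
        tb = TokenBucket(rate=1000.0, capacity=1500.0)   # illustrative numbers only
        for t, size in [(0.0, 500), (0.1, 1200), (0.2, 1200), (1.5, 1500)]:
            print(t, size, "conform" if tb.conforms(size, t) else "drop")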

  18. Multi-level security for computer networking: SAC digital network approach

    Energy Technology Data Exchange (ETDEWEB)

    Griess, W.; Poutre, D.L.

    1983-10-01

    For telecommunications systems simultaneously handling data of different security levels, multilevel secure (MLS) operation permits maximum use of resources by automatically providing protection to users with various clearances and needs-to-know. The Strategic Air Command (SAC) is upgrading the primary record data system used to command and control its strategic forces. The upgrade, called the SAC Digital Network (SACDIN), is designed to provide multilevel security to support users and external interfaces, with allowed accesses ranging from unclassified to top secret. SACDIN implements a security kernel based upon the Bell and LaPadula security model. This study presents an overview of the SACDIN security architecture and describes the basic message flow across the MLS network. 7 references.

  19. Spatial-Temporal Reasoning Applications of Computational Intelligence in the Game of Go and Computer Networks

    Science.gov (United States)

    2012-01-01

    potential to surpass the popular MLP. The CSRN was proposed by Werbos and Pang [128] to solve the maze navigation problem and has been improved by...137,138,139] and power applications [140]. Iftekharuddin et al. applied clustering to solve the maze problem in [141]. Computer Go is considered more than...non-recursive algorithm. The concept is explained best with an analogy: in a team of relay runners, only one runner with a baton runs for the team

  20. Designing optimal transportation networks: a knowledge-based computer-aided multicriteria approach

    Energy Technology Data Exchange (ETDEWEB)

    Tung, S.I.

    1986-01-01

    The dissertation investigates the applicability of a knowledge-based expert systems (KBES) approach to solve the single-mode (automobile), fixed-demand, discrete, multicriteria, equilibrium transportation-network-design problem. Previous work on this problem has found that mathematical programming methods perform well on small networks with only one objective. Needed is a solution technique that can be used on large networks having multiple, conflicting criteria with different relative importance weights. The KBES approach developed in this dissertation represents a new way to solve network design problems. The development of an expert system involves three major tasks: knowledge acquisition, knowledge representation, and testing. For knowledge acquisition, a computer-aided network design/evaluation model (UFOS) was developed to explore the design space. This study is limited to the problem of designing an optimal transportation network by adding and deleting capacity increments to/from any link in the network. Three weighted criteria were adopted for use in evaluating each design alternative: cost, average V/C ratio, and average travel time.
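
    The abstract names three weighted evaluation criteria (cost, average V/C ratio, average travel time). As a hedged sketch of how candidate add/delete capacity alternatives could be ranked under such weights, the snippet below uses a simple normalized weighted sum; the alternative names, weights and normalization are invented for illustration and do not reproduce the dissertation's UFOS procedure:

    def weighted_score(alternative: dict, weights: dict, best: dict) -> float:
        """Weighted sum of criteria normalized by the best (smallest) observed value;
        lower is better for all three criteria used here."""
        return sum(weights[c] * alternative[c] / best[c] for c in weights)

    if __name__ == "__main__":
        # Hypothetical design alternatives: capacity increments added to or deleted from links.
        alternatives = {
            "add_capacity_link_3":    {"cost": 1.2e6, "avg_vc_ratio": 0.78, "avg_travel_time": 14.1},
            "add_capacity_link_7":    {"cost": 0.9e6, "avg_vc_ratio": 0.84, "avg_travel_time": 15.0},
            "delete_capacity_link_5": {"cost": 0.2e6, "avg_vc_ratio": 0.95, "avg_travel_time": 16.3},
        }
        weights = {"cost": 0.5, "avg_vc_ratio": 0.3, "avg_travel_time": 0.2}   # assumed importance weights
        best = {c: min(a[c] for a in alternatives.values()) for c in weights}
        ranked = sorted(alternatives, key=lambda k: weighted_score(alternatives[k], weights, best))
        print("preferred alternative:", ranked[0])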

  1. Computational modeling of neural plasticity for self-organization of neural networks.

    Science.gov (United States)

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to largely occur through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building up computational models of neural plasticity to replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models for neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those that combine findings in computational neuroscience and systems biology and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence.
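
    Since the review centers on plasticity rules driven by spike timing, a minimal pair-based STDP update is sketched below as one concrete example of such a rule; the amplitudes, time constants and weight bounds are illustrative assumptions rather than values from any model covered by the review:

    import numpy as np

    def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                    tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
        """Pair-based STDP: potentiate when the presynaptic spike precedes the
        postsynaptic spike, depress otherwise (spike times in milliseconds)."""
        dt = t_post - t_pre
        if dt > 0:
            dw = a_plus * np.exp(-dt / tau_plus)     # pre before post -> potentiation
        else:
            dw = -a_minus * np.exp(dt / tau_minus)   # post before pre -> depression
        return float(np.clip(w + dw, w_min, w_max))

    if __name__ == "__main__":
        w = 0.5
        for t_pre, t_post in [(10.0, 15.0), (40.0, 35.0), (60.0, 61.0)]:
            w = stdp_update(w, t_pre, t_post)
            print(round(w, 4))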

  2. Combining Topological Hardware and Topological Software: Color-Code Quantum Computing with Topological Superconductor Networks

    Directory of Open Access Journals (Sweden)

    Daniel Litinski

    2017-09-01

    Full Text Available We present a scalable architecture for fault-tolerant topological quantum computation using networks of voltage-controlled Majorana Cooper pair boxes and topological color codes for error correction. Color codes have a set of transversal gates which coincides with the set of topologically protected gates in Majorana-based systems, namely, the Clifford gates. In this way, we establish color codes as providing a natural setting in which advantages offered by topological hardware can be combined with those arising from topological error-correcting software for full-fledged fault-tolerant quantum computing. We provide a complete description of our architecture, including the underlying physical ingredients. We start by showing that in topological superconductor networks, hexagonal cells can be employed to serve as physical qubits for universal quantum computation, and we present protocols for realizing topologically protected Clifford gates. These hexagonal-cell qubits allow for a direct implementation of open-boundary color codes with ancilla-free syndrome read-out and logical T gates via magic-state distillation. For concreteness, we describe how the necessary operations can be implemented using networks of Majorana Cooper pair boxes, and we give a feasibility estimate for error correction in this architecture. Our approach is motivated by nanowire-based networks of topological superconductors, but it could also be realized in alternative settings such as quantum-Hall–superconductor hybrids.

  3. Computer-Aided Analysis of Flow in Water Pipe Networks after a Seismic Event

    Directory of Open Access Journals (Sweden)

    Won-Hee Kang

    2017-01-01

    Full Text Available This paper proposes a framework for a reliability-based flow analysis for a water pipe network after an earthquake. For the first part of the framework, we propose to use a modeling procedure for multiple leaks and breaks in the water pipe segments of a network that has been damaged by an earthquake. For the second part, we propose an efficient system-level probabilistic flow analysis process that integrates the matrix-based system reliability (MSR) formulation and the branch-and-bound method. This process probabilistically predicts flow quantities by considering system-level damage scenarios consisting of combinations of leaks and breaks in network pipes and significantly reduces the computational cost by sequentially prioritizing the system states according to their likelihoods and by using the branch-and-bound method to select their partial sets. The proposed framework is illustrated and demonstrated by examining two example water pipe networks that have been subjected to a seismic event. These two examples consist of 11 and 20 pipe segments, respectively, and are computationally modeled considering their available topological, material, and mechanical properties. Considering different earthquake scenarios and the resulting multiple leaks and breaks in the water pipe segments, the water flows in the segments are estimated in a computationally efficient manner.
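
    The key computational idea in the abstract is to visit system damage states in decreasing order of likelihood and stop once enough probability mass has been covered. The sketch below illustrates only that ordering, using a best-first expansion over per-pipe damage states (intact/leak/break); the probabilities, the stopping rule and the state encoding are assumptions, and the MSR flow computation itself is not reproduced:

    import heapq
    import math

    def most_likely_states(pipe_probs, coverage=0.99):
        """Enumerate complete damage scenarios (one state per pipe) in decreasing
        joint-probability order until `coverage` of the probability mass is reached.
        Best-first expansion of partial scenarios stands in for the likelihood-based
        prioritization described in the record."""
        n = len(pipe_probs)
        heap = [(0.0, 0, ())]            # (-log joint probability, next pipe index, partial state)
        covered, scenarios = 0.0, []
        while heap and covered < coverage:
            neg_logp, i, state = heapq.heappop(heap)
            if i == n:                   # all pipes assigned: a complete damage scenario
                p = math.exp(-neg_logp)
                scenarios.append((state, p))
                covered += p
                continue
            for damage, prob in pipe_probs[i].items():
                if prob > 0.0:
                    heapq.heappush(heap, (neg_logp - math.log(prob), i + 1, state + (damage,)))
        return scenarios

    if __name__ == "__main__":
        # Hypothetical per-pipe damage probabilities after a given earthquake scenario.
        pipes = [{"intact": 0.9, "leak": 0.08, "break": 0.02}] * 3
        for state, p in most_likely_states(pipes, coverage=0.95):
            print(state, round(p, 4))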

  4. Computing single step operators of logic programming in radial basis function neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)

    2014-07-10

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (T_p: I → I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
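
    The operator being learned here is the immediate-consequence (single step) operator T_p of a normal logic program. As a hedged illustration of what that mapping computes, and of the kind of input/output pairs one could use as training data, the snippet below applies T_p directly to a small propositional program; the clause encoding is an assumption, and the RBF training and particle swarm optimization steps are not shown:

    def tp_operator(clauses, interpretation):
        """One application of the single step operator T_p. `clauses` is a list of
        (head, positive_body, negative_body) triples over propositional atoms;
        `interpretation` is the set of atoms currently mapped to true."""
        return {
            head
            for head, pos, neg in clauses
            if set(pos) <= interpretation and not (set(neg) & interpretation)
        }

    def iterate_tp(clauses, interpretation=frozenset(), max_steps=100):
        """Iterate T_p from an interpretation until it stops changing (a fixed point)."""
        current = set(interpretation)
        for _ in range(max_steps):
            nxt = tp_operator(clauses, current)
            if nxt == current:
                return current
            current = nxt
        return current

    if __name__ == "__main__":
        # Hypothetical program:  a.   b :- a.   c :- b, not d.
        program = [("a", [], []), ("b", ["a"], []), ("c", ["b"], ["d"])]
        print(sorted(iterate_tp(program)))   # expected output: ['a', 'b', 'c']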

  5. Towards Robust and Efficient Computation in Dynamic Peer-to-Peer Networks

    CERN Document Server

    Augustine, John; Robinson, Peter; Upfal, Eli

    2011-01-01

    Motivated by the need for robust and fast distributed computation in highly dynamic Peer-to-Peer (P2P) networks, we study algorithms for the fundamental distributed agreement problem. P2P networks are highly dynamic networks that experience heavy node {\\em churn} (i.e., nodes join and leave the network continuously over time). Our main contributions are randomized distributed algorithms that guarantee {\\em stable almost-everywhere agreement} with high probability even under high adversarial churn in polylogarithmic number of rounds. In particular, we present the following results: 1. An $O(\\log^2 n)$-round ($n$ is the stable network size) randomized algorithm that achieves almost-everywhere agreement with high probability under up to {\\em linear} churn {\\em per round} (i.e., $\\epsilon n$, for some small constant $\\epsilon > 0$), assuming that the churn is controlled by an oblivious adversary (has complete knowledge and control of what nodes join and leave and at what time and has unlimited computational power...

  6. A framework of algorithms: computing the bias and prestige of nodes in trust networks.

    Directory of Open Access Journals (Sweden)

    Rong-Hua Li

    Full Text Available A trust network is a social network in which edges represent the trust relationship between two nodes in the network. In a trust network, a fundamental question is how to assess and compute the bias and prestige of the nodes, where the bias of a node measures the trustworthiness of the node and the prestige of a node measures its importance. A larger bias implies lower trustworthiness, and a larger prestige implies higher importance. In this paper, we define a vector-valued contractive function to characterize the bias vector, which results in a rich family of bias measurements, and we propose a framework of algorithms for computing the bias and prestige of nodes in trust networks. Based on our framework, we develop four algorithms that can calculate the bias and prestige of nodes effectively and robustly. The time and space complexities of all our algorithms are linear with respect to the size of the graph, so our algorithms are scalable to handle large datasets. We evaluate our algorithms using five real datasets. The experimental results demonstrate the effectiveness, robustness, and scalability of our algorithms.
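
    As a hedged sketch of the kind of mutually recursive, contractive iteration the framework describes, the snippet below alternates a prestige update (incoming ratings discounted by the raters' bias) with a bias update (deviation of outgoing ratings from the targets' prestige); the exact functional form used in the paper's four algorithms may differ, and the example ratings are invented:

    def bias_prestige(edges, nodes, iterations=50):
        """Iteratively compute bias (of raters) and prestige (of targets) on a trust
        network. `edges` maps (u, v) -> rating in [-1, 1]. The update rule below is
        one simple member of the family of contractive iterations the record
        describes, not necessarily the paper's own."""
        prestige = {v: 0.0 for v in nodes}
        bias = {v: 0.0 for v in nodes}
        out_edges = {v: [] for v in nodes}
        in_edges = {v: [] for v in nodes}
        for (u, v), w in edges.items():
            out_edges[u].append((v, w))
            in_edges[v].append((u, w))
        for _ in range(iterations):
            # Prestige: average incoming rating, discounted by each rater's bias.
            prestige = {
                v: (sum(w * (1.0 - bias[u]) for u, w in in_edges[v]) / len(in_edges[v]))
                if in_edges[v] else 0.0
                for v in nodes
            }
            # Bias: half the average deviation of outgoing ratings from the targets' prestige.
            bias = {
                u: (0.5 * sum(w - prestige[v] for v, w in out_edges[u]) / len(out_edges[u]))
                if out_edges[u] else 0.0
                for u in nodes
            }
        return bias, prestige

    if __name__ == "__main__":
        nodes = ["a", "b", "c"]
        ratings = {("a", "b"): 0.9, ("b", "c"): 0.7, ("c", "a"): -0.2, ("a", "c"): 0.8}
        b, p = bias_prestige(ratings, nodes)
        print({k: round(v, 3) for k, v in b.items()})
        print({k: round(v, 3) for k, v in p.items()})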

  7. Computing single step operators of logic programming in radial basis function neural networks

    Science.gov (United States)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (T_p: I → I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  8. Efficient Geo-Computational Algorithms for Constructing Space-Time Prisms in Road Networks

    Directory of Open Access Journals (Sweden)

    Hui-Ping Chen

    2016-11-01

    Full Text Available The space-time prism (STP) is a key concept in time geography for analyzing human activity-travel behavior under various space-time constraints. Most existing time-geographic studies use a straightforward algorithm to construct STPs in road networks by using two one-to-all shortest path searches. However, this straightforward algorithm can introduce considerable computational overhead, given the fact that accessible links in an STP are generally a small portion of the whole network. To address this issue, an efficient geo-computational algorithm, called NTP-A*, is proposed. The proposed NTP-A* algorithm employs the A* and branch-and-bound techniques to discard inaccessible links during two shortest path searches, and thereby improves the STP construction performance. Comprehensive computational experiments are carried out to demonstrate the computational advantage of the proposed algorithm. Several implementation techniques, including the label-correcting technique and the hybrid link-node labeling technique, are discussed and analyzed. Experimental results show that the proposed NTP-A* algorithm can significantly improve STP construction performance in large-scale road networks by a factor of 100, compared with existing algorithms.
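
    For reference, the "straightforward algorithm" mentioned above can be sketched with two one-to-all Dijkstra searches: a link is accessible within the prism when the earliest arrival at its tail, its own travel time, and the time still needed to reach the destination from its head fit within the time budget. The networkx-based sketch below shows only that baseline, not the proposed NTP-A* pruning; the toy network and budget are assumptions:

    import networkx as nx

    def stp_accessible_links(graph, origin, destination, budget, weight="time"):
        """Network-time prism via two one-to-all shortest-path searches: a directed
        link (u, v) is accessible iff
        t(origin -> u) + travel(u, v) + t(v -> destination) <= budget."""
        t_from_o = nx.single_source_dijkstra_path_length(graph, origin, weight=weight)
        t_to_d = nx.single_source_dijkstra_path_length(graph.reverse(copy=False),
                                                       destination, weight=weight)
        accessible = []
        for u, v, data in graph.edges(data=True):
            if u in t_from_o and v in t_to_d:
                if t_from_o[u] + data[weight] + t_to_d[v] <= budget:
                    accessible.append((u, v))
        return accessible

    if __name__ == "__main__":
        g = nx.DiGraph()
        g.add_weighted_edges_from(
            [("o", "a", 2), ("a", "d", 3), ("o", "b", 4), ("b", "d", 6), ("a", "b", 1)],
            weight="time",
        )
        print(stp_accessible_links(g, "o", "d", budget=7))   # [('o', 'a'), ('a', 'd')]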

  9. An Enhanced Tree-Shaped Adachi-Like Chaotic Neural Network Requiring Linear-Time Computations

    Science.gov (United States)

    Qin, Ke; Oommen, B. John

    The Adachi Neural Network (AdNN) [1-5] is a fascinating Neural Network (NN) which has been shown to possess chaotic properties, and to also demonstrate Associative Memory (AM) and Pattern Recognition (PR) characteristics. Variants of the AdNN [6,7] have also been used to obtain other PR phenomena, and even blurring. A significant problem associated with the AdNN and its variants is that all of them require a quadratic number of computations. This is essentially because all their NNs are completely connected graphs. In this paper we consider how the computations can be significantly reduced by merely using a linear number of computations. To do this, we extract from the original complete graph one of its spanning trees. We then compute the weights for this spanning tree in such a manner that the modified tree-based NN has approximately the same input-output characteristics; the new weights are themselves calculated using a gradient-based algorithm. By a detailed experimental analysis, we show that the new linear-time AdNN-like network possesses chaotic and PR properties for different settings. As far as we know, such a tree-based AdNN has not been reported, and the results given here are novel.
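
    The structural step described above (replacing the fully connected graph by one of its spanning trees, so that the number of connections drops from quadratic to linear) can be sketched as follows; a maximum-weight spanning tree is used here as one example choice of tree, and the paper's gradient-based recomputation of the tree weights is not reproduced:

    import itertools
    import random

    import networkx as nx

    def spanning_tree_of_complete_graph(n, weight_fn):
        """Build a complete weighted graph on n nodes and keep only a spanning tree,
        reducing the number of connections from n*(n-1)/2 to n - 1."""
        g = nx.Graph()
        for i, j in itertools.combinations(range(n), 2):
            g.add_edge(i, j, weight=weight_fn(i, j))
        # Keep the strongest connections: a maximum-weight spanning tree.
        return nx.maximum_spanning_tree(g, weight="weight")

    if __name__ == "__main__":
        random.seed(0)
        tree = spanning_tree_of_complete_graph(6, lambda i, j: random.random())
        print(sorted(tree.edges()))   # 5 edges instead of 15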

  10. Exact computation of the maximum-entropy potential of spiking neural-network models.

    Science.gov (United States)

    Cofré, R; Cessac, B

    2014-05-01

    Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The maximum-entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. However, in spite of good performance in terms of prediction, the fitting parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuromimetic models) provide a probabilistic mapping between the stimulus, network architecture, and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuromimetic and maximum-entropy models.

  11. Analysis and synthesis of distributed-lumped-active networks by digital computer

    Science.gov (United States)

    1973-01-01

    The use of digital computational techniques in the analysis and synthesis of DLA (distributed lumped active) networks is considered. This class of networks consists of three distinct types of elements, namely, distributed elements (modeled by partial differential equations), lumped elements (modeled by algebraic relations and ordinary differential equations), and active elements (modeled by algebraic relations). Such a characterization is applicable to a broad class of circuits, especially including those usually referred to as linear integrated circuits, since the fabrication techniques for such circuits readily produce elements which may be modeled as distributed, as well as the more conventional lumped and active ones.

  12. Symmetric angular momentum coupling, the quantum volume operator and the 7-spin network: a computational perspective

    CERN Document Server

    Marinelli, Dimitri; Aquilanti, Vincenzo; Anderson, Roger W; Bitencourt, Ana Carla P; Ragni, Mirco

    2014-01-01

    A unified vision of the symmetric coupling of angular momenta and of the quantum mechanical volume operator is illustrated. The focus is on the quantum mechanical angular momentum theory of Wigner's 6j symbols and on the volume operator of the symmetric coupling in spin network approaches: here, crucial to our presentation are an appreciation of the role of the Racah sum rule and the simplification arising from the use of Regge symmetry. The projective geometry approach permits the introduction of a symmetric representation of a network of seven spins or angular momenta. Results of extensive computational investigations are summarized, presented and briefly discussed.

  13. Implementation of Locally Weighted Projection Regression Network for Concurrency Control In Computer Aided Design

    Directory of Open Access Journals (Sweden)

    A.Muthukumaravel

    2011-08-01

    Full Text Available This paper presents the implementation of the locally weighted projection regression (LWPR) network method for concurrency control while developing the dial of a fork using Autodesk Inventor 2008. The LWPR learns the objects and the type of transactions to be done based on which node in the output layer of the network exceeds a threshold value. Learning stops once all the objects are exposed to LWPR. During testing, performance metrics are analyzed. We have attempted to use LWPR for storing lock information when multiple users are working on Computer-Aided Design (CAD). The memory requirements of the proposed method are minimal in processing locks during transactions.

  14. A knowledge-based system with learning for computer communication network design

    Science.gov (United States)

    Pierre, Samuel; Hoang, Hai Hoc; Tropper-Hausen, Evelyne

    1990-01-01

    Computer communication network design is well known to be complex and hard. For that reason, the most effective methods used to solve it are heuristic. Weaknesses of these techniques are listed and a new approach based on artificial intelligence for solving this problem is presented. This approach is particularly recommended for large packet-switched communication networks, in the sense that it permits a high degree of reliability and offers a very flexible environment dealing with many relevant design parameters such as link cost, link capacity, and message delay.

  15. A computational framework for the automated construction of glycosylation reaction networks.

    Directory of Open Access Journals (Sweden)

    Gang Liu

    Full Text Available Glycosylation is among the most common and complex post-translational modifications identified to date. It proceeds through the catalytic action of multiple enzyme families that include the glycosyltransferases that add monosaccharides to growing glycans, and glycosidases which remove sugar residues to trim glycans. The expression level and specificity of these enzymes, in part, regulate the glycan distribution or glycome of specific cell/tissue systems. Currently, there is no systematic method to describe the enzymes and cellular reaction networks that catalyze glycosylation. To address this limitation, we present a streamlined machine-readable definition for the glycosylating enzymes and additional methodologies to construct and analyze glycosylation reaction networks. In this computational framework, the enzyme class is systematically designed to store detailed specificity data such as enzymatic functional group, linkage and substrate specificity. The new classes and their associated functions enable both single-reaction inference and automated full network reconstruction, when given a list of reactants and/or products along with the enzymes present in the system. In addition, graph theory is used to support functions that map the connectivity between two or more species in a network, and that generate subset models to identify rate-limiting steps regulating glycan biosynthesis. Finally, this framework allows the synthesis of biochemical reaction networks using mass spectrometry (MS) data. The features described above are illustrated using three case studies that examine: (i) O-linked glycan biosynthesis during the construction of functional selectin-ligands; (ii) automated N-linked glycosylation pathway construction; and (iii) the handling and analysis of glycomics-based MS data. Overall, the new computational framework enables automated glycosylation network model construction and analysis by integrating knowledge of glycan structure and enzyme

  16. Interactive granular computations in networks and systems engineering a practical perspective

    CERN Document Server

    Jankowski, Andrzej

    2017-01-01

    The book outlines selected projects conducted under the supervision of the author. Moreover, it discusses significant relations between Interactive Granular Computing (IGrC) and numerous dynamically developing scientific domains worldwide, along with features characteristic of the author’s approach to IGrC. The results presented are a continuation and elaboration of various aspects of Wisdom Technology, initiated and developed in cooperation with Professor Andrzej Skowron. Based on the empirical findings from these projects, the author explores the following areas: (a) understanding the causes of the theory and practice gap problem (TPGP) in complex systems engineering (CSE);(b) generalizing computing models of complex adaptive systems (CAS) (in particular, natural computing models) by constructing an interactive granular computing (IGrC) model of networks of interrelated interacting complex granules (c-granules), belonging to a single agent and/or to a group of agents; (c) developing methodologies based ...

  17. The Prediction of Bandwidth On Need Computer Network Through Artificial Neural Network Method of Backpropagation

    Directory of Open Access Journals (Sweden)

    Ikhthison Mekongga

    2014-02-01

    Full Text Available The need for bandwidth has been increasing recently because internet infrastructure keeps expanding, so an economical and efficient provider system is needed. This can be achieved through good planning and a proper system. Predicting bandwidth consumption is one of the factors that support the planning of an efficient internet service provider system. Bandwidth consumption is predicted using an ANN, an information processing system with characteristics similar to those of biological neural networks. The ANN is chosen to predict bandwidth consumption because of its good ability to approximate non-linearities. The variable used in the ANN is the historical load data. A bandwidth consumption information system was built using neural networks with a backpropagation algorithm to make the use of bandwidth more efficient in the future, both in the rental rate of the bandwidth and in its usage. Keywords: Forecasting, Bandwidth, Backpropagation
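
    As a hedged, self-contained sketch of the forecasting setup described above (lagged historical load as inputs, the next value as target, a network trained by backpropagation), the snippet below uses scikit-learn's MLPRegressor on synthetic load data; the window length, network size and data are assumptions, not the paper's configuration:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def make_windows(series, lags):
        """Turn a load series into (lagged inputs, next value) training pairs."""
        X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
        y = np.array(series[lags:])
        return X, y

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Synthetic daily bandwidth load (Mbps) with a weekly cycle plus noise.
        t = np.arange(365)
        load = 100 + 20 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 3, t.size)

        X, y = make_windows(load.tolist(), lags=7)
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        model.fit(X[:-30], y[:-30])                       # hold out the last 30 days
        forecast = model.predict(X[-30:])
        print("mean absolute error (Mbps):", round(np.abs(forecast - y[-30:]).mean(), 2))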

  18. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network

    Directory of Open Access Journals (Sweden)

    Lukas Falat

    2016-01-01

    Full Text Available This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for the horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimizing technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with a K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can be helpful in eliminating the risk of making bad decisions in the decision-making process.
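
    To make the moving-average idea concrete, the sketch below corrects a kernel model's out-of-sample predictions with the smoothed recent level of its in-sample errors; scikit-learn's KernelRidge with an RBF kernel stands in for the RBF neural network, the genetic-algorithm parameter search is omitted, and the data are synthetic, so this illustrates only the error-correction step:

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    def moving_average(x, window):
        """Trailing moving average; the first window-1 values are left unsmoothed."""
        out = np.array(x, dtype=float)
        for i in range(window - 1, len(x)):
            out[i] = np.mean(x[i - window + 1:i + 1])
        return out

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        # Synthetic exchange-rate-like series; lagged values predict the next one.
        rate = np.cumsum(rng.normal(0, 0.001, 600)) + 1.3
        lags = 5
        X = np.array([rate[i:i + lags] for i in range(len(rate) - lags)])
        y = rate[lags:]
        split = 500

        base = KernelRidge(kernel="rbf", alpha=1e-4, gamma=50.0)   # RBF-kernel stand-in
        base.fit(X[:split], y[:split])
        resid = y[:split] - base.predict(X[:split])                # in-sample error part

        # Correct test predictions with the smoothed recent error level.
        correction = moving_average(resid, window=10)[-1]
        pred = base.predict(X[split:]) + correction
        print("test MAE:", round(float(np.mean(np.abs(pred - y[split:]))), 5))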

  19. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network.

    Science.gov (United States)

    Falat, Lukas; Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for the horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimizing technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with a K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can be helpful in eliminating the risk of making bad decisions in the decision-making process.

  20. Building Model for the University of Mosul Computer Network Using OPNET Simulator

    Directory of Open Access Journals (Sweden)

    Modhar A. Hammoudi

    2013-04-01

    Full Text Available This paper aims at establishing a model in the OPNET (Optimized Network Engineering Tool) simulator for the University of Mosul computer network. The proposed network model was made up of two routers (Cisco 2600), a core switch (Cisco 6509), two servers, an ip32 cloud and 37 VLANs. These VLANs were connected to the core switch using fiber optic cables (1000BaseX). Three applications were added to test the network model: FTP (File Transfer Protocol), HTTP (Hyper Text Transfer Protocol) and VoIP (Voice over Internet Protocol). The results showed that the proposed model was effective for designing and managing the targeted network and can be used to view the data flow in it. Also, the simulation results showed that the maximum number of VoIP service users could be raised up to 5000 users when working under IP Telephony. This means that the ability to utilize the VoIP service in this network can be maintained and is better under an IP telephony scheme.