WorldWideScience

Sample records for system scalability architecture

  1. Architectures and Applications for Scalable Quantum Information Systems

    2007-01-01

    AFRL-IF-RS-TR-2007-12, Final Technical Report, January 2007 (Grant FA8750-01-2-0521): ARCHITECTURES AND APPLICATIONS FOR SCALABLE QUANTUM INFORMATION SYSTEMS. Recoverable reference fragments: N. Gershenfeld and I. Chuang, Quantum computing with molecules, Scientific American, June 1998; [16] A. Globus, D. Bailey, J. Han, R. Jaffe, C. Levit, R...

  2. Architectural Techniques to Enable Reliable and Scalable Memory Systems

    Nair, Prashant J.

    2017-01-01

    High capacity and scalable memory systems play a vital role in enabling our desktops, smartphones, and pervasive technologies like Internet of Things (IoT). Unfortunately, memory systems are becoming increasingly prone to faults. This is because we rely on technology scaling to improve memory density, and at small feature sizes, memory cells tend to break easily. Today, memory reliability is seen as the key impediment towards using high-density devices, adopting new technologies, and even bui...

  3. A scalable healthcare information system based on a service-oriented architecture.

    Yang, Tzu-Hsiang; Sun, Yeali S; Lai, Feipei

    2011-06-01

    Many existing healthcare information systems are composed of a number of heterogeneous systems and face the important issue of system scalability. This paper first describes the comprehensive healthcare information systems used in National Taiwan University Hospital (NTUH) and then presents a service-oriented architecture (SOA)-based healthcare information system (HIS) built on the HL7 service standard. The proposed architecture focuses on system scalability, in terms of both hardware and software. Moreover, we describe how scalability is implemented through rightsizing, service groups, databases, and hardware. Although SOA-based systems sometimes display poor performance, a performance evaluation of our SOA-based HIS shows average response times for the outpatient, inpatient, and emergency HL7Central systems of 0.035, 0.04, and 0.036 s, respectively. The outpatient, inpatient, and emergency WebUI average response times are 0.79, 1.25, and 0.82 s. The scalability of the rightsizing project and our evaluation results provide evidence that SOA can deliver system scalability and sustainability in a highly demanding healthcare information system.
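
    As an illustrative sketch only (not from the NTUH system), the following Python fragment shows the service-oriented pattern the abstract describes: each HIS function is registered as a named service behind a plain HTTP endpoint, so service groups can be replicated behind a load balancer. The service name, message fields, and handler logic are all invented for illustration.

      import json
      from http.server import BaseHTTPRequestHandler, HTTPServer

      SERVICES = {}  # hypothetical registry of service-group entry points

      def service(name):
          def register(fn):
              SERVICES[name] = fn
              return fn
          return register

      @service("patient.lookup")
      def patient_lookup(payload):
          # Placeholder logic; a real HIS service would query its database
          # and wrap the result in an HL7 message.
          return {"patient_id": payload.get("patient_id"), "status": "found"}

      class Handler(BaseHTTPRequestHandler):
          def do_POST(self):
              length = int(self.headers["Content-Length"])
              request = json.loads(self.rfile.read(length))
              fn = SERVICES.get(request.get("service"))
              result = fn(request.get("payload", {})) if fn else {"error": "unknown service"}
              body = json.dumps(result).encode()
              self.send_response(200)
              self.send_header("Content-Type", "application/json")
              self.send_header("Content-Length", str(len(body)))
              self.end_headers()
              self.wfile.write(body)

      if __name__ == "__main__":
          HTTPServer(("", 8080), Handler).serve_forever()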

  4. Novel flat datacenter network architecture based on scalable and flow-controlled optical switch system

    Miao, W.; Luo, J.; Di Lucente, S.; Dorren, H.J.S.; Calabretta, N.

    2013-01-01

    We propose and demonstrate an optical flat datacenter network based on a scalable optical switch system with optical flow control. 4×4 dynamic switch operation at 40 Gb/s showed a 300 ns minimum end-to-end latency (including a 25 m transmission link) and

  5. Novel flat datacenter network architecture based on scalable and flow-controlled optical switch system.

    Miao, Wang; Luo, Jun; Di Lucente, Stefano; Dorren, Harm; Calabretta, Nicola

    2014-02-10

    We propose and demonstrate an optical flat datacenter network based on a scalable optical switch system with optical flow control. A modular structure with distributed control results in port-count-independent optical switch reconfiguration time. An RF tone in-band labeling technique allowing parallel processing of the label bits ensures low-latency operation regardless of the switch port count. Hardware flow control is conducted at the optical level by re-using the label wavelength, without occupying extra bandwidth, space, or network resources, which further improves the latency performance within a simple structure. Dynamic switching including multicasting operation is validated for a 4 x 4 system. Error-free operation of 40 Gb/s data packets has been achieved with only 1 dB penalty. The system could handle an input load up to 0.5, providing a packet loss lower than 10^-5 and an average latency less than 500 ns when a buffer size of 16 packets is employed. Investigation of scalability also indicates that the proposed system could potentially scale up to a large port count with limited power penalty.

  6. Design for scalability in 3D computer graphics architectures

    Holten-Lund, Hans Erik

    2002-01-01

    This thesis describes useful methods and techniques for designing scalable hybrid parallel rendering architectures for 3D computer graphics. Various techniques for utilizing parallelism in a pipelined system are analyzed. During the Ph.D. study a prototype 3D graphics architecture named Hybris has...

  7. Architecture and Implementation of a Scalable Sensor Data Storage and Analysis System Using Cloud Computing and Big Data Technologies

    Galip Aydin

    2015-01-01

    Sensors are becoming ubiquitous. From almost any type of industrial application to intelligent vehicles, smart city applications, and healthcare applications, we see a steady growth in the usage of various types of sensors. The rate of increase in the amount of data produced by these sensors is even more dramatic, since sensors usually produce data continuously. It becomes crucial for these data to be stored for future reference and to be analyzed for finding valuable information, such as fault diagnosis information. In this paper we describe a scalable and distributed architecture for sensor data collection, storage, and analysis. The system uses several open source technologies and runs on a cluster of virtual servers. We use GPS sensors as the data source and run machine-learning algorithms for data analysis.
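
    The collect-and-store half of such a pipeline can be sketched in a few lines of Python; the queue-plus-batch-writer pattern below is a generic stand-in for the open-source messaging and storage components the paper uses, and every name in it is invented for illustration.

      import queue, random, threading, time

      buf = queue.Queue()  # stand-in for a distributed message queue

      def gps_sensor(sensor_id):
          # Simulated GPS source; a real deployment reads from hardware.
          while True:
              buf.put((sensor_id, time.time(),
                       random.uniform(-90, 90), random.uniform(-180, 180)))
              time.sleep(0.1)

      def store(batch):
          # Stand-in for a write to a distributed store (e.g., HDFS/NoSQL client).
          print(f"persisting {len(batch)} readings")

      def writer(batch_size=10):
          # Batch the writes so the storage layer sees few, large requests.
          batch = []
          while True:
              batch.append(buf.get())
              if len(batch) >= batch_size:
                  store(batch)
                  batch = []

      threading.Thread(target=gps_sensor, args=("gps-1",), daemon=True).start()
      threading.Thread(target=writer, daemon=True).start()
      time.sleep(3)  # let the pipeline run briefly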

  8. Design issues for numerical libraries on scalable multicore architectures

    Heroux, M A

    2008-01-01

    Future generations of scalable computers will rely on multicore nodes for a significant portion of overall system performance. At present, most applications and libraries cannot exploit multiple cores beyond running additional MPI processes per node. In this paper we discuss important multicore architecture issues, programming models, algorithm requirements and software design related to effective use of scalable multicore computers. In particular, we focus on important issues for library research and development, making recommendations for how to effectively develop libraries for future scalable computer systems.
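
    The "additional MPI processes per node" point is usually contrasted with hybrid parallelism, where one MPI rank per node fans work out to a node-local thread pool. The Python sketch below (assuming mpi4py and an MPI launcher are available) is a generic illustration of that pattern, not a design from the paper; note that a pure-Python kernel would also need to release the GIL (e.g., via NumPy or a C extension) to get true core-level parallelism.

      from concurrent.futures import ThreadPoolExecutor
      from mpi4py import MPI  # assumes an MPI installation plus mpi4py

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      def kernel(block):
          # Stand-in for a library routine operating on one block of data.
          return sum(x * x for x in block)

      # Distribute blocks across ranks, then across threads within each rank.
      blocks = [list(range(i * 1000, (i + 1) * 1000))
                for i in range(rank, 64, size)]
      with ThreadPoolExecutor() as pool:  # defaults to roughly one thread per core
          local = sum(pool.map(kernel, blocks))

      total = comm.reduce(local, op=MPI.SUM, root=0)
      if rank == 0:
          print(total)   # run with e.g.: mpirun -n 4 python hybrid.py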

  9. A Multi-Time Scale Morphable Software Milieu for Polymorphous Computing Architectures (PCA) - Composable, Scalable Systems

    Skjellum, Anthony

    2004-01-01

    Polymorphous Computing Architectures (PCA) rapidly "morph" (reorganize) software and hardware configurations in order to achieve high performance on computation styles ranging from specialized streaming to general threaded applications...

  10. Architecture Knowledge for Evaluating Scalable Databases

    2015-01-16

    Report documentation fragments: Architecture Knowledge for Evaluating Scalable Databases (contract/grant/program element numbers; author: Nurgaliev...). Recoverable feature-comparison content: driver languages (Scala, Erlang, JavaScript); cursor-based queries (supported / not supported); JOIN queries (supported / not supported); complex data types (lists, maps, sets). The report notes that automated extraction of such knowledge is needed, using technology such as machine learning to extract content from product documentation. The terminology used in the database

  11. High performance SDN enabled flat data center network architecture based on scalable and flow-controlled optical switching system

    Calabretta, N.; Miao, W.; Dorren, H.J.S.

    2015-01-01

    We demonstrate a reconfigurable virtual datacenter network by utilizing the statistical multiplexing offered by a scalable and flow-controlled optical switching system. Results show QoS guarantees through priority assignment and load balancing for applications in virtual networks.

  12. Developing Scalable Information Security Systems

    Valery Konstantinovich Ablekov

    2013-06-01

    Existing physical security systems have a wide range of shortcomings, including high cost, a large number of vulnerabilities, and problems with system modification and support. This paper addresses the problem of developing systems without this list of drawbacks. The paper presents the architecture of an information security system that operates over the TCP/IP network protocol and supports connecting different types of devices and integration with existing security systems. The main advantage is a significant increase in system reliability and in scalability, both vertical and horizontal, with minimal cost in both financial and time resources.
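
    A minimal sketch of the integration idea, assuming a line-oriented JSON protocol that is invented here (the paper does not specify its wire format): a single asyncio TCP server accepts heterogeneous devices and dispatches each message by its declared type.

      import asyncio, json

      HANDLERS = {  # hypothetical device types and their event handlers
          "door_sensor": lambda msg: print("door event:", msg),
          "camera":      lambda msg: print("camera event:", msg),
      }

      async def handle(reader, writer):
          async for line in reader:              # one JSON object per line
              msg = json.loads(line)
              handler = HANDLERS.get(msg.get("type"))
              if handler:
                  handler(msg)
              writer.write(b"ack\n")
              await writer.drain()

      async def main():
          server = await asyncio.start_server(handle, "0.0.0.0", 9000)
          async with server:
              await server.serve_forever()

      if __name__ == "__main__":
          asyncio.run(main())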

  13. A Scalable Communication Architecture for Advanced Metering Infrastructure

    Ngo Hoang, Giang; Liquori, Luigi; Nguyen Chan, Hung

    2013-01-01

    Advanced Metering Infrastructure (AMI), seen as a foundation for overall grid modernization, is an integration of many technologies that provides an intelligent connection between consumers and system operators [ami 2008]. One of the biggest challenges that AMI faces is to collect and manage, in a scalable way, a huge amount of data from a large number of customers. In our paper, we address this challenge by introducing a mixed peer-to-peer (P2P) and client-server communication architecture for AMI in whic...

  14. A proposed scalable parallel open architecture data acquisition system for low to high rate experiments, test beams and all SSC detectors

    Barsotti, E.; Booth, A.; Bowden, M.; Swoboda, C.; Lockyer, N.; Vanberg, R.

    1990-01-01

    A new era of high-energy physics research is beginning requiring accelerators with much higher luminosities and interaction rates in order to discover new elementary particles. As a consequence, both orders of magnitude higher data rates from the detector and online processing power, well beyond the capabilities of current high energy physics data acquisition systems, are required. This paper describes a proposed new data acquisition system architecture which draws heavily from the communications industry, is totally parallel (i.e., without any bottlenecks), is capable of data rates of hundreds of Gigabytes per second from the detector and into an array of online processors (i.e., processor farm), and uses an open systems architecture to guarantee compatibility with future commercially available online processor farms. The main features of the proposed Scalable Parallel Open Architecture data acquisition system are standard interface ICs to detector subsystems wherever possible, fiber optic digital data transmission from the near-detector electronics, a self-routing parallel event builder, and the use of industry-supported and high-level language programmable processors in the proposed BCD system for both triggers and online filters. A brief status report of an ongoing project at Fermilab to build a prototype of the proposed data acquisition system architecture is given in the paper. The major component of the system, a self-routing parallel event builder, is described in detail

  15. Photonic Architecture for Scalable Quantum Information Processing in Diamond

    Kae Nemoto

    2014-08-01

    Physics and information are intimately connected, and the ultimate information processing devices will be those that harness the principles of quantum mechanics. Many physical systems have been identified as candidates for quantum information processing, but none of them are immune from errors. The challenge remains to find a path from the experiments of today to a reliable and scalable quantum computer. Here, we develop an architecture based on a simple module comprising an optical cavity containing a single negatively charged nitrogen vacancy center in diamond. Modules are connected by photons propagating in a fiber-optical network and collectively used to generate a topological cluster state, a robust substrate for quantum information processing. In principle, all processes in the architecture can be deterministic, but current limitations lead to processes that are probabilistic but heralded. We find that the architecture enables large-scale quantum information processing with existing technology.

  16. A Massively Scalable Architecture for Instant Messaging & Presence

    Schippers, Jorrit; Remke, Anne Katharina Ingrid; Punt, Henk; Wegdam, M.; Haverkort, Boudewijn R.H.M.; Thomas, N.; Bradley, J.; Knottenbelt, W.; Dingle, N.; Harder, U.

    2010-01-01

    This paper analyzes the scalability of Instant Messaging & Presence (IM&P) architectures. We take a queueing-based modelling and analysis approach to find the bottlenecks of the current IM&P architecture at the Dutch social network Hyves, as well as of alternative architectures. We use the

  17. An open, interoperable, and scalable prehospital information technology network architecture.

    Landman, Adam B; Rokos, Ivan C; Burns, Kevin; Van Gelder, Carin M; Fisher, Roger M; Dunford, James V; Cone, David C; Bogucki, Sandy

    2011-01-01

    Some of the most intractable challenges in prehospital medicine include response time optimization, inefficiencies at the emergency medical services (EMS)-emergency department (ED) interface, and the ability to correlate field interventions with patient outcomes. Information technology (IT) can address these and other concerns by ensuring that system and patient information is received when and where it is needed, is fully integrated with prior and subsequent patient information, and is securely archived. Some EMS agencies have begun adopting information technologies, such as wireless transmission of 12-lead electrocardiograms, but few agencies have developed a comprehensive plan for management of their prehospital information and integration with other electronic medical records. This perspective article highlights the challenges and limitations of integrating IT elements without a strategic plan, and proposes an open, interoperable, and scalable prehospital information technology (PHIT) architecture. The two core components of this PHIT architecture are 1) routers with broadband network connectivity to share data between ambulance devices and EMS system information services and 2) an electronic patient care report to organize and archive all electronic prehospital data. To successfully implement this comprehensive PHIT architecture, data and technology requirements must be based on best available evidence, and the system must adhere to health data standards as well as privacy and security regulations. Recent federal legislation prioritizing health information technology may position federal agencies to help design and fund PHIT architectures.

  18. High-performance, scalable optical network-on-chip architectures

    Tan, Xianfang

    The rapid advance of technology enables a large number of processing cores to be integrated into a single chip, which is called a Chip Multiprocessor (CMP) or a Multiprocessor System-on-Chip (MPSoC) design. The on-chip interconnection network, which is the communication infrastructure for these processing cores, plays a central role in a many-core system. With the continuously increasing complexity of many-core systems, traditional metallic wired electronic networks-on-chip (NoC) have become a bottleneck because of the unbearable latency in data transmission and extremely high energy consumption on chip. Optical networks-on-chip (ONoC) have been proposed as a promising alternative paradigm to electronic NoC, with the benefits of optical signaling such as extremely high bandwidth, negligible latency, and low power consumption. This dissertation focuses on the design of high-performance and scalable ONoC architectures; the contributions are highlighted as follows: 1. A micro-ring resonator (MRR)-based Generic Wavelength-routed Optical Router (GWOR) is proposed. A method for developing any sized GWOR is introduced. GWOR is a scalable non-blocking ONoC architecture with simple structure, low cost and high power efficiency compared to existing ONoC designs. 2. To expand the bandwidth and improve the fault tolerance of the GWOR, a redundant GWOR architecture is designed by cascading different types of GWORs into one network. 3. The redundant GWOR built with MRR-based comb switches is proposed. Comb switches can expand the bandwidth while keeping the topology of GWOR unchanged by replacing the general MRRs with comb switches. 4. A butterfly fat tree (BFT)-based hybrid optoelectronic NoC (HONoC) architecture is developed, in which GWORs are used for global communication and electronic routers are used for local communication. The proposed HONoC uses fewer electronic routers and links than its counterpart of electronic BFT-based NoC. It takes the advantages of

  19. A Scalable Architecture for VoIP Conferencing

    R Venkatesha Prasad

    2003-10-01

    Real-time services are traditionally supported on circuit-switched networks. However, there is a need to port these services to packet-switched networks. An architecture for an audio conferencing application over the Internet, in light of the ITU-T H.323 recommendations, is considered. In a conference, considering packets only from a set of selected clients can reduce speech quality degradation, because mixing packets from all clients can lead to lack of speech clarity. A distributed algorithm and architecture for selecting clients for mixing is suggested here, based on a new quantifier of voice activity called the "Loudness Number" (LN). The proposed system distributes the computation load and reduces the load on client terminals. The highlights of this architecture are scalability, bandwidth saving and speech quality enhancement. Client selection for playing out tries to mimic a physical conference, where the most vocal participants attract more attention. The contributions of the paper are expected to aid implementations of the H.323 recommendations for Multipoint Processors (MPs). A working prototype based on the proposed architecture is already functional.
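
    The selection idea can be sketched compactly: mix only the k clients whose loudness metric is highest. The exact Loudness Number computation is defined in the paper and not reproduced here; the Python snippet below substitutes an arbitrary per-client score.

      from heapq import nlargest

      def mix(frames_by_client, loudness, k=3):
          """Mix one audio frame using only the k most vocal clients."""
          chosen = nlargest(k, frames_by_client,
                            key=lambda c: loudness.get(c, 0.0))
          n = len(next(iter(frames_by_client.values())))
          mixed = [0.0] * n
          for c in chosen:
              for i, sample in enumerate(frames_by_client[c]):
                  mixed[i] += sample / len(chosen)  # average to avoid clipping
          return chosen, mixed

      frames = {c: [0.1] * 4 for c in "ABCDE"}             # toy PCM frames
      ln = {"A": 0.9, "B": 0.2, "C": 0.7, "D": 0.8, "E": 0.1}
      print(mix(frames, ln)[0])                             # ['A', 'D', 'C']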

  20. Scalable Distributed Architectures for Information Retrieval

    Lu, Zhihong

    1999-01-01

    .... Our distributed architectures exploit parallelism in information retrieval on a cluster of parallel IR servers using symmetric multiprocessors, and use partial collection replication and selection...

  21. Scalable software architectures for decision support.

    Musen, M A

    1999-12-01

    Interest in decision-support programs for clinical medicine soared in the 1970s. Since that time, workers in medical informatics have been particularly attracted to rule-based systems as a means of providing clinical decision support. Although developers have built many successful applications using production rules, they also have discovered that creation and maintenance of large rule bases is quite problematic. In the 1980s, several groups of investigators began to explore alternative programming abstractions that can be used to build decision-support systems. As a result, the notions of "generic tasks" and of reusable problem-solving methods became extremely influential. By the 1990s, academic centers were experimenting with architectures for intelligent systems based on two classes of reusable components: (1) problem-solving methods--domain-independent algorithms for automating stereotypical tasks--and (2) domain ontologies that captured the essential concepts (and relationships among those concepts) in particular application areas. This paper highlights how developers can construct large, maintainable decision-support systems using these kinds of building blocks. The creation of domain ontologies and problem-solving methods is the fundamental end product of basic research in medical informatics. Consequently, these concepts need more attention by our scientific community.
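
    A toy Python rendering of that decomposition (both the method and the ontology are invented for illustration): the problem-solving method is domain independent, and all domain knowledge lives in the ontology that parameterizes it.

      # Reusable problem-solving method: rank hypotheses by how many of the
      # observed findings each one explains ("cover and rank").
      def cover_and_rank(findings, ontology):
          scores = {h: len(findings & set(manifestations))
                    for h, manifestations in ontology.items()}
          return sorted(scores, key=scores.get, reverse=True)

      # A tiny, invented domain ontology mapping disorders to findings.
      flu_ontology = {
          "influenza":   ["fever", "cough", "myalgia"],
          "common_cold": ["cough", "sneezing"],
      }

      # The same method could be reused unchanged with an ontology
      # from any other application area.
      print(cover_and_rank({"fever", "cough"}, flu_ontology))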

  22. Adaptable Energy Systems Integration by Modular, Standardized and Scalable System Architectures: Necessities and Prospects of Any Time Transition

    Jonas Hinker

    2018-03-01

    Energy conversion and distribution of heat and electricity are characterized by long planning horizons, investment periods and depreciation times, and it is thus difficult to predict which technology will fit optimally for decades. Uncertainties include future energy prices, applicable subsidies, regulation, and even the evolution of market designs. To achieve higher adaptability to arbitrary transition paths, a technical concept based on integrated energy systems is envisioned and described. The problem of intermediate steps of evolution is tackled by introducing a novel paradigm in urban infrastructure design. It builds on standardization, modularization and economies of scale for the underlying conversion units. Building on conceptual arguments for such a platform, it is then argued how actors such as municipalities and district heating system operators (among others) can use this as a practical starting point for a manageable and smooth transition towards more environmentally friendly supply technologies, and to commit to their own pace of transition (bearable investment/risk). The merits are supported not only by technical arguments but also by strategic and societal prospects such as technology neutrality and the availability of real options.

  23. Data Intensive Architecture for Scalable Cyber Analytics

    Olsen, Bryan K.; Johnson, John R.; Critchlow, Terence J.

    2011-12-19

    Cyber analysts are tasked with the identification and mitigation of network exploits and threats. These compromises are difficult to identify due to the characteristics of cyber communication, the volume of traffic, and the duration of possible attack. In this paper, we describe a prototype implementation designed to provide cyber analysts an environment where they can interactively explore a month’s worth of cyber security data. This prototype utilized On-Line Analytical Processing (OLAP) techniques to present a data cube to the analysts. The cube provides a summary of the data, allowing trends to be easily identified as well as the ability to easily pull up the original records comprising an event of interest. The cube was built using SQL Server Analysis Services (SSAS), with the interface to the cube provided by Tableau. This software infrastructure was supported by a novel hardware architecture comprising a Netezza TwinFin® for the underlying data warehouse and a cube server with a FusionIO drive hosting the data cube. We evaluated this environment on a month’s worth of artificial, but realistic, data using multiple queries provided by our cyber analysts. As our results indicate, OLAP technology has progressed to the point where it is in a unique position to provide novel insights to cyber analysts, as long as it is supported by an appropriate data intensive architecture.
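
    In essence, a data cube materializes the group-by aggregates over subsets of the dimensions so that drill-downs are cheap. The plain-Python sketch below illustrates the idea on an invented flow-like schema; the actual system used SSAS, Tableau, and a Netezza warehouse rather than anything like this.

      from collections import Counter
      from itertools import combinations

      records = [
          {"day": "01", "src": "10.0.0.1", "port": 80,  "bytes": 120},
          {"day": "01", "src": "10.0.0.2", "port": 443, "bytes": 300},
          {"day": "02", "src": "10.0.0.1", "port": 80,  "bytes": 50},
      ]
      dims = ("day", "src", "port")

      cube = {}
      for r in range(len(dims) + 1):
          for group in combinations(dims, r):       # every dimension subset
              agg = Counter()
              for rec in records:
                  agg[tuple(rec[d] for d in group)] += rec["bytes"]
              cube[group] = agg

      print(cube[("day",)])                          # bytes per day
      print(cube[("day", "port")])                   # drill-down by day and port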

  24. Data Intensive Architecture for Scalable Cyber Analytics

    Olsen, Bryan K.; Johnson, John R.; Critchlow, Terence J.

    2011-11-15

    Cyber analysts are tasked with the identification and mitigation of network exploits and threats. These compromises are difficult to identify due to the characteristics of cyber communication, the volume of traffic, and the duration of possible attack. It is necessary to have analytical tools to help analysts identify anomalies that span seconds, days, and weeks. Unfortunately, providing analytical tools effective access to the volumes of underlying data requires novel architectures, which is often overlooked in operational deployments. Our work is focused on a summary record of communication, called a flow. Flow records are intended to summarize a communication session between a source and a destination, providing a level of aggregation from the base data. Despite this aggregation, many enterprise network perimeter sensors store millions of network flow records per day. The volume of data makes analytics difficult, requiring the development of new techniques to efficiently identify temporal patterns and potential threats. The massive volume makes analytics difficult, but there are other characteristics in the data which compound the problem. Within the billions of records of communication that transact, there are millions of distinct IP addresses involved. Characterizing patterns of entity behavior is very difficult with the vast number of entities that exist in the data. Research has struggled to validate a model for typical network behavior with hopes it will enable the identification of atypical behavior. Complicating matters more, typically analysts are only able to visualize and interact with fractions of data and have the potential to miss long term trends and behaviors. Our analysis approach focuses on aggregate views and visualization techniques to enable flexible and efficient data exploration as well as the capability to view trends over long periods of time. Realizing that interactively exploring summary data allowed analysts to effectively identify

  25. Scalable multifunction RF system concepts for joint operations

    Otten, M.P.G.; Wit, J.J.M. de; Smits, F.M.A.; Rossum, W.L. van; Huizing, A.

    2010-01-01

    RF systems based on modular architectures have the potential of better re-use of technology, decreasing development time, and decreasing life cycle cost. Moreover, modular architectures provide scalability, allowing low cost upgrades and adaptability to different platforms. To achieve maximum

  26. A Scalable Architecture of a Structured LDPC Decoder

    Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon

    2004-01-01

    We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2dB implementation loss relative to a floating point decoder.
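
    The protograph replication ("lifting") step can be sketched with NumPy: each nonzero entry of the small base matrix is replaced by a Z x Z cyclic permutation, each zero by a Z x Z zero block. Shifts are drawn randomly here purely for illustration; a real design such as the paper's optimizes them.

      import numpy as np

      def lift(base, Z, seed=0):
          """Expand an r x n protograph into a (Z*r) x (Z*n) parity-check matrix."""
          rng = np.random.default_rng(seed)
          r, n = base.shape
          H = np.zeros((Z * r, Z * n), dtype=np.uint8)
          for i in range(r):
              for j in range(n):
                  if base[i, j]:
                      shift = rng.integers(Z)  # cyclic shift for this edge
                      H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(
                          np.eye(Z, dtype=np.uint8), shift, axis=1)
          return H

      base = np.array([[1, 1, 0],
                       [0, 1, 1]])   # toy protograph: n = 3, r = 2
      print(lift(base, Z=4).shape)   # (8, 12) parity-check matrix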

  27. Scalability of RAID Systems

    Li, Yan

    2010-01-01

    RAID systems (Redundant Arrays of Inexpensive Disks) have dominated backend storage systems for more than two decades and have grown continuously in size and complexity. Currently they face unprecedented challenges from data intensive applications such as image processing, transaction processing and data warehousing. As the size of RAID systems increases, designers are faced with both performance and reliability challenges. These challenges include limited back-end network band...

  28. Application Scalability and Performance on Multicore Architectures

    Simon, Tyler A; Cable, Sam B; Mahmoodi, Mahin

    2007-01-01

    The US Army Engineer Research and Development Center Major Shared Resource Center has recently upgraded its Cray XT3 system from single-core to dual-core AMD Opteron processors and has procured a quad...

  29. Developing a scalable modeling architecture for studying survivability technologies

    Mohammad, Syed; Bounker, Paul; Mason, James; Brister, Jason; Shady, Dan; Tucker, David

    2006-05-01

    To facilitate interoperability of models in a scalable environment, and provide a relevant virtual environment in which Survivability technologies can be evaluated, the US Army Research Development and Engineering Command (RDECOM) Modeling Architecture for Technology Research and Experimentation (MATREX) Science and Technology Objective (STO) program has initiated the Survivability Thread, which will seek to address some of the many technical and programmatic challenges associated with the effort. In coordination with different Thread customers, such as the Survivability branches of various Army labs, a collaborative group has been formed to define the requirements for the simulation environment that would in turn provide them a value-added tool for assessing models and gauging system-level performance relevant to Future Combat Systems (FCS) and the Survivability requirements of other burgeoning programs. An initial set of customer requirements has been generated in coordination with the RDECOM Survivability IPT lead, through the Survivability Technology Area at the RDECOM Tank-automotive Research Development and Engineering Center (TARDEC, Warren, MI). The results of this project are aimed at a culminating experiment and demonstration scheduled for September 2006, which will include a multitude of components from within RDECOM and provide the framework for future experiments to support Survivability research. This paper details the components with which the MATREX Survivability Thread was created and executed, and provides insight into the capabilities currently demanded by the Survivability faculty within RDECOM.

  30. Scalable architecture for a room temperature solid-state quantum information processor.

    Yao, N Y; Jiang, L; Gorshkov, A V; Maurer, P C; Giedke, G; Cirac, J I; Lukin, M D

    2012-04-24

    The realization of a scalable quantum information processor has emerged over the past decade as one of the central challenges at the interface of fundamental science and engineering. Here we propose and analyse an architecture for a scalable, solid-state quantum information processor capable of operating at room temperature. Our approach is based on recent experimental advances involving nitrogen-vacancy colour centres in diamond. In particular, we demonstrate that the multiple challenges associated with operation at ambient temperature, individual addressing at the nanoscale, strong qubit coupling, robustness against disorder and low decoherence rates can be simultaneously achieved under realistic, experimentally relevant conditions. The architecture uses a novel approach to quantum information transfer and includes a hierarchy of control at successive length scales. Moreover, it alleviates the stringent constraints currently limiting the realization of scalable quantum processors and will provide fundamental insights into the physics of non-equilibrium many-body quantum systems.

  31. High-performance flat data center network architecture based on scalable and flow-controlled optical switching system

    Calabretta, Nicola; Miao, Wang; Dorren, Harm

    2016-03-01

    Traffic in data center networks (DCNs) is steadily growing to support various applications and virtualization technologies. Multi-tenancy enabling efficient resource utilization is considered a key requirement for the next generation of DCs, resulting from the growing demands for services and applications. Virtualization mechanisms and technologies can leverage statistical multiplexing and fast switch reconfiguration to further extend the DC efficiency and agility. We present a novel high-performance flat DCN employing bufferless and distributed fast (sub-microsecond) optical switches with wavelength, space, and time switching operation. The fast optical switches can enhance the performance of the DCNs by providing large-capacity switching capability and efficiently sharing the data plane resources by exploiting statistical multiplexing. Benefiting from the Software-Defined Networking (SDN) control of the optical switches, virtual DCNs can be flexibly created and reconfigured by the DCN provider. Numerical and experimental investigations of the DCN based on the fast optical switches show the successful setup of virtual network slices for intra-data center interconnections. Experimental results to assess the DCN performance in terms of latency and packet loss show less than 10^-5 packet loss and 640 ns end-to-end latency with 0.4 load and a 16-packet buffer size. Numerical investigation of the performance when the optical switch is scaled to a 32x32 system indicates that more than 1000 ToRs, each with a Terabit/s interface, can be interconnected, providing a Petabit/s capacity. The roadmap to photonic integration of large-port-count optical switches will also be presented.

  32. A Software and Hardware IPTV Architecture for Scalable DVB Distribution

    Georg Acher

    2009-01-01

    Many standards and even more proprietary technologies deal with IP-based television (IPTV). But none of them can transparently map popular public broadcast services such as DVB or ATSC to IPTV with acceptable effort. In this paper we explain why we believe that such a mapping using a lightweight framework is an important step towards all-IP multimedia. We then present the NetCeiver architecture: it is based on well-known standards such as IPv6, and it allows zero configuration. The use of multicast streaming makes NetCeiver highly scalable. We also describe a low-cost FPGA implementation of the proposed NetCeiver architecture, which can concurrently stream services from up to six full transponders.

  33. Requirements for Scalable Access Control and Security Management Architectures

    Keromytis, Angelos D; Smith, Jonathan M

    2005-01-01

    Maximizing local autonomy has led to a scalable Internet. Scalability and the capacity for distributed control have unfortunately not extended well to resource access control policies and mechanisms...

  34. A scalable parallel open architecture data acquisition system for low to high rate experiments, test beams and all SSC [Superconducting Super Collider] detectors

    Barsotti, E.; Booth, A.; Bowden, M.; Swoboda, C.; Lockyer, N.; VanBerg, R.

    1989-12-01

    A new era of high-energy physics research is beginning, requiring accelerators with much higher luminosities and interaction rates in order to discover new elementary particles. As a consequence, both orders of magnitude higher data rates from the detector and online processing power, well beyond the capabilities of current high energy physics data acquisition systems, are required. This paper describes a new data acquisition system architecture which draws heavily from the communications industry, is totally parallel (i.e., without any bottlenecks), is capable of data rates of hundreds of gigabytes per second from the detector and into an array of online processors (i.e., processor farm), and uses an open systems architecture to guarantee compatibility with future commercially available online processor farms. The main features of the system architecture are standard interface ICs to detector subsystems wherever possible, fiber optic digital data transmission from the near-detector electronics, a self-routing parallel event builder, and the use of industry-supported and high-level language programmable processors in the proposed BCD system for both triggers and online filters. A brief status report of an ongoing project at Fermilab to build the self-routing parallel event builder will also be given in the paper. 3 figs., 1 tab

  35. Embedded High Performance Scalable Computing Systems

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  36. Systemic Architecture

    Poletto, Marco; Pasquero, Claudia

    ...-up or tactical design, behavioural space and the boundary of the natural and the artificial realms within the city and architecture. A new kind of "real-time world-city" is illustrated in the form of an operational design manual for the assemblage of proto-architectures, the incubation of proto-gardens and the coding of proto-interfaces. These prototypes of machinic architecture materialize as synthetic hybrids embedded with biological life (proto-gardens), computational power, behavioural responsiveness (cyber-gardens), spatial articulation (coMachines and fibrous structures), remote sensing (FUNclouds...

  37. ATLAS Grid Data Processing: system evolution and scalability

    Golubkov, D; The ATLAS collaboration; Klimentov, A; Minaenko, A; Nevski, P; Vaniachine, A; Walker, R

    2012-01-01

    The production system for Grid Data Processing handles petascale ATLAS data reprocessing and Monte Carlo activities. The production system empowered further data processing steps on the Grid performed by dozens of ATLAS physics groups with coordinated access to computing resources worldwide, including additional resources sponsored by regional facilities. The system provides knowledge management of configuration parameters for massive data processing tasks, reproducibility of results, scalable database access, orchestrated workflow and performance monitoring, dynamic workload sharing, automated fault tolerance and petascale data integrity control. The system evolves to accommodate a growing number of users and new requirements from our contacts in ATLAS main areas: Trigger, Physics, Data Preparation and Software & Computing. To assure scalability, the next generation production system architecture development is in progress. We report on scaling up the production system for a growing number of users provi...

  38. A Numerical Study of Scalable Cardiac Electro-Mechanical Solvers on HPC Architectures

    Piero Colli Franzone

    2018-04-01

    We introduce and study some scalable domain decomposition preconditioners for cardiac electro-mechanical 3D simulations on parallel HPC (High Performance Computing) architectures. The electro-mechanical model of the cardiac tissue is composed of four coupled sub-models: (1) the static finite elasticity equations for the transversely isotropic deformation of the cardiac tissue; (2) the active tension model describing the dynamics of the intracellular calcium, cross-bridge binding and myofilament tension; (3) the anisotropic Bidomain model describing the evolution of the intra- and extra-cellular potentials in the deforming cardiac tissue; and (4) the ionic membrane model describing the dynamics of ionic currents, gating variables, ionic concentrations and stretch-activated channels. This strongly coupled electro-mechanical model is discretized in time with a splitting semi-implicit technique and in space with isoparametric finite elements. The resulting scalable parallel solver is based on Multilevel Additive Schwarz preconditioners for the solution of the Bidomain system and on BDDC preconditioned Newton-Krylov solvers for the non-linear finite elasticity system. The results of several 3D parallel simulations show the scalability of both linear and non-linear solvers and their application to the study of both physiological excitation-contraction cardiac dynamics and re-entrant waves in the presence of different mechano-electrical feedbacks.
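
    For orientation, a common parabolic-elliptic statement of the Bidomain block (3) coupled to the membrane model (4) is sketched below in the usual notation (v transmembrane potential, u_e extracellular potential, D_i and D_e conductivity tensors, chi membrane surface-to-volume ratio, C_m membrane capacitance, w the ionic state); the paper's exact formulation on the deforming domain may differ in detail.

      \begin{aligned}
      \chi C_m\,\partial_t v - \nabla\cdot\bigl(D_i\,\nabla(v+u_e)\bigr) + \chi\,I_{\mathrm{ion}}(v,w) &= I^{\mathrm{app}}_i,\\
      -\,\nabla\cdot\bigl(D_i\,\nabla v\bigr) - \nabla\cdot\bigl((D_i+D_e)\,\nabla u_e\bigr) &= I^{\mathrm{app}}_e,\\
      \partial_t w - R(v,w) &= 0.
      \end{aligned}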

  39. A Scalable, Timing-Safe, Network-on-Chip Architecture with an Integrated Clock Distribution Method

    Bjerregaard, Tobias; Stensgaard, Mikkel Bystrup; Sparsø, Jens

    2007-01-01

    Growing system sizes together with increasing performance variability are making globally synchronous operation hard to realize. Mesochronous clocking constitutes a possible solution to the problems faced. The most fundamental of the problems faced when communicating between mesochronously clocked regions concerns the possibility of data corruption caused by metastability. This paper presents an integrated communication and mesochronous clocking strategy, which avoids timing-related errors while maintaining a globally synchronous system perspective. The architecture is scalable, as timing integrity is based purely on local observations. It is demonstrated with a 90 nm CMOS standard cell network-on-chip design which implements completely timing-safe, global communication in a modular system.

  40. Scalable Multi-core Architectures Design Methodologies and Tools

    Jantsch, Axel

    2012-01-01

    As Moore’s law continues to unfold, two important trends have recently emerged. First, the growth of chip capacity is translated into a corresponding increase in the number of cores. Second, the parallelization of the computation and 3D integration technologies lead to distributed memory architectures. This book provides a current snapshot of industrial and academic research, conducted as part of the European FP7 MOSART project, addressing urgent challenges in many-core architectures and application mapping. It addresses the architectural design of many-core chips, memory and data management, power management, and design and programming methodologies. It also describes how new techniques have been applied in various industrial case studies. Describes trends towards distributed memory architectures and distributed power management; Integrates Network on Chip with distributed, shared memory architectures; Demonstrates novel design methodologies and frameworks for multi-core design space exploration; Shows how midll...

  41. Frontier: High Performance Database Access Using Standard Web Components in a Scalable Multi-Tier Architecture

    Kosyakov, S.; Kowalkowski, J.; Litvintsev, D.; Lueking, L.; Paterno, M.; White, S.P.; Autio, Lauri; Blumenfeld, B.; Maksimovic, P.; Mathis, M.

    2004-01-01

    A high-performance system has been assembled using standard web components to deliver database information to a large number of broadly distributed clients. The CDF Experiment at Fermilab is establishing processing centers around the world, imposing a high demand on their database repository. For delivering read-only data, such as calibrations, trigger information, and run conditions data, we have abstracted the interface that clients use to retrieve data objects. A middle tier is deployed that translates client requests into database-specific queries and returns the data to the client as XML datagrams. The database connection management, request translation, and data encoding are accomplished in servlets running under Tomcat. Squid proxy caching layers are deployed near the Tomcat servers, as well as close to the clients, to significantly reduce the load on the database and provide a scalable deployment model. Details of the system's construction and use are presented, including its architecture, design, interfaces, administration, performance measurements, and deployment plan.
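
    A minimal sketch of that middle-tier idea, written in Python with SQLite for brevity (the actual system used Java servlets under Tomcat against a production database; the table, query, and URL layout here are invented): translate an abstract object request into a database query and return the result as an XML datagram with cache-friendly headers, which is what lets Squid layers absorb repeated reads.

      import sqlite3
      from http.server import BaseHTTPRequestHandler, HTTPServer
      from urllib.parse import parse_qs, urlparse
      from xml.sax.saxutils import escape

      db = sqlite3.connect("calib.db", check_same_thread=False)
      db.execute("CREATE TABLE IF NOT EXISTS calib (run INT, value REAL)")

      class MiddleTier(BaseHTTPRequestHandler):
          def do_GET(self):
              run = parse_qs(urlparse(self.path).query).get("run", ["0"])[0]
              rows = db.execute("SELECT value FROM calib WHERE run = ?",
                                (run,)).fetchall()
              xml = "<calibration run='%s'>%s</calibration>" % (
                  escape(run), "".join("<v>%s</v>" % v for (v,) in rows))
              body = xml.encode()
              self.send_response(200)
              self.send_header("Cache-Control", "max-age=3600")  # proxy-cacheable
              self.send_header("Content-Type", "text/xml")
              self.send_header("Content-Length", str(len(body)))
              self.end_headers()
              self.wfile.write(body)

      if __name__ == "__main__":
          HTTPServer(("", 8000), MiddleTier).serve_forever()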

  42. ENDEAVOUR: A Scalable SDN Architecture for Real-World IXPs

    Antichi, Gianni

    2017-10-25

    Innovation in interdomain routing has remained stagnant for over a decade. Recently, IXPs have emerged as economically-advantageous interconnection points for reducing path latencies and exchanging ever increasing traffic volumes among, possibly, hundreds of networks. Given their far-reaching implications on interdomain routing, IXPs are the ideal place to foster network innovation and extend the benefits of SDN to the interdomain level. In this paper, we present, evaluate, and demonstrate ENDEAVOUR, an SDN platform for IXPs. ENDEAVOUR can be deployed on a multi-hop IXP fabric, supports a large number of use cases, and is highly-scalable while avoiding broadcast storms. Our evaluation with real data from one of the largest IXPs, demonstrates the benefits and scalability of our solution: ENDEAVOUR requires around 70% fewer rules than alternative SDN solutions thanks to our rule partitioning mechanism. In addition, by providing an open source solution, we invite everyone from the community to experiment (and improve) our implementation as well as adapt it to new use cases.

  43. Mapping robust parallel multigrid algorithms to scalable memory architectures

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.

  44. Experimental demonstration of an improved EPON architecture using OFDMA for bandwidth scalable LAN emulation

    Deng, Lei; Zhao, Ying; Yu, Xianbin

    2011-01-01

    We propose and demonstrate an improved Ethernet passive optical network (EPON) architecture supporting bandwidth-scalable physical layer local area network (LAN) emulation. Due to the use of orthogonal frequency division multiple access (OFDMA) technology for the LAN traffic transmission, there i...

  45. Historical building monitoring using an energy-efficient scalable wireless sensor network architecture.

    Capella, Juan V; Perles, Angel; Bonastre, Alberto; Serrano, Juan J

    2011-01-01

    We present a set of novel low power wireless sensor nodes designed for monitoring wooden masterpieces and historical buildings, in order to perform an early detection of pests. Although our previous star-based system configuration has been in operation for more than 13 years, it does not scale well for sensorization of large buildings or when deploying hundreds of nodes. In this paper we demonstrate the feasibility of a cluster-based dynamic-tree hierarchical Wireless Sensor Network (WSN) architecture where realistic assumptions of radio frequency data transmission are applied to cluster construction, and a mix of heterogeneous nodes are used to minimize economic cost of the whole system and maximize power saving of the leaf nodes. Simulation results show that the specialization of a fraction of the nodes by providing better antennas and some energy harvesting techniques can dramatically extend the life of the entire WSN and reduce the cost of the whole system. A demonstration of the proposed architecture with a new routing protocol and applied to termite pest detection has been implemented on a set of new nodes and should last for about 10 years, but it provides better scalability, reliability and deployment properties.

  46. Scalable Intersample Interpolation Architecture for High-channel-count Beamformers

    Tomov, Borislav Gueorguiev; Nikolov, Svetoslav I; Jensen, Jørgen Arendt

    2011-01-01

    Modern ultrasound scanners utilize digital beamformers that operate on sampled and quantized echo signals. Timing precision is of essence for achieving good focusing. The direct way to achieve it is through the use of high sampling rates, but that is not economical, so interpolation between echo samples is used. This paper presents a beamformer architecture that combines a band-pass filter-based interpolation algorithm with the dynamic delay-and-sum focusing of a digital beamformer. The reduction in the number of multiplications relative to a linear per-channel interpolation and a band-pass per-channel interpolation architecture is 58% and 75%, respectively, for a 256-channel beamformer using 4-tap filters. The approach allows building high-channel-count beamformers while maintaining high image quality due to the use of sophisticated intersample interpolation.
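
    The role of intersample interpolation in delay-and-sum focusing is easy to see in a toy NumPy sketch; linear interpolation stands in here for the band-pass filter interpolation the paper actually proposes, and all the data are synthetic.

      import numpy as np

      def das_sample(rf, delays, fs):
          """Delay-and-sum one image point from (channels, samples) echo data."""
          idx = delays * fs                     # fractional sample indices
          i0 = np.floor(idx).astype(int)
          frac = idx - i0
          ch = np.arange(rf.shape[0])
          # Linear interpolation between the two nearest samples per channel;
          # a band-pass interpolation filter would replace these two lines.
          vals = (1 - frac) * rf[ch, i0] + frac * rf[ch, i0 + 1]
          return vals.sum()

      rf = np.random.randn(256, 2048)                 # synthetic echo data
      delays = np.linspace(2e-5, 2.5e-5, 256)         # toy focusing delays (s)
      print(das_sample(rf, delays, fs=40e6))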

  47. An ODMG-compatible testbed architecture for scalable management and analysis of physics data

    Malon, D.M.; May, E.N.

    1997-01-01

    This paper describes a testbed architecture for the investigation and development of scalable approaches to the management and analysis of massive amounts of high energy physics data. The architecture has two components: an interface layer that is compliant with a substantial subset of the ODMG-93 Version 1.2 specification, and a lightweight object persistence manager that provides flexible storage and retrieval services on a variety of single- and multi-level storage architectures, and on a range of parallel and distributed computing platforms

  48. Scalable Motion Estimation Processor Core for Multimedia System-on-Chip Applications

    Lai, Yeong-Kang; Hsieh, Tian-En; Chen, Lien-Fei

    2007-04-01

    In this paper, we describe a high-throughput and scalable motion estimation processor architecture for multimedia system-on-chip applications. The number of processing elements (PEs) is scalable according to the variable algorithm parameters and the performance required for different applications. Using the PE rings efficiently and an intelligent memory-interleaving organization, the efficiency of the architecture can be increased. Moreover, using efficient on-chip memories and a data management technique can effectively decrease the power consumption and memory bandwidth. Techniques for reducing the number of interconnections and external memory accesses are also presented. Our results demonstrate that the proposed scalable PE-ringed architecture is a flexible and high-performance processor core in multimedia system-on-chip applications.

  49. The TOTEM DAQ based on the Scalable Readout System (SRS)

    Quinto, Michele; Cafagna, Francesco S.; Fiergolski, Adrian; Radicioni, Emilio

    2018-02-01

    The TOTEM (TOTal cross section, Elastic scattering and diffraction dissociation Measurement at the LHC) experiment at the LHC has been designed to measure the total proton-proton cross-section and study elastic and diffractive scattering at LHC energies. In order to cope with the increased machine luminosity and the higher statistics required by the extension of the TOTEM physics program, approved for the LHC's Run Two phase, the previous VME-based data acquisition system has been replaced with a new one based on the Scalable Readout System. The system features an aggregated data throughput of 2 GB/s towards the online storage system. This makes it possible to sustain a maximum trigger rate of ~24 kHz, to be compared with the 1 kHz rate of the previous system. The trigger rate is further improved by implementing zero-suppression and second-level hardware algorithms in the Scalable Readout System. The new system fulfils the requirements for increased efficiency, providing higher bandwidth and increasing the purity of the recorded data. Moreover, full compatibility has been guaranteed with the legacy front-end hardware, as well as with the DAQ interface of the CMS experiment and with the LHC's Timing, Trigger and Control distribution system. In this contribution we describe in detail the architecture of the full system and its performance measured during the commissioning phase at the LHC Interaction Point.
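
    Zero-suppression itself is conceptually simple: transmit only the channels whose amplitude exceeds their pedestal by more than a threshold. The sketch below uses invented numbers (the real algorithms run in the SRS front-end hardware) just to show the idea.

      def zero_suppress(samples, pedestals, threshold=5):
          """Keep only (channel, value) pairs above pedestal + threshold."""
          return [(ch, v) for ch, (v, p) in enumerate(zip(samples, pedestals))
                  if v - p > threshold]

      samples   = [12, 250, 14, 90, 11]
      pedestals = [10, 12, 13, 11, 10]
      print(zero_suppress(samples, pedestals))   # [(1, 250), (3, 90)]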

  50. C-HEAP: a heterogeneous multi-processor architecture template and scalable and flexible protocol for the design of embedded signal processing systems

    Nieuwland, A.K.; Kang, J.; Gangwal, O.P.; Sethuraman, R.; Busá, N.G.; Goossens, K.G.W.; Peset Llopis, R.; Lippens, P.E.R.

    2002-01-01

    The key issue in the design of Systems-on-a-Chip (SoC) is to trade-off efficiency against flexibility, and time to market versus cost. Current deep submicron processing technologies enable integration of multiple software programmable processors (e.g., CPUs, DSPs) and dedicated hardware components

  51. A versatile scalable PET processing system

    Dong, H.; Weisenberger, A.; McKisson, J.; Wenze, Xi; Cuevas, C.; Wilson, J.; Zukerman, L.

    2011-01-01

    Positron Emission Tomography (PET) historically has major clinical and preclinical applications in cancerous oncology, neurology, and cardiovascular diseases. Recently, in a new direction, an application specific PET system is being developed at Thomas Jefferson National Accelerator Facility (Jefferson Lab) in collaboration with Duke University, University of Maryland at Baltimore (UMAB), and West Virginia University (WVU) targeted for plant eco-physiology research. The new plant imaging PET system is versatile and scalable such that it could adapt to several plant imaging needs - imaging many important plant organs including leaves, roots, and stems. The mechanical arrangement of the detectors is designed to accommodate the unpredictable and random distribution in space of the plant organs without requiring the plant be disturbed. Prototyping such a system requires a new data acquisition system (DAQ) and data processing system which are adaptable to the requirements of these unique and versatile detectors.

  52. KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery

    Fraser, Joshua; Haridas, Anoop; Seetharaman, Guna; Rao, Raghuveer M.; Palaniappan, Kannappan

    2013-05-01

    KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multi-scale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high-throughput wide-format video, also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to feasibly petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools to the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms is available to assist the analyst and increase human effectiveness.
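
    The core of such a viewer is a cache of decoded tiles keyed by position in the spatiotemporal pyramid. The sketch below shows a generic LRU tile cache in Python; the key layout and loader are invented and far simpler than KOLAM's actual dual-cache data structure.

      from collections import OrderedDict

      class TileCache:
          """LRU cache for pyramid tiles keyed by (frame, level, row, col)."""
          def __init__(self, capacity, load_tile):
              self.capacity, self.load_tile = capacity, load_tile
              self.tiles = OrderedDict()

          def get(self, frame, level, row, col):
              key = (frame, level, row, col)
              if key in self.tiles:
                  self.tiles.move_to_end(key)            # recently used
              else:
                  self.tiles[key] = self.load_tile(*key) # decode from disk/net
                  if len(self.tiles) > self.capacity:
                      self.tiles.popitem(last=False)     # evict least recent
              return self.tiles[key]

      cache = TileCache(capacity=4096, load_tile=lambda *k: "pixels %s" % (k,))
      print(cache.get(frame=0, level=3, row=10, col=7))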

  14. Design of a Scalable Event Notification Service: Interface and Architecture

    Carzaniga, Antonio; Rosenblum, David S; Wolf, Alexander L

    1998-01-01

    Event-based distributed systems are programmed to operate in response to events. An event notification service is an application-independent infrastructure that supports the construction of event-based systems...

  15. Scalable Faceted Ranking in Tagging Systems

    Orlicki, José I.; Alvarez-Hamelin, J. Ignacio; Fierens, Pablo I.

    Web collaborative tagging systems, which allow users to upload, comment on, and recommend content, are growing. Such systems can be represented as graphs where nodes correspond to users and tagged links to recommendations. In this paper we analyze the problem of computing a ranking of users with respect to a facet described as a set of tags. A straightforward solution is to compute a PageRank-like algorithm on a facet-related graph, but this is not feasible for online computation. We propose an alternative: (i) a ranking for each tag is computed offline on the basis of tag-related subgraphs; (ii) a faceted order is generated online by merging the rankings corresponding to all the tags in the facet. Based on graph analysis of YouTube and Flickr, we show that step (i) is scalable. We also present efficient algorithms for step (ii), which are evaluated by comparing their results with two gold standards.
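
    As a rough illustration of the merging in step (ii), the sketch below combines precomputed per-tag rankings into a single faceted ranking by summing per-tag scores; the additive merge rule and all names here are hypothetical, not the authors' algorithm.

        from collections import defaultdict

        def merge_rankings(facet_tags, tag_rankings):
            """Merge per-tag rankings (tag -> {user: score}) over all tags in a facet."""
            combined = defaultdict(float)
            for tag in facet_tags:
                for user, score in tag_rankings.get(tag, {}).items():
                    combined[user] += score  # simple additive merge; other rules are possible
            # highest combined score first
            return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

        # toy usage: two tag-specific rankings merged into one facet
        rankings = {"music": {"alice": 0.6, "bob": 0.3},
                    "jazz": {"bob": 0.5, "carol": 0.2}}
        print(merge_rankings({"music", "jazz"}, rankings))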

  16. SRC: FenixOS - A Research Operating System Focused on High Scalability and Reliability

    Passas, Stavros; Karlsson, Sven

    2011-01-01

    Computer systems keep increasing in size. Systems scale in the number of processing units, memories and peripheral devices. This creates many and diverse architectural trade-offs that the existing operating systems are not able to address. We are designing and implementing FenixOS, a new operating system that aims to improve the state of the art in scalability and reliability. We achieve scalability through limiting data sharing when possible, and through extensive use of lock-free data structures. Reliability is addressed with a careful re-design of the programming interface and structure of the operating system.

  17. Scalable Adaptive Architectures for Maritime Operations Center Command and Control

    2011-05-06

    [Abstract illegible in the source scan; the only recoverable fragment is a citation: Suwa, M., A. C. Scott and E. H. Shortliffe (1985). Completeness and consistency in a rule-based expert system. AI Magazine, 3(4), 16-21.]

  18. Scalable Notch Antenna System for Multiport Applications

    Abdurrahim Toktas

    2016-01-01

    A novel and compact scalable antenna system is designed for multiport applications. The basic design is built on a square patch with an electrical size of 0.82λ0×0.82λ0 (at 2.4 GHz) on a dielectric substrate. The design consists of four symmetrical and orthogonal triangular notches with circular feeding slots at the corners of the common patch. The 4-port antenna can be simply rearranged into 8-port and 12-port systems. The operating band of the system can be tuned by scaling (S) the size of the system while fixing the thickness of the substrate. The antenna system with S: 1/1, with a size of 103.5×103.5 mm2, operates in the frequency band of 2.3–3.0 GHz. By scaling the antenna with S: 1/2.3, a system of 45×45 mm2 is achieved, and thus the operating band is tuned to 4.7–6.1 GHz with the same scattering characteristic. A parametric study is also conducted to investigate the effects of changing the notch dimensions. The performance of the antenna is verified in terms of the antenna characteristics as well as diversity and multiplexing parameters. The antenna system can be tuned by scaling so that it is applicable to multiport WLAN, WiMAX, and LTE devices with port upgradability.

  19. Scalable Algorithms for Clustering Large Geospatiotemporal Data Sets on Manycore Architectures

    Mills, R. T.; Hoffman, F. M.; Kumar, J.; Sreepathi, S.; Sripathi, V.

    2016-12-01

    The increasing availability of high-resolution geospatiotemporal data sets from sources such as observatory networks, remote sensing platforms, and computational Earth system models has opened new possibilities for knowledge discovery using data sets fused from disparate sources. Traditional algorithms and computing platforms are impractical for the analysis and synthesis of data sets of this size; however, new algorithmic approaches that can effectively utilize the complex memory hierarchies and the extremely high levels of available parallelism in state-of-the-art high-performance computing platforms can enable such analysis. We describe a massively parallel implementation of accelerated k-means clustering and some optimizations to boost computational intensity and utilization of wide SIMD lanes on state-of-the-art multi- and manycore processors, including the second-generation Intel Xeon Phi ("Knights Landing") processor based on the Intel Many Integrated Core (MIC) architecture, which includes several new features such as an on-package high-bandwidth memory. We also analyze the code in the context of a few practical applications to the analysis of climatic and remotely-sensed vegetation phenology data sets, and speculate on some of the new applications that such scalable analysis methods may enable.
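
    As a generic illustration of the kind of vectorization-friendly inner loop such implementations optimize (a sketch only, not the authors' code), the assignment step of k-means can be phrased as dense array arithmetic that maps well onto wide SIMD units:

        import numpy as np

        def assign_clusters(points, centroids):
            """Vectorized k-means assignment step.

            points: (n, d) array; centroids: (k, d) array.
            Expanding ||p - c||^2 = ||p||^2 - 2 p.c + ||c||^2 turns the
            distance computation into one large matrix multiply, which
            BLAS libraries map onto wide SIMD lanes and many cores.
            """
            p2 = (points ** 2).sum(axis=1)[:, None]        # (n, 1)
            c2 = (centroids ** 2).sum(axis=1)[None, :]     # (1, k)
            d2 = p2 - 2.0 * points @ centroids.T + c2      # (n, k) squared distances
            return d2.argmin(axis=1)

        rng = np.random.default_rng(0)
        labels = assign_clusters(rng.random((1000, 3)), rng.random((8, 3)))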

  20. CODA: A scalable, distributed data acquisition system

    Watson, W.A. III; Chen, J.; Heyes, G.; Jastrzembski, E.; Quarrie, D.

    1994-01-01

    A new data acquisition system has been designed for physics experiments scheduled to run at CEBAF starting in the summer of 1994. This system runs on Unix workstations connected via ethernet, FDDI, or other network hardware to multiple intelligent front end crates -- VME, CAMAC or FASTBUS. CAMAC crates may either contain intelligent processors, or may be interfaced to VME. The system is modular and scalable, from a single front end crate and one workstation linked by ethernet, to as many as 32 clusters of front end crates ultimately connected via a high speed network to a set of analysis workstations. The system includes an extensible, device independent slow controls package with drivers for CAMAC, VME, and high voltage crates, as well as a link to CEBAF accelerator controls. All distributed processes are managed by standard remote procedure calls propagating change-of-state requests, or reading and writing program variables. Custom components may be easily integrated. The system is portable to any front end processor running the VxWorks real-time kernel, and to most workstations supplying a few standard facilities such as rsh and X-windows, and Motif and socket libraries. Sample implementations exist for two Unix workstation families connected via ethernet or FDDI to VME (with interfaces to FASTBUS or CAMAC), and via ethernet to FASTBUS or CAMAC.
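
    The change-of-state mechanism described above can be pictured with a minimal RPC server; this is a hypothetical sketch using Python's standard xmlrpc module, not CODA's actual implementation:

        from xmlrpc.server import SimpleXMLRPCServer

        class RunControl:
            """Toy run-control component driven by change-of-state RPCs."""
            TRANSITIONS = {"booted": ["configured"],
                           "configured": ["running"],
                           "running": ["ended"]}

            def __init__(self):
                self.state = "booted"

            def set_state(self, new_state):
                # accept only legal transitions; callers read back the result
                if new_state in self.TRANSITIONS.get(self.state, []):
                    self.state = new_state
                return self.state

            def get_state(self):
                return self.state

        server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
        server.register_instance(RunControl())
        # server.serve_forever()  # uncomment to accept change-of-state requests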

  1. Scalable quantum computer architecture with coupled donor-quantum dot qubits

    Schenkel, Thomas; Lo, Cheuk Chi; Weis, Christoph; Lyon, Stephen; Tyryshkin, Alexei; Bokor, Jeffrey

    2014-08-26

    A quantum bit computing architecture includes a plurality of single spin memory donor atoms embedded in a semiconductor layer, a plurality of quantum dots arranged with the semiconductor layer and aligned with the donor atoms, wherein a first voltage applied across at least one pair of the aligned quantum dot and donor atom controls a donor-quantum dot coupling. A method of performing quantum computing in a scalable architecture quantum computing apparatus includes arranging a pattern of single spin memory donor atoms in a semiconductor layer, forming a plurality of quantum dots arranged with the semiconductor layer and aligned with the donor atoms, applying a first voltage across at least one aligned pair of a quantum dot and donor atom to control a donor-quantum dot coupling, and applying a second voltage between one or more quantum dots to control a Heisenberg exchange J coupling between quantum dots and to cause transport of a single spin polarized electron between quantum dots.

  2. Space-Filling Supercapacitor Carpets: Highly scalable fractal architecture for energy storage

    Tiliakos, Athanasios; Trefilov, Alexandra M. I.; Tanasǎ, Eugenia; Balan, Adriana; Stamatin, Ioan

    2018-04-01

    Revamping ground-breaking ideas from fractal geometry, we propose an alternative micro-supercapacitor configuration realized by laser-induced graphene (LIG) foams produced via laser pyrolysis of inexpensive commercial polymers. The Space-Filling Supercapacitor Carpet (SFSC) architecture introduces the concept of nested electrodes based on the pre-fractal Peano space-filling curve, arranged in a symmetrical equilateral setup that incorporates multiple parallel capacitor cells sharing common electrodes for maximum efficiency and optimal length-to-area distribution. We elucidate the theoretical foundations of the SFSC architecture, and we introduce innovations (high-resolution vector-mode printing) in the LIG method that allow for the realization of flexible and scalable devices based on low iterations of the Peano algorithm. SFSCs exhibit distributed capacitance properties, leading to capacitance, energy, and power ratings proportional to the number of nested electrodes (up to 4.3 mF, 0.4 μWh, and 0.2 mW for the largest tested model of low iteration using aqueous electrolytes), with competitively high energy and power densities. This paves the way toward full scalability in energy storage, reaching beyond the scale of micro-supercapacitors for incorporation into larger and more demanding applications.
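
    For intuition, the underlying geometry can be generated from the classic Peano L-system. The sketch below is an illustration only, using one common formulation of the rewriting rules; it is not the authors' layout code.

        def peano_points(iterations, step=1.0):
            """Vertices of the Peano curve via L-system rewriting (angle 90 degrees).

            Rules: L -> LFRFL-F-RFLFR+F+LFRFL
                   R -> RFLFR+F+LFRFL-F-RFLFR
            """
            rules = {"L": "LFRFL-F-RFLFR+F+LFRFL", "R": "RFLFR+F+LFRFL-F-RFLFR"}
            s = "L"
            for _ in range(iterations):
                s = "".join(rules.get(c, c) for c in s)
            x, y, dx, dy = 0.0, 0.0, 0.0, step   # start at origin, heading "up"
            pts = [(x, y)]
            for c in s:
                if c == "F":                      # draw one segment
                    x, y = x + dx, y + dy
                    pts.append((x, y))
                elif c == "+":                    # turn left 90 degrees
                    dx, dy = -dy, dx
                elif c == "-":                    # turn right 90 degrees
                    dx, dy = dy, -dx
            return pts

        print(len(peano_points(2)))  # 81 vertices: iteration n fills a 3^n x 3^n grid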

  3. PM2006: a highly scalable urban planning management information system--Case study: Suzhou Urban Planning Bureau

    Jing, Changfeng; Liang, Song; Ruan, Yong; Huang, Jie

    2008-10-01

    During the urbanization process, when facing the complex requirements of city development, ever-growing urban data, rapid development of the planning business and increasing planning complexity, a scalable, extensible urban planning management information system is urgently needed. PM2006 is such a system that can deal with these problems. In response to the status and problems of urban planning, the scalability and extensibility of PM2006 are introduced, including business-oriented workflow extensibility, the scalability of its DLL-based architecture, flexibility across GIS and database platforms, and scalability of data updating and maintenance. It is verified that the PM2006 system has good extensibility and scalability and can meet the requirements of all levels of administrative divisions and adapt to ever-growing changes in urban planning business. At the end of this paper, the application of PM2006 in the Urban Planning Bureau of Suzhou city is described.

  4. Radiology systems architecture.

    Deibel, S R; Greenes, R A

    1996-05-01

    This article focuses on the software requirements for enterprise integration in radiology. The needs of a future radiology systems architecture are examined, both at a concrete functional level and at an abstract system-properties level. A component-based approach to software development is described and is validated in the context of each of the abstract system requirements for future radiology computing environments.

  5. A Testbed for Highly-Scalable Mission Critical Information Systems

    Birman, Kenneth P

    2005-01-01

    ... systems in a networked environment. Headed by Professor Ken Birman, the project is exploring a novel fusion of classical protocols for reliable multicast communication with a new style of peer-to-peer protocol called scalable "gossip...

  6. Scalable devices

    Krüger, Jens J.

    2014-01-01

    In computer science in general, and in the field of high performance computing and supercomputing in particular, the term scalable plays an important role. It indicates that a piece of hardware, a concept, an algorithm, or an entire system scales with the size of the problem, i.e., it can not only be used in a very specific setting but is applicable to a wide range of problems, from small scenarios to possibly very large settings. In this spirit, there exist a number of fixed areas of research on scalability. There are works on scalable algorithms and scalable architectures, but what are scalable devices? In the context of this chapter, we are interested in a whole range of display devices, ranging from small-scale hardware such as tablet computers, pads, and smartphones up to large tiled display walls. What interests us most is not so much the hardware setup but the visualization algorithms behind these display systems, which scale from the average smartphone up to the largest gigapixel display walls.

  7. A scalable architecture for online anomaly detection of WLCG batch jobs

    Kuehn, E.; Fischer, M.; Giffels, M.; Jung, C.; Petzold, A.

    2016-10-01

    For data centres it is increasingly important to monitor network usage and learn from network usage patterns. In particular, configuration issues or misbehaving batch jobs that prevent smooth operation need to be detected as early as possible. At the GridKa data and computing centre we therefore operate BPNetMon, a tool for monitoring traffic data and characteristics of WLCG batch jobs and pilots locally on different worker nodes. On the one hand, local information by itself is not sufficient to detect anomalies, for several reasons: e.g., the underlying job distribution on a single worker node might change, or there might be a local misconfiguration. On the other hand, a centralised anomaly detection approach does not scale with respect to network communication or computational costs. We therefore propose a scalable architecture based on concepts of a super-peer network.

  8. Open architecture CNC system

    Tal, J. [Galil Motion Control Inc., Sunnyvale, CA (United States); Lopez, A.; Edwards, J.M. [Los Alamos National Lab., NM (United States)

    1995-04-01

    In this paper, an alternative solution to the traditional CNC machine tool controller has been introduced. Software and hardware modules have been described and their incorporation in a CNC control system has been outlined. This type of CNC machine tool controller demonstrates that the technology is accessible and can be readily implemented into an open architecture machine tool controller. The benefit to the user is greater controller flexibility that is economically achievable. PC-based motion as well as non-motion features provide flexibility through a Windows environment. Upgrading this type of controller system through software revisions will keep the machine tool competitive with minimal effort. Software and hardware modules are mass produced, permitting competitive procurement and incorporation. Open architecture CNC systems provide diagnostics, thus enhancing maintainability and machine tool up-time. A major concern of traditional CNC systems has been operator training time, which can be greatly reduced by making use of Windows environment features.

  9. Decentralized control of a scalable photovoltaic (PV)-battery hybrid power system

    Kim, Myungchin; Bae, Sungwoo

    2017-01-01

    Highlights: • This paper introduces the design and control of a PV-battery hybrid power system. • Reliable and scalable operation of hybrid power systems is achieved. • System and power control are performed without a centralized controller. • Reliability and scalability characteristics are studied in a quantitative manner. • The system control performance is verified using realistic solar irradiation data. - Abstract: This paper presents the design and control of a sustainable standalone photovoltaic (PV)-battery hybrid power system (HPS). The research aims to develop an approach that contributes to an increased level of reliability and scalability for an HPS. To achieve these objectives, a PV-battery HPS with a passively connected battery was studied. A quantitative hardware reliability analysis was performed to assess the effect of the energy storage configuration on the overall system reliability. Instead of requiring feedback control information on load power through a centralized supervisory controller, the power flow in the proposed HPS is managed by a decentralized control approach that takes advantage of the system architecture. Reliable system operation of an HPS is achieved through the proposed control approach without requiring a separate supervisory controller. Furthermore, performance degradation of the energy storage can be prevented by selecting the controller gains such that the charge rate does not exceed operational requirements. The performance of the proposed system architecture with the control strategy was verified by simulation results using realistic irradiance data and a battery model in which the temperature effect was considered. With the objective of supporting scalable operation, details on how the proposed design could be applied were also studied so that the HPS could satisfy potential system growth requirements. Such scalability was verified by simulating various cases that involve connection and disconnection of sources and loads.

  10. Monitoring WLCG with lambda-architecture: a new scalable data store and analytics platform for monitoring at petabyte scale

    Magnoni, L; Cordeiro, C; Georgiou, M; Andreeva, J; Suthakar, U; Khan, A; Smith, D R

    2015-01-01

    Monitoring the WLCG infrastructure requires the gathering and analysis of a high volume of heterogeneous data (e.g. data transfers, job monitoring, site tests) coming from different services and experiment-specific frameworks to provide a uniform and flexible interface for scientists and sites. The current architecture, where relational database systems are used to store, to process and to serve monitoring data, has limitations in coping with the foreseen increase in the volume (e.g. higher LHC luminosity) and the variety (e.g. new data-transfer protocols and new resource-types, such as cloud-computing) of WLCG monitoring events. This paper presents a new scalable data store and analytics platform designed by the Support for Distributed Computing (SDC) group, at the CERN IT department, which uses a variety of technologies each one targeting specific aspects of big-scale distributed data-processing (commonly referred to as the lambda-architecture approach). Results of data processing on Hadoop for WLCG data activities monitoring are presented, showing how the new architecture can easily analyze hundreds of millions of transfer logs in a few minutes. Moreover, a comparison of data partitioning, compression and file formats (e.g. CSV, Avro) is presented, with particular attention given to how the file structure impacts the overall MapReduce performance. In conclusion, the evolution of the current implementation, which focuses on data storage and batch processing, towards a complete lambda-architecture is discussed, with consideration of candidate technology for the serving layer (e.g. Elasticsearch) and a description of a proof-of-concept implementation, based on Apache Spark and Esper, for the real-time part, which compensates for batch-processing latency and automates the detection of problems and failures.
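
    The lambda idea itself is compact: queries are served from a slowly rebuilt batch view merged with a small real-time delta. A minimal sketch with hypothetical names, not the CERN implementation:

        import time
        from collections import Counter

        batch_view = Counter()   # rebuilt periodically by the batch layer (e.g., MapReduce)
        speed_view = Counter()   # incremental counts since the last batch rebuild
        last_rebuild = 0.0

        def ingest(event):
            """Speed layer: absorb one monitoring event immediately."""
            speed_view[event["site"]] += 1

        def rebuild_batch(all_events):
            """Batch layer: recompute the view from the master dataset, then reset the delta."""
            global batch_view, speed_view, last_rebuild
            batch_view = Counter(e["site"] for e in all_events)
            speed_view = Counter()
            last_rebuild = time.time()

        def query(site):
            """Serving layer: merge the batch and real-time views."""
            return batch_view[site] + speed_view[site]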

  12. Architecture Approach in System Development

    Ladislav Burita

    2017-01-01

    The purpose of this paper is to describe a practical application of the architecture approach in system development. The software application is a system that optimizes the transport service. The first part of the paper defines enterprise architecture, its parts and frameworks. Next, the NATO Architecture Framework (NAF), a tool for command and control systems development in the military environment, is explained. The NAF is used for the architecture design of the system for optimization of the transport service.

  14. A scalable single-chip multi-processor architecture with on-chip RTOS kernel

    Theelen, B.D.; Verschueren, A.C.; Reyes Suarez, V.V.; Stevens, M.P.J.; Nunez, A.

    2003-01-01

    Now that system-on-chip technology is emerging, single-chip multi-processors are becoming feasible. A key problem of designing such systems is the complexity of their on-chip interconnects and memory architecture. It is furthermore unclear at what level software should be integrated. An example of a

  15. A robust and scalable neuromorphic communication system by combining synaptic time multiplexing and MIMO-OFDM.

    Srinivasa, Narayan; Zhang, Deying; Grigorian, Beayna

    2014-03-01

    This paper describes a novel architecture for enabling robust and efficient neuromorphic communication. The architecture combines two concepts: 1) synaptic time multiplexing (STM), which trades space for speed of processing to create an intragroup communication approach that is firing-rate independent and offers more flexibility in connectivity than cross-bar architectures, and 2) wired multiple input multiple output (MIMO) communication with orthogonal frequency division multiplexing (OFDM) techniques to enable robust and efficient intergroup communication for neuromorphic systems. The MIMO-OFDM concept for the proposed architecture was analyzed by simulating a large-scale spiking neural network architecture. The analysis shows that the neuromorphic system with MIMO-OFDM exhibits robust and efficient communication while operating in real time with a high bit rate. By combining STM with MIMO-OFDM techniques, the resulting system offers flexible and scalable connectivity as well as a power- and area-efficient solution for the implementation of very large-scale spiking neural architectures in hardware.

  16. Architectures Toward Reusable Science Data Systems

    Moses, John

    2015-01-01

    Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research and NOAA's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, and data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience we expect to find architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.

  17. Performance and scalability analysis of teraflop-scale parallel architectures using multidimensional wavefront applications

    Hoisie, A.; Lubeck, O.; Wasserman, H.

    1998-01-01

    The authors develop a model for the parallel performance of algorithms that consist of concurrent, two-dimensional wavefronts implemented in a message passing environment. The model, based on a LogGP machine parameterization, combines the separate contributions of computation and communication wavefronts. They validate the model on three important supercomputer systems, on up to 500 processors. They use data from a deterministic particle transport application taken from the ASCI workload, although the model is general to any wavefront algorithm implemented on a 2-D processor domain. They also use the validated model to make estimates of performance and scalability of wavefront algorithms on 100-TFLOPS computer systems expected to be in existence within the next decade as part of the ASCI program and elsewhere. In this context, they analyze two problem sizes. The model shows that on the largest such problem (1 billion cells), inter-processor communication performance is not the bottleneck. Single-node efficiency is the dominant factor
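
    The flavor of such a model can be conveyed with a toy version; this simplified formula is illustrative only, while the paper's LogGP-based model separates the computation and communication wavefronts in more detail. On a Px x Py processor grid, a sweep that pipelines D diagonal steps completes in roughly D + Px + Py - 2 pipeline stages:

        def wavefront_time(px, py, n_steps, t_comp, t_comm):
            """Toy 2-D wavefront model: pipeline fill (px + py - 2 stages)
            plus n_steps steady-state stages, each costing compute + communication."""
            stages = n_steps + px + py - 2
            return stages * (t_comp + t_comm)

        # example: 500 processors arranged as a 25 x 20 grid (hypothetical timings)
        print(wavefront_time(px=25, py=20, n_steps=1000, t_comp=5e-4, t_comm=1e-4))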

  18. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng.; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  19. Superlinearly scalable noise robustness of redundant coupled dynamical systems.

    Kohar, Vivek; Kia, Behnam; Lindner, John F; Ditto, William L

    2016-03-01

    We illustrate through theory and numerical simulations that redundant coupled dynamical systems can be extremely robust against local noise in comparison to uncoupled dynamical systems evolving in the same noisy environment. Previous studies have shown that the noise robustness of redundant coupled dynamical systems is linearly scalable and deviations due to noise can be minimized by increasing the number of coupled units. Here, we demonstrate that the noise robustness can actually be scaled superlinearly if some conditions are met and very high noise robustness can be realized with very few coupled units. We discuss these conditions and show that this superlinear scalability depends on the nonlinearity of the individual dynamical units. The phenomenon is demonstrated in discrete as well as continuous dynamical systems. This superlinear scalability not only provides us an opportunity to exploit the nonlinearity of physical systems without being bogged down by noise but may also help us in understanding the functional role of coupled redundancy found in many biological systems. Moreover, engineers can exploit superlinear noise suppression by starting a coupled system near (not necessarily at) the appropriate initial condition.
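
    The baseline (linear) effect is easy to reproduce numerically: averaging-type coupling among N redundant units suppresses the variance of local noise roughly in proportion to N. The sketch below shows only that baseline with linear units; the superlinear regime reported in the paper additionally requires suitably nonlinear dynamics.

        import numpy as np

        def deviation(n_units, steps=2000, a=0.9, noise=0.01, seed=1):
            """RMS deviation from the noise-free trajectory for n_units
            units coupled by averaging: x_i <- a * mean(x) + local noise."""
            rng = np.random.default_rng(seed)
            x = np.zeros(n_units)
            devs = []
            for _ in range(steps):
                x = a * x.mean() + noise * rng.standard_normal(n_units)
                devs.append(x.mean() ** 2)   # noise-free reference stays at 0
            return np.sqrt(np.mean(devs))

        for n in (1, 4, 16, 64):
            print(n, deviation(n))   # deviation shrinks roughly like 1/sqrt(n)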

  20. Naval open systems architecture

    Guertin, Nick; Womble, Brian; Haskell, Virginia

    2013-05-01

    For the past 8 years, the Navy has been working on transforming the acquisition practices of the Navy and Marine Corps toward Open Systems Architectures to open up our business, gain competitive advantage, improve warfighter performance, speed innovation to the fleet and deliver superior capability to the warfighter within a shrinking budget. Why should Industry care? They should care because we in Government want the best Industry has to offer. Industry is in the business of pushing technology to greater and greater capabilities through innovation. Examples of innovations are on full display at this conference, such as exploring the impact of difficult environmental conditions on technical performance. Industry is creating the tools which will continue to give the Navy and Marine Corps important tactical advantages over our adversaries.

  1. A customizable, scalable scheduling and reporting system.

    Wood, Jody L; Whitman, Beverly J; Mackley, Lisa A; Armstrong, Robert; Shotto, Robert T

    2014-06-01

    Scheduling is essential for running a facility smoothly and for summarizing activities in use reports. The Penn State Hershey Clinical Simulation Center has developed a scheduling interface that uses off-the-shelf components, with customizations that adapt to each institution's data collection and reporting needs. The system is designed using programs within the Microsoft Office 2010 suite. Outlook provides the scheduling component, while the reporting is performed using Access or Excel. An account with a calendar is created for the main schedule, with separate resource accounts created for each room within the center. The Outlook appointment form's 2 default tabs are used, in addition to a customized third tab. The data are then copied from the calendar into either a database table or a spreadsheet, where the reports are generated.Incorporating this system into an institution-wide structure allows integration of personnel lists and potentially enables all users to check the schedule from their desktop. Outlook also has a Web-based application for viewing the basic schedule from outside the institution, although customized data cannot be accessed. The scheduling and reporting functions have been used for a year at the Penn State Hershey Clinical Simulation Center. The schedule has increased workflow efficiency, improved the quality of recorded information, and provided more accurate reporting. The Penn State Hershey Clinical Simulation Center's scheduling and reporting system can be adapted easily to most simulation centers and can expand and change to meet future growth with little or no expense to the center.
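
    On Windows, the copy-from-calendar step can be automated. A minimal sketch using the pywin32 COM bindings (assuming Outlook is installed; folder constant 9 denotes the calendar); this is illustrative, not the center's actual tooling:

        import csv
        import win32com.client  # pywin32; requires Windows with Outlook installed

        ns = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
        items = ns.GetDefaultFolder(9).Items   # 9 = olFolderCalendar
        items.Sort("[Start]")
        items.IncludeRecurrences = True
        # restrict to a date range first; unrestricted recurrences can expand without bound
        items = items.Restrict("[Start] >= '01/01/2014' AND [Start] <= '12/31/2014'")

        with open("schedule_report.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["subject", "start", "end", "location"])
            for appt in items:
                writer.writerow([appt.Subject, appt.Start, appt.End, appt.Location])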

  2. Towards Scalability for Federated Identity Systems for Cloud-Based Environments

    André Albino Pereira

    2015-12-01

    As multi-tenant authorization and federated identity management systems for cloud computing mature, provisioning services under this paradigm allows maximum efficiency for businesses that require access control. However, regarding scalability support, mainly horizontal, some characteristics of approaches based on central authentication protocols are problematic. The objective of this work is to address these issues by providing an adapted sticky-session mechanism for a Shibboleth architecture using JASIG CAS. Compared with the recommended distributed-memory approach, this alternative showed improved efficiency and less overall infrastructure complexity, demanding 58% fewer computational resources and improving throughput (requests per second) by 11%.
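
    The essence of a sticky-session mechanism is deterministic routing of a session to the node that holds its state. A minimal hash-based sketch, hypothetical and unrelated to the Shibboleth/CAS internals:

        import hashlib

        SERVERS = ["idp-1", "idp-2", "idp-3"]

        def route(session_id):
            """Pin every request of a session to one backend, so session
            state never has to be replicated across the cluster."""
            digest = hashlib.sha256(session_id.encode()).digest()
            return SERVERS[int.from_bytes(digest[:4], "big") % len(SERVERS)]

        assert route("abc123") == route("abc123")   # same session, same node
        print(route("abc123"), route("xyz789"))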

  3. Scalable Molecular Dynamics for Large Biomolecular Systems

    Robert K. Brunner

    2000-01-01

    We present an optimized parallelization scheme for molecular dynamics simulations of large biomolecular systems, implemented in the production-quality molecular dynamics program NAMD. With an object-based hybrid force and spatial decomposition scheme, and an aggressive measurement-based predictive load balancing framework, we have attained speeds and speedups that are much higher than any reported in the literature so far. The paper first summarizes the broad methodology we are pursuing, and the basic parallelization scheme we used. It then describes the optimizations that were instrumental in increasing performance, and presents performance results on benchmark simulations.
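
    Spatial decomposition, one half of the hybrid scheme, can be sketched in a few lines: atoms are binned into cells no smaller than the force cutoff, so candidate interactions need only be enumerated between neighboring cells. An illustration of the general technique, not NAMD's implementation:

        import numpy as np
        from collections import defaultdict
        from itertools import product

        def build_cells(positions, cutoff):
            """Bin atoms into cubic cells of side >= cutoff."""
            cells = defaultdict(list)
            for i, p in enumerate(positions):
                cells[tuple((p // cutoff).astype(int))].append(i)
            return cells

        def neighbor_pairs(cells):
            """Yield candidate interacting atom pairs from adjacent cells only."""
            for cell, atoms in cells.items():
                for off in product((-1, 0, 1), repeat=3):
                    other = tuple(c + o for c, o in zip(cell, off))
                    if other < cell or other not in cells:
                        continue  # visit each unordered cell pair once
                    for i in atoms:
                        for j in cells[other]:
                            if other != cell or i < j:
                                yield i, j

        pos = np.random.default_rng(0).random((100, 3)) * 10.0
        print(sum(1 for _ in neighbor_pairs(build_cells(pos, cutoff=2.5))))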

  4. NPOESS System Architecture

    Hinnant, F.

    2009-12-01

    The National Oceanic and Atmospheric Administration (NOAA), Department of Defense (DoD), and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation weather and environmental satellite system; the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the Defense Meteorological Satellite Program (DMSP) managed by the DoD and will provide continuity for the NASA Earth Observation System with the launch of the NPOESS Preparatory Project. This poster will provide a top level status update of the program, as well as an overview of the NPOESS system architecture, which includes four segments. The space segment includes satellites in two orbits that carry a suite of sensors that collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The NPOESS system design allows centralized mission management and delivers high quality environmental products to military, civil and scientific users through a Command, Control, and Communication Segment (C3S). The data processing for NPOESS is accomplished through an Interface Data Processing Segment (IDPS)/Field Terminal Segment (FTS) that processes NPOESS satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government as well as remote terminal users. The Launch Support Segment completes the four segments that make up the NPOESS system that will enhance the connectivity between research and operations and provide critical operational and scientific environmental measurements to military, civil, and scientific users until 2026.

  5. Component-Based Modelling for Scalable Smart City Systems Interoperability: A Case Study on Integrating Energy Demand Response Systems.

    Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan

    2016-10-28

    Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. These systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling the architectures of such complex dynamic systems has always been essential for the development and application of techniques/tools to support the design and deployment of integrations of new components, as well as for the analysis, verification, simulation and testing needed to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object systems (rCOS) modelling method, is implemented using Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and cooperation amongst the heterogeneous components of the system, assuring by design scalability, interoperability, and correctness of component cooperation.

  6. Level-2 Milestone 5588: Deliver Strategic Plan and Initial Scalability Assessment by Advanced Architecture and Portability Specialists Team

    Draeger, Erik W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-09-30

    This report documents the fact that the work in creating a strategic plan and beginning customer engagements has been completed. The description of the milestone is: The newly formed advanced architecture and portability specialists (AAPS) team will develop a strategic plan to meet the goals of 1) sharing knowledge and experience with code teams to ensure that ASC codes run well on new architectures, and 2) supplying skilled computational scientists to put the strategy into practice. The plan will be delivered to ASC management in the first quarter. By the fourth quarter, the team will identify their first customers within PEM and IC, perform an initial assessment of scalability and performance bottlenecks for next-generation architectures, and embed AAPS team members with customer code teams to assist with initial portability development within standalone kernels or proxy applications.

  7. Analysis of Architecture Pattern Usage in Legacy System Architecture Documentation

    Harrison, Neil B.; Avgeriou, Paris

    2008-01-01

    Architecture patterns are an important tool in architectural design. However, while many architecture patterns have been identified, there is little in-depth understanding of their actual use in software architectures. For instance, there is no overview of how many patterns are used per system or

  8. A scalable-low cost architecture for high gain beamforming antennas

    Bakr, Omar

    2010-10-01

    Many state-of-the-art wireless systems, such as long distance mesh networks and high bandwidth networks using mm-wave frequencies, require high gain antennas to overcome adverse channel conditions. These networks could be greatly aided by adaptive beamforming antenna arrays, which can significantly simplify the installation and maintenance costs (e.g., by enabling automatic beam alignment). However, building large, low cost beamforming arrays is very complicated. In this paper, we examine the main challenges presented by large arrays, starting from electromagnetic and antenna design and proceeding to the signal processing and algorithms domain. We propose 3-dimensional antenna structures and hybrid RF/digital radio architectures that can significantly reduce the complexity and improve the power efficiency of adaptive array systems. We also present signal processing techniques based on adaptive filtering methods that enhance the robustness of these architectures. Finally, we present computationally efficient vector quantization techniques that significantly improve the interference cancellation capabilities of analog beamforming architectures. © 2010 IEEE.

  10. pcircle - A Suite of Scalable Parallel File System Tools

    2015-10-01

    Most software related to file systems is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. The "pcircle" software builds on top of ubiquitous MPI in cluster computing environments and the "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, as well as integrity checking.

  11. System architectures for telerobotic research

    Harrison, F. Wallace

    1989-01-01

    Several activities related to the definition and creation of telerobotic systems are described. The effort and investment required to create architectures for these complex systems can be enormous; however, the magnitude of the process can be reduced if structured design techniques are applied. A number of informal methodologies supporting certain aspects of the design process are available. More recently, prototypes of integrated tools supporting all phases of system design, from requirements analysis to code generation and hardware layout, have begun to appear. Activities related to the system architecture of telerobots are described, including current activities designed to provide a methodology for the comparison and quantitative analysis of alternative system architectures.

  12. A Scalable Version of the Navy Operational Global Atmospheric Prediction System Spectral Forecast Model

    Thomas E. Rosmond

    2000-01-01

    The Navy Operational Global Atmospheric Prediction System (NOGAPS) includes a state-of-the-art spectral forecast model similar to models run at several major operational numerical weather prediction (NWP) centers around the world. The model, developed by the Naval Research Laboratory (NRL) in Monterey, California, has run operationally at the Fleet Numerical Meteorology and Oceanography Center (FNMOC) since 1982, and most recently has been run on a Cray C90 in a multi-tasked configuration. Typically the multi-tasked code runs on 10 to 15 processors with an overall parallel efficiency of about 90%. The operational resolution is T159L30, but other operational and research applications run at significantly lower resolutions. A scalable NOGAPS forecast model has been developed by NRL in anticipation of a FNMOC C90 replacement in about 2001, as well as for current NOGAPS research requirements to run on DOD High-Performance Computing (HPC) scalable systems. The model is designed to run with message passing (MPI). Model design criteria include bit reproducibility for different processor numbers and reasonably efficient performance on fully shared memory, distributed memory, and distributed shared memory systems for a wide range of model resolutions. Results for a wide range of processor numbers, model resolutions, and different vendor architectures are presented. Single-node performance has been disappointing on RISC-based systems, at least compared to vector processor performance. This is a common complaint, and will require careful re-examination of traditional NWP model software design and data organization to fully exploit future scalable architectures.

  13. A distributed clinical decision support system architecture

    Shaker H. El-Sappagh

    2014-01-01

    This paper proposes an open and distributed clinical decision support system architecture. This technical architecture takes advantage of Electronic Health Records (EHRs), data mining techniques, clinical databases, domain expert knowledge bases, and available technologies and standards to provide decision-making support for healthcare professionals. The architecture will work extremely well in distributed EHR environments in which each hospital has its own local EHR, and it satisfies the compatibility, interoperability and scalability objectives of an EHR. The system will also have a set of distributed knowledge bases. Each knowledge base will be specialized in a specific domain (i.e., heart disease), and the model achieves cooperation, integration and interoperability between these knowledge bases. Moreover, the model ensures that all knowledge bases are up-to-date by connecting data mining engines to each local knowledge base. These data mining engines continuously mine EHR databases to extract the most recent knowledge, to standardize it and to add it to the knowledge bases. This framework is expected to improve the quality of healthcare, reducing medical errors and guaranteeing the safety of patients by helping clinicians to make correct, accurate, knowledgeable and timely decisions.

  14. Highly Scalable Trip Grouping for Large Scale Collective Transportation Systems

    Gidofalvi, Gyozo; Pedersen, Torben Bach; Risch, Tore

    2008-01-01

    Transportation-related problems, like road congestion, parking, and pollution, are increasing in most cities. In order to reduce traffic, recent work has proposed methods for vehicle sharing, for example for sharing cabs by grouping "close-by" cab requests and thus minimizing transportation cost and utilizing cab space. However, the methods published so far do not scale to large data volumes, which is necessary to facilitate large-scale collective transportation systems, e.g., ride-sharing systems for large cities. This paper presents highly scalable trip grouping algorithms, which generalize previous...
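
    A toy version of the grouping idea, purely illustrative and far simpler than the paper's scalable algorithms: greedily group requests whose pickup points and times are close.

        def group_trips(requests, max_dist=1.0, max_wait=10):
            """Greedy trip grouping: each request joins the first compatible
            group (pickup within max_dist units and max_wait minutes of the
            group's first request), otherwise it starts a new group."""
            groups = []
            for t, x, y in sorted(requests):            # requests as (time, x, y)
                for g in groups:
                    t0, x0, y0 = g[0]
                    if t - t0 <= max_wait and (x - x0) ** 2 + (y - y0) ** 2 <= max_dist ** 2:
                        g.append((t, x, y))
                        break
                else:
                    groups.append([(t, x, y)])
            return groups

        reqs = [(0, 0.0, 0.0), (3, 0.2, 0.1), (4, 5.0, 5.0), (20, 0.0, 0.0)]
        print([len(g) for g in group_trips(reqs)])   # -> [2, 1, 1]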

  15. The Node Monitoring Component of a Scalable Systems Software Environment

    Miller, Samuel James [Iowa State Univ., Ames, IA (United States)

    2006-01-01

    This research describes Fountain, a suite of programs used to monitor the resources of a cluster. A cluster is a collection of individual computers that are connected via a high speed communication network. Clusters are traditionally used by users who desire more resources, such as processing power and memory, than any single computer can provide. A common drawback to effectively utilizing such a large-scale system is the management infrastructure, which often does not scale well as the system grows. Large-scale parallel systems provide new research challenges in the area of systems software, the programs or tools that manage the system from boot-up to running a parallel job. The approach presented in this thesis utilizes a collection of separate components that communicate with each other to achieve a common goal. While systems software comprises a broad array of components, this thesis focuses on the design choices for a node monitoring component. We describe Fountain, an implementation of the Scalable Systems Software (SSS) node monitor specification. It is targeted at aggregate node monitoring for clusters, focusing on both scalability and fault tolerance as its design goals. It leverages widely used technologies such as XML and HTTP to present an interface to other components in the SSS environment.

  16. A scalable architecture for extracting, aligning, linking, and visualizing multi-Int data

    Knoblock, Craig A.; Szekely, Pedro

    2015-05-01

    An analyst today has a tremendous amount of data available, but each of the various data sources typically exists in their own silos, so an analyst has limited ability to see an integrated view of the data and has little or no access to contextual information that could help in understanding the data. We have developed the Domain-Insight Graph (DIG) system, an innovative architecture for extracting, aligning, linking, and visualizing massive amounts of domain-specific content from unstructured sources. Under the DARPA Memex program we have already successfully applied this architecture to multiple application domains, including the enormous international problem of human trafficking, where we extracted, aligned and linked data from 50 million online Web pages. DIG builds on our Karma data integration toolkit, which makes it easy to rapidly integrate structured data from a variety of sources, including databases, spreadsheets, XML, JSON, and Web services. The ability to integrate Web services allows Karma to pull in live data from the various social media sites, such as Twitter, Instagram, and OpenStreetMaps. DIG then indexes the integrated data and provides an easy to use interface for query, visualization, and analysis.

  17. Model-Based Evaluation Of System Scalability: Bandwidth Analysis For Smartphone-Based Biosensing Applications

    Patou, François; Madsen, Jan; Dimaki, Maria

    2016-01-01

    Scalability is a design principle often valued for the engineering of complex systems. Scalability is the ability of a system to change the current value of one of its specification parameters. Although targeted frameworks are available for the evaluation of scalability for specific digital systems... re-engineering of 5 independent system modules, from the replacement of a wireless Bluetooth interface to the revision of the ADC sample-and-hold operation, could help increase system bandwidth...

  18. Visibiome: an efficient microbiome search engine based on a scalable, distributed architecture.

    Azman, Syafiq Kamarul; Anwar, Muhammad Zohaib; Henschel, Andreas

    2017-07-24

    Given the current influx of 16S rRNA profiles of microbiota samples, it is conceivable that large amounts of them eventually become available for search, comparison and contextualization with respect to novel samples. This process facilitates the identification of similar compositional features in microbiota elsewhere and therefore can help to understand the driving factors for microbial community assembly. We present Visibiome, a microbiome search engine that can perform exhaustive, phylogeny-based similarity search and contextualization of user-provided samples against a comprehensive dataset of 16S rRNA profiles from diverse environments, while tackling several computational challenges. In order to scale to high demands, we developed a distributed system that combines web framework technology, task queueing and scheduling, cloud computing and a dedicated database server. To further ensure speed and efficiency, we have deployed Nearest Neighbor search algorithms, capable of sublinear searches in high-dimensional metric spaces, in combination with an optimized Earth Mover Distance based implementation of weighted UniFrac. The search also incorporates pairwise (adaptive) rarefaction and, optionally, 16S rRNA copy number correction. The result for a query microbiome sample is its contextualization against a comprehensive database of microbiome samples from a diverse range of environments, visualized through a rich set of interactive figures and diagrams, including barchart-based compositional comparisons and a ranking of the closest matches in the database. Visibiome is a convenient, scalable and efficient framework to search microbiomes against a comprehensive database of environmental samples. The search engine leverages a popular but computationally expensive, phylogeny-based distance metric, while providing numerous advantages over the current state of the art tool.
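
    Of the ingredients listed, rarefaction is the simplest to show in code: each sample's taxon counts are randomly subsampled to a common depth before samples are compared. A generic sketch, not Visibiome's implementation:

        import numpy as np

        def rarefy(counts, depth, seed=0):
            """Subsample a vector of taxon counts to a fixed total depth
            without replacement (multivariate hypergeometric sampling)."""
            rng = np.random.default_rng(seed)
            pool = np.repeat(np.arange(len(counts)), counts)   # one entry per read
            picked = rng.choice(pool, size=depth, replace=False)
            return np.bincount(picked, minlength=len(counts))

        sample = np.array([500, 300, 150, 50])      # reads per taxon
        print(rarefy(sample, depth=100))            # rarefied counts summing to 100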

  19. Implementation of the Timepix ASIC in the Scalable Readout System

    Lupberger, M., E-mail: lupberger@physik.uni-bonn.de; Desch, K.; Kaminski, J.

    2016-09-11

    We report on the development of electronics hardware, FPGA firmware and software to provide a flexible multi-chip readout of the Timepix ASIC within the framework of the Scalable Readout System (SRS). The system features FPGA-based zero-suppression and the possibility to read out up to 4×8 chips with a single Front End Concentrator (FEC). By operating several FECs in parallel, in principle an arbitrary number of chips can be read out, exploiting the scaling features of SRS. Specifically, we tested the system with a setup consisting of 160 Timepix ASICs, operated as GridPix devices in a large TPC field cage in a 1 T magnetic field at a DESY test beam facility providing an electron beam of up to 6 GeV. We discuss the design choices, the dedicated hardware components, the FPGA firmware as well as the performance of the system in the test beam.

  20. Neuromorphic adaptive plastic scalable electronics: analog learning systems.

    Srinivasa, Narayan; Cruz-Albrecht, Jose

    2012-01-01

    Decades of research to build programmable intelligent machines have demonstrated limited utility in complex, real-world environments. Comparing their performance with biological systems, these machines are less efficient by a factor of 1 million to 1 billion in complex, real-world environments. The Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program is a multifaceted Defense Advanced Research Projects Agency (DARPA) project that seeks to break the programmable machine paradigm and define a new path for creating useful, intelligent machines. Since real-world systems exhibit infinite combinatorial complexity, electronic neuromorphic machine technology would be preferable in a host of applications, but useful and practical implementations still do not exist. HRL Laboratories LLC has embarked on addressing these challenges, and, in this article, we provide an overview of our project and progress made thus far.

  1. A Geo-Distributed System Architecture for Different Domains

    Moßgraber, Jürgen; Middleton, Stuart; Tao, Ran

    2013-04-01

    The presentation will describe work on the system-of-systems (SoS) architecture that is being developed in the EU FP7 project TRIDEC on "Collaborative, Complex and Critical Decision-Support in Evolving Crises". In this project we deal with two use cases: Natural Crisis Management (e.g. Tsunami Early Warning) and Industrial Subsurface Development (e.g. drilling for oil). These use cases seem quite different at first sight but share many similarities, such as managing and looking up available sensors, extracting data from them and annotating it semantically, intelligently managing the data (a big data problem), running mathematical analysis algorithms on the data, and finally providing decision support on this basis. The main challenge was to create a generic architecture that fits both use cases. The requirements for the architecture are manifold, and the whole spectrum of a modern, geo-distributed and collaborative system comes into play. Obviously, one cannot expect to tackle these challenges adequately with a monolithic system or with a single technology. Therefore, a system architecture providing the blueprints to implement the system-of-systems approach has to combine multiple technologies and architectural styles. The most important architectural challenges we needed to address are: 1. building a scalable communication layer for a system-of-systems; 2. building a resilient communication layer for a system-of-systems; 3. efficiently publishing large volumes of semantically rich sensor data; 4. scalable and high-performance storage of large distributed datasets; 5. handling federated multi-domain heterogeneous data; 6. discovery of resources in a geo-distributed SoS; 7. coordination of work between geo-distributed systems. The design decisions made for each of them will be presented. These developed concepts are also applicable to the requirements of the Future Internet (FI) and Internet of Things (IoT) which will provide services like smart grids, smart metering, logistics and

  2. Scalable and Power Efficient Data Analytics for Hybrid Exascale Systems

    Choudhary, Alok [Northwestern Univ., Evanston, IL (United States); Samatova, Nagiza [North Carolina State Univ., Raleigh, NC (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Liao, Wei-keng [Northwestern Univ., Evanston, IL (United States)

    2015-03-19

    This project developed a generic and optimized set of core data analytics functions. These functions organically consolidate a broad constellation of high-performance analytical pipelines. As the architectures of emerging HPC systems become inherently heterogeneous, there is a need to design algorithms for data analysis kernels accelerated on hybrid multi-node, multi-core HPC architectures comprised of a mix of CPUs, GPUs, and SSDs. Furthermore, the power-aware trend drives the advances in our performance-energy tradeoff analysis framework, which enables our data analysis kernel algorithms and software to be parameterized so that users can choose the right power-performance optimizations.

  3. A Scalable Multicore Architecture With Heterogeneous Memory Structures for Dynamic Neuromorphic Asynchronous Processors (DYNAPs).

    Moradi, Saber; Qiao, Ning; Stefanini, Fabio; Indiveri, Giacomo

    2018-02-01

    Neuromorphic computing systems comprise networks of neurons that use asynchronous events for both computation and communication. This type of representation offers several advantages in terms of bandwidth and power consumption in neuromorphic electronic systems. However, managing the traffic of asynchronous events in large-scale systems is a daunting task, both in terms of circuit complexity and memory requirements. Here, we present a novel routing methodology that employs both hierarchical and mesh routing strategies and combines heterogeneous memory structures for minimizing both memory requirements and latency, while maximizing programming flexibility to support a wide range of event-based neural network architectures, through parameter configuration. We validated the proposed scheme in a prototype multicore neuromorphic processor chip that employs hybrid analog/digital circuits for emulating synapse and neuron dynamics together with asynchronous digital circuits for managing the address-event traffic. We present a theoretical analysis of the proposed connectivity scheme, describe the methods and circuits used to implement such a scheme, and characterize the prototype chip. Finally, we demonstrate the use of the neuromorphic processor with a convolutional neural network for the real-time classification of visual symbols being flashed to a dynamic vision sensor (DVS) at high speed.
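
    The sketch below illustrates, under stated assumptions, how a combined hierarchical/mesh routing decision can be structured: an event tag is first matched against a core-local memory (the hierarchical step), and otherwise forwarded hop by hop across a 2D mesh of tiles. The tag lookup, the X-then-Y dimension order, and all names are illustrative choices, not the DYNAP chip's actual circuit.

      def route_event(tag, local_targets, tile_xy, dest_xy):
          # Hierarchical step: if the tag matches a locally stored target,
          # deliver the event within this tile without touching the mesh.
          if tag in local_targets:
              return ("local", local_targets[tag])
          # Mesh step: dimension-order routing, move along X first, then Y.
          x, y = tile_xy
          dx, dy = dest_xy
          if x != dx:
              return ("east" if dx > x else "west", None)
          if y != dy:
              return ("north" if dy > y else "south", None)
          return ("deliver", None)

      # An event tagged 7 at tile (0, 0) heading for tile (2, 1):
      print(route_event(7, {}, (0, 0), (2, 1)))  # ('east', None)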

  4. A Scalable Data Integration and Analysis Architecture for Sensor Data of Pediatric Asthma.

    Stripelis, Dimitris; Ambite, José Luis; Chiang, Yao-Yi; Eckel, Sandrah P; Habre, Rima

    2017-04-01

    According to the Centers for Disease Control, in the United States there are 6.8 million children living with asthma. Despite the importance of the disease, the available prognostic tools are not sufficient for biomedical researchers to thoroughly investigate the potential risks of the disease at scale. To overcome these challenges we present a big data integration and analysis infrastructure developed by our Data and Software Coordination and Integration Center (DSCIC) of the NIBIB-funded Pediatric Research using Integrated Sensor Monitoring Systems (PRISMS) program. Our goal is to help biomedical researchers to efficiently predict and prevent asthma attacks. The PRISMS-DSCIC is responsible for collecting, integrating, storing, and analyzing real-time environmental, physiological and behavioral data obtained from heterogeneous sensor and traditional data sources. Our architecture is based on the Apache Kafka, Spark and Hadoop frameworks and PostgreSQL DBMS. A main contribution of this work is extending the Spark framework with a mediation layer, based on logical schema mappings and query rewriting, to facilitate data analysis over a consistent harmonized schema. The system provides both batch and stream analytic capabilities over the massive data generated by wearable and fixed sensors.
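
    As a sketch of the mediation idea, the snippet below renames source-specific columns onto a single harmonized schema before analysis, so that downstream queries are written once against the harmonized view. It assumes a local PySpark session; the column names and the mapping are hypothetical, not the PRISMS-DSCIC schema.

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.appName("mediation-sketch").getOrCreate()

      # Two sources report the same quantities under different column names.
      env = spark.createDataFrame([(35.0, "2017-04-01")], ["pm25_ugm3", "ts"])

      # Logical schema mapping: source column -> harmonized column.
      SCHEMA_MAP = {"pm25_ugm3": "pm2_5", "hr_bpm": "heart_rate"}

      def harmonize(df, mapping=SCHEMA_MAP):
          # Rewrite the dataframe onto the harmonized schema; analysis code
          # downstream only ever sees the harmonized names.
          for src, dst in mapping.items():
              if src in df.columns:
                  df = df.withColumnRenamed(src, dst)
          return df

      harmonize(env).show()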

  5. Developing Distributed System With Service Resource Oriented Architecture

    Hermawan Hermawan

    2012-06-01

    Service Oriented Architecture (SOA) is a design paradigm in software engineering with which a distributed system is built for an enterprise. This paradigm aims at providing the system as a service through a protocol in web service technology, namely the Simple Object Access Protocol (SOAP). However, SOA only covers the service-level agreements of web services. For this reason, this research combines SOA with Resource Oriented Architecture in order to expand the scalability of services. This combination creates Service Resource Oriented Architecture (SROA), with which a distributed system is developed that integrates services within project management software. Following this design, the software is developed according to a framework of Agile Model Driven Development, which can reduce the complexity of the whole software development process.

  6. System architecture with XML

    Daum, Berthold

    2002-01-01

    XML is bringing together some fairly disparate groups into a new cultural clash: document developers trying to understand what a transaction is, database analysts getting upset because the relational model doesn't fit anymore, and web designers having to deal with schemata and rule-based transformations. The key to rising above the confusion is to understand the different semantic structures that lie beneath the standards of XML, and how to model the semantics to achieve the goals of the organization. A pure architecture of XML doesn't exist yet, and it may never exist as the underlying technologies are so diverse. Still, the key to understanding how to build the new web infrastructure for electronic business lies in understanding the landscape of these new standards. If your background is in document processing, this book will show how you can use conceptual modeling to model business scenarios consisting of business objects, relationships, processes, and transactions in a document-centric way. Database des...

  7. Tree-based server-middleman-client architecture: improving scalability and reliability for voting-based network games in ad hoc wireless networks

    Guo, Y.; Fujinoki, H.

    2006-10-01

    The concept of a new tree-based architecture for networked multi-player games was proposed by Matuszek to improve scalability in network traffic and, at the same time, to improve reliability. The architecture (we refer to it as the "Tree-Based Server-Middleman-Client", or TB-SMC, architecture) addresses the two major problems in ad-hoc wireless networks, frequent link failures and significant battery power consumption at wireless transceivers, by using two new techniques: recursive aggregation of client messages and subscription-based propagation of game state. However, the performance of the TB-SMC architecture has never been quantitatively studied. In this paper, the TB-SMC architecture is compared with the client-server architecture using simulation experiments. We developed an event-driven simulator to evaluate the performance of the TB-SMC architecture. In the network traffic scalability experiments, the TB-SMC architecture resulted in less than 1/14 of the network traffic load for 200 end users. In the reliability experiments, the TB-SMC architecture improved the number of successfully delivered players' votes by 31.6, 19.0, and 12.4% over the client-server architecture at high (failure probability of 90%), moderate (50%) and low (10%) failure probabilities.
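
    A minimal sketch of the recursive message aggregation idea follows: each middleman merges its children's vote tallies so that every link toward the server carries a single combined message instead of one message per client. The dictionary-based tree is an illustrative stand-in for the paper's simulator, not its actual code.

      def aggregate_votes(node):
          # Merge children's tallies recursively; leaf clients carry raw votes.
          tally = dict(node.get("votes", {}))
          for child in node.get("children", []):
              for option, count in aggregate_votes(child).items():
                  tally[option] = tally.get(option, 0) + count
          return tally

      tree = {"children": [
          {"votes": {"left": 1}},
          {"children": [{"votes": {"right": 1}}, {"votes": {"left": 1}}]},
      ]}
      print(aggregate_votes(tree))  # {'left': 2, 'right': 1}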

  8. Detailed Modeling and Evaluation of a Scalable Multilevel Checkpointing System

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Moody, Adam [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bronevetsky, Greg [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); de Supinski, Bronis R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-09-01

    High-performance computing (HPC) systems are growing more powerful by utilizing more components. As the system mean time before failure correspondingly drops, applications must checkpoint frequently to make progress. But, at scale, the cost of checkpointing becomes prohibitive. A solution to this problem is multilevel checkpointing, which employs multiple types of checkpoints in a single run: lightweight checkpoints can handle the most common failure modes, while more expensive checkpoints can handle severe failures. We designed a multilevel checkpointing library, the Scalable Checkpoint/Restart (SCR) library, that writes lightweight checkpoints to node-local storage in addition to the parallel file system. We present probabilistic Markov models of SCR's performance. We show that on future large-scale systems, SCR can lead to a gain in machine efficiency of up to 35 percent, and reduce the load on the parallel file system by a factor of two. In addition, we predict that checkpoint scavenging, or only writing checkpoints to the parallel file system on application termination, can reduce the load on the parallel file system by 20× on today's systems and still maintain high application efficiency.
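
    The snippet below sketches the core multilevel policy under simple assumptions: frequent, cheap checkpoints go to node-local storage, and only every k-th checkpoint is also written to the parallel file system. The paths and intervals are hypothetical placeholders, and this is a schematic of the idea rather than the SCR library's API.

      import json
      import pathlib

      NODE_LOCAL = pathlib.Path("/tmp/ckpt")     # stand-in for node-local storage
      PARALLEL_FS = pathlib.Path("./pfs_ckpt")   # stand-in for the parallel FS

      def checkpoint(step, state, local_every=1, pfs_every=10):
          # Cheap node-local checkpoints every step cover common failures;
          # expensive parallel-FS checkpoints cover severe (node-loss) failures.
          for base, every in ((NODE_LOCAL, local_every), (PARALLEL_FS, pfs_every)):
              if step % every == 0:
                  base.mkdir(parents=True, exist_ok=True)
                  (base / f"ckpt_{step}.json").write_text(json.dumps(state))

      for step in range(1, 21):
          checkpoint(step, {"step": step, "field": [0.0] * 4})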

  9. Architecture Descriptions. A Contribution to Modeling of Production System Architecture

    Jepsen, Allan Dam; Hvam, Lars

    a proper understanding of the architecture phenomenon and the ability to describe it in a manner that allow the architecture to be communicated to and handled by stakeholders throughout the company. Despite the existence of several design philosophies in production system design such as Lean, that focus...... a diverse set of stakeholder domains and tools in the production system life cycle. To support such activities, a contribution is made to the identification and referencing of production system elements within architecture descriptions as part of the reference architecture framework. The contribution...

  10. Scalable Photogrammetric Motion Capture System "mosca": Development and Application

    Knyaz, V. A.

    2015-05-01

    A wide variety of applications (from industrial to entertainment) needs reliable and accurate 3D information about the motion of an object and its parts. Very often the motion is fast, as in vehicle movement, sports biomechanics, or the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation due to progress in image processing and analysis. A scalable, inexpensive motion capture system has been developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements, and highly automated processing of captured data. Depending on the application, the system can be easily modified for different working areas from 100 mm to 10 m. The developed motion capture system uses from two to four machine vision cameras to acquire video sequences of object motion. All cameras work synchronously at frame rates of up to 100 frames per second under the control of a personal computer, enabling accurate calculation of the 3D coordinates of points of interest. The system has been used in a number of different application fields and demonstrated high accuracy and a high level of automation.
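
    As an illustration of the underlying photogrammetric step, the sketch below triangulates one 3D point from two synchronized, calibrated views with the standard linear (DLT) method. The projection matrices are toy values, and this is a generic textbook routine rather than the Mosca system's own implementation.

      import numpy as np

      def triangulate(P1, P2, x1, x2):
          # Homogeneous linear triangulation: each view contributes two rows
          # of the system A X = 0; the solution is the smallest singular vector.
          A = np.vstack([
              x1[0] * P1[2] - P1[0],
              x1[1] * P1[2] - P1[1],
              x2[0] * P2[2] - P2[0],
              x2[1] * P2[2] - P2[1],
          ])
          _, _, Vt = np.linalg.svd(A)
          X = Vt[-1]
          return X[:3] / X[3]

      # Toy cameras: identity view and a view translated along X.
      P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
      P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
      print(triangulate(P1, P2, (0.0, 0.0), (-0.5, 0.0)))  # ~[0, 0, 2]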

  11. Evaluating Sparse Linear System Solvers on Scalable Parallel Architectures

    Grama, Ananth; Manguoglu, Murat; Koyuturk, Mehmet; Naumov, Maxim; Sameh, Ahmed

    2008-01-01

    .... The study was motivated primarily by the lack of robustness of Krylov subspace iterative schemes with generic, black-box, pre-conditioners such as approximate (or incomplete) LU-factorizations...

  12. System structures in architecture

    Vibæk, Kasper Sánchez

    2012-01-01

    The dissertation introduces the concept of the system structure into the architectural design process as a way of inserting a system level into architecture and construction that lies between general building technology and specific architectural results. To operationalize such a system structure, a systems...

  13. TDCCREC: AN EFFICIENT AND SCALABLE WEB-BASED RECOMMENDATION SYSTEM

    K.Latha

    2010-10-01

    Web users are faced with a complex information space in which the volume of information available to them is huge. Recommender systems address this by recommending web pages related to the current page, providing the user with further customized reading material. To enhance the performance of recommender systems, we propose an elegant web-based recommendation system, the Truth Discovery based Content and Collaborative RECommender (TDCCREC), which is capable of addressing scalability. Existing approaches such as learning automata deal with the usage and navigational patterns of users. On the other hand, weighted association rules are applied for recommending web pages by assigning weights to each page in all the transactions. Both of them have their own disadvantages. The websites recommended by search engines carry no guarantee of information correctness and often deliver conflicting information. To solve this, content-based filtering and collaborative filtering techniques are introduced for recommending web pages to the active user, along with the trustworthiness of the website and the confidence of facts, which outperforms the existing methods. Our results show how the proposed recommender system performs better in predicting the next request of web users.
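
    A minimal sketch of how the four signals named in the abstract might be blended into one ranking score is shown below; the linear combination and its weights are hypothetical illustrations, not the weighting actually used by TDCCREC.

      def recommend_score(content_sim, collab_sim, trust, confidence,
                          weights=(0.4, 0.3, 0.2, 0.1)):
          # Blend content similarity, collaborative similarity, website
          # trustworthiness and fact confidence into a single ranking score.
          signals = (content_sim, collab_sim, trust, confidence)
          return sum(w * s for w, s in zip(weights, signals))

      # Rank candidate pages for the active user by the blended score.
      pages = {"page_a": (0.9, 0.4, 0.8, 0.7), "page_b": (0.5, 0.9, 0.3, 0.6)}
      ranked = sorted(pages, key=lambda p: recommend_score(*pages[p]), reverse=True)
      print(ranked)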

  14. Electrical system architecture

    Algrain, Marcelo C [Peoria, IL; Johnson, Kris W [Washington, IL; Akasam, Sivaprasad [Peoria, IL; Hoff, Brian D [East Peoria, IL

    2008-07-15

    An electrical system for a vehicle includes a first power source generating a first voltage level, the first power source being in electrical communication with a first bus. A second power source generates a second voltage level greater than the first voltage level, the second power source being in electrical communication with a second bus. A starter generator may be configured to provide power to at least one of the first bus and the second bus, and at least one additional power source may be configured to provide power to at least one of the first bus and the second bus. The electrical system also includes at least one power consumer in electrical communication with the first bus and at least one power consumer in electrical communication with the second bus.

  15. A~Scalable~Data~Taking~System at~a~Test~Beam~for~LHC

    2002-01-01

    We have installed a test beam read-out facility for the simultaneous test of LHC detectors, trigger and read-out electronics, together with the development of the supporting architecture in a multiprocessor environment. The aim of the project is to build a system which incorporates all the functionality of a complete read-out chain. Emphasis is put on a highly modular design, such that new hardware and software developments can be conveniently introduced. Exploiting this modularity, the set-up will evolve driven by progress in technologies and new software developments. One of the main thrusts of the project is modelling and integration of different read-out architectures to provide a valuable training ground for new techniques. To address these aspects in a realistic manner, we collaborate with detector R&D projects in order to test higher level trigger systems, event building and high rate data transfers, once the techniques involve...

  16. Integrating hospital information systems in healthcare institutions: a mediation architecture.

    El Azami, Ikram; Cherkaoui Malki, Mohammed Ouçamah; Tahon, Christian

    2012-10-01

    Many studies have examined the integration of information systems into healthcare institutions, leading to several standards in the healthcare domain (CORBAmed: Common Object Request Broker Architecture in Medicine; HL7: Health Level Seven International; DICOM: Digital Imaging and Communications in Medicine; and IHE: Integrating the Healthcare Enterprise). Due to the existence of a wide diversity of heterogeneous systems, three essential factors are necessary to fully integrate a system: data, functions and workflow. However, most of the previous studies have dealt with only one or two of these factors, which makes the system integration unsatisfactory. In this paper, we propose a flexible, scalable architecture for Hospital Information Systems (HIS). Our main purpose is to provide a practical solution to ensure HIS interoperability so that healthcare institutions can communicate without being obliged to change their local information systems and without altering the tasks of the healthcare professionals. Our architecture is a mediation architecture with 3 levels: 1) a database level, 2) a middleware level and 3) a user interface level. The mediation is based on two central components: the Mediator and the Adapter. Using the XML format allows us to establish a structured, secured exchange of healthcare data. The notion of a medical ontology is introduced to solve semantic conflicts and to unify the language used for the exchange. Our mediation architecture provides an effective, promising model that promotes the integration of hospital information systems that are autonomous, heterogeneous, semantically interoperable and platform-independent.
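
    To make the Mediator/Adapter idea concrete, the sketch below shows an adapter translating one hospital system's local record fields into a shared XML exchange document. The element and field names are invented for illustration and do not follow HL7 or any other standard named in the abstract.

      import xml.etree.ElementTree as ET

      class Adapter:
          # Maps one local system's field names onto the shared XML format,
          # so the Mediator never sees source-specific vocabularies.
          def __init__(self, field_map):
              self.field_map = field_map

          def to_xml(self, record):
              root = ET.Element("patientRecord")
              for local, shared in self.field_map.items():
                  if local in record:
                      ET.SubElement(root, shared).text = str(record[local])
              return ET.tostring(root, encoding="unicode")

      lab = Adapter({"pat_id": "patientId", "hb": "hemoglobin"})
      print(lab.to_xml({"pat_id": 42, "hb": 13.5}))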

  17. Web Based System Architecture for Long Pulse Remote Experimentation

    De Las Heras, E.; Lastra, D. [INDRA Sistemas, S.A., Unidad de Sistemas de Control, Madrid (Spain); Vega, J.; Castro, R. [Association Euratom CIEMAT for Fusion, Madrid (Spain); Ruiz, M.; Barrera, E. [Universidad Politecnica de Madrid (Spain)

    2009-07-01

    INDRA, the leading information technology company in Spain, presents here, through a series of transparencies, its own approach to a remote experimentation architecture for long pulses (REAL). The whole architecture is based on Java 2 platform standards, and REAL is a totally open architecture. By itself, REAL offers significant advantages: access authentication and authorization under multiple security implementations; local or remote network access (LAN, WAN, VPN...); on-line access to acquisition systems for monitoring and configuration; and scalability, flexibility, robustness, and platform independence. The BeansNet implementation of REAL adds further benefits, such as easy implementation, a graphical tool for service composition and configuration, availability and hot-swap (no need to stop or restart services after an update or remodelling), and INDRA support. The implementation of BeansNet at the TJ-II stellarator at Ciemat is presented. This document is made of the presentation transparencies. (A.C.)

  18. Upgrade of the TOTEM DAQ using the Scalable Readout System (SRS)

    Quinto, M; Cafagna, F; Fiergolski, A; Radicioni, E

    2013-01-01

    The main goals of the TOTEM Experiment at the LHC are the measurements of the elastic and total p-p cross sections and the studies of the diffractive dissociation processes. At the LHC, collisions are produced at a rate of 40 MHz, imposing strong requirements on the Data Acquisition System (DAQ) in terms of trigger rate and data throughput. The TOTEM DAQ adopts a modular approach that, in standalone mode, is based on the VME bus system. The VME-based Front End Driver (FED) modules host mezzanines that receive data through optical fibres directly from the detectors. After data checks and formatting are applied in the mezzanine, the data are retransmitted to the VME interface and to another mezzanine card plugged into the FED module. The maximum bandwidth of the VME bus limits the first-level trigger (L1A) rate to 1 kHz. In order to remove the VME bottleneck and improve the scalability and the overall capabilities of the DAQ, a new system was designed and constructed based on the Scalable Readout System (SRS), developed in the framework of the RD51 Collaboration. The project aims to increase the efficiency of the current readout system by providing higher bandwidth and increased data filtering, implementing a second-level trigger event selection based on hardware pattern recognition algorithms. This goal is to be achieved while preserving maximum backward compatibility with the LHC Timing, Trigger and Control (TTC) system as well as with the CMS DAQ. The results obtained and the perspectives of the project are reported. In particular, we describe the system architecture and the new Opto-FEC adapter card developed to connect the SRS with the FED mezzanine modules. A first test bench was built and validated during the last TOTEM data-taking period (February 2013). Readout of a set of three TOTEM Roman Pot silicon detectors was carried out to verify performance in the real LHC environment. In addition, the test allowed a check of data consistency and quality.

  19. Towards Reliable, Scalable, and Energy Efficient Cognitive Radio Systems

    Sboui, Lokman

    2017-11-01

    The cognitive radio (CR) concept is expected to be adopted along with many technologies to meet the requirements of the next generation of wireless and mobile systems, the 5G. Consequently, it is important to determine the performance of the CR systems with respect to these requirements. In this thesis, after briefly describing the 5G requirements, we present three main directions in which we aim to enhance the CR performance. The first direction is the reliability. We study the achievable rate of a multiple-input multiple-output (MIMO) relay-assisted CR under two scenarios; an unmanned aerial vehicle (UAV) one-way relaying (OWR) and a fixed two-way relaying (TWR). We propose special linear precoding schemes that enable the secondary user (SU) to take advantage of the primary-free channel eigenmodes. We study the SU rate sensitivity to the relay power, the relay gain, the UAV altitude, the number of antennas and the line of sight availability. The second direction is the scalability. We first study a multiple access channel (MAC) with multiple SUs scenario. We propose a particular linear precoding and SUs selection scheme maximizing their sum-rate. We show that the proposed scheme provides a significant sum-rate improvement as the number of SUs increases. Secondly, we expand our scalability study to cognitive cellular networks. We propose a low-complexity algorithm for base station activation/deactivation and dynamic spectrum management maximizing the profits of primary and secondary networks subject to green constraints. We show that our proposed algorithms achieve performance close to those obtained with the exhaustive search method. The third direction is the energy efficiency (EE). We present a novel power allocation scheme based on maximizing the EE of both single-input and single-output (SISO) and MIMO systems. We solve a non-convex problem and derive explicit expressions of the corresponding optimal power. When the instantaneous channel is not available, we

  20. Performance and scalability of the back-end sub-system in the ATLAS DAQ/EF prototype

    Alexandrov, I N; Badescu, E; Burckhart, Doris; Caprini, M; Cohen, L; Duval, P Y; Hart, R; Jones, R; Kazarov, A; Kolos, S; Kotov, V; Laugier, D; Mapelli, Livio P; Moneta, L; Qian, Z; Radu, A A; Ribeiro, C A; Roumiantsev, V; Ryabov, Yu; Schweiger, D; Soloviev, I V

    2000-01-01

    The DAQ group of the future ATLAS experiment has developed a prototype system based on the trigger/DAQ architecture described in the ATLAS Technical Proposal to support studies of the full system functionality and architecture, as well as of available hardware and software technologies. One sub-system of this prototype is the back-end, which encompasses the software needed to configure, control and monitor the DAQ, but excludes the processing and transportation of physics data. The back-end consists of a number of components including the run control, configuration databases and message reporting system. The software has been developed using standard, external software technologies such as OO databases and CORBA. It has been ported to several C++ compilers and operating systems including Solaris, Linux, WNT and LynxOS. This paper gives an overview of the back-end software, its performance, scalability and current status. (17 refs).

  1. A flexible software architecture for scalable real-time image and video processing applications

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty for reuse and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. A topic-based filtering in which messages are published to topics is used to route the messages from the publishers to the subscribers interested in a particular type of messages. The application layer provides a repository for reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
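
    A minimal sketch of the topic-based publish/subscribe routing described above follows: subscribers register a callback for a topic, and published messages are delivered only to the subscribers of that topic. The class and topic names are illustrative, not the paper's actual messaging-layer API.

      from collections import defaultdict

      class MessageBus:
          # Topic-based filtering: messages published to a topic are routed
          # only to the callbacks subscribed to that topic.
          def __init__(self):
              self._subs = defaultdict(list)

          def subscribe(self, topic, callback):
              self._subs[topic].append(callback)

          def publish(self, topic, message):
              for cb in self._subs[topic]:
                  cb(message)

      bus = MessageBus()
      bus.subscribe("frames/raw", lambda m: print("processing", m))
      bus.publish("frames/raw", {"frame_id": 1})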

  2. Scalable Architecture for Personalized Healthcare Service Recommendation using Big Data Lake

    Rangarajan, Sarathkumar; Liu, Huai; Wang, Hua; Wang, Chuan-Long

    2018-01-01

    Personalized health care services utilize relational patient data and big data analytics to tailor medication recommendations. However, most health care data are in unstructured form, and it consumes a lot of time and effort to pull them into relational form. This study proposes a novel data lake architecture to reduce the data ingestion time and improve the precision of healthcare analytics. It also removes the data silos and enhances the analytics by allowing the connectiv...

  3. A compact linear accelerator based on a scalable microelectromechanical-system RF-structure

    Persaud, A.; Ji, Q.; Feinberg, E.; Seidl, P. A.; Waldron, W. L.; Schenkel, T.; Lal, A.; Vinayakumar, K. B.; Ardanuc, S.; Hammer, D. A.

    2017-06-01

    A new approach for a compact radio-frequency (RF) accelerator structure is presented. The new accelerator architecture is based on the Multiple Electrostatic Quadrupole Array Linear Accelerator (MEQALAC) structure that was first developed in the 1980s. The MEQALAC utilized RF resonators to produce the accelerating fields and provided for higher beam currents through parallel beamlets focused using arrays of electrostatic quadrupoles (ESQs). While the early work used ESQs with lateral dimensions on the order of a few centimeters, we use a printed circuit board (PCB) to reduce the characteristic dimension to the millimeter regime, while massively scaling up the potential number of parallel beamlets. Using scalable microelectromechanical systems (MEMS) fabrication approaches, we are working on further reducing the characteristic dimension to the sub-millimeter regime. The technology is based on RF-acceleration components and ESQs implemented in the PCB or in silicon wafers, where each beamlet passes through beam apertures in the wafer. The complete accelerator is then assembled by stacking these wafers. This approach has the potential for fast and inexpensive batch fabrication of the components and flexibility in system design for application-specific beam energies and currents. For prototyping the accelerator architecture, the components have been fabricated using the PCB. In this paper, we present proof-of-concept results for the principal components using the PCB: RF acceleration and ESQ focusing. Ongoing developments on implementing components in silicon and scaling the accelerator technology to high currents and beam energies are discussed.

  4. A compact linear accelerator based on a scalable microelectromechanical-system RF-structure.

    Persaud, A; Ji, Q; Feinberg, E; Seidl, P A; Waldron, W L; Schenkel, T; Lal, A; Vinayakumar, K B; Ardanuc, S; Hammer, D A

    2017-06-01

    A new approach for a compact radio-frequency (RF) accelerator structure is presented. The new accelerator architecture is based on the Multiple Electrostatic Quadrupole Array Linear Accelerator (MEQALAC) structure that was first developed in the 1980s. The MEQALAC utilized RF resonators to produce the accelerating fields and provided for higher beam currents through parallel beamlets focused using arrays of electrostatic quadrupoles (ESQs). While the early work used ESQs with lateral dimensions on the order of a few centimeters, we use a printed circuit board (PCB) to reduce the characteristic dimension to the millimeter regime, while massively scaling up the potential number of parallel beamlets. Using scalable microelectromechanical systems (MEMS) fabrication approaches, we are working on further reducing the characteristic dimension to the sub-millimeter regime. The technology is based on RF-acceleration components and ESQs implemented in the PCB or in silicon wafers, where each beamlet passes through beam apertures in the wafer. The complete accelerator is then assembled by stacking these wafers. This approach has the potential for fast and inexpensive batch fabrication of the components and flexibility in system design for application-specific beam energies and currents. For prototyping the accelerator architecture, the components have been fabricated using the PCB. In this paper, we present proof-of-concept results for the principal components using the PCB: RF acceleration and ESQ focusing. Ongoing developments on implementing components in silicon and scaling the accelerator technology to high currents and beam energies are discussed.

  5. Cloud-Based Architectures for Auto-Scalable Web Geoportals towards the Cloudification of the GeoVITe Swiss Academic Geoportal

    Ionuț Iosifescu-Enescu

    2017-06-01

    Cloud computing has redefined the way in which Spatial Data Infrastructures (SDI) and Web geoportals are designed, managed, and maintained. The cloudification of a geoportal represents the migration of a full-stack geoportal application to an internet-based private or public cloud. This work introduces two generic and open cloud-based architectures for auto-scalable Web geoportals, illustrated with the use case of the cloudification efforts of the Swiss academic geoportal GeoVITe. The presented cloud-based architectural designs for auto-scalable Web geoportals consider the most important functional and non-functional requirements and are adapted to both public and private clouds. The availability of such generic cloud-based architectures advances the cloudification of academic SDIs and geoportals.

  6. Scalable shared-memory multiprocessing

    Lenoski, Daniel E

    1995-01-01

    Dr. Lenoski and Dr. Weber have experience with leading-edge research and practical issues involved in implementing large-scale parallel systems. They were key contributors to the architecture and design of the DASH multiprocessor. Currently, they are involved with commercializing scalable shared-memory technology.

  7. Architectural Analysis of Dynamically Reconfigurable Systems

    Lindvall, Mikael; Godfrey, Sally; Ackermann, Chris; Ray, Arnab; Yonkwa, Lyly

    2010-01-01

    Topics include: the problem (increased flexibility of architectural styles decreases analyzability; behavior emerges and varies depending on the configuration; does the resulting system run according to the intended design; and architectural decisions can impede or facilitate testing); a top-down approach to architecture analysis, detection of defects and deviations, and architecture and its testability; currently targeted projects GMSEC and CFS; analyzing software architectures; analyzing runtime events; actual architecture recognition; GMPUB in Dynamic SAVE; sample output from the new approach; taking message timing delays into account; CFS examples of architecture and testability; some recommendations for improved testability; CFS examples of abstract interfaces and testability; and a CFS example of opening some internal details.

  8. Scalable devices

    Krü ger, Jens J.; Hadwiger, Markus

    2014-01-01

    In computer science in general, and in the fields of high-performance computing and supercomputing in particular, the term scalable plays an important role. It indicates that a piece of hardware, a concept, an algorithm, or an entire system scales

  9. System architecture for microprocessor based protection system

    Gallagher, J.M. Jr.; Lilly, G.M.

    1976-01-01

    This paper discusses the architectural design features to be employed by Westinghouse in the application of distributed digital processing techniques to the protection system. While the title of the paper makes specific reference to microprocessors, this is only one (and the newest) of the building blocks which constitute a distributed digital processing system. The actual system structure (as realized through utilization of the various building blocks) is established through considerations of reliability, licensability, and cost. It is the intent of the paper to address these considerations as they relate to the architectural design features. (orig.)

  10. Smart House Interconnected System Architecture

    ALBU Răzvan-Daniel

    2017-05-01

    In this research work we present the architecture of an intelligent house system capable of detecting accidents caused by floods or gas and of protecting against unauthorized access or burglary. Our system is not just an alarm: it continuously monitors the house and reports its state over the Internet. Most of the current smart house systems available on the market alarm the user via email or SMS when an unwanted event happens. Thus, the user assumes that the house is not affected if no alarm message is received. This is not always true, since the monitoring system components can themselves be damaged, or the entire system can become unable to send an alarm message even if it detects an unwanted event. This article also presents details about both the hardware and the software implementation.
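
    The weakness described above, silence being indistinguishable from safety, is commonly addressed with periodic state reports (heartbeats). The sketch below illustrates that pattern under simple assumptions; the function names and the reporting period are hypothetical, not the article's implementation.

      import time

      def monitor(get_state, report, period_s=60):
          # Report the full house state on a fixed period, so a missing report
          # signals a failed monitor rather than being mistaken for "all clear".
          while True:
              report({"timestamp": time.time(), "state": get_state()})
              time.sleep(period_s)

      # Example wiring with stub sensors and a print-based reporter:
      # monitor(lambda: {"flood": False, "gas": False}, print, period_s=5)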

  11. A scalable architecture for incremental specification and maintenance of procedural and declarative clinical decision-support knowledge.

    Hatsek, Avner; Shahar, Yuval; Taieb-Maimon, Meirav; Shalom, Erez; Klimov, Denis; Lunenfeld, Eitan

    2010-01-01

    Clinical guidelines have been shown to improve the quality of medical care and to reduce its costs. However, most guidelines exist in a free-text representation and, without automation, are not sufficiently accessible to clinicians at the point of care. A prerequisite for automated guideline application is a machine-comprehensible representation of the guidelines. In this study, we designed and implemented a scalable architecture to support medical experts and knowledge engineers in specifying and maintaining the procedural and declarative aspects of clinical guideline knowledge, resulting in a machine-comprehensible representation. The new framework significantly extends our previous work on the Digital electronic Guidelines Library (DeGeL). The current study designed and implemented a graphical framework for the specification of declarative and procedural clinical knowledge, Gesher. We performed three different experiments to evaluate the functionality and usability of the major aspects of the new framework: specification of procedural clinical knowledge, specification of declarative clinical knowledge, and exploration of a given clinical guideline. The subjects included clinicians and knowledge engineers (overall, 27 participants). The evaluations indicated high levels of completeness and correctness of the guideline specification process by both the clinicians and the knowledge engineers, although the best results, in the case of declarative-knowledge specification, were achieved by teams including a clinician and a knowledge engineer. The usability scores were high as well, although the clinicians' assessment was significantly lower than the assessment of the knowledge engineers.

  12. A Systems Engineering Approach to Architecture Development

    Di Pietro, David A.

    2015-01-01

    Architecture development is often conducted prior to system concept design when there is a need to determine the best-value mix of systems that work collectively in specific scenarios and time frames to accomplish a set of mission area objectives. While multiple architecture frameworks exist, they often require the use of unique taxonomies and data structures. In contrast, this paper characterizes architecture development using terminology widely understood within the systems engineering community. Using a notional civil space architecture example, it employs a multi-tier framework to describe the enterprise-level architecture and illustrates how the results of lower-tier, mission area architectures integrate into the enterprise architecture. It also presents practices for conducting effective mission area architecture studies, including establishing the trade space, developing functions and metrics, evaluating the ability of potential design solutions to meet the required functions, and expediting study execution through the use of iterative design cycles.

  13. A Scalable Unsegmented Multiport Memory for FPGA-Based Systems

    Kevin R. Townsend

    2015-01-01

    On-chip multiport memory cores are crucial primitives for many modern high-performance reconfigurable architectures and multicore systems. Previous approaches for scaling memory cores come at the cost of operating frequency, communication overhead, and logic resources, without increasing the storage capacity of the memory. In this paper, we present two approaches for designing multiport memory cores that are suitable for reconfigurable accelerators with substantial on-chip memory or complex communication. Our design approaches tackle these challenges by banking RAM blocks and utilizing interconnect networks, which allows scaling without sacrificing logic resources. With banking, memory congestion is unavoidable, and we evaluate our multiport memory cores under different memory access patterns to gain insights into the different design trade-offs. We demonstrate our implementation with up to 256 memory ports using a Xilinx Virtex-7 FPGA. Our experimental results report high-throughput memories with resource usage that scales with the number of ports.
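
    The behavioral model below illustrates why banking makes congestion unavoidable: the low address bits choose a bank, and two same-cycle requests to one bank conflict, so one port must stall. It is a simplified software model for intuition, with invented parameters, not the paper's hardware design.

      class BankedMemory:
          # Multiport memory built from single-port banks; the low address
          # bits select the bank, the remaining bits index within the bank.
          def __init__(self, n_banks=8, bank_words=1024):
              self.n_banks = n_banks
              self.banks = [[0] * bank_words for _ in range(n_banks)]

          def cycle(self, reads):
              served, busy = {}, set()
              for port, addr in reads:
                  bank = addr % self.n_banks
                  if bank in busy:
                      continue  # bank conflict: this port stalls a cycle
                  busy.add(bank)
                  served[port] = self.banks[bank][addr // self.n_banks]
              return served

      mem = BankedMemory(n_banks=8)
      # Addresses 0 and 8 both map to bank 0, so port 1 stalls.
      print(mem.cycle([(0, 0), (1, 8), (2, 3)]))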

  14. Architectural Considerations for Highly Scalable Computing to Support On-demand Video Analytics

    2017-04-19

    research were used to implement a distributed on-demand video analytics system that was prototyped for the use of forensics investigators in law enforcement. The system was tested in the wild using video files as well as a commercial Video Management System supporting more than 100 surveillance... Keywords: on-demand video intelligence; intelligent video system; video analytics platform.

  15. MBAT: A scalable informatics system for unifying digital atlasing workflows

    Sane Nikhil

    2010-12-01

    Background: Digital atlases provide a common semantic and spatial coordinate system that can be leveraged to compare, contrast, and correlate data from disparate sources. As the quality and amount of biological data continue to advance and grow, searching, referencing, and comparing these data with a researcher's own data is essential. However, the integration process is cumbersome and time-consuming due to misaligned data, implicitly defined associations, and incompatible data sources. This work addresses these challenges by providing a unified and adaptable environment to accelerate the workflow to gather, align, and analyze the data. Results: The MouseBIRN Atlasing Toolkit (MBAT) project was developed as a cross-platform, free open-source application that unifies and accelerates the digital atlas workflow. A tiered, plug-in architecture was designed for the neuroinformatics and genomics goals of the project to provide a modular and extensible design. MBAT provides the ability to use a single query to search and retrieve data from multiple data sources, align image data using the user's preferred registration method, composite data from multiple sources in a common space, and link relevant informatics information to the current view of the data or atlas. The workspaces leverage tool plug-ins to extend and allow future extensions of the basic workspace functionality. A wide variety of tool plug-ins were developed that integrate pre-existing as well as newly created technology into each workspace. Novel atlasing features were also developed, such as support for multiple label sets, dynamic selection and grouping of labels, and synchronized, context-driven display of ontological data. Conclusions: MBAT empowers researchers to discover correlations among disparate data by providing a unified environment for bringing together distributed reference resources, a user's image data, and biological atlases into the same spatial or semantic context.

  16. The Front-End Concentrator card for the RD51 Scalable Readout System

    Toledo, J; Esteve, R; Monzó, J M; Tarazona, A; Muller, H; Martoiu, S

    2011-01-01

    Conventional readout systems exist in many variants, since the usual approach is to build readout electronics for one given type of detector. The Scalable Readout System (SRS) developed within the RD51 collaboration relaxes this situation considerably by providing a choice of frontends which are connected over a customizable interface to a common SRS DAQ architecture. This allows sharing development and production costs among a large base of users, as well as support from a wide base of developers. The Front-end Concentrator card (FEC), an RD51 common project between CERN and the NEXT Collaboration, is a reconfigurable interface between the SRS online system and a wide range of frontends. This is accomplished by using application-specific adapter cards between the FEC and the frontends. The ensemble (FEC and adapter card are edge mounted) forms a 6U × 220 mm Eurocard combo that fits in a 19-inch subchassis. Adapter cards exist already for the first applications and more are in development.

  17. Scalable and near-optimal design space exploration for embedded systems

    Kritikakou, Angeliki; Goutis, Costas

    2014-01-01

    This book describes scalable and near-optimal, processor-level design space exploration (DSE) methodologies.  The authors present design methodologies for data storage and processing in real-time, cost-sensitive data-dominated embedded systems.  Readers will be enabled to reduce time-to-market, while satisfying system requirements for performance, area, and energy consumption, thereby minimizing the overall cost of the final design.   • Describes design space exploration (DSE) methodologies for data storage and processing in embedded systems, which achieve near-optimal solutions with scalable exploration time; • Presents a set of principles and the processes which support the development of the proposed scalable and near-optimal methodologies; • Enables readers to apply scalable and near-optimal methodologies to the intra-signal in-place optimization step for both regular and irregular memory accesses.

  18. Final Scientific/Technical Report for "Enabling Exascale Hardware and Software Design through Scalable System Virtualization"

    Dinda, Peter August [Northwestern Univ., Evanston, IL (United States)

    2015-03-17

    This report describes the activities, findings, and products of the Northwestern University component of the "Enabling Exascale Hardware and Software Design through Scalable System Virtualization" project. The purpose of this project has been to extend the state of the art of systems software for high-end computing (HEC) platforms, and to use systems software to better enable the evaluation of potential future HEC platforms, for example exascale platforms. Such platforms, and their systems software, have the goal of providing scientific computation at new scales, thus enabling new research in the physical sciences and engineering. Over time, the innovations in systems software for such platforms also become applicable to more widely used computing clusters, data centers, and clouds. This was a five-institution project, centered on the Palacios virtual machine monitor (VMM) systems software, a project begun at Northwestern, and originally developed in a previous collaboration between Northwestern University and the University of New Mexico. In this project, Northwestern (including via our subcontract to the University of Pittsburgh) contributed to the continued development of Palacios, along with other team members. We took the leadership role in (1) continued extension of support for emerging Intel and AMD hardware, (2) integration and performance enhancement of overlay networking, (3) connectivity with architectural simulation, (4) binary translation, and (5) support for modern Non-Uniform Memory Access (NUMA) hosts and guests. We also took a supporting role in support for specialized hardware for I/O virtualization, profiling, configurability, and integration with configuration tools. The efforts we led (1-5) were largely successful and executed as expected, with code and papers resulting from them. The project demonstrated the feasibility of a virtualization layer for HEC computing, similar to such layers for cloud or datacenter computing. For effort (3

  19. Tank waste remediation system architecture tree

    PECK, L.G.

    1999-01-01

    The TWRS Architecture Tree presented in this document is a hierarchical breakdown to support the TWRS systems engineering analysis of the TWRS physical system, including facilities, hardware and software. The purpose for this systems engineering architecture tree is to describe and communicate the system's selected and existing architecture, to provide a common structure to improve the integration of work and resulting products, and to provide a framework as a basis for TWRS Specification Tree development

  20. Tank waste remediation system architecture tree; TOPICAL

    PECK, L.G.

    1999-01-01

    The TWRS Architecture Tree presented in this document is a hierarchical breakdown to support the TWRS systems engineering analysis of the TWRS physical system, including facilities, hardware and software. The purpose for this systems engineering architecture tree is to describe and communicate the system's selected and existing architecture, to provide a common structure to improve the integration of work and resulting products, and to provide a framework as a basis for TWRS Specification Tree development

  1. Information Systems for Enterprise Architecture

    Oswaldo Moscoso Zea

    2014-03-01

    Enterprise Architecture (EA) has emerged as one of the most important topics to consider in information systems studies and has grown to become an essential business management activity for visualizing and evaluating the future direction of a company. Nowadays there are several software tools on the market that support enterprise architects in working with EA. In order to decrease the risk of purchasing software tools that do not fulfill stakeholders' needs, it is important to assess the software before making an investment. In this paper, a literature review of the state of the art of EA is given. Furthermore, evaluation initiatives and existing information systems are analyzed that can support decision makers in selecting the appropriate software tools for their companies.

  2. The architecture of LAMOST observatory control system

    Wang Jian; Jin Ge; Yu Xiaoqi; Wan Changsheng; Hao Likai; Li Xihua

    2005-01-01

    The design of the architecture is one of the most important parts of the development of the Observatory Control System (OCS) for LAMOST. Given the complexity of LAMOST, the long development time of the project, and the long life cycle of the OCS, and with reference to many kinds of architectural patterns, an OCS architecture was established that is a component-based layered system using patterns such as MVC and Proxy. (authors)

  3. Design of a Scalable Modular Production System for a Two-stage Food Service Franchise System

    Matt,; T., D.; Rauch,; E.,

    2012-01-01

    The geographically distributed production of fresh food poses unique challenges to production system design because of stringent industry and logistics requirements. The purpose of this research is to examine the case of a European fresh food manufacturer's approach to introducing a scalable modular production concept for an international two-stage gastronomy franchise system, in order to identify best practice guidelines and to derive a framework for the design of distributed producti...

  4. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures

    Neylon, J., E-mail: jneylon@mednet.ucla.edu; Sheng, K.; Yu, V.; Low, D. A.; Kupelian, P.; Santhanam, A. [Department of Radiation Oncology, University of California Los Angeles, 200 Medical Plaza, #B265, Los Angeles, California 90095 (United States); Chen, Q. [Department of Radiation Oncology, University of Virginia, 1300 Jefferson Park Avenue, Charlottesville, California 22908 (United States)

    2014-10-15

    Purpose: Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. Methods: The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria

  5. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures

    Neylon, J.; Sheng, K.; Yu, V.; Low, D. A.; Kupelian, P.; Santhanam, A.; Chen, Q.

    2014-01-01

    Purpose: Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. Methods: The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria

  6. A highly scalable peptide-based assay system for proteomics.

    Igor A Kozlov

    Full Text Available We report a scalable and cost-effective technology for generating and screening high-complexity customizable peptide sets. The peptides are made as peptide-cDNA fusions by in vitro transcription/translation from pools of DNA templates generated by microarray-based synthesis. This approach enables large custom sets of peptides to be designed in silico, manufactured cost-effectively in parallel, and assayed efficiently in a multiplexed fashion. The utility of our peptide-cDNA fusion pools was demonstrated in two activity-based assays designed to discover protease and kinase substrates. In the protease assay, cleaved peptide substrates were separated from uncleaved and identified by digital sequencing of their cognate cDNAs. We screened the 3,011 amino acid HCV proteome for susceptibility to cleavage by the HCV NS3/4A protease and identified all 3 known trans cleavage sites with high specificity. In the kinase assay, peptide substrates phosphorylated by tyrosine kinases were captured and identified by sequencing of their cDNAs. We screened a pool of 3,243 peptides against Abl kinase and showed that phosphorylation events detected were specific and consistent with the known substrate preferences of Abl kinase. Our approach is scalable and adaptable to other protein-based assays.
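
    As a rough illustration of the screen's readout logic (not the authors' pipeline), the toy function below scores peptides by how strongly their cognate cDNA reads are enriched in the cleaved fraction relative to the input pool; the fold-change threshold and pseudocount are invented for the example.

        from collections import Counter

        def cleavage_hits(cleaved_reads, input_reads, min_fold=5.0, pseudo=1.0):
            # Score each peptide by enrichment of its cognate cDNA reads in the
            # cleaved fraction versus the input pool (toy scheme, invented threshold).
            cleaved, pool = Counter(cleaved_reads), Counter(input_reads)
            n_c, n_p = sum(cleaved.values()), sum(pool.values())
            hits = {}
            for peptide, count in cleaved.items():
                fold = ((count + pseudo) / n_c) / ((pool[peptide] + pseudo) / n_p)
                if fold >= min_fold:
                    hits[peptide] = fold
            return hits

        # Hypothetical peptide identifiers decoded from cDNA sequencing:
        print(cleavage_hits(["pep7"] * 90 + ["pep2"] * 2,
                            [f"pep{i}" for i in range(10)] * 10))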

  7. Object oriented business architecture on online-exam and assignment system

    Haji-Zada, Teymur

    2013-01-01

    ABSTRACT: Business object architecture is a technology that was designed and developed in recent years. This architecture has many benefits, such as scalability, flexibility and security. It helps create and develop maintainable, secure and reusable applications for further development. In business object architecture the logical architecture is separated into layers, which gives more scalability and reusability. Also, using business object architecture, developers must not write different pr...

  8. Sustainable, Reliable Mission-Systems Architecture

    O'Neil, Graham; Orr, James K.; Watson, Steve

    2007-01-01

    A mission-systems architecture, based on a highly modular infrastructure utilizing open-standards hardware and software interfaces as the enabling technology, is essential for affordable and sustainable space exploration programs. This mission-systems architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimal sustaining engineering. This paper proposes such an architecture. Lessons learned from the Space Shuttle program and Earthbound complex engineered systems are applied to define the model. Technology projections reaching out 5 years are made to refine model details.

  9. Evaluation of a Modular PET System Architecture with Synchronization over Data Links

    Aliaga Varea, Ramón José; Herrero Bosch, Vicente; Monzó Ferrer, José María; Ros García, Ana; Gadea Gironés, Rafael; Colom Palero, Ricardo José

    2014-01-01

    A DAQ architecture for a PET system is presented that focuses on modularity, scalability and reusability. The system defines two basic building blocks: data acquisitors and concentrators, which can be replicated in order to build a complete DAQ of variable size. Acquisition modules contain a scintillating crystal and either a position-sensitive photomultiplier (PSPMT) or an array of silicon photomultipliers (SiPM). The detector signals are processed by AMIC, an integrated analog front-end t...

  10. Achieving Critical System Survivability Through Software Architectures

    Knight, John C; Strunk, Elisabeth A

    2006-01-01

    .... In a system with a survivability architecture, under adverse conditions such as system damage or software failures, some desirable function will be eliminated but critical services will be retained...

  11. An Enterprise Information System Data Architecture Guide

    Lewis, Grace

    2001-01-01

    Data architecture defines how data is stored, managed, and used in a system. It establishes common guidelines for data operations that make it possible to predict, model, gauge, or control the flow of data in the system...

  12. Open System Architecture design for planet surface systems

    Petri, D. A.; Pieniazek, L. A.; Toups, L. D.

    1992-01-01

    The Open System Architecture is an approach to meeting the needs for flexibility and evolution of the U.S. Space Exploration Initiative program of the manned exploration of the solar system and its permanent settlement. This paper investigates the issues that future activities of the planet exploration program must confront, defines the basic concepts that provide the basis for establishing an Open System Architecture, identifies the appropriate features of such an architecture, and discusses examples of Open System Architectures.

  13. Marshall Application Realignment System (MARS) Architecture

    Belshe, Andrea; Sutton, Mandy

    2010-01-01

    The Marshall Application Realignment System (MARS) Architecture project was established to meet the certification requirements of the Department of Defense Architecture Framework (DoDAF) V2.0 Federal Enterprise Architecture Certification (FEAC) Institute program and to provide added value to the Marshall Space Flight Center (MSFC) Application Portfolio Management process. The MARS Architecture aims to: (1) address the NASA MSFC Chief Information Officer (CIO) strategic initiative to improve Application Portfolio Management (APM) by optimizing investments and improving portfolio performance, and (2) develop a decision-aiding capability by which applications registered within the MSFC application portfolio can be analyzed and considered for retirement or decommission. The MARS Architecture describes a to-be target capability that supports application portfolio analysis against scoring measures (based on value) and overall portfolio performance objectives (based on enterprise needs and policies). This scoring and decision-aiding capability supports the process by which MSFC application investments are realigned or retired from the application portfolio. The MARS Architecture is a multi-phase effort to: (1) conduct strategic architecture planning and knowledge development based on the DoDAF V2.0 six-step methodology, (2) describe one architecture through multiple viewpoints, (3) conduct portfolio analyses based on a defined operational concept, and (4) enable a new capability to support the MSFC enterprise IT management mission, vision, and goals. This report documents Phase 1 (Strategy and Design), which includes discovery, planning, and development of initial architecture viewpoints. Phase 2 will move forward the process of building the architecture, widening the scope to include application realignment (in addition to application retirement), and validating the underlying architecture logic before moving into Phase 3. The MARS Architecture key stakeholders are most

  14. Integrated Optical Interconnect Architectures for Embedded Systems

    Nicolescu, Gabriela

    2013-01-01

    This book provides a broad overview of current research in optical interconnect technologies and architectures. Introductory chapters on high-performance computing and the associated issues in conventional interconnect architectures, and on the fundamental building blocks for integrated optical interconnect, provide the foundations for the bulk of the book which brings together leading experts in the field of optical interconnect architectures for data communication. Particular emphasis is given to the ways in which the photonic components are assembled into architectures to address the needs of data-intensive on-chip communication, and to the performance evaluation of such architectures for specific applications.   Provides state-of-the-art research on the use of optical interconnects in Embedded Systems; Begins with coverage of the basics for high-performance computing and optical interconnect; Includes a variety of on-chip optical communication topologies; Features coverage of system integration and opti...

  15. Containment Domains: A Scalable, Efficient and Flexible Resilience Scheme for Exascale Systems

    Jinsuk Chung

    2013-01-01

    Full Text Available This paper describes and evaluates a scalable and efficient resilience scheme based on the concept of containment domains. Containment domains are a programming construct that enable applications to express resilience needs and to interact with the system to tune and specialize error detection, state preservation and restoration, and recovery schemes. Containment domains have weak transactional semantics and are nested to take advantage of the machine and application hierarchies and to enable hierarchical state preservation, restoration and recovery. We evaluate the scalability and efficiency of containment domains using generalized trace-driven simulation and analytical analysis and show that containment domains are superior to both checkpoint restart and redundant execution approaches.
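
    A hedged sketch of the programming construct the abstract describes: a nested domain preserves state on entry, restores it and re-executes its body on error, and escalates to its parent when local recovery fails. The class below is a toy Python rendering of that idea, not the containment-domains API.

        import copy

        class ContainmentDomain:
            # Toy rendering of a containment domain: snapshot state on entry,
            # restore and re-execute the body on error, escalate after retries.
            def __init__(self, state, retries=3):
                self.state, self.retries = state, retries

            def run(self, body):
                snapshot = copy.deepcopy(self.state)      # state preservation
                for _ in range(self.retries):
                    try:
                        return body(self.state)           # may nest further domains
                    except Exception:
                        self.state.clear()                # restoration on failure
                        self.state.update(copy.deepcopy(snapshot))
                raise RuntimeError("local recovery failed; escalate to parent domain")

        state = {"step": 0}
        print(ContainmentDomain(state).run(lambda s: s.update(step=1) or s))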

  16. Intelligent Transportation Systems statewide architecture : final report.

    2003-06-01

    This report describes the development of Kentucky's Statewide Intelligent Transportation Systems (ITS) Architecture. The process began with the development of an ITS Strategic Plan in 1997-2000. A Business Plan, developed in 2000-2001, translated t...

  17. Reflective Self-Regenerative Systems Architecture Study

    Pu, Carlton; Blough, Douglas

    2006-01-01

    In this study, we develop the Reflective Self-Regenerative Systems (RSRS) architecture in detail, describing the internal structure of each component and the mutual invocations among the components...

  18. An Architecture for Proof Planning Systems

    Dennis, Louise Abigail

    2005-01-01

    This paper presents a generic architecture for proof planning systems in terms of an interaction between a customisable proof module and search module. These refer to both global and local information contained in reasoning states.

  19. Fault tolerant architecture for artificial olfactory system

    Lotfivand, Nasser; Hamidon, Mohd Nizar; Abdolzadeh, Vida

    2015-01-01

    In this paper, to cover and mask the faults that occur in the sensing unit of an artificial olfactory system, a novel architecture is offered. The proposed architecture is able to tolerate failures in the sensors of the array, and the faults that occur are masked. The proposed architecture for extracting the correct results from the output of the sensors can provide quality of service for the data generated from the sensor array. The results of various evaluations and analyses proved that the proposed architecture has acceptable performance in comparison with the classic form of the sensor array in gas identification. According to the results, achieving high odor discrimination based on the suggested architecture is possible. (paper)

  20. Scalable Multifunction RF Systems: Combined vs. Separate Transmit and Receive Arrays

    Huizing, A.G.

    2008-01-01

    A scalable multifunction RF (SMRF) system allows the RF functionality (radar, electronic warfare and communications) to be easily extended and the RF performance to be scaled to the requirements of different missions and platforms. This paper presents the results of a trade-off study with respect to

  1. Wideband vs. Multiband Trade-offs for a Scalable Multifunction RF system

    Huizing, A.G.

    2005-01-01

    This paper presents a concept for a scalable multifunction RF (SMRF) system that allows the RF functionality (radar, electronic warfare and communications) to be easily extended and the RF performance to be scaled to the requirements of different missions and platforms. A trade-off analysis is

  2. Algorithms, architectures and information systems security

    Sur-Kolay, Susmita; Nandy, Subhas C; Bagchi, Aditya

    2008-01-01

    This volume contains articles written by leading researchers in the fields of algorithms, architectures, and information systems security. The first five chapters address several challenging geometric problems and related algorithms. These topics have major applications in pattern recognition, image analysis, digital geometry, surface reconstruction, computer vision and in robotics. The next five chapters focus on various optimization issues in VLSI design and test architectures, and in wireless networks. The last six chapters comprise scholarly articles on information systems security coverin

  3. Integration of an intelligent systems behavior simulator and a scalable soldier-machine interface

    Johnson, Tony; Manteuffel, Chris; Brewster, Benjamin; Tierney, Terry

    2007-04-01

    As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future battlefield operational scenarios involving the use of automation, including the specification of existing and proposed technologies, will provide significant insight into potential problem areas regarding soldier workload. The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an Army technology objective program to analyze and evaluate the effect of automated technologies and their associated control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive simulations of military scenarios with various deployments of interface technologies in order to evaluate operator effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the physical space limitations of the display device. This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper describes the background of each system and details of the system integration approach.

  4. Advanced connection systems for architectural glazing

    Afghani Khoraskani, Roham

    2015-01-01

    This book presents the findings of a detailed study to explore the behavior of architectural glazing systems during and after an earthquake and to develop design proposals that will mitigate or even eliminate the damage inflicted on these systems. The seismic behavior of common types of architectural glazing systems are investigated and causes of damage to each system, identified. Furthermore, depending on the geometrical and structural characteristics, the ultimate horizontal load capacity of glass curtain wall systems is defined based on the stability of the glass components. Detailed attention is devoted to the incorporation of advanced connection devices between the structure of the building and the building envelope system in order to minimize the damage to glazed components. An innovative new connection device is introduced that results in a delicate and functional system easily incorporated into different architectural glazing systems, including those demanding maximum transparency.

  5. MicROS-drt: supporting real-time and scalable data distribution in distributed robotic systems.

    Ding, Bo; Wang, Huaimin; Fan, Zedong; Zhang, Pengfei; Liu, Hui

    A primary requirement in distributed robotic software systems is the dissemination of data to all interested collaborative entities in a timely and scalable manner. However, providing such a service in a highly dynamic and resource-limited robotic environment is a challenging task, and existing robot software infrastructure has limitations in this aspect. This paper presents a novel robot software infrastructure, micROS-drt, which supports real-time and scalable data distribution. The solution is based on a loosely coupled data publish-subscribe model with the ability to support various time-related constraints. To realize this model, a mature data distribution standard, the data distribution service for real-time systems (DDS), is adopted as the foundation of the transport layer of this software infrastructure. By elaborately adapting and encapsulating the capability of the underlying DDS middleware, micROS-drt can meet the requirement of real-time and scalable data distribution in distributed robotic systems. Evaluation results in terms of scalability, latency jitter and transport priority, as well as the experiment on real robots, validate the effectiveness of this work.
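
    To make the publish-subscribe-with-time-constraints idea concrete, here is a minimal toy topic with a DDS-style deadline check, in which each subscriber declares the longest gap it tolerates between successive samples. The class and callback signature are illustrative inventions, not the micROS-drt or DDS API.

        import time

        class Topic:
            # Toy topic with a DDS-style deadline QoS: each subscriber declares
            # the longest tolerated gap between successive samples.
            def __init__(self):
                self._subs = []   # [callback, deadline_s, last_arrival]

            def subscribe(self, callback, deadline_s):
                self._subs.append([callback, deadline_s, time.monotonic()])

            def publish(self, sample):
                now = time.monotonic()
                for sub in self._subs:
                    callback, deadline_s, last = sub
                    callback(sample, deadline_missed=(now - last) > deadline_s)
                    sub[2] = now

        laser = Topic()
        laser.subscribe(lambda s, deadline_missed: print(s, deadline_missed), 0.1)
        laser.publish({"range_m": 2.4})   # -> {'range_m': 2.4} False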

  6. Scalable Security and Accounting Services for Content-Based Publish/Subscribe Systems

    Himanshu Khurana; Radostina K. Koleva

    2006-01-01

    Content-based publish/subscribe systems offer an interaction scheme that is appropriate for a variety of large-scale dynamic applications. However, widespread use of these systems is hindered by a lack of suitable security services. In this paper, we present scalable solutions for confidentiality, integrity, and authentication for these systems. We also provide verifiable usage-based accounting services, which are required for e-commerce and e-business applications that use publish/subscribe ...

  7. The BWS Open Business Enterprise System Architecture

    Cristian IONITA

    2011-01-01

    Full Text Available Business process management systems play a central role in supporting the business operations of medium and large organizations. This paper analyses the properties of current business enterprise systems and proposes a new application type called Open Business Enterprise System. A new open system architecture called Business Workflow System is proposed. This architecture combines the instruments for flexible data management, business process management and integration into a flexible system able to manage modern business operations. The architecture was validated by implementing it into the DocuMentor platform used by major companies in Romania and the US. These implementations offered the necessary data to create and refine an enterprise integration methodology called DMCPI. The final section of the paper presents the concepts, stages and techniques employed by the methodology.

  8. The architecture of enterprise hospital information system.

    Lu, Xudong; Duan, Huilong; Li, Haomin; Zhao, Chenhui; An, Jiye

    2005-01-01

    Because of the complexity of the hospital environment, there exist a lot of medical information systems from different vendors with incompatible structures. In order to establish an enterprise hospital information system, the integration among these heterogeneous systems must be considered. Complete integration should cover three aspects: data integration, function integration and workflow integration. However, most previous architecture designs did not accomplish such a complete integration. This article offers an architecture design of the enterprise hospital information system based on the concept of a digital neural network system in the hospital. It covers all three aspects of integration, and eventually achieves the target of one virtual data center with Enterprise Viewer for users of different roles. The initial implementation of the architecture in the 5-year Digital Hospital Project in Huzhou Central hospital of Zhejiang Province is also described.

  9. Health System Transformation through a Scalable, Actionable Innovation Strategy.

    Snowdon, Anne

    2017-01-01

    The authors who contributed to this issue of Healthcare Papers have provided rich insights into a promising innovation agenda to support transformational change aimed at achieving high-performing, person-centric health systems that are sustainable and deliver value. First and foremost, the commentaries make clear that a focused innovation agenda with defined goals, objectives and milestones is needed, if innovation is to be a viable and successful strategy to achieve health system transformation. To date, innovation has been a catch-all term for solving the many challenges health systems are experiencing. Yet, innovation on its own cannot fix all the ills of a health system; strategic goals and objectives are needed to define the way forward if innovation is to achieve value for Canadians. To this end, the authors identify goals and objectives that are worthy of serious consideration by all health system stakeholders.

  10. SDOE 650: System Architecture and Design

    George, Colin B [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2014-07-01

    The proposed system is a test system that verifies the cable's functionality in the expected environments defined in the ES. Verification methods include test, inspect, demonstrate, and analyze. Since we are defining the architecture for a test system, we will focus on the customer expectations and requirements that will be satisfied or verified via testing.

  11. Electrical system architecture having high voltage bus

    Hoff, Brian Douglas [East Peoria, IL; Akasam, Sivaprasad [Peoria, IL

    2011-03-22

    An electrical system architecture is disclosed. The architecture has a power source configured to generate a first power, and a first bus configured to receive the first power from the power source. The architecture also has a converter configured to receive the first power from the first bus and convert the first power to a second power, wherein a voltage of the second power is greater than a voltage of the first power, and a second bus configured to receive the second power from the converter. The architecture further has a power storage device configured to receive the second power from the second bus and deliver the second power to the second bus, a propulsion motor configured to receive the second power from the second bus, and an accessory motor configured to receive the second power from the second bus.

  12. Harvest: A Scalable, Customizable Discovery and Access System

    Bowman, C. M; Danzig, Peter B; Hardy, Darren R; Manber, Udi; Schwartz, Michael F

    1994-01-01

    .... In this paper we introduce Harvest, a system that provides a set of customizable tools for gathering information from diverse repositories, building topic-specific content indexes, flexibly searching...

  13. Robust, Highly Scalable Solar Array System, Phase I

    National Aeronautics and Space Administration — Solar array systems currently under development are focused on near-term missions with designs optimized for the 30-50 kW power range. However, NASA has a vital...

  14. Architecture Of High Speed Image Processing System

    Konishi, Toshio; Hayashi, Hiroshi; Ohki, Tohru

    1988-01-01

    One architecture for a high-speed image processing system, corresponding to a new algorithm for shape understanding, is proposed, and the hardware system based on that architecture was developed. The main considerations of the architecture are that the processors used should match the processing sequence of the target image and that the developed system should be practical for industrial use. As a result, it was possible to perform each processing step at a speed of 80 nanoseconds per pixel.

  15. A Scalable Superconductor Bearing System For Lunar Telescopes And Instruments

    Chen, Peter C.; Rabin, D.; Van Steenberg, M. E.

    2010-01-01

    We report on a new concept for a telescope mount on the Moon based on high temperature superconductors (HTS). Lunar nights are long (15 days), and temperatures range from 100 K to 30 K inside shadowed craters. Telescopes on the Moon therefore require bearing systems that can position and track precisely under cryogenic conditions, over long time periods, preferably with no maintenance, and preferably do not fail with loss of power. HTS bearings, consisting of permanent magnets levitated over bulk superconductors, are well suited to the task. The components do not make physical contact, hence there is no wear. The levitation is passive and stable; no power is required to maintain position. We report on the design and laboratory demonstration of a prototype two-axis pointing system. Unlike previous designs, this new configuration is simple and easy to implement. Most importantly, it can be scaled to accommodate instruments ranging in size from decimeters (laser communication systems) to meters (solar panels, communication dishes, optical telescopes, optical interferometers) to decameters and beyond (VLA-type radio interferometer elements).

  16. An Architecture for Open Learning Management Systems

    Avgeriou, Paris; Retalis, Simos; Skordalakis, Manolis

    2003-01-01

    There exists an urgent demand for defining architectures for Learning Management Systems, so that high-level frameworks for understanding these systems can be discovered, and quality attributes like portability, interoperability, reusability and modifiability can be achieved. In this paper we propose

  17. Communication architecture of an early warning system

    M. Angermann

    2010-11-01

    Full Text Available This article discusses aspects of communication architecture for early warning systems (EWS) in general and gives details of the specific communication architecture of an early warning system against tsunamis. While its sensors are the "eyes and ears" of a warning system and enable the system to sense physical effects, its communication links and terminals are its "nerves and mouth" which transport measurements and estimates within the system and eventually warnings towards the affected population. Designing the communication architecture of an EWS against tsunamis is particularly challenging. Its sensors are typically very heterogeneous and spread several thousand kilometers apart. They are often located in remote areas and belong to different organizations. Similarly, the geographic spread of the potentially affected population is wide. Moreover, a failure to deliver a warning has fatal consequences. Yet, the communication infrastructure is likely to be affected by the disaster itself. Based on an analysis of the criticality, vulnerability and availability of communication means, we describe the design and implementation of a communication system that employs both terrestrial and satellite communication links. We believe that many of the issues we encountered during our work in the GITEWS project (German Indonesian Tsunami Early Warning System; Rudloff et al., 2009) on the design and implementation of the communication architecture are also relevant for other types of warning systems. With this article, we intend to share our insights and lessons learned.

  18. Scalable DDoS Mitigation System for Data Centers

    Zdenek Martinasek

    2015-01-01

    Full Text Available Distributed Denial of Service (DDoS) attacks have been used by attackers for over two decades because of their effectiveness. This type of cyber-attack is one of the most destructive attacks on the Internet. In recent years, the intensity of DDoS attacks has been rapidly increasing, and attackers more often combine different DDoS techniques to bypass protection. Therefore, the main goal of our research is to propose a DDoS solution that allows the filtering capacity to be increased linearly and protects against combinations of attacks. The main idea is to develop the DDoS defense system in the form of a portable software image that can be installed on reserve hardware capacities. During a DDoS attack, these servers will be used as filters of the attack. Our solution is suitable for data centers and addresses some shortcomings of commercial solutions. The system employs modular DDoS filters in the form of special grids containing specific protocol parameters and conditions.
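
    The linear-scaling claim reduces to simple capacity planning: install the filter image on enough spare servers to cover the attack volume. A toy planner might look like the sketch below, where the per-filter throughput and headroom factor are assumed figures, not values from the paper.

        import math

        def filters_needed(attack_gbps, per_filter_gbps=10.0, headroom=1.2):
            # Linear scale-out: one portable filter image per spare server;
            # per-filter throughput and headroom are assumed figures.
            return math.ceil(attack_gbps * headroom / per_filter_gbps)

        print(filters_needed(attack_gbps=85))   # -> 11 filter servers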

  19. Storage system architectures and their characteristics

    Sarandrea, Bryan M.

    1993-01-01

    Not all users' storage requirements call for 20 MB/s data transfer rates, multi-tier file or data migration schemes, or even automated retrieval of data. The number of available storage solutions reflects the broad range of user requirements. It is foolish to think that any one solution can address the complete range of requirements. For users with simple off-line storage requirements, the cost and complexity of high-end solutions would provide no advantage over a more simple solution. The correct answer is to match the requirements of a particular storage need to the various attributes of the available solutions. The goal of this paper is to introduce basic concepts of archiving and storage management in combination with the most common architectures and to provide some insight into how these concepts and architectures address various storage problems. The intent is to provide potential consumers of storage technology with a framework within which to begin the hunt for a solution which meets their particular needs. This paper is not intended to be an exhaustive study or to address all possible solutions or new technologies, but is intended to be a more practical treatment of today's storage system alternatives. Since most commercial storage systems today are built on Open Systems concepts, the majority of these solutions are hosted on the UNIX operating system. For this reason, some of the architectural issues discussed focus around specific UNIX architectural concepts. However, most of the architectures are operating system independent and the conclusions are applicable to such architectures on any operating system.

  20. Scalable Manufacturing of Solderable and Stretchable Physiologic Sensing Systems.

    Kim, Yun-Soung; Lu, Jesse; Shih, Benjamin; Gharibans, Armen; Zou, Zhanan; Matsuno, Kristen; Aguilera, Roman; Han, Yoonjae; Meek, Ann; Xiao, Jianliang; Tolley, Michael T; Coleman, Todd P

    2017-10-01

    Methods for microfabrication of solderable and stretchable sensing systems (S4s) and a scaled production of adhesive-integrated active S4s for health monitoring are presented. S4s' excellent solderability is achieved by the sputter-deposited nickel-vanadium and gold pad metal layers and copper interconnection. The donor substrate, which is modified with "PI islands" to become selectively adhesive for the S4s, allows the heterogeneous devices to be integrated with large-area adhesives for packaging. The feasibility for S4-based health monitoring is demonstrated by developing an S4 integrated with a strain gauge and an onboard optical indication circuit. Owing to S4s' compatibility with the standard printed circuit board assembly processes, a variety of commercially available surface mount chip components, such as the wafer level chip scale packages, chip resistors, and light-emitting diodes, can be reflow-soldered onto S4s without modifications, demonstrating the versatile and modular nature of S4s. Tegaderm-integrated S4 respiration sensors are tested for robustness for cyclic deformation, maximum stretchability, durability, and biocompatibility for multiday wear time. The results of the tests and demonstration of the respiration sensing indicate that the adhesive-integrated S4s can provide end users a way for unobtrusive health monitoring. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Using Runtime Systems Tools to Implement Efficient Preconditioners for Heterogeneous Architectures

    Roussel Adrien

    2016-11-01

    Full Text Available Solving large sparse linear systems is a time-consuming step in basin modeling or reservoir simulation. The choice of a robust preconditioner strongly impacts the performance of the overall simulation. Heterogeneous architectures based on General Purpose computing on Graphic Processing Units (GPGPU) or many-core architectures introduce programming challenges which can be managed in a transparent way for developers with the use of runtime systems. Nevertheless, algorithms need to be well suited for these massively parallel architectures. In this paper, we present preconditioning techniques which enable us to take advantage of emerging architectures. We also present our task-based implementations through the use of the HARTS (Heterogeneous Abstract RunTime System) runtime system, which aims to manage the recent architectures. We focus on two preconditioners. The first is the ILU(0) preconditioner implemented on distributed-memory systems. The second one is a multi-level domain decomposition method implemented on a shared-memory system. Obtained results are then presented on the corresponding architectures, which opens the way to a discussion of the scalability of such methods according to numerical performance, while keeping in mind that the next step is to propose massively parallel implementations of these techniques.
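
    For reference, ILU(0) — the first preconditioner discussed — allows fill-in only where the original matrix is nonzero. The dense NumPy sketch below shows the textbook serial algorithm; the task-based, distributed-memory version in the paper would partition this work across a runtime system such as HARTS.

        import numpy as np

        def ilu0(a):
            # Textbook ILU(0): LU factorization with fill-in allowed only on
            # the sparsity pattern of A; returns L (unit lower triangle) and U
            # packed into one matrix. Dense, serial sketch for clarity.
            a = a.astype(float).copy()
            n = a.shape[0]
            pattern = a != 0
            for k in range(n - 1):
                for i in range(k + 1, n):
                    if pattern[i, k]:
                        a[i, k] /= a[k, k]
                        for j in range(k + 1, n):
                            if pattern[i, j]:
                                a[i, j] -= a[i, k] * a[k, j]
            return a

        a = np.array([[4., 1., 0.], [1., 4., 1.], [0., 1., 4.]])
        print(ilu0(a))   # for a tridiagonal A, ILU(0) equals the exact LU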

  2. The engineering of a scalable multi-site communications system utilizing quantum key distribution (QKD)

    Tysowski, Piotr K.; Ling, Xinhua; Lütkenhaus, Norbert; Mosca, Michele

    2018-04-01

    Quantum key distribution (QKD) is a means of generating keys between a pair of computing hosts that is theoretically secure against cryptanalysis, even by a quantum computer. Although there is much active research into improving the QKD technology itself, there is still significant work to be done to apply engineering methodology and determine how it can be practically built to scale within an enterprise IT environment. Significant challenges exist in building a practical key management service (KMS) for use in a metropolitan network. QKD is generally a point-to-point technique only and is subject to steep performance constraints. The integration of QKD into enterprise-level computing has been researched, to enable quantum-safe communication. A novel method for constructing a KMS is presented that allows arbitrary computing hosts on one site to establish multiple secure communication sessions with the hosts of another site. A key exchange protocol is proposed where symmetric private keys are granted to hosts while satisfying the scalability needs of an enterprise population of users. The KMS operates within a layered architectural style that is able to interoperate with various underlying QKD implementations. Variable levels of security for the host population are enforced through a policy engine. A network layer provides key generation across a network of nodes connected by quantum links. Scheduling and routing functionality allows quantum key material to be relayed across trusted nodes. Optimizations are performed to match the real-time host demand for key material with the capacity afforded by the infrastructure. The result is a flexible and scalable architecture that is suitable for enterprise use and independent of any specific QKD technology.
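
    A minimal sketch of the key-granting idea, under the assumption that both sites hold identical pools of QKD-derived bits and consume them at matching offsets; the class name and API are invented, and os.urandom merely stands in for real quantum key material.

        import itertools
        import os

        class KeyManagementService:
            # Sketch: grant symmetric session keys to host pairs from a pool of
            # key material. In a real deployment both sites hold identical
            # QKD-derived pools and consume them at matching offsets, so peers
            # can look a key up by its id; host identities would feed a policy
            # engine that enforces per-host security levels.
            def __init__(self, pool_bytes=1 << 20):
                self._pool = os.urandom(pool_bytes)   # stand-in for QKD bits
                self._offset = 0
                self._ids = itertools.count()

            def grant_session_key(self, host_a, host_b, key_len=32):
                if self._offset + key_len > len(self._pool):
                    raise RuntimeError("pool exhausted; wait for QKD replenishment")
                key = self._pool[self._offset:self._offset + key_len]
                self._offset += key_len
                return next(self._ids), key

        kms = KeyManagementService()
        key_id, key = kms.grant_session_key("host-a.site1", "host-b.site2")
        print(key_id, len(key))   # -> 0 32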

  3. BWS Open System Architecture Security Assessment

    Cristian Ionita

    2011-01-01

    Business process management systems play a central role in supporting the business operations of medium and large organizations. Because of this the security characteristics of these systems are becoming very important. The present paper describes the BWS architecture used to implement the open process aware information system DocuMentor. Using the proposed platform, the article identifies the security characteristics of such systems, shows the correlation between these characteristics and th...

  4. Architecture, systems research and computational sciences

    2012-01-01

    The Winter 2012 (vol. 14 no. 1) issue of the Nexus Network Journal is dedicated to the theme “Architecture, Systems Research and Computational Sciences”. This is an outgrowth of the session by the same name which took place during the eighth international, interdisciplinary conference “Nexus 2010: Relationships between Architecture and Mathematics”, held in Porto, Portugal, in June 2010. Today computer science is an integral part of even strictly historical investigations, such as those concerning the construction of vaults, where the computer is used to survey the existing building, analyse the data and draw the ideal solution. What the papers in this issue make especially evident is that information technology has had an impact at a much deeper level as well: architecture itself can now be considered as a manifestation of information and as a complex system. The issue is completed with other research papers, conference reports and book reviews.

  5. INTEGRATED INFORMATION SYSTEM ARCHITECTURE PROVIDING BEHAVIORAL FEATURE

    Vladimir N. Shvedenko

    2016-11-01

    Full Text Available The paper deals with the creation of an integrated information system architecture capable of supporting management decisions using behavioral features. The paper considers the architecture of an information decision support system for production system management. The behavioral feature is given to the information system, and it ensures information extraction and processing and management decision-making, with both automated and automatic modes of the decision-making subsystem being permitted. Practical implementation of an information system with behavior is based on service-oriented architecture: there is a set of independent services in the information system that provides data from its subsystems, or data processing by a separate application, under the chosen variant of settling the problematic situation. For the creation of an integrated information system with behavior we propose an architecture including the following subsystems: data bus, subsystem for interaction with the integrated applications based on metadata, business process management subsystem, subsystem for the current state analysis of the enterprise and management decision-making, and behavior training subsystem. For each problematic situation a separate logical layer service is created in the Unified Service Bus handling problematic situations. This architecture reduces system information complexity due to the fact that, with a constant number of system elements, the number of links decreases, since each layer provides a communication center of responsibility for the resource with the services of corresponding applications. If a similar problematic situation occurs, its resolution is automatically retrieved from the problematic situation metamodel repository together with the business process metamodel of its settlement. During business process execution, commands are generated to the corresponding centers of responsibility to settle the problematic situation.

  6. Proactive identification of scalable program architectures: How to achieve a quantum-leap in time-to-market

    Hansen, Christian Lindschou; Mortensen, Niels Henrik

    2014-01-01

    a structured process. The framework enables companies to identify a program architecture as the basis for improving time-to-market and R&D efficiency for products derived from the architecture. Case studies show that significant reductions of development lead time, up to 50%, are possible. Significance: Many...... of a product development project. The framework consists of three basic aspects: the market, product program, and production, plus a time aspect, captured in the multi-level roadmap. One of the unique features is that these aspects are linked, allowing for an early clarification of critical issues through......

  7. ESVD: An Integrated Energy Scalable Framework for Low-Power Video Decoding Systems

    Wen Ji

    2010-01-01

    Full Text Available Video applications using mobile wireless devices are a challenging task due to the limited capacity of batteries. The highly complex functionality of video decoding imposes high resource requirements. Thus, power-efficient control has become a more critical design concern for devices integrating complex video processing techniques. Previous works on power-efficient control in video decoding systems often aim at low-complexity design, do not explicitly consider the scalable impact of subfunctions in the decoding process, and seldom consider the relationship with the features of the compressed video data. This paper is dedicated to developing an energy-scalable video decoding (ESVD) strategy for energy-limited mobile terminals. First, ESVD can dynamically adapt to variable energy resources thanks to a device-aware technique. Second, ESVD combines decoder control with the decoded data by classifying the data into different partition profiles according to its characteristics. Third, it introduces utility-theoretic analysis during the resource allocation process, so as to maximize resource utilization. Finally, it adapts to the available energy as the energy budget changes and generates scalable video decoding output under energy-limited systems. Experimental results demonstrate the efficiency of the proposed approach.
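
    The utility-driven allocation step can be illustrated with a greedy sketch: given an energy budget, enable the decoding subfunctions with the best utility per millijoule first. The subfunction names, costs, and utilities below are made-up example values, not figures from the paper.

        def allocate_energy(subfunctions, budget_mj):
            # Greedy utility-per-millijoule allocation under an energy budget
            # (toy version of a utility-maximizing resource allocator).
            chosen, spent = [], 0.0
            ranked = sorted(subfunctions,
                            key=lambda s: s["utility"] / s["cost_mj"],
                            reverse=True)
            for s in ranked:
                if spent + s["cost_mj"] <= budget_mj:
                    chosen.append(s["name"])
                    spent += s["cost_mj"]
            return chosen, spent

        funcs = [{"name": "deblocking", "utility": 2.0, "cost_mj": 5.0},
                 {"name": "half-pel MC", "utility": 3.0, "cost_mj": 4.0},
                 {"name": "full iDCT", "utility": 4.0, "cost_mj": 8.0}]
        print(allocate_energy(funcs, budget_mj=10.0))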

  8. 2. E-Commerce System Architecture

    V Rajaraman

    2000-11-01

    Series Article. Resonance – Journal of Science Education, Volume 5, Issue 11, November 2000, pp. 26-36.

  9. Real-time systems architectures

    Sendall, D.M.

    1986-01-01

    The aim of this paper is to explore some of the design issues in online data acquisition and monitoring systems for high-energy physics experiments. In particular it concentrates on the multi-processor aspects of the design of existing and planned experiments. The central problem to be solved by these systems is the readout and checking of the apparatus, and the recording and perhaps some processing of the data. (Auth.)

  10. An energy-efficient architecture for internet of things systems

    De Rango, Floriano; Barletta, Domenico; Imbrogno, Alessandro

    2016-05-01

    In this paper some of the motivations for energy-efficient communications in wireless systems are described by highlighting emerging trends and identifying some challenges that need to be addressed to enable novel, scalable and energy-efficient communications. So an architecture for Internet of Things systems is presented, the purpose of which is to minimize energy consumption by communication devices, protocols, networks, end-user systems and data centers. Some electrical devices have been designed with multiple communication interfaces, such as RF or WiFi, using open source technology; they have been analyzed under different working conditions. Some devices are programmed to communicate directly with a web server, others to communicate only with a special device that acts as a bridge between some devices and the web server. Communication parameters and device status have been changed dynamically according to different scenarios in order to have the most benefits in terms of energy cost and battery lifetime. So the way devices communicate with the web server or between each other and the way they try to obtain the information they need to be always up to date change dynamically in order to guarantee always the lowest energy consumption, a long lasting battery lifetime, the fastest responses and feedbacks and the best quality of service and communication for end users and inner devices of the system.
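
    One way to picture the dynamic interface choice described above is a per-message energy policy; the sketch below picks the reachable interface with the lowest estimated cost for the payload at hand, using invented wake-up and per-byte energy figures.

        def pick_interface(payload_bytes, interfaces):
            # Choose the reachable interface with the lowest estimated energy
            # for this message (illustrative policy, invented cost model).
            candidates = [i for i in interfaces if i["reachable"]]
            return min(candidates,
                       key=lambda i: i["wakeup_mj"] + payload_bytes * i["mj_per_byte"])

        interfaces = [
            {"name": "RF-bridge", "reachable": True,
             "wakeup_mj": 0.5, "mj_per_byte": 0.002},
            {"name": "WiFi-direct", "reachable": True,
             "wakeup_mj": 8.0, "mj_per_byte": 0.0004},
        ]
        print(pick_interface(200, interfaces)["name"])  # small payload -> RF bridge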

  11. An Extensible, Ontology-based, Distributed Information System Architecture

    Chao, Alan I; Krikeles, Basil C; Lusignan, Angela E; Starczewski, Edward

    2003-01-01

    ...), which facilitates the construction of scalable, flexible distributed systems. XDA is based on a simple ontology mechanism that enables the definition and maintenance of high-level object models to capture the shared semantics necessary for interoperability...

  12. Platform Architecture for Decentralized Positioning Systems

    Zakaria Kasmi

    2017-04-01

    Full Text Available A platform architecture for positioning systems is essential for the realization of a flexible localization system, which interacts with other systems and supports various positioning technologies and algorithms. The decentralized processing of a position enables pushing the application-level knowledge into a mobile station and avoids communication with a central unit such as a server or a base station. In addition, the calculation of the position on low-cost and resource-constrained devices presents a challenge due to limited computing and storage capacity, as well as power supply. Therefore, we propose a platform architecture that enables the design of a system with reusability of the components, extensibility (e.g., with other positioning technologies) and interoperability. Furthermore, the position is computed on a low-cost device such as a microcontroller, which simultaneously performs additional tasks such as data collecting or preprocessing based on an operating system. The platform architecture is designed, implemented and evaluated on the basis of two positioning systems: a field strength system and a time-of-arrival-based positioning system.
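
    As an example of a position computation light enough for a microcontroller-class device, here is a linearized least-squares time-of-arrival fix (one of the two system types mentioned). This is a generic textbook method, not the platform's actual algorithm: subtracting the first anchor's range equation removes the quadratic unknown, leaving a linear system.

        import numpy as np

        def toa_position(anchors, distances):
            # Linearized least-squares multilateration from anchor positions
            # and measured ranges: 2*(x_i - x_0)·p = d_0^2 - d_i^2 + |x_i|^2 - |x_0|^2
            anchors = np.asarray(anchors, float)
            d = np.asarray(distances, float)
            x0, d0 = anchors[0], d[0]
            a = 2.0 * (anchors[1:] - x0)
            b = (d0 ** 2 - d[1:] ** 2
                 + np.sum(anchors[1:] ** 2, axis=1) - np.sum(x0 ** 2))
            pos, *_ = np.linalg.lstsq(a, b, rcond=None)
            return pos

        # Three anchors, all ~7.07 m from the true position (5, 5):
        print(toa_position([(0, 0), (10, 0), (0, 10)], [7.07, 7.07, 7.07]))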

  13. Design and thermal performances of a scalable linear Fresnel reflector solar system

    Zhu, Yanqing; Shi, Jifu; Li, Yujian; Wang, Leilei; Huang, Qizhang; Xu, Gang

    2017-01-01

    Highlights: • A scalable linear Fresnel reflector which can supply different temperatures is proposed. • Inclination design of the mechanical structure is used to reduce the end losses. • The maximum thermal efficiency of 64% is achieved in Guangzhou. - Abstract: This paper proposes a scalable linear Fresnel reflector (SLFR) solar system. The optical mirror field, which contains an array of linear flat mirrors placed close to each other, is designed to eliminate inter-row shading and blocking. A scalable mechanical mirror support which can hold a different number of mirrors is designed to supply different temperatures. The mechanical structure can be inclined to reduce the end losses. Finally, the thermal efficiency of the SLFR with two stage mirrors is tested. After adjustment, a maximum thermal efficiency of 64% is obtained, and the mean thermal efficiency is higher than that before adjustment. The results indicate that the end losses have been reduced effectively by the inclination design and that excellent thermal performance can be obtained by the SLFR after adjustment.

  14. Heat-treated stainless steel felt as scalable anode material for bioelectrochemical systems.

    Guo, Kun; Soeriyadi, Alexander H; Feng, Huajun; Prévoteau, Antonin; Patil, Sunil A; Gooding, J Justin; Rabaey, Korneel

    2015-11-01

    This work reports a simple and scalable method to convert stainless steel (SS) felt into an effective anode for bioelectrochemical systems (BESs) by means of heat treatment. X-ray photoelectron spectroscopy and cyclic voltammetry elucidated that the heat treatment generated an iron oxide rich layer on the SS felt surface. The iron oxide layer dramatically enhanced the electroactive biofilm formation on SS felt surface in BESs. Consequently, the sustained current densities achieved on the treated electrodes (1 cm(2)) were around 1.5±0.13 mA/cm(2), which was seven times higher than the untreated electrodes (0.22±0.04 mA/cm(2)). To test the scalability of this material, the heat-treated SS felt was scaled up to 150 cm(2) and similar current density (1.5 mA/cm(2)) was achieved on the larger electrode. The low cost, straightforwardness of the treatment, high conductivity and high bioelectrocatalytic performance make heat-treated SS felt a scalable anodic material for BESs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Telemedicine system interoperability architecture: concept description and architecture overview.

    Craft, Richard Layne, II

    2004-05-01

    In order for telemedicine to realize the vision of anywhere, anytime access to care, it must address the question of how to create a fully interoperable infrastructure. This paper describes the reasons for pursuing interoperability, outlines operational requirements that any interoperability approach needs to consider, proposes an abstract architecture for meeting these needs, identifies candidate technologies that might be used for rendering this architecture, and suggests a path forward that the telemedicine community might follow.

  16. A facile and scalable strategy for synthesis of size-tunable NiCo2O4 with nanocoral-like architecture for high-performance supercapacitors

    Tao, Yan; Ruiyi, Li; Zaijun, Li; Yinjun, Fang

    2014-01-01

    Graphical abstract: We reported a facile and scalable strategy for synthesis of size-tunable NiCo2O4 with nanocoral-like architecture. The unique structure improves the faradaic redox reaction and mass transfer, so the NiCo2O4 offers excellent electrochemical performance for supercapacitors. - Highlights: • We reported a facile and scalable strategy for synthesis of size-tunable NiCo2O4 with nanocoral-like architecture. • Combination of microwave heating and tert-butanol as medium creates ultrathin nickel/cobalt double hydroxide with flower clusters. • The method is very simple, rapid and efficient; it can be used for large-scale production of nanomaterials. • The size of the NiCo2O4 nanocorals can easily be controlled by adjusting the calcination temperature. • The unique structure enhances rates of electron transfer and mass transport, and the NiCo2O4 shows high electrochemical performance. - Abstract: There is a great need to develop high-performance electroactive materials for supercapacitors. This study reports a facile and scalable strategy for synthesis of size-tunable NiCo2O4 with nanocoral-like architecture. Cobalt nitrate and nickel nitrate were dissolved in a tert-butanol solution and heated to the reflux state under microwave radiation. An amount of ammonia was dropped into the mixed solution to form nickel/cobalt double hydroxides. The reaction completes within 15 min with a productivity of 99.9%. The obtained double hydroxides display a flower-cluster-like ultrathin nanostructure. The double hydroxide was calcined into different NiCo2O4 products using different calcination temperatures: 400 °C, 500 °C, 600 °C and 700 °C. The resulting NiCo2O4 is of nanocoral-like architecture. Interestingly, the size of the corals can be easily controlled by adjusting the temperature. The NiCo2O4 prepared at 400 °C gives the minimum building block size (10.2 nm) and maximum specific surface area (108.8 m²·g⁻¹). The unique structure will greatly

  17. The architecture of a modern military health information system.

    Mukherji, Raj J; Egyhazy, Csaba J

    2004-06-01

    This article describes a melding of a government-sponsored architecture for complex systems with an open systems engineering architecture developed by the Institute of Electrical and Electronics Engineers (IEEE). Our experience in using these two architectures in building a complex healthcare system is described in this paper. The work described shows that it is possible to combine these two architectural frameworks in describing the systems, operational, and technical views of a complex automation system. The advantage in combining the two architectural frameworks lies in the simplicity of implementation and ease of understanding of automation system architectural elements by medical professionals.

  18. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    Toor, S; Eerola, P; Kraemer, O; Lindén, T; Osmani, L; Tarkoma, S; White, J

    2014-01-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

  19. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.

    2014-06-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

  20. Battery-Less Electroencephalogram System Architecture Optimization

    2016-12-01

    Keywords: self-powered, adaptive data acquisition, subthreshold, Internet of Things. ...desirable, such as for Internet of Things systems. The presented architecture is capable of low-power operation while maintaining a similar signal...the power for the system will need to be harvested from the environment. There are several methods to harvest power from RF, solar, motion, and thermal sources. In this case

  1. An Architecture for Information Commerce Systems

    Hauswirth, Manfred; Jazayeri, Mehdi; Miklós, Zoltan; Podnar, Ivana; Di Nitto, Elisabetta; Wombacher, Andreas

    2001-01-01

    The increasing use of the Internet in business and commerce has created a number of new business opportunities and the need for supporting models and platforms. One of these opportunities is information commerce (i-commerce), a special case of e-commerce focused on the purchase and sale of information as a commodity. In this paper we present an architecture for i-commerce systems using OPELIX (Open Personalized Electronic Information Commerce System) [11] as an example. OPELIX provides an open...

  2. The NOAA Satellite Observing System Architecture Study

    Volz, Stephen; Maier, Mark; Di Pietro, David

    2016-01-01

    NOAA is beginning a study, the NOAA Satellite Observing System Architecture (NSOSA) study, to plan for the future operational environmental satellite system that will follow GOES and JPSS, beginning about 2030. This is an opportunity to design a modern architecture with no pre-conceived notions regarding instruments, platforms, orbits, etc. The NSOSA study will develop and evaluate architecture alternatives to include partner and commercial alternatives that are likely to become available. The objectives will include both functional needs and strategic characteristics (e.g., flexibility, responsiveness, sustainability). Part of this study is the Space Platform Requirements Working Group (SPRWG), which is being commissioned by NESDIS. The SPRWG is charged to assess new or existing user needs and to provide relative priorities for observational needs in the context of the future architecture. SPRWG results will serve as input to the process for new foundational (Level 0 and Level 1) requirements for the next generation of NOAA satellites that follow the GOES-R, JPSS, DSCOVR, Jason-3, and COSMIC-2 missions.

  3. Systemic Approach to Architectural Performance

    Marie Davidova

    2017-04-01

    Full Text Available First-hand experiences in several design projects that were based on media richness and collaboration are described in this article. Although complex design processes are merely considered as socio-technical systems, they are deeply involved with natural systems. My collaborative research in the field of performance-oriented design combines digital and physical conceptual sketches, simulations and prototyping. GIGA-mapping is applied to organise the data. The design process uses the most suitable tools for the subtasks at hand, and the use of media is mixed according to particular requirements. These tools include digital and physical GIGA-mapping, parametric computer-aided design (CAD), digital simulation of analyses, as well as sampling and 1:1 prototyping. Also discussed in this article are the methodologies used in several design projects to strategize these tools, and the developments and trends in the tools employed. The paper argues that digital tools tend to produce similar results through given pre-sets that often do not correspond to real needs. Thus, there is a significant need for mixed methods, including prototyping, in the creative design process. Media mixing and cooperation across disciplines are unavoidable in the holistic approach to contemporary design. This includes the consideration of diverse biotic and abiotic agents. I argue that physical and digital GIGA-mapping is a crucial tool for coping with this complexity. Furthermore, I propose the integration of physical and digital outputs in one GIGA-map, and the participation and co-design of biotic and abiotic agents in one rich design research space, resulting in an ever-evolving research-design process-result time-based design.

  4. Deep Space Network information system architecture study

    Beswick, C. A.; Markley, R. W. (Editor); Atkinson, D. J.; Cooper, L. P.; Tausworthe, R. C.; Masline, R. C.; Jenkins, J. S.; Crowe, R. A.; Thomas, J. L.; Stoloff, M. J.

    1992-01-01

    The purpose of this article is to describe an architecture for the DSN information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies--i.e., computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.

  5. A new architecture for enterprise information systems.

    Covvey, H D; Stumpf, J J

    1999-01-01

    Irresistible economic and technical forces are forcing healthcare institutions to develop regionalized services such as consolidated or virtual laboratories. Technical realities, such as the lack of an enabling enterprise-level information technology (IT) integration infrastructure, the existence of legacy systems, and non-existent or embryonic enterprise-level IT services organizations, are delaying or frustrating the achievement of the desired configuration of shared services. On attempting to address this matter, we discover that the state-of-the-art in integration technology is not wholly adequate, and itself becomes a barrier to the full realization of shared healthcare services. In this paper we report new work from the field of Co-operative Information Systems that proposes a new architecture of systems that are intrinsically cooperation-enabled, and we extend this architecture to both the regional and national scales.

  6. Compact, open-architecture computed radiography system

    Huang, H.K.; Lim, A.; Kangarloo, H.; Eldredge, S.; Loloyan, M.; Chuang, K.S.

    1990-01-01

    Computed radiography (CR) was introduced in 1982, and its basic system design has not changed. Current CR systems have certain limitations: spatial resolution and signal-to-noise ratios are lower than those of screen-film systems, they are complicated and expensive to build, and they have a closed architecture. The authors of this paper designed and implemented a simpler, lower-cost, compact, open-architecture CR system to overcome some of these limitations. The open-architecture system is a manual-load-single-plate reader that can fit on a desktop. Phosphor images are stored on a local disk and can be sent to any other computer through standard interfaces. Any manufacturer's plate can be read with a scanning time of 90 seconds for a 35 x 43-cm plate. The standard pixel size is 174 μm and can be adjusted for higher spatial resolution. The data resolution is 12 bits/pixel over an x-ray exposure range of 0.01-100 mR
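
    As a rough plausibility check (not taken from the paper), the quoted plate size and pixel pitch imply the digital image matrix and per-plate storage:

    ```python
    # Back-of-envelope numbers implied by the quoted specs: a 35 x 43-cm
    # plate sampled at 174-um pixels, 12 bits/pixel (stored here as
    # 2-byte words, a common packing assumption).
    nx = round(350 / 0.174)          # ~2011 pixels across 35 cm
    ny = round(430 / 0.174)          # ~2471 pixels across 43 cm
    size_mb = nx * ny * 2 / 1e6      # ~9.9 MB per plate
    print(nx, ny, f"{size_mb:.1f} MB")
    ```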

  7. System Hardening Architecture for Safer Access to Critical Business ...

    System Hardening Architecture for Safer Access to Critical Business Data. ... and the threat is growing faster than the potential victims can deal with. ... in this architecture are applied to the host, application, operating system, user, and the ...

  8. A resource management architecture for metacomputing systems.

    Czajkowski, K.; Foster, I.; Karonis, N.; Kesselman, C.; Martin, S.; Smith, W.; Tuecke, S.

    1999-08-24

    Metacomputing systems are intended to support remote and/or concurrent use of geographically distributed computational resources. Resource management in such systems is complicated by five concerns that do not typically arise in other situations: site autonomy and heterogeneous substrates at the resources, and application requirements for policy extensibility, co-allocation, and online control. We describe a resource management architecture that addresses these concerns. This architecture distributes the resource management problem among distinct local manager, resource broker, and resource co-allocator components and defines an extensible resource specification language to exchange information about requirements. We describe how these techniques have been implemented in the context of the Globus metacomputing toolkit and used to implement a variety of different resource management strategies. We report on our experiences applying our techniques in a large testbed, GUSTO, incorporating 15 sites, 330 computers, and 3600 processors.
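
    The division of labor described above (a broker matches requests against site state reported by local managers, while a co-allocator splits requests across sites) can be sketched in a few lines. The data structures and field names below are purely illustrative; they are not the Globus resource specification language or its APIs.

    ```python
    # Illustrative broker-style matching; names are hypothetical.
    request = {"count": 64, "memory_mb": 512, "arch": "x86"}

    sites = [  # state that per-site local resource managers would report
        {"name": "siteA", "free_cpus": 128, "memory_mb": 1024, "arch": "x86"},
        {"name": "siteB", "free_cpus": 32,  "memory_mb": 2048, "arch": "x86"},
    ]

    def broker(request, sites):
        """Return sites able to satisfy the request on their own; a
        co-allocator could instead split the request across several
        partial matches."""
        return [s["name"] for s in sites
                if s["free_cpus"] >= request["count"]
                and s["memory_mb"] >= request["memory_mb"]
                and s["arch"] == request["arch"]]

    print(broker(request, sites))  # -> ['siteA']
    ```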

  9. Functional Interface Considerations within an Exploration Life Support System Architecture

    Perry, Jay L.; Sargusingh, Miriam J.; Toomarian, Nikzad

    2016-01-01

    As notional life support system (LSS) architectures are developed and evaluated, myriad options must be considered pertaining to process technologies, components, and equipment assemblies. Each option must be evaluated relative to its impact on key functional interfaces within the LSS architecture. A leading notional architecture has been developed to guide the path toward realizing future crewed space exploration goals. This architecture includes atmosphere revitalization, water recovery and management, and environmental monitoring subsystems. Guiding requirements for developing this architecture are summarized and important interfaces within the architecture are discussed. The role of environmental monitoring within the architecture is described.

  10. A scalable and practical one-pass clustering algorithm for recommender system

    Khalid, Asra; Ghazanfar, Mustansar Ali; Azam, Awais; Alahmari, Saad Ali

    2015-12-01

    KMeans clustering-based recommendation algorithms have been proposed claiming to increase the scalability of recommender systems. One potential drawback of these algorithms is that they perform training offline and hence cannot accommodate incremental updates with the arrival of new data, making them unsuitable for dynamic environments. From this line of research, a new clustering algorithm called One-Pass is proposed, which is simple, fast, and accurate. We show empirically that the proposed algorithm outperforms K-Means in terms of recommendation and training time while maintaining a good level of accuracy.
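
    The abstract does not spell out the One-Pass algorithm itself; as a rough illustration of the general single-pass idea (each point is seen exactly once and clusters grow incrementally, so new data can be absorbed online without offline retraining), a minimal sketch with a hypothetical distance threshold might look like this:

    ```python
    import numpy as np

    def one_pass_cluster(points, threshold):
        """Single-pass clustering sketch: assign each point to the nearest
        existing centroid if within `threshold`, else seed a new cluster.
        Centroids are maintained as running means."""
        centroids, counts, labels = [], [], []
        for x in points:
            d = [float(np.linalg.norm(x - c)) for c in centroids]
            j = int(np.argmin(d)) if d else -1
            if j < 0 or d[j] > threshold:
                centroids.append(np.asarray(x, dtype=float).copy())
                counts.append(1)
                labels.append(len(centroids) - 1)
            else:
                counts[j] += 1
                centroids[j] += (x - centroids[j]) / counts[j]  # incremental mean
                labels.append(j)
        return centroids, labels

    # In a recommender setting, `points` could be user-rating vectors;
    # recommendations then come from neighbors within a user's cluster
    # rather than from the whole user base.
    ```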

  11. System architecture of a mixed reality framework

    Seibert, Helmut; Dähne, Patrick

    2006-01-01

    In this paper the software architecture of a framework which simplifies the development of applications in the area of Virtual and Augmented Reality is presented. It is based on VRML/X3D to enable rendering of audio-visual information. We extended our VRML rendering system by a device management system that is based on the concept of a data-flow graph. The aim of the system is to create Mixed Reality (MR) applications simply by plugging together small prefabricated software components, instea...
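
    As a toy illustration of the data-flow-graph idea behind such a device management system (all class and device names below are hypothetical, not the framework's API), nodes pull values from their inputs once per rendered frame:

    ```python
    # Toy pull-style data-flow graph in the spirit of the described
    # device manager; names are illustrative only.
    class Node:
        def __init__(self, fn, *inputs):
            self.fn, self.inputs = fn, inputs

        def evaluate(self):
            # Compute upstream nodes first, then apply this node's function.
            return self.fn(*(n.evaluate() for n in self.inputs))

    tracker = Node(lambda: (0.12, 0.20, 1.50))                      # sensor stub
    smoother = Node(lambda p: tuple(round(v, 1) for v in p), tracker)
    viewer = Node(lambda p: print("camera pose:", p), smoother)

    viewer.evaluate()   # evaluated once per frame
    ```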

  12. New architectures for space power systems

    Ehsani, M.; Patton, A.D.; Biglic, O.

    1992-01-01

    Electric power generation and conditioning have experienced revolutionary development over the past two decades. Furthermore, new materials such as high energy magnets and high temperature superconductors are either available or on the horizon. The authors' work is based on the premise that new technologies are an important driver of new power system concepts and architectures. This observation is borne out by the historical evolution of power systems in both terrestrial and aerospace applications. This paper will introduce new approaches to designing space power systems by using several new technologies

  13. The sustainable IT architecture resilient information systems

    Bonnet, P

    2009-01-01

    This book focuses on Service Oriented Architecture (SOA), the basis of sustainable and more agile IT systems that are able to adapt themselves to new trends and manage processes involving a third party. The discussion is based on the public Praxeme method and features a number of examples taken from large SOA projects which were used to rewrite the information systems of an insurance company; as such, decision-makers, creators of IT systems, programmers and computer scientists, as well as those who will use these new developments, will find this a useful resource

  14. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    Maly, K.

    1998-01-01

    Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed in various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve the application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, which is a large-scale distributed system for collaborative distance learning. The filtering mechanism represents an intrinsic component integrated
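
    A minimal sketch of the content-based filtering idea (a hypothetical API, not the paper's implementation): subscribers register predicates over event attributes, and only matching events are forwarded, which is what cuts the event traffic reaching each management tool.

    ```python
    class EventFilter:
        """Content-based publish/subscribe sketch: each subscriber supplies
        a predicate; non-matching events are dropped at the filter instead
        of flooding every management application."""
        def __init__(self):
            self.subscriptions = []  # (predicate, callback) pairs

        def subscribe(self, predicate, callback):
            self.subscriptions.append((predicate, callback))

        def publish(self, event):
            for predicate, callback in self.subscriptions:
                if predicate(event):
                    callback(event)

    bus = EventFilter()
    bus.subscribe(lambda e: e["severity"] == "error",
                  lambda e: print("debugger notified:", e))
    bus.publish({"severity": "error", "node": "host42", "msg": "timeout"})
    bus.publish({"severity": "info", "node": "host7", "msg": "heartbeat"})  # filtered out
    ```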

  15. A generic architecture for an adaptive, interoperable and intelligent type 2 diabetes mellitus care system.

    Uribe, Gustavo A; Blobel, Bernd; López, Diego M; Schulz, Stefan

    2015-01-01

    Chronic diseases such as Type 2 Diabetes Mellitus (T2DM) constitute a major burden on the global health economy. T2DM care management requires a multi-disciplinary and multi-organizational approach. Because of differences in languages and terminologies, education, experience, skills, etc., such an approach poses a special interoperability challenge. The solution is a flexible, scalable, business-controlled, adaptive, knowledge-based, intelligent system following a systems-oriented, architecture-centric, ontology-based and policy-driven approach. The architecture of real systems is described using the basics and principles of the Generic Component Model (GCM). For representing the functional aspects of a system, the Business Process Modeling Notation (BPMN) is used. The system architecture obtained is presented using a GCM graphical notation, class diagrams and BPMN diagrams. The architecture-centric approach considers the compositional nature of the real-world system and its functionalities, guarantees coherence, and supports correct inferences. The level of generality provided in this paper facilitates use-case-specific adaptations of the system. In that way, intelligent, adaptive and interoperable T2DM care systems can be derived from the presented model, as demonstrated in another publication.

  16. Scalable Parallel Distributed Coprocessor System for Graph Searching Problems with Massive Data

    Wanrong Huang

    2017-01-01

    Internet applications, such as network searching, electronic commerce, and modern medical applications, produce and process massive data. Considerable data parallelism exists in the computation processes of data-intensive applications. The breadth-first search (BFS) traversal algorithm is fundamental to many graph processing applications and metrics as graphs grow in scale. A variety of scientific programming methods have been proposed for accelerating and parallelizing BFS because of the poor temporal and spatial locality caused by its inherently irregular memory access patterns; new parallel hardware, however, can further improve on these methods. To address small-world graph problems, we propose a scalable and novel field-programmable gate array-based heterogeneous multicore system for scientific programming. Each core is multithreaded for streaming processing, and the InfiniBand communication network is adopted for scalability. We design a binary-search-based address-mapping algorithm to unify all processor addresses. Within the limits permitted by the Graph500 test bench, after testing a 1D parallel hybrid BFS algorithm, our 8-core, 8-thread-per-core system achieved superior performance and efficiency compared with prior work at the same degree of parallelism. Our system is efficient not as a special acceleration unit but as a processor platform that deals with graph searching applications.
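
    For reference, the serial skeleton of the level-synchronous BFS that such systems parallelize is shown below; the per-level frontier expansion is what gets distributed across cores or hardware threads. This is a generic sketch, not the paper's implementation.

    ```python
    def bfs_levels(adj, source):
        """Level-synchronous BFS over an adjacency list; returns the hop
        distance of every reachable vertex. Parallel/hybrid variants
        partition each frontier expansion across processing elements."""
        dist = {source: 0}
        frontier = [source]
        while frontier:
            next_frontier = []
            for u in frontier:
                for v in adj[u]:
                    if v not in dist:          # irregular access pattern
                        dist[v] = dist[u] + 1
                        next_frontier.append(v)
            frontier = next_frontier
        return dist

    adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
    print(bfs_levels(adj, 0))   # {0: 0, 1: 1, 2: 1, 3: 2}
    ```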

  17. Large computer systems and new architectures

    Bloch, T.

    1978-01-01

    The super-computers of today are becoming quite specialized and one can no longer expect to get all the state-of-the-art software and hardware facilities in one package. In order to achieve faster and faster computing it is necessary to experiment with new architectures, and the cost of developing each experimental architecture into a general-purpose computer system is too high when one considers the relatively small market for these computers. The result is that such computers are becoming 'back-ends' either to special systems (BSP, DAP) or to anything (CRAY-1). Architecturally the CRAY-1 is the most attractive today since it guarantees a speed gain of a factor of two over a CDC 7600 thus allowing us to regard any speed up resulting from vectorization as a bonus. It looks, however, as if it will be very difficult to make substantially faster computers using only pipe-lining techniques and that it will be necessary to explore multiple processors working on the same problem. The experience which will be gained with the BSP and the DAP over the next few years will certainly be most valuable in this respect. (Auth.)

  18. ARCHITECTURE AND RELIABILITY OF OPERATING SYSTEMS

    Stanislav V. Nazarov

    2018-03-01

    Progress in the production technology of microprocessors has significantly increased the reliability and performance of computer systems hardware. The same cannot be said of the corresponding characteristics of the software and its basis, the operating system (OS); the achievements of software engineering in this field are more modest. Both directions of OS improvement (increasing productivity and reliability) are connected with the development of effective structures for these systems. The functional complexity of an OS leads to the complexity of its architecture, which is further increased by the specialization of the operating system depending on the computer system's application area (complex scientific calculations, real time, information retrieval systems, automated and automatic control systems, etc.). That fact has led to the variety of modern OS. The reliability of different OS structures can be estimated only through long-term field experiments or simulation modeling, which is most often unacceptable because of the time and funds such research requires. This survey attempts to evaluate the reliability of two main OS architectures: a large multi-layered modular core and a multiserver (client-server) system. Models of these systems are developed as continuous Markov chains and explored in the stationary regime by passing from the Kolmogorov systems of differential equations to systems of linear algebraic equations.
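
    In the stationary regime, the passage from Kolmogorov differential equations to linear algebraic equations amounts to solving pi Q = 0 with the normalization sum(pi) = 1. A minimal sketch on a toy two-state up/down model (rates are illustrative, not from the survey):

    ```python
    import numpy as np

    # Toy two-state availability model: state 0 = OS operational,
    # state 1 = failed; lam = failure rate, mu = recovery rate
    # (illustrative values, not from the survey).
    lam, mu = 0.01, 1.0
    Q = np.array([[-lam,  lam],
                  [  mu,  -mu]])          # CTMC generator; rows sum to zero

    # Stationary regime: solve pi @ Q = 0 subject to sum(pi) = 1 by
    # stacking the normalization equation onto the balance equations.
    A = np.vstack([Q.T, np.ones(2)])
    b = np.array([0.0, 0.0, 1.0])
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    print(pi)   # -> [0.990..., 0.0099...]: long-run availability ~99%
    ```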

  19. A Grid Architecture for Manufacturing Database System

    Laurentiu CIOVICĂ

    2011-06-01

    Before the Enterprise Resource Planning concept, business functions within enterprises were supported by small and isolated applications, most of them developed internally. Yet today ERP platforms are not by themselves the answer to all organizational needs, especially in times of differentiated and diversified demands among end customers. ERP platforms were integrated with specialized systems for the management of clients (Customer Relationship Management) and vendors (Supplier Relationship Management). They were integrated with Manufacturing Execution Systems for better planning and control of production lines. In order to offer real-time, efficient answers to the management level, ERP systems were integrated with Business Intelligence systems. This paper analyses the advantages of grid computing at this level of integration, communication and interoperability between complex specialized information systems, with a focus on the system architecture and database systems.

  20. Renaissance architecture for Ground Data Systems

    Perkins, Dorothy C.; Zeigenfuss, Lawrence B.

    1994-01-01

    The Mission Operations and Data Systems Directorate (MO&DSD) has embarked on a new approach for developing and operating Ground Data Systems (GDS) for flight mission support. This approach is driven by the goals of minimizing cost and maximizing customer satisfaction. Achievement of these goals is realized through the use of a standard set of capabilities which can be modified to meet specific user needs. This approach, which is called the Renaissance architecture, stresses the engineering of integrated systems, based upon workstation/local area network (LAN)/fileserver technology and reusable hardware and software components called 'building blocks.' These building blocks are integrated with mission specific capabilities to build the GDS for each individual mission. The building block approach is key to the reduction of development costs and schedules. Also, the Renaissance approach allows the integration of GDS functions that were previously provided via separate multi-mission facilities. With the Renaissance architecture, the GDS can be developed by the MO&DSD or all, or part, of the GDS can be operated by the user at their facility. Flexibility in operation configuration allows both selection of a cost-effective operations approach and the capability for customizing operations to user needs. Thus the focus of the MO&DSD is shifted from operating systems that we have built to building systems and, optionally, operations as separate services. Renaissance is actually a continuous process. Both the building blocks and the system architecture will evolve as user needs and technology change. Providing GDS on a per user basis enables this continuous refinement of the development process and product and allows the MO&DSD to remain a customer-focused organization. This paper will present the activities and results of the MO&DSD initial efforts toward the establishment of the Renaissance approach for the development of GDS, with a particular focus on both the technical

  1. Baseline Architecture of ITER Control System

    Wallander, A.; Di Maio, F.; Journeaux, J.-Y.; Klotz, W.-D.; Makijarvi, P.; Yonekawa, I.

    2011-08-01

    The control system of ITER consists of thousands of computers processing hundreds of thousands of signals. The control system, being the primary tool for operating the machine, shall integrate, control and coordinate all these computers and signals and allow a limited number of staff to operate the machine from a central location with minimum human intervention. The primary functions of the ITER control system are plant control, supervision and coordination, both during experimental pulses and 24/7 continuous operation. The former can be split into three phases: preparation of the experiment by defining all parameters; execution of the experiment, including distributed feedback control; and finally collection, archiving, analysis and presentation of all data produced by the experiment. We define the control system as a set of hardware and software components with well defined characteristics. The architecture addresses the organization of these components and their relationship to each other. We distinguish between physical and functional architecture, where the former defines the physical connections and the latter the data flow between components. In this paper, we identify the ITER control system based on the plant breakdown structure. Then, the control system is partitioned into a workable set of bounded subsystems. This partition considers at the same time the completeness and the integration of the subsystems. The components making up subsystems are identified and defined, a naming convention is introduced and the physical networks defined. Special attention is given to timing and real-time communication for distributed control. Finally we discuss baseline technologies for implementing the proposed architecture based on analysis, market surveys, prototyping and benchmarking carried out during the last year.

  2. Scalable fractionation of iron oxide nanoparticles using a CO2 gas-expanded liquid system

    Vengsarkar, Pranav S.; Xu, Rui; Roberts, Christopher B.

    2015-01-01

    Iron oxide nanoparticles exhibit highly size-dependent physicochemical properties that are important in applications such as catalysis and environmental remediation. In order for these size-dependent properties to be effectively harnessed for industrial applications, scalable and cost-effective techniques for size-controlled synthesis or size separation must be developed. The synthesis of monodisperse iron oxide nanoparticles can be a prohibitively expensive process on a large scale. An alternative involves the use of inexpensive synthesis procedures followed by a size-selective processing technique. While there are many techniques available to fractionate nanoparticles, many of them are unable to efficiently fractionate iron oxide nanoparticles in a scalable and inexpensive manner. A scalable apparatus capable of fractionating large quantities of iron oxide nanoparticles into distinct fractions of different sizes and size distributions has been developed. Polydisperse iron oxide nanoparticles (2-20 nm) coated with oleic acid used in this study were synthesized using a simple and inexpensive version of the popular coprecipitation technique. This apparatus uses hexane as a CO2 gas-expanded liquid to controllably precipitate nanoparticles inside a 1 L high-pressure reactor. This paper demonstrates the operation of this new apparatus and for the first time shows successful fractionation results on a system of metal oxide nanoparticles, with initial nanoparticle concentrations at the gram scale. The analysis of the obtained fractions was performed using transmission electron microscopy and dynamic light scattering. The use of this simple apparatus provides a pathway to separate large quantities of iron oxide nanoparticles based upon their size for use in various industrial applications.

  3. The system architecture for renewable synthetic fuels

    Ridjan, Iva

    To overcome and eventually eliminate the existing heavy fossil fuels in the transport sector, there is a need for new renewable fuels. This transition could lead to large capital costs for implementing the new solutions and a long time frame for establishing the new infrastructure unless a suitable...... and production plants, so it is important to implement it in the best manner possible to ensure an efficient and flexible system. The poster will provide an overview of the steps involved in the production of synthetic fuel and possible solutions for the system architecture based on the current literature...

  4. LISA Mission and System architectures and performances

    Gath, Peter F; Weise, Dennis; Schulte, Hans-Reiner; Johann, Ulrich

    2009-01-01

    In the context of the LISA Mission Formulation Study, the LISA System was studied in detail and a new baseline architecture for the whole mission was established. This new baseline is the result of trade-offs on both, mission and system level. The paper gives an overview of the different mission scenarios and configurations that were studied in connection with their corresponding advantages and disadvantages as well as performance estimates. Differences in the required technologies and their influence on the overall performance budgets are highlighted for all configurations. For the selected baseline concept, a more detailed description of the configuration is given and open issues in the technologies involved are discussed.

  5. LISA Mission and System architectures and performances

    Gath, Peter F; Weise, Dennis; Schulte, Hans-Reiner; Johann, Ulrich, E-mail: peter.gath@astrium.eads.ne [Astrium GmbH Satellites, 88039 Friedrichshafen (Germany)

    2009-03-01

    In the context of the LISA Mission Formulation Study, the LISA System was studied in detail and a new baseline architecture for the whole mission was established. This new baseline is the result of trade-offs on both, mission and system level. The paper gives an overview of the different mission scenarios and configurations that were studied in connection with their corresponding advantages and disadvantages as well as performance estimates. Differences in the required technologies and their influence on the overall performance budgets are highlighted for all configurations. For the selected baseline concept, a more detailed description of the configuration is given and open issues in the technologies involved are discussed.

  6. REST in practice Hypermedia and systems architecture

    Webber, Jim; Robinson, Ian

    2010-01-01

    Why don't typical enterprise projects go as smoothly as projects you develop for the Web? Does the REST architectural style really present a viable alternative for building distributed systems and enterprise-class applications? In this insightful book, three SOA experts provide a down-to-earth explanation of REST and demonstrate how you can develop simple and elegant distributed hypermedia systems by applying the Web's guiding principles to common enterprise computing problems. You'll learn techniques for implementing specific Web technologies and patterns to solve the needs of a typical com

  7. A scalable, self-analyzing digital locking system for use on quantum optics experiments.

    Sparkes, B M; Chrzanowski, H M; Parrain, D P; Buchler, B C; Lam, P K; Symul, T

    2011-07-01

    Digital control of optics experiments has many advantages over analog control systems, specifically in terms of the scalability, cost, flexibility, and the integration of system information into one location. We present a digital control system, freely available for download online, specifically designed for quantum optics experiments that allows for automatic and sequential re-locking of optical components. We show how the inbuilt locking analysis tools, including a white-noise network analyzer, can be used to help optimize individual locks, and verify the long term stability of the digital system. Finally, we present an example of the benefits of digital locking for quantum optics by applying the code to a specific experiment used to characterize optical Schrödinger cat states.
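
    The paper's control code is available online rather than reproduced in the abstract; as a generic illustration of what a single digital lock loop does, a proportional-integral servo on an error signal might look like the following (all names and gains are hypothetical):

    ```python
    def pi_lock(read_error, write_actuator, kp=0.1, ki=0.01, steps=10000):
        """Generic digital PI lock-loop sketch: drive an error signal to
        zero by feeding back onto an actuator (e.g. a piezo on a cavity
        mirror). A supervisor process would watch the error magnitude and
        trigger a sequential relock scan when the loop loses capture."""
        integral, output = 0.0, 0.0
        for _ in range(steps):
            err = read_error()
            integral += err
            output -= kp * err + ki * integral   # negative feedback
            write_actuator(output)
    ```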

  8. An Architectural Framework for Integrating COTS/GOTS/Legacy Systems

    Gee, Karen

    2000-01-01

    .... To fully realize the DoD's goal, a new architectural framework is needed. This thesis proposes an architectural framework suitable for integrating COTS/GOTS/legacy systems in a distributed, heterogeneous environment...

  9. A novel system architecture for the national integration of electronic health records: a semi-centralized approach.

    AlJarullah, Asma; El-Masri, Samir

    2013-08-01

    The goal of a national electronic health records integration system is to aggregate electronic health records concerning a particular patient at different healthcare providers' systems to provide a complete medical history of the patient. It holds the promise to address the two most crucial challenges to healthcare systems: improving healthcare quality and controlling costs. Typical approaches for the national integration of electronic health records are a centralized architecture and a distributed architecture. This paper proposes a new approach for the national integration of electronic health records, the semi-centralized approach, an intermediate solution between the centralized architecture and the distributed architecture that has the benefits of both approaches. The semi-centralized approach is provided with a clearly defined architecture. The main data elements needed by the system are defined and the main system modules that are necessary to achieve an effective and efficient functionality of the system are designed. Best practices and essential requirements are central to the evolution of the proposed architecture. The proposed architecture will provide the basis for designing the simplest and the most effective systems to integrate electronic health records on a nation-wide basis that maintain integrity and consistency across locations, time and systems, and that meet the challenges of interoperability, security, privacy, maintainability, mobility, availability, scalability, and load balancing.
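
    A minimal sketch of the semi-centralized idea as described (the data model below is hypothetical): the centre stores only an index of which providers hold records for a patient, while clinical content stays at the providers and is aggregated on demand.

    ```python
    # Hypothetical record-locator sketch: central index + provider stores.
    central_index = {"patient-42": ["hospital-a", "clinic-b"]}

    provider_stores = {
        "hospital-a": {"patient-42": [{"date": "2012-05-01", "dx": "J45"}]},
        "clinic-b":   {"patient-42": [{"date": "2013-01-15", "dx": "E11"}]},
    }

    def aggregate_history(patient_id):
        """Resolve holders via the central index, then pull each
        provider's records; only pointers live centrally, not content."""
        records = []
        for provider in central_index.get(patient_id, []):
            records.extend(provider_stores[provider].get(patient_id, []))
        return sorted(records, key=lambda r: r["date"])

    print(aggregate_history("patient-42"))
    ```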

  10. A COMPARATIVE STUDY OF SYSTEM NETWORK ARCHITECTURE Vs DIGITAL NETWORK ARCHITECTURE

    Seema; Mukesh Arya

    2011-01-01

    Efficient management of resources is mandatory for the successful running of any network. This paper describes two of the most popular network architectures: System Network Architecture (SNA), developed by IBM, and Digital Network Architecture (DNA). Network standards and protocols are needed by network developers as well as users. Some standards are the IEEE 802.3 standards (The Institute of Electrical and Electronics Engineers, 1980) (LAN), IBM Sta...

  11. Final Report: Enabling Exascale Hardware and Software Design through Scalable System Virtualization

    Bridges, Patrick G.

    2015-02-01

    In this grant, we enhanced the Palacios virtual machine monitor to increase its scalability and suitability for addressing exascale system software design issues. This included a wide range of research on core Palacios features, large-scale system emulation, fault injection, performance monitoring, and VMM extensibility. This research resulted in a large number of high-impact publications in well-known venues, the support of a number of students, and the graduation of two Ph.D. students and one M.S. student. In addition, our enhanced version of the Palacios virtual machine monitor has been adopted as a core element of the Hobbes operating system under active DOE-funded research and development.

  12. Progress Report 2008: A Scalable and Extensible Earth System Model for Climate Change Science

    Drake, John B [ORNL; Worley, Patrick H [ORNL; Hoffman, Forrest M [ORNL; Jones, Phil [Los Alamos National Laboratory (LANL)

    2009-01-01

    This project employs multi-disciplinary teams to accelerate development of the Community Climate System Model (CCSM), based at the National Center for Atmospheric Research (NCAR). A consortium of eight Department of Energy (DOE) National Laboratories collaborate with NCAR and the NASA Global Modeling and Assimilation Office (GMAO). The laboratories are Argonne (ANL), Brookhaven (BNL), Los Alamos (LANL), Lawrence Berkeley (LBNL), Lawrence Livermore (LLNL), Oak Ridge (ORNL), Pacific Northwest (PNNL) and Sandia (SNL). The work plan focuses on scalability for petascale computation and extensibility to a more comprehensive earth system model. Our stated goal is to support the DOE mission in climate change research by helping ... to determine the range of possible climate changes over the 21st century and beyond through simulations using a more accurate climate system model that includes the full range of human and natural climate feedbacks with increased realism and spatial resolution.

  13. Object-oriented integrated approach for the design of scalable ECG systems.

    Boskovic, Dusanka; Besic, Ingmar; Avdagic, Zikrija

    2009-01-01

    The paper presents the implementation of Object-Oriented (OO) integrated approaches to the design of scalable Electro-Cardio-Graph (ECG) Systems. The purpose of this methodology is to preserve real-world structure and relations with the aim to minimize the information loss during the process of modeling, especially for Real-Time (RT) systems. We report on a case study of the design that uses the integration of OO and RT methods and the Unified Modeling Language (UML) standard notation. OO methods identify objects in the real-world domain and use them as fundamental building blocks for the software system. The gained experience based on the strongly defined semantics of the object model is discussed and related problems are analyzed.

  14. Scalable Integrated Multi-Mission Support System Simulator Release 3.0

    Kim, John; Velamuri, Sarma; Casey, Taylor; Bemann, Travis

    2012-01-01

    The Scalable Integrated Multi-mission Support System (SIMSS) is a tool that performs a variety of test activities related to spacecraft simulations and ground segment checks. SIMSS is a distributed, component-based, plug-and-play client-server system useful for performing real-time monitoring and communications testing. SIMSS runs on one or more workstations and is designed to be user-configurable or to use predefined configurations for routine operations. SIMSS consists of more than 100 modules that can be configured to create, receive, process, and/or transmit data. The SIMSS/GMSEC innovation is intended to provide missions with a low-cost solution for implementing their ground systems, as well as significantly reducing a mission's integration time and risk.

  15. weHelp: A Reference Architecture for Social Recommender Systems.

    Sheth, Swapneel; Arora, Nipun; Murphy, Christian; Kaiser, Gail

    2010-01-01

    Recommender systems have become increasingly popular. Most of the research on recommender systems has focused on recommendation algorithms. There has been relatively little research, however, in the area of generalized system architectures for recommendation systems. In this paper, we introduce weHelp: a reference architecture for social recommender systems - systems where recommendations are derived automatically from the aggregate of logged activities conducted by the system's users. Our architecture is designed to be application and domain agnostic. We feel that a good reference architecture will make designing a recommendation system easier; in particular, weHelp aims to provide a practical design template to help developers design their own well-modularized systems.

  16. PanDA Beyond ATLAS : A Scalable Workload Management System For Data Intensive Science

    Borodin, M; The ATLAS collaboration; Jha, S; Golubkov, D; Klimentov, A; Maeno, T; Nilsson, P; Oleynik, D; Panitkin, S; Petrosyan, A; Schovancova, J; Vaniachine, A; Wenaus, T

    2014-01-01

    The LHC experiments are today at the leading edge of large scale distributed data-intensive computational science. The LHC's ATLAS experiment processes data volumes which are particularly extreme, over 140 PB to date, distributed worldwide at over 120 sites. An important element in the success of the exciting physics results from ATLAS is the highly scalable integrated workflow and dataflow management afforded by the PanDA workload management system, used for all the distributed computing needs of the experiment. The PanDA design is not experiment specific and PanDA is now being extended to support other data intensive scientific applications. PanDA was cited as an example of "a high performance, fault tolerant software for fast, scalable access to data repositories of many kinds" during the "Big Data Research and Development Initiative" announcement, a 200 million USD U.S. government investment in tools to handle huge volumes of digital data needed to spur science and engineering discoveries. In this talk...

  17. A Cloud-based Infrastructure and Architecture for Environmental System Research

    Wang, D.; Wei, Y.; Shankar, M.; Quigley, J.; Wilson, B. E.

    2016-12-01

    The present availability of high-capacity networks, low-cost computers and storage devices, and the widespread adoption of hardware virtualization and service-oriented architecture provide a great opportunity to enable data and computing infrastructure sharing between closely related research activities. By taking advantage of these approaches, along with the world-class high-performance computing and data infrastructure located at Oak Ridge National Laboratory, a cloud-based infrastructure and architecture has been developed to efficiently deliver essential data and informatics services and utilities to the environmental system research community, and will provide unique capabilities that allow terrestrial ecosystem research projects to share their software utilities (tools), data and even data-submission workflows in a straightforward fashion. The infrastructure will minimize large disruptions to current project-based data-submission workflows, for better acceptance by existing projects, since many ecosystem research projects already have their own requirements or preferences for data submission and collection. The infrastructure will eliminate the scalability problems of current project silos by providing unified data services and infrastructure. The infrastructure consists of two key components: (1) a collection of configurable virtual computing environments and user-management systems that expedite data submission and collection from the environmental system research community, and (2) scalable data-management services and systems, originated and developed by ORNL data centers.

  18. Space Station data management system architecture

    Mallary, William E.; Whitelaw, Virginia A.

    1987-01-01

    Within the Space Station program, the Data Management System (DMS) functions in a dual role. First, it provides the hardware resources and software services which support the data processing, data communications, and data storage functions of the onboard subsystems and payloads. Second, it functions as an integrating entity which provides a common operating environment and human-machine interface for the operation and control of the orbiting Space Station systems and payloads by both the crew and the ground operators. This paper discusses the evolution and derivation of the requirements and issues which have had significant effect on the design of the Space Station DMS, describes the DMS components and services which support system and payload operations, and presents the current architectural view of the system as it exists in October 1986; one-and-a-half years into the Space Station Phase B Definition and Preliminary Design Study.

  19. The Architecture of Financial Risk Management Systems

    Iosif ZIMAN

    2013-01-01

    The architecture of systems dedicated to risk management is probably one of the more complex tasks to tackle in the world of finance. Financial risk has been at the center of attention since the explosive growth of financial markets and even more so after the 2008 financial crisis. At multiple levels, financial companies, financial regulatory bodies, governments and cross-national regulatory bodies have all put the subject of financial risk, and the way it is calculated, managed, reported and monitored, under intense scrutiny. As a result, the technology underpinnings which support the implementation of financial risk systems have evolved considerably and have become one of the most complex areas involving systems and technology in the context of the financial industry. We present the main paradigms, requirements and design considerations when undertaking the implementation of a risk system and give examples of user requirements, sample product coverage and performance parameters.

  20. Nova control system: goals, architecture, and system design

    Suski, G.J.; Duffy, J.M.; Gritton, D.G.; Holloway, F.W.; Krammen, J.R.; Ozarski, R.G.; Severyn, J.R.; Van Arsdall, P.J.

    1982-01-01

    The control system for the Nova laser must operate reliably in a harsh pulse power environment and satisfy requirements of technical functionality, flexibility, maintainability and operability. It is composed of four fundamental subsystems: Power Conditioning, Alignment, Laser Diagnostics, and Target Diagnostics, together with a fifth, unifying subsystem called Central Controls. The system architecture utilizes a collection of distributed microcomputers, minicomputers, and components interconnected through high speed fiber optic communications systems. The design objectives, development strategy and architecture of the overall control system and each of its four fundamental subsystems are discussed. Specific hardware and software developments in several areas are also covered

  1. Exploration Medical System Technical Architecture Overview

    Cerro, J.; Rubin, D.; Mindock, J.; Middour, C.; McGuire, K.; Hanson, A.; Reilly, J.; Burba, T.; Urbina, M.

    2018-01-01

    The Exploration Medical Capability (ExMC) Element Systems Engineering (SE) goals include defining the technical system needed to support medical capabilities for a Mars exploration mission. A draft medical system architecture was developed based on stakeholder needs, system goals, and system behaviors, as captured in an ExMC concept of operations document and a system model. This talk will discuss a high-level view of the medical system, as part of a larger crew health and performance system, both of which will support crew during Deep Space Transport missions. Other mission components, such as the flight system, ground system, caregiver, and patient, will be discussed as aspects of the context because the medical system will have important interactions with each. Additionally, important interactions with other aspects of the crew health and performance system are anticipated, such as health & wellness, mission task performance support, and environmental protection. This talk will highlight areas in which we are working with other disciplines to understand these interactions.

  2. Solid State Inflation Balloon Active Deorbiter: Scalable Low-Cost Deorbit System for Small Satellites

    Huang, Adam

    2016-01-01

    The goal of the Solid State Inflation Balloon Active Deorbiter project is to develop and demonstrate a scalable, simple, reliable, and low-cost active deorbiting system capable of controlling the downrange point of impact for the full-range of small satellites from 1 kg to 180 kg. The key enabling technology being developed is the Solid State Gas Generator (SSGG) chip, generating pure nitrogen gas from sodium azide (NaN3) micro-crystals. Coupled with a metalized nonelastic drag balloon, the complete Solid State Inflation Balloon (SSIB) system is capable of repeated inflation/deflation cycles. The SSGG minimizes size, weight, electrical power, and cost when compared to the current state of the art.

  3. A Scalable and Reliable Message Transport Service for the ATLAS Trigger and Data Acquisition System

    Kazarov, A; The ATLAS collaboration; Kolos, S; Lehmann Miotto, G; Soloviev, I

    2014-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) is a large distributed computing system composed of several thousand interconnected computers and tens of thousands of applications. During a run, TDAQ applications produce a large number of control and information messages at variable rates, addressed to TDAQ operators or to other applications. Reliable, fast and accurate delivery of these messages is important for the functioning of the whole TDAQ system. The Message Transport Service (MTS) provides facilities for the reliable transport, filtering and routing of messages, based on a publish-subscribe-notify communication pattern with content-based message filtering. During the ongoing LHC shutdown, the MTS was re-implemented, taking into account important requirements like reliability, scalability and performance, handling of the slow-subscriber case, and simplicity of the design and the implementation. MTS uses CORBA middleware, a common layer for the TDAQ infrastructure, and provides sending/subscribing APIs i...
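
    One plausible way to handle the slow-subscriber case mentioned above is a bounded per-subscriber queue that drops the oldest messages rather than stalling publishers. The sketch below illustrates that policy together with the publish-subscribe-notify shape; it is not the MTS/CORBA API, and the drop policy is an assumption.

    ```python
    from collections import deque

    class Subscriber:
        """Bounded per-subscriber queue: a slow consumer loses its oldest
        messages instead of back-pressuring every publisher (one plausible
        policy; the actual MTS behavior is not detailed in the abstract)."""
        def __init__(self, predicate, maxlen=1000):
            self.predicate = predicate
            self.queue = deque(maxlen=maxlen)   # drops from the head when full

        def notify(self, msg):
            if self.predicate(msg):             # content-based filtering
                self.queue.append(msg)

    class MessageTransport:
        def __init__(self):
            self.subscribers = []

        def subscribe(self, subscriber):
            self.subscribers.append(subscriber)

        def publish(self, msg):
            for sub in self.subscribers:
                sub.notify(msg)

    mts = MessageTransport()
    errors = Subscriber(lambda m: m["level"] == "ERROR")
    mts.subscribe(errors)
    mts.publish({"level": "ERROR", "text": "readout timeout"})
    print(len(errors.queue))   # -> 1
    ```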

  4. Using Software Architectures for Designing Distributed Embedded Systems

    Christensen, Henrik Bærbak

    In this paper, we outline an on-going project of designing distributed embedded systems for closed-loop process control. The project is a joint effort between software architecture researchers and developers from two companies that produce commercial embedded process control systems. The project has a strong emphasis on software architectural issues and terminology in order to envision, design and analyze design alternatives. We present two results. First, we outline how focusing on software architecture, architectural issues and qualities is beneficial in designing distributed, embedded systems. Second, we present two different architectures for closed-loop process control and discuss their benefits and liabilities.

  5. A scalable geometric multigrid solver for nonsymmetric elliptic systems with application to variable-density flows

    Esmaily, M.; Jofre, L.; Mani, A.; Iaccarino, G.

    2018-03-01

    A geometric multigrid algorithm is introduced for solving nonsymmetric linear systems resulting from the discretization of the variable density Navier-Stokes equations on nonuniform structured rectilinear grids and high-Reynolds number flows. The restriction operation is defined such that the resulting system on the coarser grids is symmetric, thereby allowing for the use of efficient smoother algorithms. To achieve an optimal rate of convergence, the sequence of interpolation and restriction operations is determined through a dynamic procedure. A parallel partitioning strategy is introduced to minimize communication while maintaining the load balance between all processors. To test the proposed algorithm, we consider two cases: 1) homogeneous isotropic turbulence discretized on uniform grids and 2) turbulent duct flow discretized on stretched grids. Testing the algorithm on systems with up to a billion unknowns shows that the cost varies linearly with the number of unknowns. This O(N) behavior confirms the robustness of the proposed multigrid method regarding ill-conditioning of large systems characteristic of multiscale high-Reynolds number turbulent flows. The robustness of our method to density variations is established by considering cases where density varies sharply in space by a factor of up to 10^4, showing its applicability to two-phase flow problems. Strong and weak scalability studies are carried out, employing up to 30,000 processors, to examine the parallel performance of our implementation. Excellent scalability of our solver is shown for a granularity as low as 10^4 to 10^5 unknowns per processor. At its tested peak throughput, it solves approximately 4 billion unknowns per second employing over 16,000 processors with a parallel efficiency higher than 50%.
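
    For orientation, the basic machinery of one geometric two-grid cycle (smooth, restrict the residual, coarse-grid correction, prolong, smooth) is sketched below on the symmetric 1-D Poisson problem. The paper's contribution concerns nonsymmetric systems and symmetry-preserving restriction, which this toy deliberately omits.

    ```python
    import numpy as np

    def smooth(u, f, h, iters=3, omega=0.67):
        """Damped Jacobi sweeps for -u'' = f on a uniform grid (Dirichlet BCs)."""
        for _ in range(iters):
            u[1:-1] += omega * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
        return u

    def two_grid_cycle(u, f, h):
        u = smooth(u, f, h)                                   # pre-smoothing
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2   # residual
        rc = r[::2].copy()                                    # restriction (injection)
        ec = smooth(np.zeros_like(rc), rc, 2 * h, iters=50)   # approximate coarse solve
        x = np.arange(len(u))
        e = np.interp(x, x[::2], ec)                          # linear prolongation
        return smooth(u + e, f, h)                            # post-smoothing

    n = 65                     # odd, so coarse points nest in the fine grid
    h = 1.0 / (n - 1)
    u, f = np.zeros(n), np.ones(n)
    for _ in range(20):
        u = two_grid_cycle(u, f, h)
    ```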

  6. Mapping a classification system to architectural education

    Hermund, Anders; Klint, Lars; Rostrup, Nicolai

    2015-01-01

    This paper examines to what extent a new classification system, the Cuneco Classification System (CCS), proves useful in the education of architects, and to what degree the aim of an architectural education, based on an arts-and-crafts rather than a polytechnic approach, benefits from the distinct terminology of the classification system. The method used to examine the relationship between education, practice and the CCS bifurcates into a quantitative and a qualitative exploration: a quantitative comparison of the curriculum with the students' own descriptions of their studies through a questionnaire survey among 88 students in graduate school, and qualitative interviews with a handful of practicing architects, to be able to cross-check the relevance of the education to the profession. The examination indicates the need for a new definition, in addition to the CCS's scale, covering the earliest...

  7. Architecture of a software quench management system

    Jerzy M. Nogiec et al.

    2001-01-01

    Testing superconducting accelerator magnets is inherently coupled with the proper handling of quenches; i.e., protecting the magnet and characterizing the quench process. Therefore, software implementations must include elements of both data acquisition and real-time controls. The architecture of the quench management software developed at Fermilab's Magnet Test Facility is described. This system consists of quench detection, quench protection, and quench characterization components that execute concurrently in a distributed system. Collaboration between the elements of quench detection, quench characterization and current control are discussed, together with a schema of distributed saving of various quench-related data. Solutions to synchronization and reliability in such a distributed quench system are also presented

  8. Control Architecture for Future Power Systems

    Heussen, Kai

    This project looks at control of future electric power grids with a high proportion of wind power and a large number of decentralized power generation, consumption and storage units participating to form a reliable supply of electrical energy. The first objective is developing a method for assessment of control architecture of electric power systems with a means-ends perspective. Given this purpose-oriented understanding of a power system, the increasingly stochastic nature of this problem shall be addressed and approaches for robust, distributed control will be proposed and analyzed. The introduction of close-to-real-time markets is envisioned to enable fast distributed resource allocation while guaranteeing system stability. Electric vehicles will be studied as a means of distributed reversible energy storage and a flexible power electronic interface, with application to the case...

  9. Systems approaches to study root architecture dynamics

    Candela eCuesta

    2013-12-01

    The plant root system is essential for providing anchorage to the soil, supplying minerals and water, and synthesizing metabolites. It is a dynamic organ modulated by external cues such as environmental signals, water and nutrient availability, salinity and others. Lateral roots are initiated from the primary root post-embryonically, after which they progress through discrete developmental stages which can be independently controlled, providing a high level of plasticity during root system formation. Within this review, the main contributions are presented, from the classical forward genetic screens to the more recent high-throughput approaches, combined with computer model predictions, dissecting how lateral roots, and thereby root system architecture, are established and developed.

  10. Understanding the Lunar System Architecture Design Space

    Arney, Dale C.; Wilhite, Alan W.; Reeves, David M.

    2013-01-01

    Based on the flexible path strategy and the desire of the international community, the lunar surface remains a destination for future human exploration. This paper explores options within the lunar system architecture design space, identifying performance requirements placed on the propulsive system that performs Earth departure within that architecture based on existing and/or near-term capabilities. The lander crew module and ascent stage propellant mass fraction are primary drivers for feasibility in multiple lander configurations. As the aggregation location moves further out of the lunar gravity well, the lunar lander is required to perform larger burns, increasing the sensitivity to these two factors. Adding an orbit transfer stage to a two-stage lunar lander and using a large storable stage for braking with a one-stage lunar lander enable higher aggregation locations than Low Lunar Orbit. Finally, while using larger vehicles enables a larger feasible design space, there are still feasible scenarios that use three launches of smaller vehicles.
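
    The sensitivity to propellant mass fraction follows directly from the ideal rocket equation: for a stage performing a burn of a given delta-v,

    ```latex
    \Delta v = g_0 \, I_{sp} \ln\!\frac{m_0}{m_f}
    \qquad\Longrightarrow\qquad
    \frac{m_p}{m_0} = 1 - e^{-\Delta v /(g_0 I_{sp})},
    ```

    so with illustrative numbers (not from the paper), a burn of 1.9 km/s at an Isp of 450 s requires a propellant mass fraction of 1 - exp(-1900/(9.81 x 450)), roughly 0.35; as the aggregation location moves further out of the lunar gravity well and the lander's burns grow, the required fraction climbs exponentially toward its feasibility limit.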

  11. Architecture of WEST plasma control system

    Ravenel, N.; Nouailletas, R.; Barana, O.; Brémond, S.; Moreau, P.; Guillerminet, B.; Balme, S.; Allegretti, L.; Mannori, S.

    2014-01-01

    To operate advanced plasma scenarios (long pulses with high stored energy) in present and future tokamak devices under safe operating conditions, the control requirements of the plasma control system (PCS) lead to the development of advanced feedback control and real-time exception handling. To develop these controllers and these exception-handling strategies, a project aiming at setting up a flight simulator started at CEA in 2009. Now, the new WEST (W Environment in Steady-state Tokamak) project deals with modifying Tore Supra into an ITER-like divertor tokamak. This upgrade impacts many systems, including the Tore Supra PCS, and is the opportunity to improve the current PCS architecture to implement the previous work and to fulfill the needs of modern tokamak operation. This paper describes the architecture of the WEST PCS. Firstly, the requirements will be presented, including the needs of new concepts (segment configuration, alternative (or backup) scenarios, …). Then, the conceptual design of the PCS will be described, including the main components and their functions. The third part will be dedicated to the proposed RT framework and to the technologies that we have to implement to reach the requirements

  12. Modern system architectures in embedded systems

    Korhonen, T.

    2012-01-01

    Several new technologies are also making their way into embedded systems. In addition to FPGA technology, which has become commonplace, multi-core CPUs and I/O virtualization (the implementation of the tasks of a software hypervisor in hardware to improve efficiency) are being introduced to embedded systems. In this paper we review the trends and discuss how to take advantage of these features in control systems. Some potential application examples like parallelization, data streaming, high-speed data acquisition and virtualization are discussed

  13. Broadband and scalable mobile satellite communication system for future access networks

    Ohata, Kohei; Kobayashi, Kiyoshi; Nakahira, Katsuya; Ueba, Masazumi

    2005-07-01

    Due to recent market trends, NTT has begun research into next-generation satellite communication systems, such as broadband and scalable mobile communication systems. One service application objective is to provide broadband Internet access for transportation systems, temporary broadband access networks and telemetry to remote areas. While these are niche markets, the total amount of capacity should be significant. We set a 1-Gb/s total transmission capacity as our goal. Our key concern is the system cost, which means that the system should be a unified system with diversified services and not tailored to each application. As satellites account for a large portion of the total system cost, we set the target satellite size as a small, one-ton-class dry mass with a 2-kW-class payload power. In addition to the payload power and weight, the mobile satellite's frequency band is extremely limited. Therefore, we need to develop innovative technologies that will reduce the weight and maximize spectrum and power efficiency. Another challenge is the need for the system to handle a dynamic range of up to 50 dB and the wide data-rate range of the various applications. This paper describes the key communication system technologies: the frequency reuse strategy, multiplexing scheme, resource allocation scheme, and QoS management algorithm to ensure excellent spectrum efficiency and support a variety of services and quality requirements in the mobile environment.

  14. Developing a System Architecture for Holonic Shop Floor Control

    Sørensen, Christian; Langer, Gilad; Alting, Leo

    1998-01-01

    This paper describes the results of research regarding the emerging theory of Holonic Manufacturing Systems. This theory, and in particular its corresponding reference architecture, serves as the basis for the development of a system architecture for shop floor control systems in a multi-cellular …

  15. Development of the GEM-TPC X-ray Polarimeter with the Scalable Readout System

    Kitaguchi Takao

    2018-01-01

    We have developed a gaseous Time Projection Chamber (TPC) containing a single-layered foil of a gas electron multiplier (GEM) to open up a new window on cosmic X-ray polarimetry in the 2–10 keV band. The micro-pattern TPC polarimeter, in combination with the Scalable Readout System produced by the RD51 collaboration, has been built as an engineering model to optimize detector parameters and improve polarimeter sensitivity. The polarimeter was characterized with unpolarized X-rays from an X-ray generator in a laboratory and with polarized X-rays on the BL32B2 beamline at the SPring-8 synchrotron radiation facility. Preliminary results show that the polarimeter has a modulation factor comparable to that of a prototype of the flight model.
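
    For orientation, the modulation factor quoted above is the amplitude of the cos(2·phi) dependence of the photoelectron emission-angle distribution. A minimal, hypothetical numpy sketch of an unbinned estimator (not the experiment's analysis code), assuming N(phi) ∝ 1 + a·cos(2(phi − phi0)):

    import numpy as np

    def modulation(phi: np.ndarray) -> tuple[float, float]:
        q = 2.0 * np.mean(np.cos(2.0 * phi))   # E[cos 2phi] = (a/2) cos 2phi0
        u = 2.0 * np.mean(np.sin(2.0 * phi))   # E[sin 2phi] = (a/2) sin 2phi0
        a = np.hypot(q, u)                     # modulation amplitude
        phi0 = 0.5 * np.arctan2(u, q)          # polarization angle
        return a, phi0

    # Example with synthetic angles drawn via rejection sampling (a = 0.4):
    rng = np.random.default_rng(0)
    cand = rng.uniform(-np.pi, np.pi, 200_000)
    keep = rng.uniform(0, 2, cand.size) < 1 + 0.4 * np.cos(2 * cand)
    a_hat, phi0_hat = modulation(cand[keep])   # a_hat ~ 0.4, phi0_hat ~ 0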

  16. Methodology Used to Create System Architecture for its in Slovakia

    Ales Janota

    2004-01-01

    The paper deals with an object-oriented approach proposed by the authors for the creation of the ITS system architecture in the Slovak Republic and shows how a reference architecture can be created as a base for future, more detailed architectures (models). The authors characterize possible approaches, explain their relations to existing architectures and propose a methodology based on the Unified Modelling Language (UML). The main attention is paid to the logical part (logical view) of the system architecture, which should result in the form of easily readable and understandable UML models.

  17. Scalable Resolution Display Walls

    Leigh, Jason; Johnson, Andrew; Renambot, Luc; Peterka, Tom; Jeong, Byungil; Sandin, Daniel J.; Talandis, Jonas; Jagodic, Ratko; Nam, Sungwon; Hur, Hyejung; Sun, Yiwen

    2013-01-01

    This article describes the progress since 2000 on research and development in 2-D and 3-D scalable resolution display walls that are built from tiling individual lower-resolution flat panel displays. The article describes approaches and trends in display hardware construction, middleware architecture, and user-interaction design. The article also highlights examples of use cases and the benefits the technology has brought to their respective disciplines.

  18. Approaches for scalable modeling and emulation of cyber systems: LDRD final report.

    Mayo, Jackson R.; Minnich, Ronald G.; Armstrong, Robert C.; Rudish, Don W.

    2009-09-01

    The goal of this research was to combine theoretical and computational approaches to better understand the potential emergent behaviors of large-scale cyber systems, such as networks of ~10^6 computers. The scale and sophistication of modern computer software, hardware, and deployed networked systems have significantly exceeded the computational research community's ability to understand, model, and predict current and future behaviors. This predictive understanding, however, is critical to the development of new approaches for proactively designing new systems or enhancing existing systems with robustness to current and future cyber threats, including distributed malware such as botnets. We have developed preliminary theoretical and modeling capabilities that can ultimately answer questions such as: How would we reboot the Internet if it were taken down? Can we change network protocols to make them more secure without disrupting existing Internet connectivity and traffic flow? We have begun to address these issues by developing new capabilities for understanding and modeling Internet systems at scale. Specifically, we have addressed the need for scalable network simulation by carrying out emulations of a network with ~10^6 virtualized operating system instances on a high-performance computing cluster - a 'virtual Internet'. We have also explored mappings between previously studied emergent behaviors of complex systems and their potential cyber counterparts. Our results provide foundational capabilities for further research toward understanding the effects of complexity in cyber systems, to allow anticipating and thwarting hackers.

  19. A scalable system for production of functional pancreatic progenitors from human embryonic stem cells.

    Thomas C Schulz

    Development of a human embryonic stem cell (hESC)-based therapy for type 1 diabetes will require the translation of proof-of-principle concepts into a scalable, controlled, and regulated cell manufacturing process. We have previously demonstrated that hESC can be directed to differentiate into pancreatic progenitors that mature into functional glucose-responsive, insulin-secreting cells in vivo. In this study we describe hESC expansion and banking methods and a suspension-based differentiation system, which together underpin an integrated scalable manufacturing process for producing pancreatic progenitors. This system has been optimized for the CyT49 cell line. Accordingly, qualified large-scale single-cell master and working cGMP cell banks of CyT49 have been generated to provide a virtually unlimited starting resource for manufacturing. Upon thaw from these banks, we expanded CyT49 for two weeks in an adherent culture format that achieves 50-100 fold expansion per week. Undifferentiated CyT49 were then aggregated into clusters in dynamic rotational suspension culture, followed by differentiation en masse for two weeks with a four-stage protocol. Numerous scaled differentiation runs generated reproducible and defined population compositions highly enriched for pancreatic cell lineages, as shown by examining mRNA expression at each stage of differentiation and flow cytometry of the final population. Islet-like tissue containing glucose-responsive, insulin-secreting cells was generated upon implantation into mice. By four to five months post-engraftment, mature neo-pancreatic tissue was sufficient to protect against streptozotocin (STZ)-induced hyperglycemia. In summary, we have developed a tractable manufacturing process for the generation of functional pancreatic progenitors from hESC on a scale amenable to clinical entry.

  20. Emotion based Agent Architectures for Tutoring Systems: The INES Architecture

    Poel, Mannes; op den Akker, Rieks; Heylen, Dirk; Nijholt, Anton; Trappl, Robert

    2004-01-01

    In this paper we discuss our approach to integrating emotions in the agent-based tutoring system INES (Intelligent Nursing Education System). First we discuss the INES system, where we emphasize the emotional component of the system. Afterwards we show how a more advanced emotion generation …

  2. Modelling of control system architecture for next-generation accelerators

    Liu, Shi-Yao; Kurokawa, Shin-ichi

    1990-01-01

    Functional, hardware and software system architectures define the fundamental structure of control systems. Modelling is a protocol of system architecture used in system design. This paper reviews the various modelling approaches adopted in the past ten years and suggests a new modelling approach for next-generation accelerators. (author)

  3. Migration-induced architectures of planetary systems.

    Szuszkiewicz, Ewa; Podlewska-Gaca, Edyta

    2012-06-01

    The recent increase in the number of known multi-planet systems gives a unique opportunity to study the processes responsible for planetary formation and evolution. Special attention is given to the occurrence of mean-motion resonances, because they carry important information about the history of the planetary systems. At the early stages of the evolution, when planets are still embedded in a gaseous disc, the tidal interactions between the disc and planets cause planetary orbital migration. The convergent differential migration of two planets embedded in a gaseous disc may result in capture into a mean-motion resonance. The orbital migration taking place during the early phases of planetary system formation may therefore play an important role in shaping stable planetary configurations. An understanding of this stage of the evolution will provide insight into the most frequently formed architectures, which in turn are relevant for determining planet habitability. The aim of this paper is to present the observational properties of those planetary systems which contain confirmed or suspected resonant configurations. A complete list of known systems with such configurations is given. We will keep this list updated from now on, and it will be a valuable reference for studying the dynamics of extrasolar systems and testing theoretical predictions concerned with the origin and evolution of planets, which are the most plausible places for the existence and development of life.
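
    As a concrete illustration of the resonance condition discussed above, a short Python sketch (invented helper and tolerance, illustrative periods) that flags planet pairs whose period ratio lies near a low-order (p+q):p commensurability:

    from itertools import combinations
    from math import gcd

    def near_resonances(periods, max_order=3, tol=0.02):
        """Flag planet pairs whose period ratio is near (p+q):p."""
        hits = []
        for p_in, p_out in combinations(sorted(periods), 2):
            ratio = p_out / p_in
            for q in range(1, max_order + 1):      # resonance order
                for p in range(1, 6):
                    if gcd(p + q, p) != 1:         # skip duplicates like 6:3
                        continue
                    if abs(ratio * p / (p + q) - 1) < tol:
                        hits.append((p_in, p_out, f"{p + q}:{p}"))
        return hits

    # GJ 876 c and b, with periods near 30 and 61 days, are a well-known 2:1 pair:
    print(near_resonances([30.1, 61.1]))           # [(30.1, 61.1, '2:1')]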

  4. Communication System Architectures for Missions to Mars - A Preliminary Investigation

    Nguyen, T.; Hinedi, S.; Martin, W.; Tsou, H.

    1995-01-01

    This paper presents various communication system architectures for Multiple-Link communications with a Single Aperture (MULSA) ground station. The proposed architectures are capable of simultaneously supporting a multiplicity of spacecraft that are within the beamwidth of a single ground station antenna. Both short- and long-term proposals to address this scenario are discussed. In addition, the paper discusses the top-level system designs of the proposed architectures and attempts to identify the associated advantages and disadvantages of each system.

  5. SCALABLE PHOTOGRAMMETRIC MOTION CAPTURE SYSTEM “MOSCA”: DEVELOPMENT AND APPLICATION

    V. A. Knyaz

    2015-05-01

    A wide variety of applications (from industrial to entertainment) has a need for reliable and accurate 3D information about the motion of an object and its parts. Very often the motion is rather fast, as in cases of vehicle movement, sport biomechanics, or the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation due to progress in image processing and analysis. A scalable, inexpensive motion capture system has been developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements and highly automated processing of captured data. Depending on the application, the system can easily be modified for different working areas from 100 mm to 10 m. The developed motion capture system uses from two to four technical vision cameras to acquire video sequences of object motion. All cameras work in synchronization mode at frame rates up to 100 frames per second under the control of a personal computer, providing the possibility of accurate calculation of the 3D coordinates of points of interest. The system was used in a set of different application fields and demonstrated high accuracy and a high level of automation.
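
    The core 3D computation in such photogrammetric systems is triangulation of a marker seen by several calibrated cameras. A minimal numpy sketch of linear (DLT) triangulation, assuming known 3x4 projection matrices and matched pixel coordinates (hypothetical inputs, not the MOSCA code):

    import numpy as np

    def triangulate(P_list, pts):
        rows = []
        for P, (u, v) in zip(P_list, pts):
            rows.append(u * P[2] - P[0])   # each view contributes two
            rows.append(v * P[2] - P[1])   # linear constraints on X
        A = np.asarray(rows)
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]                         # null vector of A (homogeneous point)
        return X[:3] / X[3]                # dehomogenize to (x, y, z)

    # Synthetic check: two cameras observing the point (1, 2, 10)
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
    P2 = np.hstack([np.eye(3), np.array([[-5.0], [0], [0]])])   # shifted 5 units
    X = np.array([1.0, 2.0, 10.0, 1.0])
    pts = [((P @ X)[0] / (P @ X)[2], (P @ X)[1] / (P @ X)[2]) for P in (P1, P2)]
    print(triangulate([P1, P2], pts))                           # ~ [1.0, 2.0, 10.0]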

  6. Design requirements of communication architecture of SMART safety system

    Park, H. Y.; Kim, D. H.; Sin, Y. C.; Lee, J. Y.

    2001-01-01

    To develop the communication network architecture of the safety system of SMART, evaluation elements for reliability and performance factors were extracted from commercial networks and the required level of each was classified by importance. Predictable determinism, a static and fixed architecture, separation and isolation from other systems, high reliability, and verification and validation are introduced as the essential requirements of a safety system communication network. Based on the suggested requirements, optical cable, star topology, synchronous transmission, point-to-point physical links, connection-oriented logical links, and MAC (medium access control) with fixed allocation are selected as the design elements. The proposed architecture will be applied as the basic communication network architecture of the SMART safety system.
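
    As a sketch of the fixed-allocation MAC idea among these design elements, the following toy Python schedule maps any instant in the communication cycle to the single node allowed to transmit, which is what makes transmission timing deterministic; the slot length and slot owners are invented values, not SMART parameters:

    SLOT_US = 250                # slot length in microseconds (assumed)
    SCHEDULE = ["plant_io", "trip_logic", "plant_io", "status", "spare"]

    def slot_owner(t_us: int) -> str:
        """Map a time within the communication cycle to the only node
        allowed to transmit, giving predictable determinism."""
        cycle_us = SLOT_US * len(SCHEDULE)
        return SCHEDULE[(t_us % cycle_us) // SLOT_US]

    assert slot_owner(0) == "plant_io" and slot_owner(260) == "trip_logic"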

  7. Architectural Refinement for the Design of Survivable Systems

    Ellison, Robert

    2001-01-01

    This paper describes a process for systematically refining an enterprise system architecture to resist, recognize, and recover from deliberate, malicious attacks by applying reusable design primitives...

  8. An architecture for robotic system integration

    Butler, P.L.; Reister, D.B.; Gourley, C.S.; Thayer, S.M.

    1993-01-01

    An architecture has been developed to provide an object-oriented framework for the integration of multiple robotic subsystems into a single integrated system. By using an object-oriented approach, all subsystems can interface with each other and still be customized for specific subsystem interface needs. The object-oriented framework allows the communications between subsystems to be hidden from the interface specification itself. Thus, system designers can concentrate on what the subsystems are to do, not on how they communicate. This system has been developed for the Environmental Restoration and Waste Management Decontamination and Decommissioning Project at Oak Ridge National Laboratory. In this system, multiple subsystems are defined to separate the functional units of the integrated system. For example, a Human-Machine Interface (HMI) subsystem handles the high-level machine coordination and subsystem status display. The HMI also provides status-logging and safety facilities for use by the remaining subsystems. Other subsystems have been developed to provide specific functionality, and many of these can be reused by other projects.
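
    A minimal Python sketch of the object-oriented idea described here, with subsystems coded against an abstract interface so the transport stays hidden; the class names follow the abstract, but the code itself is invented:

    from abc import ABC, abstractmethod

    class Subsystem(ABC):
        @abstractmethod
        def send(self, topic: str, payload: dict) -> None: ...
        @abstractmethod
        def status(self) -> str: ...

    class HMI(Subsystem):
        """High-level coordination and status display; how messages actually
        travel (sockets, shared memory, ...) is an implementation detail."""
        def send(self, topic, payload):
            print(f"[HMI] {topic}: {payload}")     # stand-in for real transport
        def status(self):
            return "OK"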

  9. Reactive wavepacket dynamics for four atom systems on scalable parallel computers

    Goldfield, E.M.

    1994-01-01

    While time-dependent quantum mechanics has been successfully applied to many three-atom systems, it was nevertheless a computational challenge to use wavepacket methods to study four-atom systems, systems with several heavy atoms, and systems with deep potential wells. S.K. Gray and the author are studying the reaction OH + CO ↔ (HOCO) ↔ H + CO2, a difficult reaction by all the above criteria. Memory considerations alone made it impossible to use a single IBM RS/6000 workstation to study a four-degree-of-freedom model of this system. They have developed a scalable parallel wavepacket code for the IBM SP1 and have run it on the SP1 at Argonne and at the Cornell Theory Center. The wavepacket, defined on a four-dimensional grid, is spread out among the processors. Two-dimensional FFTs are used to compute the kinetic energy operator acting on the wavepacket. Accomplishing this task, which is the computationally intensive part of the calculation, requires a global transpose of the data. This transpose is the only serious communication between processors. Since the problem is essentially data-parallel, communication is regular and load balancing is excellent. But as the problem is moderately fine-grained and messages are long, the ratio of communication to computation is somewhat high, and they typically get about 55% of ideal speed-up.
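
    The FFT-based kinetic-energy step described above can be sketched serially in a few lines of numpy (hbar = m = 1, grid sizes invented); in the distributed code, each 1-D FFT is local and the global transpose moves data between processors:

    import numpy as np

    nx, ny, dx = 64, 64, 0.1
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)      # momentum grid, axis 0
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)      # momentum grid, axis 1
    k2 = kx[:, None] ** 2 + ky[None, :] ** 2

    def kinetic(psi: np.ndarray) -> np.ndarray:
        """T psi = IFFT( (k^2 / 2) * FFT(psi) ) on a 2-D slice of the grid."""
        return np.fft.ifft2(0.5 * k2 * np.fft.fft2(psi))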

  10. Evaluating Nonclinical Performance of the Academic Pathologist: A Comprehensive, Scalable, and Flexible System for Leadership Use.

    Wiles, Austin Blackburn; Idowu, Michael O; Clevenger, Charles V; Powers, Celeste N

    2018-01-01

    Academic pathologists perform clinical duties, as well as valuable nonclinical activities. Nonclinical activities may consist of research, teaching, and administrative management among many other important tasks. While clinical duties have many clear metrics to measure productivity, like the relative value units of Medicare reimbursement, nonclinical performance is often difficult to measure. Despite the difficulty of evaluating nonclinical activities, nonclinical productivity is used to determine promotion, funding, and inform professional evaluations of performance. In order to better evaluate the important nonclinical performance of academic pathologists, we present an evaluation system for leadership use. This system uses a Microsoft Excel workbook to provide academic pathologist respondents and reviewing leadership a transparent, easy-to-complete system that is both flexible and scalable. This system provides real-time feedback to academic pathologist respondents and a clear executive summary that allows for focused guidance of the respondent. This system may be adapted to fit practices of varying size, measure performance differently based on years of experience, and can work with many different institutional values.

  11. Launching applications on compute and service processors running under different operating systems in scalable network of processor boards with routers

    Tomkins, James L. [Albuquerque, NM]; Camp, William J. [Albuquerque, NM]

    2009-03-17

    A multiple processor computing apparatus includes a physical interconnect structure that is flexibly configurable to support selective segregation of classified and unclassified users. The physical interconnect structure also permits easy physical scalability of the computing apparatus. The computing apparatus can include an emulator which permits applications from the same job to be launched on processors that use different operating systems.

  12. Architecture

    Clear, Nic

    2014-01-01

    When discussing science fiction’s relationship with architecture, the usual practice is to look at the architecture “in” science fiction—in particular, the architecture in SF films (see Kuhn 75-143) since the spaces of literary SF present obvious difficulties as they have to be imagined. In this essay, that relationship will be reversed: I will instead discuss science fiction “in” architecture, mapping out a number of architectural movements and projects that can be viewed explicitly as scien...

  13. ICAROUS: Integrated Configurable Architecture for Unmanned Systems

    Consiglio, Maria C.

    2016-01-01

    NASA's Unmanned Aerial System (UAS) Traffic Management (UTM) project aims at enabling near-term, safe operations of small UAS vehicles in uncontrolled airspace, i.e., Class G airspace. A far-term goal of UTM research and development is to accommodate the expected rise in small UAS traffic density throughout the National Airspace System (NAS) at low altitudes for beyond visual line-of-sight operations. This video describes a new capability referred to as ICAROUS (Integrated Configurable Algorithms for Reliable Operations of Unmanned Systems), which is being developed under the auspices of the UTM project. ICAROUS is a software architecture comprised of highly assured algorithms for building safety-centric, autonomous, unmanned aircraft applications. Central to the development of the ICAROUS algorithms is the use of well-established formal methods to guarantee higher levels of safety assurance by monitoring and bounding the behavior of autonomous systems. The core autonomy-enabling capabilities in ICAROUS include constraint conformance monitoring and autonomous detect and avoid functions. ICAROUS also provides a highly configurable user interface that enables the modular integration of mission-specific software components.

  14. Information System Architectures: Representation, Planning and Evaluation

    André Vasconcelos

    2003-12-01

    In recent years organizations have been faced with increasingly demanding business environments - pushed by factors like market globalization, the need for product and service innovation, and product life-cycle reduction - and with new information technology changes and opportunities - such as the component-off-the-shelf paradigm, telecommunications improvements, and the availability of off-the-shelf Enterprise Systems modules - all of which impose a continuous redesign and reorganization of business strategies and processes. Nowadays, Information Technology makes possible high-speed, efficient and low-cost access to enterprise information, providing the means for business process automation and improvement. In spite of this important technological progress, the information systems that support business do not usually respond efficiently enough to the continuous demands organizations are faced with, causing misalignment between business and information technologies (IT) and therefore reducing organizations' competitive abilities. This article discusses the vital role that the definition of an Information System Architecture (ISA) has in the development of Enterprise Information Systems that are capable of staying fully aligned with organization strategy and business needs. In this article the authors propose a restricted collection of founding and basis operations, which provide the conceptual paradigm and tools for proper ISA handling. These tools are then used to represent, plan and evaluate the ISA of a Financial Group.

  15. JPSS System Architecture: NPP to the Future

    Furgerson, J.; Trumbower, G.

    2012-12-01

    The National Oceanic and Atmospheric Administration (NOAA) is acquiring the next-generation weather and environmental satellite system, named the Joint Polar Satellite System (JPSS). The National Aeronautics and Space Administration (NASA) serves as the acquisition and development agent. JPSS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA in the 1330 local time of ascending node (LTAN) orbit. The Suomi National Polar-orbiting Partnership (NPP) was launched into the 1330 LTAN orbit on October 28, 2011, and carries advanced sensors which will be featured on JPSS. It serves as a bridge mission and provides continuity for the NASA Earth Observation System and the POES. JPSS-1 is scheduled to launch in 2017. The Defense Meteorological Satellite Program (DMSP) managed by the DoD is operating in the 1730 LTAN orbit. The DoD is developing the Defense Weather Satellite Follow-on (WSF) system which will continue in the 1730 orbit. NASA is developing the Common Ground System (CGS) with the capability to process data from both the JPSS and WSF constellations. The CGS will be operated by NOAA. This poster will provide a top level status update of the program, as well as an overview of the JPSS system architecture. The space segment carries a suite of sensors that collect meteorological, oceanographic, and climatological observations of the earth and atmosphere. The system design allows centralized mission management and delivers high quality environmental products to military, civil and scientific users through a Command, Control, and Communication Segment (C3S). The data processing for NPP/JPSS is accomplished through an Interface Data Processing Segment (IDPS)/Field Terminal Segment (FTS) that processes NPP/JPSS satellite data to provide environmental data products to NOAA and DoD processing centers as well as remote terminal users.

  16. Authentication Architecture for Region-Wide e-Health System with Smartcards and a PKI

    Zúquete, André; Gomes, Helder; Cunha, João Paulo Silva

    This paper describes the design and implementation of an e-Health authentication architecture using smartcards and a PKI. This architecture was developed to authenticate e-Health Professionals accessing the RTS (Rede Telemática da Saúde), a regional platform for sharing clinical data among a set of affiliated health institutions. The architecture had to accommodate specific RTS requirements, namely the security of Professionals' credentials, the mobility of Professionals, and the scalability to accommodate new health institutions. The adopted solution uses short-lived certificates and cross-certification agreements between the RTS and e-Health institutions for authenticating Professionals accessing the RTS. These certificates also carry the Professional's role at their home institution for role-based authorization. Trust agreements between e-Health institutions and the RTS are necessary in order to make the certificates recognized by the RTS. As a proof of concept, a prototype was implemented with Windows technology. The presented authentication architecture is intended to be applicable to other medical telematic systems.
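
    A minimal sketch of issuing such a short-lived, role-carrying certificate, here with the Python cryptography package rather than the Windows technology of the actual prototype; all names, the role-in-OU convention and the 8-hour lifetime are invented for illustration:

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    ca_key = ec.generate_private_key(ec.SECP256R1())      # institution CA key
    prof_key = ec.generate_private_key(ec.SECP256R1())    # e.g. from the smartcard

    now = datetime.datetime.now(datetime.timezone.utc)
    subject = x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "Dr. Example"),
        x509.NameAttribute(NameOID.ORGANIZATIONAL_UNIT_NAME, "physician"),  # role
    ])
    issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Hospital CA")])

    cert = (
        x509.CertificateBuilder()
        .subject_name(subject)
        .issuer_name(issuer)
        .public_key(prof_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(hours=8))  # short-lived
        .sign(ca_key, hashes.SHA256())
    )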

  17. Joint Polar Satellite System (JPSS) Common Ground System (CGS) Technical Performance Measures of the Block 2 Architecture

    Grant, K. D.; Panas, M.

    2016-12-01

    NOAA and NASA are jointly acquiring the next-generation civilian weather satellite system: the Joint Polar Satellite System (JPSS). JPSS replaced the afternoon orbit component and ground processing of NOAA's old POES system. JPSS satellites carry sensors that collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground processing system for JPSS is known as the JPSS Common Ground System (JPSS CGS). Developed and maintained by Raytheon Intelligence, Information and Services (IIS), the CGS is a globally distributed, multi-mission system serving NOAA, NASA and their national and international partners. The CGS has demonstrated its scalability and flexibility to incorporate multiple missions efficiently and with minimal cost, schedule and risk, while strengthening global partnerships in weather and environmental monitoring. The CGS architecture has been upgraded to Block 2.0 to satisfy several key objectives, including: "operationalizing" the first satellite, Suomi NPP, which originally was a risk reduction mission; leveraging lessons learned in multi-mission support; taking advantage of newer, more reliable and efficient technologies; and satisfying constraints due to the continually evolving budgetary environment. To ensure the CGS meets these needs, we have developed 48 Technical Performance Measures (TPMs) across 9 categories: Data Availability, Data Latency, Operational Availability, Margin, Scalability, Situational Awareness, Transition (between environments and sites), WAN Efficiency, and Data Recovery Processing. This paper will provide an overview of the CGS Block 2.0 architecture, with particular focus on the 9 TPM categories listed above. We will describe how we ensure the deployed architecture meets these TPMs to satisfy our multi-mission objectives with the deployment of Block 2.0.

  18. SuperLU_DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems

    Li, Xiaoye S.; Demmel, James W.

    2002-03-27

    In this paper, we present the main algorithmic features in the software package SuperLU_DIST, a distributed-memory sparse direct solver for large sets of linear equations. We give in detail our parallelization strategies, with focus on scalability issues, and demonstrate the parallel performance and scalability on current machines. The solver is based on sparse Gaussian elimination, with an innovative static pivoting strategy proposed earlier by the authors. The main advantage of static pivoting over classical partial pivoting is that it permits a priori determination of data structures and communication pattern for sparse Gaussian elimination, which makes it more scalable on distributed memory machines. Based on this a priori knowledge, we designed highly parallel and scalable algorithms for both LU decomposition and triangular solve and we show that they are suitable for large-scale distributed memory machines.
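
    While SuperLU_DIST itself targets distributed-memory machines, the call pattern of the solver family can be sketched with SciPy, whose splu routine wraps the sequential SuperLU library; the small system below is an invented example:

    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import splu

    # A small unsymmetric sparse system A x = b
    A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                             [2.0, 5.0, 1.0],
                             [0.0, 0.0, 3.0]]))
    b = np.array([1.0, 2.0, 3.0])

    lu = splu(A)            # sparse LU factorization (with pivoting)
    x = lu.solve(b)         # triangular solves reuse the factorization
    assert np.allclose(A @ x, b)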

  19. SAFARI optical system architecture and design concept

    Pastor, Carmen; Jellema, Willem; Zuluaga-Ramírez, Pablo; Arrazola, David; Fernández-Rodriguez, M.; Belenguer, Tomás.; González Fernández, Luis M.; Audley, Michael D.; Evers, Jaap; Eggens, Martin; Torres Redondo, Josefina; Najarro, Francisco; Roelfsema, Peter

    2016-07-01

    The SpicA FAR-infrared Instrument, SAFARI, is one of the instruments planned for the SPICA mission. The SPICA mission is the next great leap forward in space-based far-infrared astronomy and will study the evolution of galaxies, stars and planetary systems. SPICA will utilize a deeply cooled 2.5 m-class telescope, provided by European industry, to realize zodiacal-background-limited performance and high spatial resolution. The instrument SAFARI is a cryogenic grating-based point-source spectrometer working in the wavelength domain 34 to 230 μm, providing a spectral resolving power from 300 to at least 2000. The instrument shall provide low- and high-resolution spectroscopy in four spectral bands. The low-resolution mode is the native instrument mode, while the high-resolution mode is achieved by means of a Martin-Puplett interferometer. The optical system is all-reflective and consists of three main modules: an input optics module, followed by the band- and mode-distributing optics and the grating modules. The instrument utilizes Nyquist-sampled filled linear arrays of very sensitive TES detectors. The work presented in this paper describes the optical design architecture and a design concept compatible with the current instrument performance and volume design drivers.

  20. A Layered Active Memory Architecture for Cognitive Vision Systems

    Kolonias, Ilias; Christmas, William; Kittler, Josef

    2007-01-01

    Recognising actions and objects from video material has attracted growing research attention and given rise to important applications. However, injecting cognitive capabilities into computer vision systems requires an architecture more elaborate than the traditional signal processing paradigm for information processing. Inspired by biological cognitive systems, we present a memory architecture enabling cognitive processes (such as selecting the processes required for scene understanding, laye...

  1. Model-based safety architecture framework for complex systems

    Schuitemaker, Katja; Rajabali Nejad, Mohammadreza; Braakhuis, J.G.; Podofillini, Luca; Sudret, Bruno; Stojadinovic, Bozidar; Zio, Enrico; Kröger, Wolfgang

    2015-01-01

    The shift to transparency and the rising need of the general public for safety, together with the increasing complexity and interdisciplinarity of modern safety-critical Systems of Systems (SoS), have resulted in a Model-Based Safety Architecture Framework (MBSAF) for capturing and sharing architectural …

  2. Ergatis: a web interface and scalable software system for bioinformatics workflows

    Orvis, Joshua; Crabtree, Jonathan; Galens, Kevin; Gussman, Aaron; Inman, Jason M.; Lee, Eduardo; Nampally, Sreenath; Riley, David; Sundaram, Jaideep P.; Felix, Victor; Whitty, Brett; Mahurkar, Anup; Wortman, Jennifer; White, Owen; Angiuoli, Samuel V.

    2010-01-01

    Motivation: The growth of sequence data has been accompanied by an increasing need to analyze data on distributed computer clusters. The use of these systems for routine analysis requires scalable and robust software for data management of large datasets. Software is also needed to simplify data management and make large-scale bioinformatics analysis accessible and reproducible to a wide class of target users. Results: We have developed a workflow management system named Ergatis that enables users to build, execute and monitor pipelines for computational analysis of genomics data. Ergatis contains preconfigured components and template pipelines for a number of common bioinformatics tasks such as prokaryotic genome annotation and genome comparisons. Outputs from many of these components can be loaded into a Chado relational database. Ergatis was designed to be accessible to a broad class of users and provides a user friendly, web-based interface. Ergatis supports high-throughput batch processing on distributed compute clusters and has been used for data management in a number of genome annotation and comparative genomics projects. Availability: Ergatis is an open-source project and is freely available at http://ergatis.sourceforge.net Contact: jorvis@users.sourceforge.net PMID:20413634
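
    The dependency-driven execution at the heart of such workflow systems can be sketched with the Python standard library's topological sorter; the pipeline step names below are toy stand-ins, not Ergatis components:

    from graphlib import TopologicalSorter

    # step -> set of steps it depends on
    pipeline = {
        "assembly": set(),
        "gene_prediction": {"assembly"},
        "annotation": {"gene_prediction"},
        "genome_comparison": {"gene_prediction"},
        "load_chado": {"annotation", "genome_comparison"},
    }

    ts = TopologicalSorter(pipeline)
    ts.prepare()
    while ts.is_active():
        ready = list(ts.get_ready())       # these steps could run in parallel
        print("dispatch:", ready)
        for step in ready:                 # a real engine would submit to a cluster
            ts.done(step)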

  3. Steps Towards Scalable and Modularized Flight Software for Unmanned Aircraft Systems

    Johann C. Dauer

    2014-05-01

    Unmanned aircraft (UA) applications impose a variety of computing tasks on the on-board computer system. From a research perspective, it is often more convenient to evaluate algorithms on bigger aircraft, as they are capable of lifting heavier loads and thus more powerful computational units. On the other hand, smaller systems are often less expensive, and their operation is less restricted in many countries. This paper thus presents a conceptual design for flight software that can be evaluated on UA of convenient size. The integration effort required to transfer the algorithms to UA of different sizes is significantly reduced. This scalability is achieved by using exchangeable payload modules and a flexible distribution of processes across different processing units. The presented approach is discussed using the example of the flight software of a 14 kg unmanned helicopter and a 1.5 kg equivalent. The proof of concept is shown by means of flight performance in a hardware-in-the-loop simulation.

  4. A Scalable and Extensible Earth System Model for Climate Change Science

    Gent, Peter; Lamarque, Jean-Francois; Conley, Andrew; Vertenstein, Mariana; Craig, Anthony

    2013-02-13

    The objective of this award was to build a scalable and extensible Earth System Model that can be used to study climate change science. That objective has been achieved with the public release of the Community Earth System Model, version 1 (CESM1). In particular, the development of the CESM1 atmospheric chemistry component was substantially funded by this award, as was the development of the significantly improved coupler component. The CESM1 allows new climate change science in areas such as future air quality in very large cities, the effects of recovery of the southern hemisphere ozone hole, and effects of runoff from ice melt in the Greenland and Antarctic ice sheets. Results from a whole series of future climate projections using the CESM1 are also freely available via the web from the CMIP5 archive at the Lawrence Livermore National Laboratory. Many research papers using these results have now been published, and will form part of the 5th Assessment Report of the United Nations Intergovernmental Panel on Climate Change, which is to be published late in 2013.

  5. Simulation system architecture design for generic communications link

    Tsang, Chit-Sang; Ratliff, Jim

    1986-01-01

    This paper addresses a computer simulation system architecture design for generic digital communications systems. It addresses the issues of an overall system architecture in order to achieve a user-friendly, efficient, and yet easily implementable simulation system. The system block diagram and its individual functional components are described in detail. Software implementation is discussed with the VAX/VMS operating system used as a target environment.

  6. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.
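
    The separation of systematic and stochastic errors described here can be sketched in a few lines: with a phantom whose output is known and constant, the mean over repeated frames isolates the systematic offset while the standard deviation gives the noise. The numbers below are invented; 208 channels corresponds to a 16-electrode adjacent drive pattern:

    import numpy as np

    rng = np.random.default_rng(1)
    expected = 1.0                                   # phantom's known output (a.u.)
    frames = expected + 0.002 + rng.normal(0, 1e-3, size=(500, 208))

    mean = frames.mean(axis=0)
    systematic_error = mean - expected               # survives averaging
    noise = frames.std(axis=0, ddof=1)               # stochastic part
    snr_db = 20 * np.log10(np.abs(mean) / noise)     # per-channel SNR
    print(systematic_error.max(), snr_db.min())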

  8. Distributed real time data processing architecture for the TJ-II data acquisition system

    Ruiz, M.; Barrera, E.; Lopez, S.; Machon, D.; Vega, J.; Sanchez, E.

    2004-01-01

    This article describes the performance of a new architecture model that has been developed for the TJ-II data acquisition system in order to increase its real-time data processing capabilities. The current model consists of several CompactPCI eXtensions for Instrumentation (PXI) standard chassis, each one with various digitizers. In this architecture, the data processing capability is restricted to the PXI controller's own performance, and the controller must share its CPU resources between data processing and data acquisition tasks. In the new model, a distributed data processing architecture has been developed. The solution adds one or more processing cards to each PXI chassis. This way it is possible to plan how to distribute the processing of all acquired signals among the processing cards and the available resources of the PXI controller. This model allows scalability of the system: more or fewer processing cards can be added based on the requirements of the system. The processing algorithms are implemented in LabVIEW (from National Instruments), providing efficient and time-saving application development when compared with other solutions.

  9. Advanced Ground Systems Maintenance Enterprise Architecture Project

    Perotti, Jose M. (Compiler)

    2015-01-01

    The project implements an architecture for the delivery of integrated health management capabilities for the 21st-century launch complex. The delivered capabilities include anomaly detection, fault isolation, prognostics and physics-based diagnostics.

  10. A wireless, compact, and scalable bioimpedance measurement system for energy-efficient multichannel body sensor solutions

    Ramos, J; Ausín, J L; Lorido, A M; Redondo, F; Duque-Carrillo, J F

    2013-01-01

    In this paper, we present the design, realization and evaluation of a multichannel measurement system based on a cost-effective high-performance integrated circuit for electrical bioimpedance (EBI) measurements in the frequency range from 1 kHz to 1 MHz, and on a low-cost commercially available radio-frequency transceiver device, which provides reliable wireless communication. The resulting on-chip spectrometer provides strong EBI measuring capabilities and constitutes the basic node for building EBI wireless sensor networks (EBI-WSNs). The proposed EBI-WSN behaves as a high-performance wireless multichannel EBI spectrometer in which the number of nodes, i.e., the number of channels, is completely scalable to satisfy the specific requirements of body sensor networks. One of its main advantages is its versatility, since each EBI node is independently configurable and capable of working simultaneously. A prototype of the EBI node leads to a very small printed circuit board of approximately 8 cm², including the chip antenna, which can operate for several years on one 3-V coin cell battery. A specifically tailored graphical user interface (GUI) for the EBI-WSN has also been designed and implemented in order to configure the operation of the EBI nodes and the network topology. EBI analysis parameters, e.g., single-frequency or spectroscopy, time interval, analysis by EBI events, frequency and amplitude ranges of the excitation current, etc., are defined by the GUI.

  11. A quantum CISC compiler and scalable assembler for quantum computing on large systems

    Schulte-Herbrueggen, Thomas; Spoerl, Andreas; Glaser, Steffen [Dept. Chemistry, Technical University of Munich (TUM), 85747 Garching (Germany)]

    2008-07-01

    Using the cutting-edge high-speed parallel cluster HLRB-II (with a total LINPACK performance of 63.3 TFlops), we present a quantum CISC compiler into time-optimised or decoherence-protected complex instruction sets. They comprise effective multi-qubit interactions with up to 10 qubits. We show how to assemble these medium-sized CISC modules in a scalable way for quantum computation on large systems. Extending the toolbox of universal gates by optimised complex multi-qubit instruction sets paves the way to fighting decoherence in realistic Markovian and non-Markovian settings. The advantage of quantum CISC compilation over standard RISC compilation into one- and two-qubit universal gates is demonstrated inter alia for the quantum Fourier transform (QFT) and for multiply-controlled NOT gates. The speed-up is up to a factor of six, thus giving significantly better performance under decoherence. Implications for upper limits to time complexities are also derived.

  12. Applications of an architecture design and assessment system (ADAS)

    Gray, F. Gail; Debrunner, Linda S.; White, Tennis S.

    1988-01-01

    A new Architecture Design and Assessment System (ADAS) tool package is introduced, and a range of possible applications is illustrated. ADAS was used to evaluate the performance of an advanced fault-tolerant computer architecture in a modern flight control application. Bottlenecks were identified and possible solutions suggested. The tool was also used to inject faults into the architecture and evaluate the synchronization algorithm, and improvements are suggested. Finally, ADAS was used as a front end research tool to aid in the design of reconfiguration algorithms in a distributed array architecture.

  13. Service-Oriented Architecture Approach to MAGTF Logistics Support Systems

    2013-09-01

    [No abstract available; the indexed excerpt contains only fragments of the report's acronym glossary (IT: Information Technology; KPI: Key Performance Indicators; LCE: Logistics Command Element; ITV: In-transit Visibility; LCM: ...) and of a passage on Layer 8 (Information Architecture), in which the business intelligence layer and information architecture safeguard the inclusion of the key performance indicators they impact.]

  14. The Flatworld Simulation Control Architecture (FSCA): A Framework for Scalable Immersive Visualization Systems

    2004-12-01

    [No abstract available; the indexed excerpt mentions device handling via the X10 home automation protocol, 3D graphics clients each rendering their scene according to an assigned virtual camera position, and DMX as a versatile and robust control framework that overcomes limitations of the X10 protocol currently in use.]

  15. Development of a modular and scalable sensor system for the gathering of position and orientation of moved objects

    Klingbeil, L.

    2006-02-01

    A modular and scalable sensor system for the estimation of the position and orientation of moving objects has been developed and characterized. A sensor unit, which is mounted on the moving object, consists of acceleration, angular-rate and magnetic-field sensors for every spatial axis. Customized Kalman filter algorithms provide a robust, low-latency reconstruction of the sensor's orientation. Additionally, an ultrasound transducer network is used to measure the distance of a sensor unit to several reference points in the room, which allows reconstruction of the absolute position using trilateration methods. The system is scalable with respect to the number of sensor units and the covered tracking volume. It is suitable for various applications, for example the analysis of body movements or head tracking in augmented- or virtual-reality environments. (orig.)
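
    The trilateration step mentioned above reduces to a small linear least-squares problem once the range equations are differenced against a reference anchor. A numpy sketch with invented coordinates:

    import numpy as np

    def trilaterate(anchors: np.ndarray, d: np.ndarray) -> np.ndarray:
        """anchors: (n, 3) reference positions; d: (n,) measured distances.
        Subtracting the first range equation linearizes |x - p_i|^2 = d_i^2."""
        p0, d0 = anchors[0], d[0]
        A = 2.0 * (anchors[1:] - p0)
        b = (d0**2 - d[1:]**2
             + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2))
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x

    anchors = np.array([[0, 0, 0], [4, 0, 0], [0, 4, 0], [0, 0, 4]], float)
    true = np.array([1.0, 2.0, 0.5])
    d = np.linalg.norm(anchors - true, axis=1)
    print(trilaterate(anchors, d))                   # ~ [1.0, 2.0, 0.5]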

  16. An integrated decision-making framework for transportation architectures: Application to aviation systems design

    Lewe, Jung-Ho

    The National Transportation System (NTS) is undoubtedly a complex system-of-systems---a collection of diverse 'things' that evolve over time, organized at multiple levels, to achieve a range of possibly conflicting objectives, and never quite behaving as planned. The purpose of this research is to develop a virtual transportation architecture for the ultimate goal of formulating an integrated decision-making framework. The foundational endeavor begins with creating an abstraction of the NTS with the belief that a holistic frame of reference is required to properly study such a multi-disciplinary, trans-domain system. The culmination of the effort produces the Transportation Architecture Field (TAF) as a mental model of the NTS, in which the relationships between four basic entity groups are identified and articulated. This entity-centric abstraction framework underpins the construction of a virtual NTS couched in the form of an agent-based model. The transportation consumers and the service providers are identified as adaptive agents that apply a set of preprogrammed behavioral rules to achieve their respective goals. The transportation infrastructure and multitude of exogenous entities (disruptors and drivers) in the whole system can also be represented without resorting to an extremely complicated structure. The outcome is a flexible, scalable, computational model that allows for examination of numerous scenarios which involve the cascade of interrelated effects of aviation technology, infrastructure, and socioeconomic changes throughout the entire system.

  17. Control system architecture: The standard and non-standard models

    Thuot, M.E.; Dalesio, L.R.

    1993-01-01

    Control system architecture development has followed the advances in computer technology through mainframes to minicomputers to micros and workstations. This technology advance and increasingly challenging accelerator data acquisition and automation requirements have driven control system architecture development. In summarizing the progress of control system architecture at the last International Conference on Accelerator and Large Experimental Physics Control Systems (ICALEPCS), B. Kuiper asserted that the system architecture issue was resolved and presented a "standard model". The "standard model" consists of a local area network (Ethernet or FDDI) providing communication between front-end microcomputers, connected to the accelerator, and workstations, providing the operator interface and computational support. Although this model represents many present designs, there are exceptions, including reflected-memory and hierarchical architectures driven by requirements for widely dispersed, large-channel-count or tightly coupled systems. This paper describes the performance characteristics and features of the "standard model" to determine if the requirements of "non-standard" architectures can be met. Several possible extensions to the "standard model" are suggested, including software as well as hardware architectural features.

  19. INFORMATION SYSTEM STRATEGIC PLANNING WITH ENTERPRISE ARCHITECTURE PLANNING

    Lola Yorita Astri

    2013-05-01

    An integrated information system is needed in an enterprise to support the business processes run by that enterprise. Therefore, information system development can use an enterprise architecture approach, which defines the strategic planning of the enterprise information system. SMP Negeri 1 Jambi can be viewed as an enterprise because there are entities that should be managed through an integrated information system. Since there has been no unification of the different elements into a unity yet, an enterprise architecture model using Enterprise Architecture Planning (EAP) is needed, which will yield the strategic planning of the enterprise information system in SMP Negeri 1 Jambi. The goal of strategic planning of an information system with Enterprise Architecture Planning (EAP) is to define the primary activities run by SMP Negeri 1 Jambi and the support activities supporting them. These can be used as a basis for making the data architecture, whose entities feed the application architecture. Finally, the technology architecture is designed to describe the technology needed to provide an environment for the data applications. The implementation plan is the activity plan made to implement the architectures in the enterprise.

  20. Scalable coherent interface

    Alnaes, K.; Kristiansen, E.H.; Gustavson, D.B.; James, D.V.

    1990-01-01

    The Scalable Coherent Interface (IEEE P1596) is establishing an interface standard for very high performance multiprocessors, supporting a cache-coherent-memory model scalable to systems with up to 64K nodes. This Scalable Coherent Interface (SCI) will supply a peak bandwidth per node of 1 GigaByte/second. The SCI standard should facilitate assembly of processor, memory, I/O and bus bridge cards from multiple vendors into massively parallel systems with throughput far above what is possible today. The SCI standard encompasses two levels of interface, a physical level and a logical level. The physical level specifies electrical, mechanical and thermal characteristics of connectors and cards that meet the standard. The logical level describes the address space, data transfer protocols, cache coherence mechanisms, synchronization primitives and error recovery. In this paper we address logical level issues such as packet formats, packet transmission, transaction handshake, flow control, and cache coherence. 11 refs., 10 figs

  1. Design and Analysis of Architectures for Structural Health Monitoring Systems

    Mukkamala, Ravi; Sixto, S. L. (Technical Monitor)

    2002-01-01

    During the two-year project period, we worked on several aspects of Health Usage and Monitoring Systems (HUMS) for structural health monitoring. In particular, we made contributions in the following areas. 1. Reference HUMS architecture: We developed a high-level architecture for health usage and monitoring systems. The proposed reference architecture is compatible with the Generic Open Architecture (GOA) proposed as a standard for avionics systems. 2. HUMS kernel: One of the critical layers of the HUMS reference architecture is the HUMS kernel. We developed a detailed design of a kernel to implement the high-level architecture. 3. Prototype implementation of the HUMS kernel: We implemented a preliminary version of the HUMS kernel on a Unix platform, in both a centralized and a distributed version. 4. SCRAMNet and HUMS: SCRAMNet (Shared Common Random Access Memory Network) is a system that was found suitable for implementing HUMS. For this reason, we conducted a simulation study to determine its stability in handling the input data rates of HUMS. 5. Architectural specification.

  2. Control architecture of power systems: Modeling of purpose and function

    Heussen, Kai; Saleem, Arshad; Lind, Morten

    2009-01-01

    Many new technologies with novel control capabilities have been developed in the context of "smart grid" research. However, it is often not clear how these capabilities should best be integrated in overall system operation. New operation paradigms change the traditional control architecture of power systems, and it is necessary to identify requirements and functions. How does new control architecture fit with the old architecture? How can power system functions be specified independently of technology? What is the purpose of control in power systems? In this paper, a method suitable for semantically consistent modeling of control architecture is presented. The method, called Multilevel Flow Modeling (MFM), is applied to the case of system balancing. It was found that MFM is capable of capturing implicit control knowledge, which is otherwise difficult to formalize. The method has possible …

  3. Designing an architectural style for Pervasive Healthcare systems.

    Rafe, Vahid; Hajvali, Masoumeh

    2013-04-01

Nowadays, Pervasive Healthcare (PH) systems are considered an important research area. These systems have a dynamic structure and configuration; therefore, an appropriate method for designing such systems is necessary. The Publish/Subscribe Architecture (pub/sub) is one of the architectures well suited to support such systems. PH systems are safety-critical, hence errors can have disastrous results. To prevent such problems, a powerful analytical tool is required, so using a proper formal language like graph transformation systems for developing these systems seems necessary. But even if software engineers use such high-level methodologies, errors may occur in the system under design. Hence, it should be investigated automatically and formally whether the model of the system satisfies all of its requirements. In this paper, a dynamic architectural style for developing PH systems is presented. Then, the behavior of these systems is modeled and evaluated using the GROOVE toolset. The results of the analysis show its high reliability.
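As a point of reference for the pub/sub style the entry builds on, the following minimal Python sketch shows the decoupling between publishers and subscribers that makes the style attractive for dynamically reconfigurable PH systems; the topic names and class layout are our own illustrative assumptions, not the paper's architectural style:

```python
from collections import defaultdict

class Broker:
    """Minimal publish/subscribe broker: publishers and subscribers
    never reference each other directly, only shared topic names."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

# Hypothetical PH-style usage: a monitor reacts to vital-sign events
# without knowing which sensor produced them.
broker = Broker()
broker.subscribe("vitals/heart_rate", lambda m: print("alert check:", m))
broker.publish("vitals/heart_rate", {"patient": 42, "bpm": 130})
```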

  4. Ground System Architectures Workshop GMSEC SERVICES SUITE (GSS): an Agile Development Story

    Ly, Vuong

    2017-01-01

The GMSEC (Goddard Mission Services Evolution Center) Services Suite (GSS) is a collection of tools and software services, along with a robust customizable web-based portal, that enables the user to capture, monitor, report, and analyze system-wide GMSEC data. Given our plug-and-play architecture and the need for rapid system development, we opted to follow the Scrum Agile methodology for software development. As one of the first few projects to implement the Agile methodology at NASA GSFC, in this presentation we describe our approaches, tools, successes, and challenges in implementing this methodology. The GMSEC architecture provides a scalable, extensible ground and flight system for existing and future missions. GMSEC comes with a robust Application Programming Interface (GMSEC API) and a core set of Java-based GMSEC components that facilitate the development of a GMSEC-based ground system. Over the past few years, we have seen an uptick in the number of customers who are moving from a native desktop application environment to a web-based environment, particularly for data monitoring and analysis. We also see a need to separate the business logic from the GUI display for our Java-based components and to consolidate all the GUI displays into one interface. This combination of separation and consolidation brings immediate value to a GMSEC-based ground system through increased ease of data access via a uniform interface, built-in security measures, centralized configuration management, and ease of feature extensibility.

  5. Open system architecture for condition based maintenance applied to a hydroelectric power plant

    Amaya, E.J.; Alvares, A.J. [University of Brasilia (UnB), DF (Brazil). Mechanical and Mechatronic Dept.], Emails: eamaya@unb.br, alvares@AlvaresTech.com; Gudwin, R.R. [State University of Campinas (UNICAMP), SP (Brazil). Computer Engineering and Industrial Automation Dept.], E-mail: gudwin@dca.fee.unicamp.br

    2009-07-01

The Balbina hydroelectric power plant is implementing a condition-based maintenance system using an open, modular and scalable integrated architecture to provide comprehensive solutions and support to end users such as the operation and maintenance teams. The system, called SIMPREBAL (Predictive Maintenance System of Balbina), advocates open standards, in particular through collaborative research programmes. During development, the need became clear for both industry standards and a simple-to-use software development tool chain supporting the development of complex condition-based maintenance systems with multiple partners. The Open System Architecture for Condition Based Maintenance (OSA-CBM) is a standard that considers seven hierarchical layers representing a logical transition, or data flow, from the data acquisition layer, through the intermediate layers of signal processing, condition monitoring, health assessment, prognostics and decision support, up to the presentation layer. SIMPREBAL is being implemented as an OSA-CBM software framework and tool set that allows the creation of truly integrated, comprehensive maintenance solutions over the internet. This paper identifies specific benefits of applying OSA-CBM to comprehensive condition-based maintenance solutions for a hydroelectric power plant. (author)
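The seven OSA-CBM layers form a strictly linear data flow, which can be pictured as a processing pipeline. In the Python sketch below, only the layer ordering comes from the entry; the toy transformations and the vibration-threshold policy are placeholder assumptions:

```python
# The seven OSA-CBM layers as a linear pipeline. The layer order follows
# the standard named in the entry; the transformations are placeholders.
def data_acquisition(samples):
    return {"signal": samples}

def signal_processing(d):
    d["rms"] = (sum(x * x for x in d["signal"]) / len(d["signal"])) ** 0.5
    return d

def condition_monitoring(d):
    d["alarm"] = d["rms"] > 1.0          # hypothetical vibration limit
    return d

def health_assessment(d):
    d["health"] = "degraded" if d["alarm"] else "ok"
    return d

def prognostics(d):
    d["remaining_life_h"] = 100 if d["alarm"] else 10_000
    return d

def decision_support(d):
    d["action"] = "schedule maintenance" if d["alarm"] else "none"
    return d

def presentation(d):
    print(f"health={d['health']}, action={d['action']}")
    return d

LAYERS = [data_acquisition, signal_processing, condition_monitoring,
          health_assessment, prognostics, decision_support, presentation]

state = [0.4, -1.8, 1.2, 0.9]            # hypothetical vibration samples
for layer in LAYERS:
    state = layer(state)
```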

  6. Designing flexible engineering systems utilizing embedded architecture options

    Pierce, Jeff G.

This dissertation develops and applies an integrated framework for embedding flexibility in an engineered system architecture. Systems are constantly faced with unpredictability in the operational environment, threats from competing systems, obsolescence of technology, and general uncertainty in future system demands. Current systems engineering and risk management practices have focused almost exclusively on mitigating or preventing the negative consequences of uncertainty. This research recognizes that high uncertainty also presents an opportunity to design systems that can flexibly respond to changing requirements and capture additional value throughout the design life. However, there is no formalized approach to designing appropriately flexible systems. This research develops a three-stage integrated flexibility framework based on the concept of architecture options embedded in the system design. Stage One defines an eight-step systems engineering process to identify candidate architecture options. This process encapsulates the operational uncertainty through scenario development, traces new functional requirements to the affected design variables, and clusters the variables most sensitive to change. The resulting clusters can generate insight into the most promising regions of the architecture in which to embed flexibility in the form of architecture options. Stage Two develops a quantitative option valuation technique, grounded in real options theory, which is able to value embedded architecture options that exhibit variable expiration behavior. Stage Three proposes a portfolio optimization algorithm, for both discrete and continuous options, to select the optimal subset of architecture options, subject to budget and risk constraints. Finally, the feasibility, extensibility and limitations of the framework are assessed by its application to a reconnaissance satellite system development problem. Detailed technical data, performance models, and cost estimates...
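Stage Two's valuation is grounded in real options theory; the entry does not give the technique's details, but a standard starting point is a binomial lattice. The Python sketch below values a simple option to expand a system at exercise cost k. All parameter values are illustrative assumptions, discounting is omitted for brevity, and the dissertation's variable-expiration behavior is not modeled:

```python
def binomial_option_value(v0, k, u, d, p, steps):
    """Value an option to 'expand' (pay cost k, receive payoff value v)
    on a recombining binomial lattice -- a textbook real-options model,
    not the dissertation's variable-expiration technique.
    v0: current value of the flexibility's payoff; u/d: up/down factors;
    p: risk-neutral probability of an up move."""
    values = [max(v0 * (u ** j) * (d ** (steps - j)) - k, 0.0)
              for j in range(steps + 1)]          # payoffs at expiration
    for step in range(steps, 0, -1):              # roll the lattice back
        values = [p * values[j + 1] + (1 - p) * values[j]
                  for j in range(step)]
    return values[0]

# Illustrative numbers: embedding the option is worthwhile if its value
# exceeds the up-front cost of designing the flexibility into the system.
print(binomial_option_value(v0=10.0, k=12.0, u=1.3, d=0.8, p=0.5, steps=4))
```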

  7. A Secure, Scalable and Elastic Autonomic Computing Systems Paradigm: Supporting Dynamic Adaptation of Self-* Services from an Autonomic Cloud

    Abdul Jaleel

    2018-05-01

Full Text Available Autonomic computing embeds self-management features in software systems using external feedback control loops, i.e., autonomic managers. In existing models of autonomic computing, adaptive behaviors are defined at design time, autonomic managers are statically configured, and the running system has a fixed set of self-* capabilities. An autonomic computing design should accommodate autonomic capability growth by allowing the dynamic configuration of self-* services, but this causes security and integrity issues. A secure, scalable and elastic autonomic computing system (SSE-ACS) paradigm is proposed to address the runtime inclusion of autonomic managers, ensuring secure communication between autonomic managers and managed resources. Applying the SSE-ACS concept, a layered approach for the dynamic adaptation of self-* services is presented, with an online ‘Autonomic_Cloud’ working as the middleware between Autonomic Managers (offering the self-* services) and the Autonomic Computing System (requiring the self-* services). A stock trading and forecasting system is used for simulation purposes. The security impact of the SSE-ACS paradigm is verified by testing possible attack cases over the autonomic computing system with single and multiple autonomic managers running on the same and different machines. The Common Vulnerability Scoring System (CVSS) metric shows a decrease in the vulnerability severity score from high (8.8) for the existing ACS to low (3.9) for SSE-ACS. Autonomic managers are introduced into the system at runtime from the Autonomic_Cloud to test scalability and elasticity. With elastic AMs, the system optimizes the Central Processing Unit (CPU) share, resulting in improved execution time for the business logic. For computing systems requiring the continuous support of self-management services, the proposed system achieves a significant improvement in security, scalability, elasticity, autonomic efficiency, and issue resolving time.
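The external feedback control loop that defines an autonomic manager is commonly structured as monitor–analyze–plan–execute over shared knowledge (MAPE-K). The Python sketch below is a generic illustration of such a loop; the class names and the CPU-threshold scaling policy are our assumptions, not the SSE-ACS design:

```python
class AutonomicManager:
    """Generic MAPE-K feedback loop around a managed resource.
    The CPU-share policy is an illustrative stand-in for a self-* service."""
    def __init__(self, resource, cpu_limit=0.8):
        self.resource = resource
        self.cpu_limit = cpu_limit
        self.knowledge = []                 # history shared across phases

    def monitor(self):
        return self.resource.metrics()

    def analyze(self, metrics):
        self.knowledge.append(metrics)
        return metrics["cpu"] > self.cpu_limit

    def plan(self, overloaded):
        return {"replicas": self.resource.replicas + 1} if overloaded else {}

    def execute(self, actions):
        if "replicas" in actions:
            self.resource.replicas = actions["replicas"]

    def run_once(self):
        self.execute(self.plan(self.analyze(self.monitor())))

class ManagedResource:
    def __init__(self):
        self.replicas = 1
    def metrics(self):
        # Hypothetical load reading; a real system would measure this.
        return {"cpu": 0.95 / self.replicas}

resource = ManagedResource()
AutonomicManager(resource).run_once()       # 0.95 > 0.8 -> scale out
assert resource.replicas == 2
```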

  8. Power-scalable, polarization-stable, dual-colour DFB fibre laser system for CW terahertz imaging

    Eichhorn, Finn; Pedersen, Jens Engholm; Jepsen, Peter Uhd

Imaging with electromagnetic radiation in the terahertz (THz) range has received a large amount of attention during recent years. THz imaging systems have diverse potential application areas such as security screening, medical diagnostics and non-destructive testing. We will discuss a power-scalable, dual-colour, polarization-maintaining distributed feedback (DFB) fibre laser system with an inherently narrow linewidth from the DFB fibre laser oscillators. The laser system can be used as a source in CW THz systems employing photomixing (optical heterodyning) for generation and detection...
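In CW photomixing, the generated THz frequency is simply the beat note of the two laser lines; this is the standard relation for optical heterodyning, not a result specific to this paper, and the 1550 nm operating wavelength below is assumed for illustration:

```latex
f_{\mathrm{THz}} = |f_1 - f_2| \approx \frac{c\,\Delta\lambda}{\lambda^{2}},
\qquad
\frac{(3\times 10^{8}\,\mathrm{m/s})\,(8\,\mathrm{nm})}{(1550\,\mathrm{nm})^{2}}
\approx 1\,\mathrm{THz}
```

so tuning the wavelength separation of the two DFB oscillators by a few nanometres sweeps the output across the THz band.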

  9. Modeling and Verification of Dependable Electronic Power System Architecture

    Yuan, Ling; Fan, Ping; Zhang, Xiao-fang

The electronic power system can be viewed as a system composed of a set of concurrently interacting subsystems to generate, transmit, and distribute electric power. The complex interaction among subsystems makes the design of an electronic power system complicated. Furthermore, in order to guarantee the safe generation and distribution of electric power, fault tolerant mechanisms are incorporated in the system design to satisfy high reliability requirements. As a result, this incorporation makes the design of such systems more complicated. We propose a dependable electronic power system architecture, which provides a generic framework to guide the development of electronic power systems and ease development complexity. In order to provide common idioms and patterns to system designers, we formally model the electronic power system architecture using the PVS formal language. Based on the PVS model of this system architecture, we formally verify the fault tolerant properties of the system architecture using the PVS theorem prover, which can guarantee that the system architecture satisfies the high reliability requirements.

  10. Joint C4ISR Architecture Planning/Analysis System (JCAPS)

    Wostbrock, Bill

    2002-01-01

    The contractor satisfactorily completed all tasks under both efforts, providing the technology and technical expertise in the development of the Joint C4ISR Architecture Planning/Analysis System (JCAPS) Database Tool...

  11. Architecture for Integrated System Health Management, Phase I

    National Aeronautics and Space Administration — Managing the health of vehicle, crew, and habitat systems is a primary function of flight controllers today. We propose to develop an architecture for automating...

  12. Implementing an Intrusion Detection System in the Mysea Architecture

    Tenhunen, Thomas

    2008-01-01

    .... The objective of this thesis is to design an intrusion detection system (IDS) architecture that permits administrators operating on MYSEA client machines to conveniently view and analyze IDS alerts from the single level networks...

  13. A Reference Architecture for Network-Centric Information Systems

    Renner, Scott; Schaefer, Ronald

    2003-01-01

    This paper presents the "C2 Enterprise Reference Architecture" (C2ERA), which is a new technical concept of operations for building information systems better suited to the Network-Centric Warfare (NCW) environment...

  14. Modular Architecture for the Deep Space Habitat Instrumentation System

    National Aeronautics and Space Administration — This project is focused on developing a continually evolving modular backbone architecture for the Deep Space Habitat (DSH) instrumentation system by integrating new...

  15. Space Telecommunications Radio System (STRS) Architecture. Part 1; Tutorial - Overview

    Handler, Louis M.; Briones, Janette C.; Mortensen, Dale J.; Reinhart, Richard C.

    2012-01-01

Space Telecommunications Radio System (STRS) Architecture Standard provides a NASA standard for software-defined radio. STRS is being demonstrated in the Space Communications and Navigation (SCaN) Testbed, formerly known as the Communications, Navigation and Networking Configurable Testbed (CoNNeCT). Ground station radios communicating with the SCaN Testbed are also being written to comply with the STRS architecture. The STRS Architecture Tutorial Overview presents a general introduction to the STRS architecture standard developed at the NASA Glenn Research Center (GRC), addresses frequently asked questions, and clarifies methods of implementing the standard. The STRS architecture should be used as a base for many of NASA's future telecommunications technologies. The presentation will provide a basic understanding of STRS.

  16. ARCHITECTURE SOFTWARE SOLUTION TO SUPPORT AND DOCUMENT MANAGEMENT QUALITY SYSTEM

    Milan Eric

    2010-12-01

Full Text Available One of the bases of the JUS ISO 9000 series of standards is quality system documentation. The architecture of the quality system documentation depends on the complexity of the business system. Establishing efficient management of the quality system documentation is of great importance for the business system, both in the phase of introducing the quality system and in further stages of its improvement. The study describes the architecture and capability of software solutions to support and manage quality system documentation in accordance with the requirements of standards such as ISO 9001:2001, ISO 14001:2005 and HACCP.

  17. Design of an application using microservices architecture and its deployment in the cloud

    Fernández Garcés, Lidia

    2016-01-01

The design of modern large business application systems is moving from monolithic enterprise architectures towards microservices-based architectures. These are especially well suited to running in cloud environments, because each microservice can be developed, deployed and managed individually, which allows much more fine-grained control and scalability. Based on a concrete example, this project builds an application using a microservice architecture, showing...
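Since the entry turns on each microservice being independently developable and deployable, a minimal sketch may help. The Python service below uses only the standard library; the endpoint, port, and data are illustrative assumptions, not part of the cited project:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CatalogService(BaseHTTPRequestHandler):
    """One self-contained microservice: it owns its data and its endpoint,
    and can be deployed, scaled, and replaced independently of any other."""
    ITEMS = [{"id": 1, "name": "widget"}]       # service-local state

    def do_GET(self):
        if self.path == "/items":
            body = json.dumps(self.ITEMS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Each microservice would typically run in its own container,
    # behind its own port, and be scaled independently.
    HTTPServer(("0.0.0.0", 8080), CatalogService).serve_forever()
```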

  18. Scalable photoreactor for hydrogen production

    Takanabe, Kazuhiro; Shinagawa, Tatsuya

    2017-01-01

Provided herein are scalable photoreactors that can include a membrane-free water-splitting electrolyzer and systems that can include a plurality of membrane-free water-splitting electrolyzers. Also provided herein are methods of using the scalable photoreactors provided herein.

  19. Scalable photoreactor for hydrogen production

    Takanabe, Kazuhiro

    2017-04-06

Provided herein are scalable photoreactors that can include a membrane-free water-splitting electrolyzer and systems that can include a plurality of membrane-free water-splitting electrolyzers. Also provided herein are methods of using the scalable photoreactors provided herein.

  20. An Energy-Efficient and Scalable Deep Learning/Inference Processor With Tetra-Parallel MIMD Architecture for Big Data Applications.

    Park, Seong-Wook; Park, Junyoung; Bong, Kyeongryeol; Shin, Dongjoo; Lee, Jinmook; Choi, Sungpill; Yoo, Hoi-Jun

    2015-12-01

The Deep Learning algorithm is widely used for various pattern recognition applications such as text recognition, object recognition and action recognition because of its best-in-class recognition accuracy compared to hand-crafted and shallow-learning-based algorithms. The long learning time caused by its complex structure, however, has so far limited its usage to high-cost servers or many-core GPU platforms. On the other hand, demand for customized pattern recognition within personal devices will grow gradually as more deep learning applications are developed. This paper presents an SoC implementation that enables deep learning applications to run on low-cost platforms such as mobile or portable devices. Different from conventional works, which have adopted massively parallel architectures, this work adopts a task-flexible architecture and exploits multiple forms of parallelism to cover the complex functions of the convolutional deep belief network, one of the popular deep learning/inference algorithms. In this paper, we implement the most energy-efficient deep learning and inference processor for wearable systems. The implemented 2.5 mm × 4.0 mm deep learning/inference processor is fabricated using 65 nm 8-metal CMOS technology for a battery-powered platform with real-time deep inference and deep learning operation. It consumes 185 mW average power, and 213.1 mW peak power at 200 MHz operating frequency and 1.2 V supply voltage. It achieves 411.3 GOPS peak performance and 1.93 TOPS/W energy efficiency, which is 2.07× higher than the state-of-the-art.
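The quoted efficiency follows directly from the peak figures; this is just the standard performance-per-watt arithmetic applied to the numbers in the abstract:

```latex
\frac{411.3\ \mathrm{GOPS}}{213.1\ \mathrm{mW}}
= \frac{411.3 \times 10^{9}\ \mathrm{OPS}}{0.2131\ \mathrm{W}}
\approx 1.93\ \mathrm{TOPS/W}
```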

  1. PKI Scalability Issues

    Slagell, Adam J; Bonilla, Rafael

    2004-01-01

This report surveys different PKI technologies, such as PKIX and SPKI, and the issues of PKI that affect scalability. Much of the focus is on certificate revocation methodologies and status verification systems such as CRLs, Delta-CRLs, CRS, Certificate Revocation Trees, Windowed Certificate Revocation, OCSP, SCVP and DVCS.
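To make the revocation-scalability trade-off concrete, the Python sketch below contrasts a full-CRL check with a delta-CRL update. The data structures and serial numbers are illustrative assumptions; real CRLs are signed X.509 structures distributed by the CA:

```python
# Toy model of CRL-based revocation checking. A set of revoked serial
# numbers stands in for a signed X.509 CRL.
base_crl = {1001, 1007, 1042}               # full CRL, published periodically

def is_revoked(serial, crl):
    return serial in crl

# Delta-CRL idea: rather than re-downloading the (potentially huge) base
# CRL, clients fetch only the changes since the last base -- the bandwidth
# saving is the scalability argument the report surveys.
delta_crl = {"added": {1099}, "removed": {1007}}
current_crl = (base_crl | delta_crl["added"]) - delta_crl["removed"]

assert is_revoked(1099, current_crl)         # newly revoked certificate
assert not is_revoked(1007, current_crl)     # e.g. a hold that was released
```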

  2. Architecture Governance: The Importance of Architecture Governance for Achieving Operationally Responsive Ground Systems

    Kolar, Mike; Estefan, Jeff; Giovannoni, Brian; Barkley, Erik

    2011-01-01

Topics covered: (1) Why Governance and Why Now? (2) Characteristics of Architecture Governance (3) Strategic Elements (3a) Architectural Principles (3b) Architecture Board (3c) Architecture Compliance (4) Architecture Governance Infusion Process. Governance is concerned with decision making (i.e., setting directions, establishing standards and principles, and prioritizing investments). Architecture governance is the practice and orientation by which enterprise architectures and other architectures are managed and controlled at an enterprise-wide level.

  3. Experimental and numerical investigation of a scalable modular geothermal heat storage system

    Nordbeck, Johannes; Bauer, Sebastian; Beyer, Christof

    2017-04-01

Storage of heat will play a significant role in the transition towards a reliable and renewable power supply, as it offers a way to store energy from fluctuating, weather-dependent energy sources like solar or wind power and thus better meet consumer demand. The focus of this study is the simulation-based design of a heat storage system featuring a scalable and modular setup that can be integrated with new as well as existing buildings. For this, the system can be installed either in a cellar or directly in the ground. Heat is supplied by solar collectors, and heat storage is intended at temperatures up to about 90°C, which requires verification of the methods used for the numerical simulation of such systems. One module of the heat storage system consists of a helical heat exchanger in a fully water-saturated, high-porosity cement matrix, which represents the heat storage medium. A lab-scale storage prototype of 1 m3 volume was set up in a thermally insulated cylinder equipped with temperature and moisture sensors, as well as flux meters and temperature sensors at the inlet and outlet pipes, in order to experimentally analyze the performance of the storage system. Furthermore, the experimental data was used to validate an accurate and spatially detailed high-resolution 3D numerical model of heat and fluid flow, which was developed for system design optimization with respect to storage efficiency and environmental impacts. Three experiments conducted so far are reported and analyzed in this work. The first experiment, consisting of cooling of the fully loaded heat storage by heat loss across the insulation, is designed to determine the heat loss and the insulation parameters, i.e. heat conductivity and heat capacity of the insulation, via inverse modelling of the cooling period. The average cooling rate found experimentally is 1.2 °C per day. The second experiment consisted of six days of thermal loading up to a storage temperature of 60°C followed by four days...
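For the cooling experiment, a lumped-capacitance (Newtonian cooling) model is the standard way to relate the measured temperature decay to the insulation parameters; the entry only states that inverse modelling was used, so the form below is our assumption of the usual approach:

```latex
T(t) = T_{\mathrm{amb}} + \left(T_0 - T_{\mathrm{amb}}\right) e^{-t/\tau},
\qquad \tau = \frac{m\,c_p}{U A}
```

Fitting the time constant $\tau$ to the observed 1.2 °C-per-day decay then yields the effective loss coefficient $UA$, from which the heat conductivity of the insulation can be inferred.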

  4. An architecture for agile shop floor control systems

    Langer, Gilad; Alting, Leo

    2000-01-01

... as shop floor control. This paper presents the Holonic Multi-cell Control System (HoMuCS) architecture that allows for design and development of holonic shop floor control systems. The HoMuCS is a shop floor control system which is sometimes referred to as a manufacturing execution system...

  5. RU COOL's scalable educational focus on immersing society in the ocean through ocean observing systems

    Schofield, O.; McDonnell, J. D.; Kohut, J. T.; Glenn, S. M.

    2016-02-01

Many regions of the ocean are exhibiting significant change, suggesting the need to develop effective, focused education programs for a range of constituencies (K-12, undergraduate, and the general public). We have focused on developing a range of educational tools in a multi-pronged strategy built around streaming data delivered through customized web services, focused undergraduate tiger teams, teacher training, and video/documentary film-making. Core to these efforts is engaging the undergraduate community by leveraging the data management tools of the U.S. Integrated Ocean Observing System (IOOS) and the education tools of the U.S. National Science Foundation's (NSF) Ocean Observatories Initiative (OOI). These intuitive, interactive browser-based tools reduce the barriers to student participation in sea exploration and discovery, and allow students to become "field-going" oceanographers while sitting at their desks. Those undergraduate efforts complement work to improve educator and student engagement in ocean sciences through exposure to scientists and data. Through professional development and the creation of data tools, we reduce the logistical costs of bringing ocean science to students in grades 6-16. We are providing opportunities to: 1) build the capacity of scientists to communicate and engage with diverse audiences; 2) create scalable, in-person and virtual opportunities for educators and students to engage with scientists and their research through data visualizations, data activities, educator workshops, webinars, and student research symposia. We use a blended learning approach to promote partnerships and cross-disciplinary sharing. Finally, we use data and video products to build public support through the development of science documentaries about the science and the people who conduct it. For example, Antarctic Edge is a feature-length, award-winning documentary about climate change that has garnered interest in movie theatres...

  6. The architecture and prototype implementation of the Model Environment system

    Donchyts, G.; Treebushny, D.; Primachenko, A.; Shlyahtun, N.; Zheleznyak, M.

    2007-01-01

An approach that simplifies the software development of model-based decision support systems for environmental management is introduced. The approach is based on the definition and management of metadata and data related to a computational model, without losing data semantics, and on proposed methods for integrating new modules into the information system and managing them. An architecture of the integrated modelling system is presented. The proposed architecture has been implemented as a prototype of an integrated modelling system using .NET/Gtk# and is currently being used to re-design the European Decision Support System for Nuclear Emergency Management RODOS (http://www.rodos.fzk.de) using Java/Swing.

  7. THE ARCHITECTURE OF THE REMOTE CONTROL SYSTEM OF ROBOTICS OBJECTS

    S.V. Shavetov

    2014-03-01

Full Text Available The paper deals with an architecture for the universal remote control of robotic objects over the global Internet. The controlled objects are assumed to be located at a considerable distance from the reference device or end users. An overview of studies on the subject of remote control of technical objects is given. A structure chart of the architecture demonstrating the system's usage in practice is suggested. Server software is considered that makes it possible to work with technical objects connected to the server as with a serial port, and to organize a stable tunnel connection between the controlled object and the end user. The proposed architecture has been successfully tested on the mobile robots Parallax Boe-Bot and Lego Mindstorms NXT. Experimental data on time delays are given, demonstrating the effectiveness of the considered architecture.
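The server's role, presenting a remote device as if it were a local serial port and tunnelling traffic to the end user, can be sketched as a simple TCP relay. In the Python below, the ports and single-connection setup are illustrative assumptions; the paper's server is more elaborate:

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes one way until either side closes -- half of a tunnel."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def relay(device_port=9000, user_port=9001):
    """Accept one device and one user connection, then bridge them so the
    user talks to the remote robot as if over a local serial line."""
    def listen(port):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("0.0.0.0", port))
        s.listen(1)
        conn, _ = s.accept()
        return conn

    device, user = listen(device_port), listen(user_port)
    threading.Thread(target=pipe, args=(device, user), daemon=True).start()
    pipe(user, device)          # forward user commands to the device

if __name__ == "__main__":
    relay()
```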

  8. Imaging radars: System architectures and technologies

    Torre, Andrea [Thales Alenia Space Italia S.p.A., Via Saccomuro 24, 00131 Roma (Italy); Angino, Giuseppe, E-mail: giuseppe.angino@thalesaleniaspace.com [Thales Alenia Space Italia S.p.A., Via Saccomuro 24, 00131 Roma (Italy)

    2013-08-21

The potential of multichannel SAR to provide wide swath and high resolution at the same time has been described in many papers in recent years. The scope of this paper is to address some of the architectural and technological aspects of implementing a multichannel receiver for a multibeam SAR, with the objective of providing solutions for different configurations of increasing complexity. A further point is the exploitation of the multichannel configuration for the implementation of very high resolution modes.

  9. Architectural transformations in network services and distributed systems

    Luntovskyy, Andriy

    2017-01-01

With the given work we decided to help not only the readers but also ourselves, as professionals actively involved in the networking branch, to understand the trends that have developed in the recent two decades in distributed systems and networks. Important architectural transformations of distributed systems are examined, and examples of new architectural solutions are discussed. Content: Periodization of service development; Energy efficiency; Architectural transformations in distributed systems; Clustering and parallel computing, performance models; Cloud computing, RAICs, virtualization, SDN; Smart grid, Internet of Things, fog computing; Mobile communication from LTE to 5G, DIDO, SAT-based systems; Data security guarantees in distributed systems. Target groups: Students in EE and IT at universities and (dual) technical high schools; Graduated engineers as well as teaching staff. About the authors: Andriy Luntovskyy provides classes on networks, mobile communication, software technology, distributed systems, ...

  10. Central system of Interlock of ITER, high integrity architecture

    Prieto, I.; Martinez, G.; Lopez, C.

    2014-01-01

The CIS (Central Interlock System), along with the CODAC system and the CSS (Central Safety System), forms the central I&C systems of ITER. The CIS is responsible for implementing the core protection functions (Central Interlock Functions) through different plant systems within the overall investment protection strategy of ITER. IBERDROLA provides engineering support to define and develop the CIS control architecture according to stringent requirements on integrity, availability and response time. For functions with response times on the order of half a second, industrial-range high-availability PLCs were selected. However, due to the nature of the machine itself, certain functions must be able to act in under a millisecond, so a solution based on FPGAs (Field Programmable Gate Arrays), capable of meeting the architectural requirements, had to be developed. In this article the CIS architecture is described, as well as the process for the development and validation of the selected platforms. (Author)

  11. The Double-System Architecture for Trusted OS

    Zhao, Yong; Li, Yu; Zhan, Jing

With the development of computer science and technology, current secure operating systems have failed to respond to many new security challenges. The Trusted Operating System (TOS) has been proposed to try to solve these problems. However, there are no mature, unified architectures for the TOS yet, since most of them cannot make clear the relationship between the security mechanism and the trusted mechanism. Therefore, this paper proposes a double-system architecture (DSA) for the TOS to solve this problem. The DSA is composed of the Trusted System (TS) and the Security System (SS). We constructed the TS by establishing a trusted environment, and realized the corresponding SS. Furthermore, we propose the Trusted Information Channel (TIC) to protect the information flow between the TS and SS. In a word, the double-system architecture we propose can provide reliable protection for the OS through the SS, with the support provided by the TS.

  12. Dynamic logic architecture based on piecewise-linear systems

    Peng Haipeng; Liu Fei; Li Lixiang; Yang Yixian; Wang Xue

    2010-01-01

This Letter explores piecewise-linear systems to construct a dynamic logic architecture. The proposed schemes can discriminate the two input signals and obtain 16 kinds of logic operations through different combinations of parameters and conditions for determining the output. Each logic cell performs more flexibly, which makes it possible to achieve complex logic operations more simply and to construct computing architectures with fewer logic cells. We also analyze the performance of our schemes under different conditions and the characteristics of these schemes.
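The idea that one parameterized cell can realize different Boolean operations is easy to illustrate with a piecewise-linear "window" cell; the thresholds below are our illustrative choice, not the Letter's construction:

```python
def pwl_cell(x1, x2, low, high):
    """Piecewise-linear logic cell: output 1 iff the input sum falls
    inside the window [low, high]. Changing only (low, high)
    reconfigures the same cell into different Boolean gates."""
    s = x1 + x2                     # linear part; weights fixed at 1 here
    return 1 if low <= s <= high else 0

GATES = {"AND": (2, 2), "OR": (1, 2), "XOR": (1, 1), "NOR": (0, 0)}

for name, (low, high) in GATES.items():
    table = [pwl_cell(a, b, low, high) for a in (0, 1) for b in (0, 1)]
    print(name, table)              # e.g. XOR -> [0, 1, 1, 0]
```

Note that XOR, which a single linear threshold cannot realize, falls out naturally from the two-sided window, which is the flexibility the Letter attributes to piecewise-linear cells.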

  13. An integrated architecture for the ITER RH control system

    Hamilton, David Thomas; Tesini, Alessandro

    2012-01-01

Highlights: Control system architecture integrating ITER remote handling equipment systems. Standard control system architecture for remote handling equipment systems. Research and development activities to validate the control system architecture. Standardization studies to select standard parts for the control system architecture. Abstract: The ITER remote handling (RH) system has been divided into 7 major equipment system procurements that deliver complete systems (operator interfaces, equipment controllers, and equipment) according to task-oriented functional specifications. Each equipment system itself is an assembly of transporters, power manipulators, telemanipulators, vehicular systems, cameras, and tooling, with a need for controllers and operator interfaces. From an operational perspective, the ITER RH systems are bound together by common control rooms, a common operations team, and a common maintenance team, and will need to achieve, to a varying degree, synchronization of operations, co-operation on tasks, hand-over of components, and sharing of data and resources. The separately procured RH systems must, therefore, be integrated to form a unified RH system for operation from the RH control rooms. The RH system will contain a heterogeneous mix of specially developed RH systems and off-the-shelf RH equipment and parts. The ITER Organization approach is to define a control system architecture that supports interoperable heterogeneous modules, and to specify a standard set of modules for each system to implement within this architecture. Compatibility with standard parts for selected modules is required to limit the complexity of operations and maintenance. A key requirement for integrating the control system modules is interoperability, and no module should have dependencies on the implementation details of other modules. The RH system is one of the ITER plant systems that are integrated and coordinated through the hierarchical structure of the ITER CODAC system.

  14. Architectural development of an advanced EVA Electronic System

    Lavelle, Joseph

    1992-01-01

    An advanced electronic system for future EVA missions (including zero gravity, the lunar surface, and the surface of Mars) is under research and development within the Advanced Life Support Division at NASA Ames Research Center. As a first step in the development, an optimum system architecture has been derived from an analysis of the projected requirements for these missions. The open, modular architecture centers around a distributed multiprocessing concept where the major subsystems independently process their own I/O functions and communicate over a common bus. Supervision and coordination of the subsystems is handled by an embedded real-time operating system kernel employing multitasking software techniques. A discussion of how the architecture most efficiently meets the electronic system functional requirements, maximizes flexibility for future development and mission applications, and enhances the reliability and serviceability of the system in these remote, hostile environments is included.

  15. System design in an evolving system-of-systems architecture and concept of operations

    Rovekamp, Roger N., Jr.

Proposals for space exploration architectures have increased in complexity and scope. Constituent systems (e.g., rovers, habitats, in-situ resource utilization facilities, transfer vehicles, etc.) must meet the needs of these architectures by performing in multiple operational environments and across multiple phases of the architecture's evolution. This thesis proposes an approach for using system-of-systems engineering principles in conjunction with system design methods (e.g., multi-objective optimization, genetic algorithms, etc.) to create system design options that perform effectively at both the system and system-of-systems levels, across multiple concepts of operations, and over multiple architectural phases. The framework is presented by way of an application problem that investigates the design of power systems within a power-sharing architecture for use in a human Lunar Surface Exploration Campaign. A computer model has been developed that uses candidate power grid distribution solutions for a notional lunar base. The agent-based model utilizes virtual control agents to manage the interactions of various exploration and infrastructure agents. The philosophy behind the model is based both on lunar power supply strategies proposed in the literature and on the author's own approaches to power distribution strategies for future lunar bases. In addition to proposing a framework for system design, further implications of system-of-systems engineering principles are briefly explored, specifically as they relate to producing more robust cross-cultural system-of-systems architecture solutions.

  16. The CEBAF [Continuous Electron Beam Accelerator Facility] control system architecture

    Bork, R.

    1987-01-01

    The focus of this paper is on CEBAF's computer control system. This control system will utilize computers in a distributed, networked configuration. The architecture, networking and operating system of the computers, and preliminary performance data are presented. We will also discuss the design of the operator consoles and the interfacing between the computers and CEBAF's instrumentation and operating equipment

  17. Architectural conceptual definition of the CAREM-25 reactor's control system

    Perez, J.C.; Santome, D.; Drexler, J.; Escudero, S.

    1990-01-01

This work presents the conceptual definition of the structure of the CAREM-25 reactor's digital control and monitoring system. The requirements of the system are analyzed and different implementation alternatives are studied, in which possible basic architectures of the system and its topology are considered and evaluated. (Author) [es]

  18. Disruptive Logic Architectures and Technologies From Device to System Level

    Gaillardon, Pierre-Emmanuel; Clermidy, Fabien

    2012-01-01

    This book discusses the opportunities offered by disruptive technologies to overcome the economical and physical limits currently faced by the electronics industry. It provides a new methodology for the fast evaluation of an emerging technology from an architectural perspective and discusses the implications from simple circuits to complex architectures. Several technologies are discussed, ranging from 3-D integration of devices (Phase Change Memories, Monolithic 3-D, Vertical NanoWires-based transistors) to dense 2-D arrangements (Double-Gate Carbon Nanotubes, Sublithographic Nanowires, Lithographic Crossbar arrangements). Novel architectural organizations, as well as the associated tools, are presented in order to explore this freshly opened design space. Describes a novel architectural organization for future reconfigurable systems; Includes a complete benchmarking toolflow for emerging technologies; Generalizes the description of reconfigurable circuits in terms of hierarchical levels; Assesses disruptive...

  19. A compact, coherent light source system architecture

    Biedron, S. G.; Dattoli, G.; DiPalma, E.; Einstein, J.; Milton, S. V.; Petrillo, V.; Rau, J. V.; Sabia, E.; Spassovsky, I. P.; van der Slot, P. J. M.

    2016-09-01

Our team has been examining several architectures for short-wavelength, coherent light sources. We are presently exploring the use and role of advanced, high-peak-power lasers both for accelerating the electrons and for generating a compact light source with the same laser. Our overall goal is to devise light sources that are more accessible to industry and smaller laboratory settings. Although we cannot and do not want to compete directly with sources such as third-generation light sources or national-laboratory-based free-electron lasers, we have several interesting schemes that could bring useful, more coherent, short-wavelength light sources to more researchers. Here, we present and discuss several results of recent simulations and our future steps for such dissemination.

  20. Architectural considerations in the certification of modular systems

    Bate, Iain; Kelly, Tim

    2003-09-01

    Modular system architectures, such as integrated modular avionics (IMA) in the aerospace sector, offer potential benefits of improved flexibility in function allocation, reduced development costs and improved maintainability. However, they require a new certification approach. The traditional approach to certification is to prepare monolithic safety cases as bespoke developments for a specific system in a fixed configuration. However, this nullifies the benefits of flexibility and reduced rework claimed of IMA-based systems and will necessitate the development of new safety cases for all possible (current and future) configurations of the architecture. This paper discusses a modular approach to safety case construction, whereby the safety case is partitioned into separable arguments of safety corresponding with the components of the system architecture. Such an approach relies upon properties of the IMA system architecture (such as segregation and location independence) having been established. The paper describes how such properties can be assessed to show that they are met and trade-offs performed during architecture definition reusing information and techniques from the safety argument process.

  1. Communication Architecture in Mixed-Reality Simulations of Unmanned Systems.

    Selecký, Martin; Faigl, Jan; Rollo, Milan

    2018-03-14

Verification of the correct functionality of multi-vehicle systems in high-fidelity scenarios is required before any deployment of such a complex system, e.g., in missions of remote sensing or in mobile sensor networks. Mixed-reality simulations, where both virtual and physical entities can coexist and interact, have been shown to be beneficial for the development, testing, and verification of such systems. This paper deals with the problems of designing the communication subsystem for such highly desirable realistic simulations. Requirements of this communication subsystem, including proper addressing, transparent routing, visibility modeling, and message management, are specified prior to designing an appropriate solution. Then, a suitable architecture for this communication subsystem is proposed, together with solutions to the challenges that arise when simultaneous virtual and physical message transmissions occur. The proposed architecture can be utilized as a high-fidelity network simulator for vehicular systems with implicit mobility models that are given by the real trajectories of the vehicles. The architecture has been utilized within multiple projects dealing with the development and practical deployment of multi-UAV systems, which supports the architecture's viability and advantages. The provided experimental results show the achieved similarity of the communication characteristics of the fully deployed hardware setup to the setup utilizing the proposed mixed-reality architecture.

  2. Architectural considerations in the certification of modular systems

    Bate, Iain; Kelly, Tim

    2003-01-01

    Modular system architectures, such as integrated modular avionics (IMA) in the aerospace sector, offer potential benefits of improved flexibility in function allocation, reduced development costs and improved maintainability. However, they require a new certification approach. The traditional approach to certification is to prepare monolithic safety cases as bespoke developments for a specific system in a fixed configuration. However, this nullifies the benefits of flexibility and reduced rework claimed of IMA-based systems and will necessitate the development of new safety cases for all possible (current and future) configurations of the architecture. This paper discusses a modular approach to safety case construction, whereby the safety case is partitioned into separable arguments of safety corresponding with the components of the system architecture. Such an approach relies upon properties of the IMA system architecture (such as segregation and location independence) having been established. The paper describes how such properties can be assessed to show that they are met and trade-offs performed during architecture definition reusing information and techniques from the safety argument process

  3. Implications of Services-Oriented Architecture and Open Architecture Composable Systems on the Acquisition Organizations and Processes

    Brummett, Cory S; Finney, Benjamin H

    2008-01-01

    .... Many systems, systems-of-systems and families of systems with different software architectures are acquired and often have difficulty operating together, which causes delays, increases costs, and limits re-use...

  4. Performance evaluation of microservices architectures using containers

    Amaral, Marcelo; Polo, Jordà; Carrera Pérez, David; Mohomed, Iqbal; Unuvar, Merve; Steinder, Malgorzata

    2015-01-01

Microservices architecture has started a new trend in application development for a number of reasons: (1) to reduce complexity by using tiny services; (2) to scale, remove and deploy parts of the system easily; (3) to improve flexibility to use different frameworks and tools; (4) to increase the overall scalability; and (5) to improve the resilience of the system. Containers have empowered the usage of microservices architectures by being lightweight, providing fast start-up times, and having...

  5. Technology System Architecture for Web–Based Education

    A. Canales–Cruz

    2009-04-01

Full Text Available In this paper a new architecture for the development of Web-Based Education systems is presented. These systems are centered on the learner and adapted to their personal needs in an intelligent way. The architecture is based on the IEEE 1484 LTSA (Learning Technology System Architecture) specification and brings together software development and instructional design patterns. On the one hand, the software development pattern is supported by a Multi-Agent System; it employs the methods and techniques of Domain Engineering for the development of IRLCOO (Intelligent Reusable Learning Components Object Oriented). IRLCOO are a special type of Sharable Content Object according to SCORM (Sharable Content Object Reference Model). On the other hand, the instructional design pattern incorporates a mental model, such as Concept Maps, to transmit, build and generate knowledge appropriate to this type of educational environment.

  6. Architecture Level Safety Analyses for Safety-Critical Systems

    K. S. Kushal

    2017-01-01

Full Text Available The dependency of complex embedded safety-critical systems across the avionics and aerospace domains on their underlying software and hardware components has gradually increased with the progression of time. Such application-domain systems are developed on top of a complex integrated architecture, which is modular in nature. Engineering practices backed by system safety standards to manage failure, faulty, and unsafe operational conditions are very much necessary. System safety analyses involve the analysis of the complex software architecture of the system, a major aspect in leading to fatal consequences in the behaviour of safety-critical systems, and provide high reliability and dependability factors during their development. In this paper, we propose an architecture fault modeling and safety analysis approach that aids in identifying and eliminating design flaws. The formal foundations of the SAE Architecture Analysis & Design Language (AADL) augmented with the Error Model Annex (EMV) are discussed. Fault propagation, failure behaviour, and the composite behaviour of the design flaws/failures are considered for the architecture safety analysis. The proposed approach is illustrated and validated by implementing the Speed Control Unit of a Power-Boat Autopilot (PBA) system. The Error Model Annex (EMV) is guided by the pattern of consideration and inclusion of probable failure scenarios and the propagation of fault conditions in the Speed Control Unit of the Power-Boat Autopilot (PBA). This helps in validating the system architecture through the detection of error events in the model and their impact on the operational environment. It also provides insight into the certification impact that these exceptional conditions pose at various criticality levels and design assurance levels, and its implications for verifying and validating the designs.

  7. Effects of various event building techniques on data acquisition system architectures

    Barsotti, E.; Booth, A.; Bowden, M.

    1990-04-01

The preliminary specifications for various new detectors throughout the world, including those at the Superconducting Super Collider (SSC), already make it clear that existing event building techniques will be inadequate for the high trigger and data rates anticipated for these detectors. In the world of high-energy physics, many approaches have been taken to solving the problem of reading out data from a whole detector and presenting a complete event to the physicist, while simultaneously keeping deadtime to a minimum. This paper includes a review of multiprocessor and telecommunications interconnection networks and how these networks relate to event building in general, illustrating the advantages of the various approaches. It presents a more detailed study of recent research into new event building techniques which incorporate much greater parallelism to better accommodate high data rates. The future in areas such as front-end electronics architectures, high-speed data links, event building and online processor arrays is also examined. Finally, details of a scalable parallel data acquisition system architecture being developed at Fermilab are given. 35 refs., 31 figs., 1 tab.
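Event building is, at its core, the assembly of fragments that share an event number from many parallel sources into one complete event. The Python sketch below shows the bookkeeping that any event builder, serial or parallel, must perform; the fragment format and the four-source configuration are illustrative assumptions, not the Fermilab design:

```python
from collections import defaultdict

N_SOURCES = 4                     # e.g. four front-end readout crates

class EventBuilder:
    """Assemble complete events from per-source fragments keyed by
    event number -- the core bookkeeping behind any event builder."""
    def __init__(self, n_sources):
        self.n_sources = n_sources
        self.pending = defaultdict(dict)    # event_id -> {source: data}

    def add_fragment(self, event_id, source, data):
        self.pending[event_id][source] = data
        if len(self.pending[event_id]) == self.n_sources:
            return self.pending.pop(event_id)   # event is complete
        return None                             # still waiting

builder = EventBuilder(N_SOURCES)
# Fragments arrive interleaved across events, as they would from
# independent front-end links running in parallel.
arrivals = [(1, 0, b"a"), (2, 0, b"e"), (1, 1, b"b"), (1, 2, b"c"),
            (2, 1, b"f"), (1, 3, b"d"), (2, 2, b"g"), (2, 3, b"h")]
for event_id, source, data in arrivals:
    event = builder.add_fragment(event_id, source, data)
    if event is not None:
        print(f"event {event_id} complete:", event)
```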

  8. A Scalable and Modular Dome Illumination System for Scientific Microphotography on a Budget.

    Ricardo Kawada

Full Text Available A scalable and modular LED illumination dome for microscopic scientific photography is described and illustrated, and methods for constructing such a dome are detailed. Dome illumination for insect specimens has become standard practice across the field of insect systematics, but many dome designs remain expensive and inflexible with respect to new LED technology. Further, a one-size-fits-all dome cannot accommodate the large breadth of insect sizes encountered in nature, forcing the photographer to adapt, in some cases, to a less than ideal dome design. The dome described here is scalable, as it is based on an isodecahedron, and the template for the dome is available as a downloadable file from the internet that can be printed on any printer, on the photographer's choice of media. As a result, with this design a photographer can affordably produce a series of domes of various sizes and materials, and LED ring lights of various sizes and color temperatures, depending on the need.

  9. Architecture for Multi-Technology Real-Time Location Systems

    Rodas, Javier; Barral, Valentín; Escudero, Carlos J.

    2013-01-01

    The rising popularity of location-based services has prompted considerable research in the field of indoor location systems. Since there is no single technology to support these systems, it is necessary to consider the fusion of the information coming from heterogeneous sensors. This paper presents a software architecture designed for a hybrid location system where we can merge information from multiple sensor technologies. The architecture was designed to be used by different kinds of actors independently and with mutual transparency: hardware administrators, algorithm developers and user applications. The paper presents the architecture design, work-flow, case study examples and some results to show how different technologies can be exploited to obtain a good estimation of a target position. PMID:23435050

  10. Architecture for multi-technology real-time location systems.

    Rodas, Javier; Barral, Valentín; Escudero, Carlos J

    2013-02-07

    The rising popularity of location-based services has prompted considerable research in the field of indoor location systems. Since there is no single technology to support these systems, it is necessary to consider the fusion of the information coming from heterogeneous sensors. This paper presents a software architecture designed for a hybrid location system where we can merge information from multiple sensor technologies. The architecture was designed to be used by different kinds of actors independently and with mutual transparency: hardware administrators, algorithm developers and user applications. The paper presents the architecture design, work-flow, case study examples and some results to show how different technologies can be exploited to obtain a good estimation of a target position.

  11. ELISA, a demonstrator environment for information systems architecture design

    Panem, Chantal

    1994-01-01

This paper describes an approach to the reuse of software engineering technology in the area of ground space system design. System engineers have many needs similar to those of software developers: sharing of a common database, capitalization of knowledge, definition of a common design process, and communication between different technical domains. Moreover, system designers need to simulate their system dynamically as early as possible. Software development environments, methods and tools have now become operational and widely used. Their architecture is based on a unique object base and a set of common management services, and they host a family of tools for each life-cycle activity. In late '92, CNES decided to develop a demonstrative software environment supporting some system activities. The design of ground space data processing systems was chosen as the application domain. ELISA (Integrated Software Environment for Architectures Specification) was specified as a 'demonstrator', i.e. a sufficient basis for demonstrations, evaluations and future operational enhancements. A process with three phases was implemented: system requirements definition, design of system architecture models, and selection of physical architectures. Each phase is composed of several activities that can be performed in parallel, with the provision of Commercial Off-The-Shelf (COTS) tools. ELISA was delivered to CNES in January '94, is currently used for demonstrations and evaluations on real projects (e.g. the SPOT4 Satellite Control Center), and is on the way to new evolutions.

  12. A Security Architecture for Fault-Tolerant Systems

    1993-06-03

One aspect of our effort to achieve better performance is integrating the system into microkernel-based operating systems.

  13. System architecture of communication infrastructures for PPDR organisations

    Müller, Wilmuth

    2017-04-01

The growing number of events affecting public safety and security (PS&S) on a regional scale, with the potential to grow into large-scale cross-border disasters, puts increased pressure on the organizations responsible for PS&S. In order to respond to such events in a timely and adequate manner, Public Protection and Disaster Relief (PPDR) organizations need to cooperate, align their procedures and activities, share the needed information, and be interoperable. Existing PPDR/PMR technologies do not provide broadband capability, which is a major limitation in supporting new services and hence new information flows, and they currently have no successor. There is also no known standard that addresses the interoperability of these technologies. The paper at hand provides an approach to tackle the above-mentioned aspects by defining an Enterprise Architecture (EA) of PPDR organizations and a system architecture of next-generation PPDR communication networks for a variety of applications and services on broadband networks, including the ability to support inter-system, inter-agency and cross-border operations. The Open Safety and Security Architecture Framework (OSSAF) provides a framework and approach to coordinate the perspectives of different types of stakeholders within a PS&S organization. It aims at bridging the silos in the chain of command and at leveraging interoperability between PPDR organizations. The framework incorporates concepts from several mature enterprise architecture frameworks, including the NATO Architecture Framework (NAF). However, OSSAF does not provide details on how NAF should be used for describing the OSSAF perspectives and views. In this contribution, a mapping of the NAF elements to the OSSAF views is provided. Based on this mapping, an EA of PPDR organizations with a focus on communication infrastructure related capabilities is presented. Following the capability modeling, a system architecture for secure and interoperable communication infrastructures...

Scalable fractionation of iron oxide nanoparticles using a CO₂ gas-expanded liquid system

    Vengsarkar, Pranav S.; Xu, Rui; Roberts, Christopher B., E-mail: croberts@eng.auburn.edu [Auburn University, Department of Chemical Engineering (United States)

    2015-10-15

Iron oxide nanoparticles exhibit highly size-dependent physicochemical properties that are important in applications such as catalysis and environmental remediation. In order for these size-dependent properties to be effectively harnessed for industrial applications, scalable and cost-effective techniques for size-controlled synthesis or size separation must be developed. The synthesis of monodisperse iron oxide nanoparticles can be a prohibitively expensive process on a large scale. An alternative involves the use of inexpensive synthesis procedures followed by a size-selective processing technique. While there are many techniques available to fractionate nanoparticles, many of them are unable to fractionate iron oxide nanoparticles efficiently in a scalable and inexpensive manner. A scalable apparatus capable of fractionating large quantities of iron oxide nanoparticles into distinct fractions of different sizes and size distributions has been developed. The polydisperse iron oxide nanoparticles (2–20 nm) coated with oleic acid used in this study were synthesized using a simple and inexpensive version of the popular coprecipitation technique. The apparatus uses hexane as a CO₂ gas-expanded liquid to controllably precipitate nanoparticles inside a 1 L high-pressure reactor. This paper demonstrates the operation of this new apparatus and, for the first time, shows successful fractionation results on a system of metal oxide nanoparticles, with initial nanoparticle concentrations at the gram scale. The analysis of the obtained fractions was performed using transmission electron microscopy and dynamic light scattering. This simple apparatus provides a pathway to separate large quantities of iron oxide nanoparticles based upon their size for use in various industrial applications.

  15. Rapid and Scalable Characterization of CRISPR Technologies Using an E. coli Cell-Free Transcription-Translation System.

    Marshall, Ryan; Maxwell, Colin S; Collins, Scott P; Jacobsen, Thomas; Luo, Michelle L; Begemann, Matthew B; Gray, Benjamin N; January, Emma; Singer, Anna; He, Yonghua; Beisel, Chase L; Noireaux, Vincent

    2018-01-04

    CRISPR-Cas systems offer versatile technologies for genome engineering, yet their implementation has been outpaced by ongoing discoveries of new Cas nucleases and anti-CRISPR proteins. Here, we present the use of E. coli cell-free transcription-translation (TXTL) systems to vastly improve the speed and scalability of CRISPR characterization and validation. TXTL can express active CRISPR machinery from added plasmids and linear DNA, and TXTL can output quantitative dynamics of DNA cleavage and gene repression-all without protein purification or live cells. We used TXTL to measure the dynamics of DNA cleavage and gene repression for single- and multi-effector CRISPR nucleases, predict gene repression strength in E. coli, determine the specificities of 24 diverse anti-CRISPR proteins, and develop a fast and scalable screen for protospacer-adjacent motifs that was successfully applied to five uncharacterized Cpf1 nucleases. These examples underscore how TXTL can facilitate the characterization and application of CRISPR technologies across their many uses.
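
    The "quantitative dynamics" the abstract mentions are time courses of DNA cleavage or gene repression; a common way to reduce such a time course to a single number is a first-order kinetic fit. The sketch below, with invented data points, shows one way this could be done; it is an illustration, not the authors' analysis pipeline.

```python
# Hedged sketch: fit a first-order decay to a (hypothetical) fraction of
# intact target DNA over time, as one might summarize a TXTL cleavage assay.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, f0, k):
    return f0 * np.exp(-k * t)   # fraction of intact target DNA at time t

t_min = np.array([0, 15, 30, 60, 120])             # sampling times (min)
intact = np.array([1.0, 0.62, 0.40, 0.17, 0.03])   # invented readout

(f0, k), _ = curve_fit(first_order, t_min, intact, p0=(1.0, 0.02))
print(f"apparent cleavage rate k = {k:.3f}/min, "
      f"half-life = {np.log(2)/k:.1f} min")
```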

  16. Architecture of an acquisition system-multiprocessors

    Postec, H.

    1987-07-01

    To keep pace with the growing number of parameters of concern in nuclear detection systems, acquisition systems are becoming larger and must offer very good speed performance. At Ganil, four detection systems have been installed in the Nautilus reaction chamber, leading to experimental configurations with 700 parameters to process. Given the limitations of the present acquisition system, a device better suited to reading out a large number of channels proved necessary. Functionalities already operating in other systems and hardware already in use were chosen; specific technical solutions were also developed in order to use the most recent techniques and to take into account the four-detection-system structure of the device. [fr]

  17. Biomolecular System Design: Architecture, Synthesis, and Simulation

    Chiang, Katherine

    2015-01-01

    The advancements in systems and synthetic biology have been broadening the range of realizable systems with increasing complexity both in vitro and in vivo. Systems for digital logic operations, signal processing, analog computation, program flow control, as well as those composed of different functions – for example an on-site diagnostic system based on multiple biomarker measurements and signal processing – have been realized successfully. However, the efforts to date tend to tackle each de...

  18. Architecture of the modern accelerator control system

    Samardzic, B.; Drndarevic, V.

    2000-01-01

    A well-defined system concept and a construction plan are important conditions for the successful realization of an accelerator control system. In this paper, the modern concept of an accelerator control system, as well as guidelines for its efficient development, is presented. The described concept could be applied to the design of control systems for other types of experimental physics facilities and for industrial process control. (author)

  19. Supervisory Control System Architecture for Advanced Small Modular Reactors

    Cetiner, Sacit M [ORNL; Cole, Daniel L [University of Pittsburgh; Fugate, David L [ORNL; Kisner, Roger A [ORNL; Melin, Alexander M [ORNL; Muhlheim, Michael David [ORNL; Rao, Nageswara S [ORNL; Wood, Richard Thomas [ORNL

    2013-08-01

    This technical report was generated as a product of the Supervisory Control for Multi-Modular SMR Plants project within the Instrumentation, Control and Human-Machine Interface technology area under the Advanced Small Modular Reactor (SMR) Research and Development Program of the U.S. Department of Energy. The report documents the definition of strategies, functional elements, and the structural architecture of a supervisory control system for multi-modular advanced SMR (AdvSMR) plants. This research activity advances the state of the art by incorporating decision making into the supervisory control system architectural layers through the introduction of a tiered-plant system approach. The report provides a brief history of hierarchical functional architectures and the current state of the art, describes a reference AdvSMR to show the dependencies between systems, presents a hierarchical structure for supervisory control, indicates the importance of understanding trip setpoints, applies a new theoretic approach for comparing architectures, identifies cyber security controls that should be addressed early in system design, and describes ongoing work to develop system requirements and hardware/software configurations.

  20. The architecture of the management system of complex steganographic information

    Evsutin, O. O.; Meshcheryakov, R. V.; Kozlova, A. S.; Solovyev, T. M.

    2017-01-01

    The aim of the study is to create a wide-area information system that allows one to control the processes of generation, embedding, extraction, and detection of steganographic information. In this paper, the following problems are considered: the definition of the system scope and the development of its architecture. For the algorithmic support of the system, classic steganographic methods are used to embed information. Methods of mathematical statistics and computational intelligence are used to identify the embedded information. The main result of the paper is the development of the architecture of the management system of complex steganographic information. The suggested architecture utilizes cloud technology in order to provide its services as a web service over the Internet. It is meant to support the processing of multimedia data streams originating from many sources of different types. An information system built in accordance with the proposed architecture will be used in the following areas: hidden transfer of documents protected by medical secrecy in telemedicine systems; copyright protection of online content in public networks; and prevention of information leakage caused by insiders.

  1. Scalable control program for multiprecursor flow-type atomic layer deposition system

    Selvaraj, Sathees Kannan [Department of Chemical Engineering, University of Illinois at Chicago, Chicago, Illinois 60607 (United States); Takoudis, Christos G., E-mail: takoudis@uic.edu [Department of Chemical Engineering, University of Illinois at Chicago, Chicago, Illinois 60607 and Department of Bioengineering, University of Illinois at Chicago, Chicago, Illinois 60607 (United States)

    2015-01-01

    The authors report the development and implementation of a scalable control program for a flow-type atomic layer deposition (ALD) reactor with multiple precursor delivery lines. The program logic is written and tested in the LabVIEW environment to control an ALD reactor with four precursor delivery lines, depositing up to four layers of different materials in a cyclic manner. The programming logic is conceived to facilitate scaling up, for depositing more layers with multiple precursors, and scaling down, for depositing a single layer with any one precursor in the ALD reactor. The program takes the precursor and oxidizer exposure and purging times as input and controls the sequential opening and closing of the valves to carry out the complex cyclic ALD process. The program can be used to deposit materials from any single line, or in tandem with other lines in any combination and in any sequence.
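
    A minimal sketch of the sequencing logic described above may help. The actual program is implemented in LabVIEW, so the Python below, with its invented valve names, timings, and actuation stub, is purely illustrative:

```python
# Illustrative sketch only: the published program is written in LabVIEW.
# Valve names, timings, and the pulse() stub are assumptions.
import time

def pulse(valve, seconds):
    """Stand-in for a real valve-actuation call: open, wait, close."""
    print(f"open  {valve}")
    time.sleep(seconds)
    print(f"close {valve}")

def ald_cycle(line, t_precursor, t_oxidizer, t_purge):
    """One ALD cycle: precursor dose, purge, oxidizer dose, purge."""
    pulse(line, t_precursor)
    pulse("purge_N2", t_purge)
    pulse("oxidizer", t_oxidizer)
    pulse("purge_N2", t_purge)

# Any single line, or several lines in any sequence, as the abstract notes:
recipe = ["line1", "line2", "line1", "line3"]   # hypothetical layer order
for n, line in enumerate(recipe, start=1):
    print(f"--- cycle {n}: {line} ---")
    ald_cycle(line, t_precursor=0.2, t_oxidizer=0.5, t_purge=2.0)
```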

  2. Space Based Radar-System Architecture Design and Optimization for a Space Based Replacement to AWACS

    Wickert, Douglas

    1997-01-01

    Through a process of system architecture design, system cost modeling, and system architecture optimization, we assess the feasibility of performing the next generation Airborne Warning and Control System (AWACS...

  3. Grid architecture for future distribution system — A cyber-physical system perspective

    Li, Chendan; Dragicevic, Tomislav; Leonardo Diaz Aldana, Nelson

    2017-01-01

    system need more insight into the system architecture of the grid. In this paper, in light of the state-of-the-art control strategies for microgrids, which rely on power electronics systems, a grid architecture model for the future distribution system is proposed based on microgrid clusters. Both the physical

  4. Communication System Architecture for Planetary Exploration

    Braham, Stephen P.; Alena, Richard; Gilbaugh, Bruce; Glass, Brian; Norvig, Peter (Technical Monitor)

    2001-01-01

    Future human missions to Mars will require effective communications supporting exploration activities and scientific field data collection. Constraints on cost, size, weight and power consumption for all communications equipment make optimization of these systems very important. These information and communication systems connect people and systems together into coherent teams performing the difficult and hazardous tasks inherent in planetary exploration. The communication network supporting vehicle telemetry data, mission operations, and scientific collaboration must have excellent reliability and flexibility.

  5. Communication Architecture in Mixed-Reality Simulations of Unmanned Systems

    Martin Selecký

    2018-03-01

    Full Text Available Verification of the correct functionality of multi-vehicle systems in high-fidelity scenarios is required before any deployment of such a complex system, e.g., in missions of remote sensing or in mobile sensor networks. Mixed-reality simulations where both virtual and physical entities can coexist and interact have been shown to be beneficial for development, testing, and verification of such systems. This paper deals with the problems of designing a certain communication subsystem for such highly desirable realistic simulations. Requirements of this communication subsystem, including proper addressing, transparent routing, visibility modeling, or message management, are specified prior to designing an appropriate solution. Then, a suitable architecture of this communication subsystem is proposed together with solutions to the challenges that arise when simultaneous virtual and physical message transmissions occur. The proposed architecture can be utilized as a high-fidelity network simulator for vehicular systems with implicit mobility models that are given by real trajectories of the vehicles. The architecture has been utilized within multiple projects dealing with the development and practical deployment of multi-UAV systems, which support the architecture’s viability and advantages. The provided experimental results show the achieved similarity of the communication characteristics of the fully deployed hardware setup to the setup utilizing the proposed mixed-reality architecture.

  6. Reconfigurable radio systems network architectures and standards

    Iacobucci, Maria Stella

    2013-01-01

    This timely book provides a standards-based view of the development, evolution, techniques and potential future scenarios for the deployment of reconfigurable radio systems.  After an introduction to radiomobile and radio systems deployed in the access network, the book describes cognitive radio concepts and capabilities, which are the basis for reconfigurable radio systems.  The self-organizing network features introduced in 3GPP standards are discussed and IEEE 802.22, the first standard based on cognitive radio, is described. Then the ETSI reconfigurable radio systems functional ar

  7. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility

    Jaschob Daniel

    2012-07-01

    Full Text Available Background: Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results: JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud”) and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease of use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions: JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.

  8. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility.

    Jaschob, Daniel; Riffle, Michael

    2012-07-30

    Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
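
    The client-driven communication pattern described above is what lets workers run behind firewalls: all traffic is initiated by the worker. A minimal sketch of such a polling loop follows; JobCenter itself is written in Java, and the server URL, endpoint, and payload fields here are hypothetical, not JobCenter's actual protocol.

```python
# Sketch of the client-driven polling pattern (JobCenter itself is Java;
# the server URL, endpoint, and payload fields here are hypothetical).
import json
import time
import urllib.request

SERVER = "http://jobcenter.example.org/api"   # hypothetical address

def poll_for_job(worker_id, job_types):
    """Ask the server for work; all traffic is worker-initiated, so the
    worker can sit behind a firewall or in the cloud."""
    req = urllib.request.Request(
        f"{SERVER}/request_job",
        data=json.dumps({"worker": worker_id, "types": job_types}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)   # empty payload when no work is queued

def run(job):
    print("running job", job.get("id"))   # placeholder for the real workflow

def worker_loop(worker_id, job_types):
    while True:
        job = poll_for_job(worker_id, job_types)
        if job:
            run(job)
        else:
            time.sleep(30)   # back off, then poll again

# worker_loop("node-7", ["blast", "alignment"])   # hypothetical job types
```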

  9. A concept of distributed architecture for maintenance robot systems

    Asama, Hajime

    1990-01-01

    Aiming at the development of a robot system for maintenance tasks in nuclear power plants, a concept of distributed architecture for autonomous robot systems is discussed. First, based on an investigation of maintenance tasks, requirements for maintenance robots are introduced, and structures to realize multiple functions are discussed. Then, as a new design strategy for maintenance robot systems, an autonomous and decentralized robot system is proposed, which is composed of multiple robots, computers, and equipment, and the concept of ACTRESS (ACTor-based Robots and Equipments Synthetic System), including a communication framework between robotic components, is designed. Finally, as a model of ACTRESS, an experimental system is developed, which deals with object-pushing tasks by two micromice and an environment modeler communicating with each other. Parallel independent motion and cooperative motion based on communication are reconciled, and the efficiency of the distributed architecture is verified. (author)

  10. Lessons Learned while Exploring Cloud-Native Architectures for NASA EOSDIS Applications and Systems

    Pilone, D.

    2016-12-01

    As new, high-data-rate missions begin collecting data, NASA's Earth Observing System Data and Information System (EOSDIS) archive is projected to grow roughly 20x to over 300 PB by 2025. To prepare for the dramatic increase in data and enable broad scientific inquiry into larger time series and datasets, NASA has been exploring the impact of applying cloud technologies throughout EOSDIS. In this talk we will provide an overview of NASA's prototyping and lessons learned in applying cloud architectures to: highly scalable and extensible ingest and archive of EOSDIS data; going "all-in" on cloud-based application architectures, including "serverless" data processing pipelines, and evaluating approaches to vendor lock-in; rethinking data distribution and approaches to analysis in a cloud environment; and incorporating and enforcing security controls while minimizing the barrier for research efforts to deploy to NASA-compliant, operational environments. NASA's Earth Observing System (EOS) is a coordinated series of satellites for long-term global observations. EOSDIS is a multi-petabyte-scale archive of environmental data that supports global climate change research by providing end-to-end services, from EOS instrument data collection to science data processing to full access to EOS and other earth science data. On a daily basis, EOSDIS ingests, processes, archives, and distributes over 3 terabytes of data from NASA's Earth Science missions, representing over 6000 data products from various science disciplines. EOSDIS has continually evolved to improve the discoverability, accessibility, and usability of high-impact NASA data spanning its multi-petabyte-scale archive of Earth science data products.

  11. A programmable display layer for virtual reality system architectures

    Smit, F.A.; Liere, van R.; Fröhlich, B.

    2010-01-01

    Display systems typically operate at a minimum rate of 60 Hz. However, existing VR-architectures generally produce application updates at a lower rate. Consequently, the display is not updated by the application every display frame. This causes a number of undesirable perceptual artifacts. We

  12. Advances in architectural concepts to support distributed systems design

    Ferreira Pires, Luis; Vissers, C.A.; van Sinderen, Marten J.

    1993-01-01

    This paper presents and discusses some architectural concepts for distributed systems design. These concepts are derived from an analysis of limitations of some currently available standard design languages. We conclude that language design should be based upon the careful consideration of

  13. Heterogeneous System Architectures from APUs to discrete GPUs

    CERN. Geneva

    2013-01-01

    We will present the Heterogeneous System Architectures that the new AMD processors bring with the new GCN-based GPUs and the new APUs. We will show how, together, they represent a huge step forward in programming flexibility and performance efficiency for compute workloads.

  14. Open Architecture Standards and Information Systems (OASIS II ...

    Open Architecture Standards and Information Systems (OASIS II) - Developing Capacity, Sharing Knowledge and Good Principles Across eHealth in Africa. Health care across much of the African continent is hampered by meager resources and a growing burden of disease, with HIV/AIDS, tuberculosis (TB) and malaria ...

  15. The MGS Avionics System Architecture: Exploring the Limits of Inheritance

    Bunker, R.

    1994-01-01

    Mars Global Surveyor (MGS) avionics system architecture comprises much of the electronics on board the spacecraft: electrical power, attitude and articulation control, command and data handling, telecommunications, and flight software. Schedule and cost constraints dictated a mix of new and inherited designs, especially hardware upgrades based on findings of the Mars Observer failure review boards.

  16. Towards an architectural design system based on generic representations

    Pranovich, S.; Achten, H.H.; Wijk, van J.J.; Gero, J.S.

    2002-01-01

    Computer Aided Architectural Design systems offer a broad scope of drawing and modeling techniques for the designer. Nevertheless, they offer limited support for the early phases of the design process. One reason is that the level of abstraction is too low: the user can define walls and such in

  17. The Design of a System Architecture for Mobile Multimedia Computers

    Havinga, Paul J.M.

    2000-01-01

    This chapter discusses the system architecture of a portable computer, called Mobile Digital Companion, which provides support for handling multimedia applications energy efficiently. Because battery life is limited and battery weight is an important factor for the size and the weight of the Mobile

  18. Architecture of personal healthcare information system in ubiquitous healthcare

    Bhardwaj, S.; Sain, M.; Lee, H.-J.; Chung, W.Y.; Slezak, D.; et al., xx

    2009-01-01

    Due to recent developments in ubiquitous healthcare, it is now time to build applications that can work independently and with less intervention from physicians. In this paper we try to build the whole architecture of a personal healthcare information system for ubiquitous healthcare, which also

  19. PC-Cluster based Storage System Architecture for Cloud Storage

    Yee, Tin Tin; Naing, Thinn Thu

    2011-01-01

    The design and architecture of a cloud storage system play a vital role in cloud computing infrastructure, improving storage capacity as well as cost effectiveness. Usually, a cloud storage system provides users with efficient, elastic storage space. One of the challenges of a cloud storage system is that it is difficult to balance providing huge elastic storage capacity against the expensive investment it requires. In order to solve this issue in the cloud storage infrastructure, low ...

  20. Architecture of the APS real-time orbit feedback system

    Carwardine, J. A.; Lenkszus, F. R.

    1997-01-01

    The APS Real-Time Orbit Feedback System is designed to stabilize the orbit of the stored positron beam against low-frequency sources such as mechanical vibration and power supply ripple. A distributed array of digital signal processors is used to measure the orbit and compute corrections at a 1kHz rate. The system also provides extensive beam diagnostic tools. This paper describes the architectural aspects of the system and describes how the orbit correction algorithms are implemented

  1. The Flask Security Architecture: System Support for Diverse Security Policies

    2006-01-01

    Flask microkernel-based operating system, that successfully overcomes these obstacles to policy flexibility. The cleaner separation of mechanism and...other object managers in the system to enforce those access control decisions. Although the prototype system is microkernel-based, the security...mechanisms do not depend on a microkernel architecture and will easily generalize beyond it. The resulting system provides policy flexibility. It sup

  2. Architecture of the APS real-time orbit feedback system.

    Carwardine, J. A.; Lenkszus, F. R.

    1997-11-21

    The APS Real-Time Orbit Feedback System is designed to stabilize the orbit of the stored positron beam against low-frequency sources such as mechanical vibration and power supply ripple. A distributed array of digital signal processors is used to measure the orbit and compute corrections at a 1kHz rate. The system also provides extensive beam diagnostic tools. This paper describes the architectural aspects of the system and describes how the orbit correction algorithms are implemented.

  3. Systemic Risk and Optimal Regulatory Architecture

    Espinosa-Vega, M.A.; Kahn, C.; Matta, R.; Sole, J.

    2011-01-01

    Until the recent financial crisis, the safety and soundness of financial institutions was assessed from the perspective of the individual institution. The financial crisis highlighted the need to take systemic externalities seriously when rethinking prudential oversight and the regulatory

  4. Agents in an Integrated System Architecture

    Hartvig, Susanne C; Andersen, Tom

    1997-01-01

    This paper presents research findings from the development of an expert system and its integration into an integrated environment. Expert systems have proven hard to integrate because of their interactive nature. A prototype environment was developed using new integration technologies, and research findings concerning the use of OLE technology to integrate stand-alone applications are discussed. The prototype shows clear advantages of using OLE technology when developing integrated environments.

  5. ARCHITECTURE OF WEB BASED COMPUTER-AIDED MANUFACTURING SYSTEM

    N. E. Filyukov

    2014-09-01

    Full Text Available The paper deals with the design of a web-based system for Computer-Aided Manufacturing (CAM). Remote applications and databases located in a "private cloud" are proposed as the basis of such a system. The suggested approach comprises: a service-oriented architecture, the use of web applications and web services as modules, multi-agent technologies for implementing information exchange between the components of the system, and the use of a PDM system for managing technology projects within the CAM. The proposed architecture involves converting the CAM into a corporate information system that will provide coordinated functioning of subsystems based on a common information space, parallelize collective work on technology projects, and provide effective control of production planning. A system has been developed within this architecture which makes it rather simple for technological subsystems to connect to the system and to implement their interaction. The system makes it possible to produce a CAM configuration for a particular company from the set of developed subsystems and databases, specifying appropriate access rights for the employees of the company. The proposed approach simplifies the maintenance of software and information support for CAM subsystems due to their central location in the data center. The results can be used as a basis for CAM design and testing within the learning process for development and modernization of the system algorithms, and then can be tested in the extended enterprise.

  6. Exploring a model-driven architecture (MDA) approach to health care information systems development.

    Raghupathi, Wullianallur; Umar, Amjad

    2008-05-01

    To explore the potential of the model-driven architecture (MDA) in health care information systems development. An MDA is conceptualized and developed for a health clinic system to track patient information. A prototype of the MDA is implemented using an advanced MDA tool. The UML provides the underlying modeling support in the form of the class diagram. The PIM-to-PSM transformation rules are applied to generate the prototype application from the model. The result of the research is a complete MDA methodology for developing health care information systems. Additional insights gained include the development of transformation rules and documentation of the challenges in the application of MDA to health care. Design guidelines for future MDA applications are described. The model has the potential for generalizability. The overall approach supports limited interoperability and portability. The research demonstrates the applicability of the MDA approach to health care information systems development. When properly implemented, it has the potential to overcome the challenges of platform (vendor) dependency, lack of open standards, interoperability, portability, scalability, and the high cost of implementation.
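
    To make the PIM-to-PSM idea concrete, the toy sketch below turns a platform-independent class model (a plain dict standing in for a UML class diagram) into platform-specific source code via a single transformation rule. The model, the rule, and the target are invented for illustration and are not the MDA tool used in the study.

```python
# Toy PIM-to-PSM illustration: a platform-independent class model is
# transformed into platform-specific (here, Python) source code.
PIM = {"class": "Patient",
       "attributes": [("name", "str"), ("mrn", "str"), ("age", "int")]}

def to_python_class(model):
    """One 'transformation rule': UML-like class model -> Python source."""
    fields = ", ".join(f"{n}: {t}" for n, t in model["attributes"])
    body = "\n".join(f"        self.{n} = {n}" for n, _ in model["attributes"])
    return (f"class {model['class']}:\n"
            f"    def __init__(self, {fields}):\n{body}\n")

print(to_python_class(PIM))   # emits the generated class definition
```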

  7. A real-time photogrammetry system based on embedded architecture

    S. Y. Zheng

    2014-06-01

    Full Text Available In order to meet the demand for real-time spatial data processing and to improve the online processing capability of photogrammetric systems, a real-time photogrammetry method is proposed in this paper. According to the proposed method, a system based on embedded architecture is then designed: FPGA, ARM+DSP and other embedded computing technologies are used to build a specialized hardware operating environment, the existing photogrammetric algorithms are transplanted to and optimized for the embedded system, and finally real-time photogrammetric data processing is realized. An aerial photogrammetric experiment shows that the method can achieve high-speed and stable online processing of photogrammetric data. The experiment also verifies the feasibility of the proposed real-time photogrammetric system based on embedded architecture. This is the first realization of a real-time aerial photogrammetric system, which raises the online processing efficiency of photogrammetry to a higher level and broadens the application field of photogrammetry.

  8. A Secure System Architecture for Measuring Instruments in Legal Metrology

    Daniel Peters

    2015-03-01

    Full Text Available Embedded systems show a tendency of becoming more and more connected. This fact, combined with the trend towards the Internet of Things, from which measuring instruments are not immune (e.g., smart meters), lets one assume that security in measuring instruments will inevitably play an important role soon. Additionally, measuring instruments have adopted general-purpose operating systems to offer the user broader functionality that is not necessarily restricted to measurement alone. In this paper, a flexible software system architecture is presented that addresses these challenges within the framework of the essential requirements laid down in the Measuring Instruments Directive of the European Union. This system architecture tries to eliminate the risks that general-purpose operating systems introduce by wrapping them, together with dedicated applications, in secure sandboxes, while supervising the communication between the essential parts and the outside world.

  9. Water Recovery System Architecture and Operational Concepts to Accommodate Dormancy

    Carter, Layne; Tabb, David; Anderson, Molly

    2017-01-01

    Future manned missions beyond low Earth orbit will include intermittent periods of extended dormancy. The mission requirement includes the capability for life support systems to support crew activity, followed by a dormant period of up to one year, and subsequently for the life support systems to come back online for additional crewed missions. NASA personnel are evaluating the architecture and operational concepts that will allow the Water Recovery System (WRS) to support such a mission. Dormancy could be a critical issue due to concerns with microbial growth or chemical degradation that might prevent water systems from operating properly when the crewed mission began. As such, it is critical that the water systems be designed to accommodate this dormant period. This paper identifies dormancy issues, concepts for updating the WRS architecture and operational concepts that will enable the WRS to support the dormancy requirement.

  10. Discrete optimization in architecture extremely modular systems

    Zawidzki, Machi

    2017-01-01

    This book is comprised of two parts, both of which explore modular systems: Pipe-Z (PZ) and Truss-Z (TZ), respectively. It presents several methods of creating PZ and TZ structures subjected to discrete optimization. The algorithms presented employ graph-theoretic and heuristic methods. The underlying idea of both systems is to create free-form structures using the minimal number of types of modular elements. PZ is more conceptual, as it forms single-branch mathematical knots with a single type of module. Conversely, TZ is a skeletal system for creating free-form pedestrian ramps and ramp networks among any number of terminals in space. In physical space, TZ uses two types of modules that are mirror reflections of each other. The optimization criteria discussed include: the minimal number of units, maximal adherence to the given guide paths, etc.

  11. Trust-based information system architecture for personal wellness.

    Ruotsalainen, Pekka; Nykänen, Pirkko; Seppälä, Antto; Blobel, Bernd

    2014-01-01

    Modern eHealth, ubiquitous health, and personal wellness systems take place in an unsecured and ubiquitous information space where no predefined trust occurs. This paper presents a novel information model and an architecture for trust-based privacy management of personal health and wellness information in a ubiquitous environment. The architecture enables a person to calculate a dynamic and context-aware trust value for each service provider and to use it to design personal privacy policies for the trustworthy use of health and wellness services. For trust calculation, a novel set of measurable, context-aware, and health-information-sensitive attributes is developed. The architecture enables a person to manage his or her privacy in a ubiquitous environment by formulating context-aware and service-provider-specific policies. Focus groups and information modelling were used to develop a wellness information model. A system analysis method based on sequential steps, which makes it possible to combine the results of the analysis of privacy and trust concerns with the selection of trust and privacy services, was used to develop the information system architecture. Its services (e.g., trust calculation, decision support, policy management, and policy binding services) and the developed attributes enable a person to define situation-aware policies that regulate the way his or her wellness and health information is processed.
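
    One way to read "calculate a dynamic and context-aware trust value" is as a weighted aggregation of measurable attributes, gated by a personal policy threshold. The sketch below illustrates that reading; the attribute names, weights, and threshold are invented, not the paper's actual attribute set.

```python
# Hedged sketch: a per-provider trust value as a weighted sum of measurable
# attributes, used to gate data release. All names and weights are invented.
ATTRIBUTE_WEIGHTS = {
    "certified_privacy_policy": 0.30,
    "past_breach_free_record":  0.25,
    "data_minimization":        0.20,
    "context_sensitivity_fit":  0.25,
}

def trust_value(provider_attributes):
    """provider_attributes: attribute name -> score in [0, 1]."""
    return sum(w * provider_attributes.get(a, 0.0)
               for a, w in ATTRIBUTE_WEIGHTS.items())

def allowed(provider_attributes, policy_threshold=0.7):
    # A personal privacy policy could gate data release on the trust value.
    return trust_value(provider_attributes) >= policy_threshold

print(allowed({"certified_privacy_policy": 1.0, "past_breach_free_record": 0.8,
               "data_minimization": 0.6, "context_sensitivity_fit": 0.9}))
```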

  12. SAFARI optical system architecture and design concept

    Pastor, Carmen; Jellema, Willem; Zuluaga-Ramírez, Pablo; Arrazola, David; Fernández-Rodriguez, M.; Belenguer, Tomás; González Fernández, Luis M.; Audley, Michael D.; Evers, Jaap; Eggens, Martin; Torres Redondo, Josefina; Najarro, Francisco; Roelfsema, P.

    2016-01-01

    SpicA FAR infrared Instrument, SAFARI, is one of the instruments planned for the SPICA mission. The SPICA mission is the next great leap forward in space-based far-infrared astronomy and will study the evolution of galaxies, stars and planetary systems. SPICA will utilize a deeply cooled 2.5m-class

  13. A fully defined and scalable 3D culture system for human pluripotent stem cell expansion and differentiation

    Lei, Yuguo; Schaffer, David V.

    2013-12-01

    Human pluripotent stem cells (hPSCs), including human embryonic stem cells and induced pluripotent stem cells, are promising for numerous biomedical applications, such as cell replacement therapies, tissue and whole-organ engineering, and high-throughput pharmacology and toxicology screening. Each of these applications requires large numbers of cells of high quality; however, the scalable expansion and differentiation of hPSCs, especially for clinical utilization, remains a challenge. We report a simple, defined, efficient, scalable, and good manufacturing practice-compatible 3D culture system for hPSC expansion and differentiation. It employs a thermoresponsive hydrogel that combines easy manipulation and completely defined conditions, free of any human- or animal-derived factors, and entailing only recombinant protein factors. Under an optimized protocol, the 3D system enables long-term, serial expansion of multiple hPSC lines with a high expansion rate (∼20-fold per 5-d passage, for a 10{sup 72}-fold expansion over 280 d), yield (∼2.0 × 10{sup 7} cells per mL of hydrogel), and purity (∼95% Oct4+), even with single-cell inoculation, all of which offer considerable advantages relative to current approaches. Moreover, the system enabled 3D directed differentiation of hPSCs into multiple lineages, including dopaminergic neuron progenitors with a yield of ∼8 × 10{sup 7} dopaminergic progenitors per mL of hydrogel and ∼80-fold expansion by the end of a 15-d derivation. This versatile system may be useful at numerous scales, from basic biological investigation to clinical development.
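
    The cumulative figure follows from the per-passage rate; as a quick consistency check (the numbers are the abstract's, the arithmetic is ours):

```latex
% 280 d of culture at one passage every 5 d gives 280/5 = 56 passages.
% At ~20-fold expansion per passage, the cumulative expansion is
\[
  20^{56} \;=\; 10^{\,56 \log_{10} 20} \;\approx\; 10^{72.9},
\]
% i.e., on the order of 10^{72}-fold over 280 d, as reported.
```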

  14. PLEIADES SYSTEM ARCHITECTURE AND MAIN PERFORMANCES

    M. A. Gleyzes

    2012-07-01

    Full Text Available France, under the leadership of the French Space Agency (CNES), has set up a cooperative program with Austria, Belgium, Spain, and Sweden in order to develop a space Earth Observation system called PLEIADES. PLEIADES is a dual system, meaning that it is intended to fulfill an extended panel of both civilian and defense users' needs. This paper reports the status of the satellite after its launch and in-orbit commissioning; the first PLEIADES satellite was launched at the end of 2011, and the second will be launched about 12 months later. It describes the main mission characteristics and performance status, and explains how the system, satellite, and ground segment have been designed to be compliant with dual exploitation between civilian and defense partners. The system is based on a set of newly developed European technologies. In order to maximize the agility of the satellite, weight and inertia have been reduced by using a compact hexagonal shape for the satellite bus. The optical mission consists of Earth observation with 0.7 m nadir resolution for the panchromatic band and 2.8 m nadir resolution for the four multispectral bands; the image swath is about 20 km. PLEIADES delivers high-resolution optical products consisting of a panchromatic image into which a four-band multispectral image is merged, orthorectified on a Digital Terrain Model (DTM). Thanks to the huge satellite agility obtained with control moment gyros as actuators, the optical system also delivers instantaneous stereo images under different stereoscopic conditions, as well as mosaic images acquired along the track, thus enlarging the field of view. The ground segment is composed of a dual ground center, located on CNES Toulouse premises, in charge of preparing the dual mission command plan and of the real-time contacts with the satellite through a control center. The dual ground center

  15. Advanced information processing system for advanced launch system: Avionics architecture synthesis

    Lala, Jaynarayan H.; Harper, Richard E.; Jaskowiak, Kenneth R.; Rosch, Gene; Alger, Linda S.; Schor, Andrei L.

    1991-01-01

    The Advanced Information Processing System (AIPS) is a fault-tolerant distributed computer system architecture that was developed to meet the real time computational needs of advanced aerospace vehicles. One such vehicle is the Advanced Launch System (ALS) being developed jointly by NASA and the Department of Defense to launch heavy payloads into low earth orbit at one tenth the cost (per pound of payload) of the current launch vehicles. An avionics architecture that utilizes the AIPS hardware and software building blocks was synthesized for ALS. The AIPS for ALS architecture synthesis process starting with the ALS mission requirements and ending with an analysis of the candidate ALS avionics architecture is described.

  16. Control Architecture Modeling for Future Power Systems

    Heussen, Kai

    electricity exchange. However, at the same time, it seems that the overall system design cannot keep up by simply adapting in response to changes, but that also new strategies have to be designed in anticipation. Changes to the electricity markets have been suggested to adapt to the limited predictability of wind power, and several new control strategies have been proposed, in particular to enable the control of distributed energy resources, including, for example, distributed generation or electric vehicles. Market designs addressing the procurement of balancing resources are highly dependent on the operation strategies specifying the resource requirements. How should one decide which control strategy and market configuration is best for a future power system? Most research up to this point has addressed single isolated aspects of this design problem. Those of the ideas that fit with current markets...

  17. APT LLRF control system functionality and architecture

    Regan, A.H.; Rohlev, A.S.; Ziomek, C.D.

    1996-01-01

    The low-level RF (LLRF) control system for the Accelerator Production of Tritium (APT) will perform various functions. Foremost is the feedback control of the accelerating fields within the cavity in order to maintain field stability within ±1% amplitude and 1 degree phase. The feedback control system requires a phase-stable RF reference subsystem signal to correctly phase each cavity. Also, instead of a single klystron RF source for individual accelerating cavities, multiple klystrons will drive a string of resonantly coupled cavities, based on input from a single LLRF feedback control system. To achieve maximum source efficiency, we will be employing single fast feedback controls around individual klystrons such that the gain and phase characteristics of each will be 'identical'. In addition, resonance control is performed by providing a proper drive signal to structure cooling water valves in order to keep the cavity resonant during operation. To quickly respond to RF shutdowns, and hence rapid accelerating cavity cool-down, due to RF fault conditions, drive frequency agility in the main feedback control subsystem will also be incorporated. Top-level block diagrams will be presented and described for each of the aforementioned subsystems as they will first be developed and demonstrated on the Low Energy Demonstrator Accelerator (LEDA).

  18. APT LLRF control system functionality and architecture

    Regan, A.H.; Rohlev, A.S.; Ziomek, C.D.

    1996-01-01

    The low-level RF (LLRF) control system for the Accelerator Production of Tritium (APT) will perform various functions. Foremost is the feedback control of the accelerating fields within the cavity in order to maintain field stability within ± 1% amplitude and 1 degree phase. The feedback control system requires a phase-stable RF reference subsystem signal to correctly phase each cavity. Also, instead of a single klystron RF source for individual accelerating cavities, multiple klystrons will drive a string of resonantly coupled cavities, based on input from a single LLRF feedback control system. To achieve maximum source efficiency, we will be employing single fast feedback controls around individual klystrons such that the gain and phase characteristics of each will be 'identical'. In addition, the resonance condition of the cavities is monitored and maintained. To quickly respond to RF shutdowns, and hence rapid accelerating cavity cool-down, due to RF fault conditions, drive frequency agility in the main feedback control subsystem will also be incorporated. Top level block diagrams will be presented and described as they will first be developed and demonstrated on the Low Energy Demonstrator Accelerator (LEDA). (author)

  19. Memory intensive functional architecture for distributed computer control systems

    Dimmler, D.G.

    1983-10-01

    A memory-intensive functional architecture for distributed data-acquisition, monitoring, and control systems with large numbers of nodes has been conceptually developed and applied in several large-scale and some smaller systems. This discussion concentrates on: (1) the basic architecture; (2) recent expansions of the architecture which now become feasible in view of the rapidly developing component technologies in microprocessors and functional large-scale integration circuits; and (3) the implementation of some key hardware and software structures and one system implementation, a system performing control and data acquisition for a neutron spectrometer at the Brookhaven High Flux Beam Reactor. The spectrometer is equipped with a large-area position-sensitive neutron detector

  20. A software architecture for adaptive modular sensing systems.

    Lyle, Andrew C; Naish, Michael D

    2010-01-01

    By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration.
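
    The plug-and-play behavior described, in which adding or removing a module causes the composite sensor to re-derive its geometry and collective identity, can be sketched in a few lines. The class names, geometry rule, and identity string below are illustrative assumptions, not the paper's template-algorithm API.

```python
# Sketch of the plug-and-play idea: each module advertises its kind and
# position; the composite re-derives geometry and identity on every change.
class Module:
    def __init__(self, kind, offset_mm):
        self.kind = kind            # e.g., "range", "temperature"
        self.offset_mm = offset_mm  # position within the composite

class CompositeSensor:
    def __init__(self):
        self.modules = []
        self.identity = "empty"

    def attach(self, module):
        self.modules.append(module)
        self._reconfigure()

    def detach(self, module):
        self.modules.remove(module)
        self._reconfigure()

    def _reconfigure(self):
        # Recompute overall geometry and assume a collective identity.
        span = max((m.offset_mm for m in self.modules), default=0)
        kinds = sorted({m.kind for m in self.modules})
        self.identity = f"{'+'.join(kinds)} array, {span} mm span"

s = CompositeSensor()
s.attach(Module("range", 0)); s.attach(Module("range", 40))
print(s.identity)   # -> "range array, 40 mm span"
```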

  1. A Software Architecture for Adaptive Modular Sensing Systems

    Andrew C. Lyle

    2010-08-01

    Full Text Available By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration.

  2. The Architecture and Administration of the ATLAS Online Computing System

    Dobson, M; Ertorer, E; Garitaonandia, H; Leahu, L; Leahu, M; Malciu, I M; Panikashvili, E; Topurov, A; Ünel, G

    2006-01-01

    The needs of the ATLAS experiment at the upcoming LHC accelerator, CERN, in terms of data transmission rates and processing power require a large cluster of computers (of the order of thousands) administered and exploited in a coherent and optimal manner. Requirements such as stability, robustness, and fast recovery in case of failure impose a server-client system architecture with servers distributed in a tree-like structure and clients booted from the network. For security reasons, the system should be accessible only through an application gateway and, also to ensure the autonomy of the system, the network services should be provided internally by dedicated machines in synchronization with the central services of CERN's IT department. The paper describes a small-scale implementation of the system architecture that fits the given requirements and constraints. Emphasis will be put on the mechanisms and tools used to net boot the clients via the "Boot With Me" project and to synchronize information within the cluster via t...

  3. Reducing Development and Operations Costs using NASA's "GMSEC" Systems Architecture

    Smith, Dan; Bristow, John; Crouse, Patrick

    2007-01-01

    This viewgraph presentation reviews the role of the Goddard Mission Services Evolution Center (GMSEC) in reducing development and operations costs in handling the massive data from NASA missions. The goals of GMSEC systems architecture development are to (1) simplify integration and development, (2) facilitate technology infusion over time, (3) support evolving operational concepts, and (4) allow for a mix of heritage, COTS, and new components. The first three missions (i.e., the Tropical Rainfall Measuring Mission (TRMM), Small Explorer (SMEX) missions - SWAS, TRACE, SAMPEX, and the ST5 3-Satellite Constellation System) each selected a different telemetry and command system. The results show that GMSEC's message-bus, component-based framework architecture is well proven and provides significant benefits over traditional flight and ground data system designs. The missions benefit through an increased set of product options, enhanced automation, lower cost, and new mission-enabling operations concept options.
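
    The message-bus, component-based pattern underlying GMSEC can be illustrated with a minimal in-process publish/subscribe stand-in. This is not the GMSEC API; the bus class and subject names below are invented, and a real deployment would use networked middleware rather than in-process callbacks.

```python
# Minimal in-process publish/subscribe stand-in illustrating the
# message-bus pattern (NOT the GMSEC API; subject names are invented).
from collections import defaultdict

class Bus:
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, subject, callback):
        self.subs[subject].append(callback)

    def publish(self, subject, message):
        for cb in self.subs[subject]:
            cb(message)

bus = Bus()
# Heritage, COTS, and new components can all plug into the same bus:
bus.subscribe("MISSION.SAT1.TLM", lambda m: print("archive:", m))
bus.subscribe("MISSION.SAT1.TLM", lambda m: print("limit-check:", m))
bus.publish("MISSION.SAT1.TLM", {"battery_v": 28.1})
```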

  4. A Proposed Information Architecture for Telehealth System Interoperability

    Craft, R.L.; Funkhouser, D.R.; Gallagher, L.K.; Garcia, R.J.; Parks, R.C.; Warren, S.

    1999-04-20

    We propose an object-oriented information architecture for telemedicine systems that promotes secure "plug-and-play" interaction between system components through standardized interfaces, communication protocols, messaging formats, and data definitions. In this architecture, each component functions as a black box, and components plug together in a "lego-like" fashion to achieve the desired device or system functionality. Telemedicine systems today rely increasingly on distributed, collaborative information technology during the care delivery process. While these leading-edge systems are bellwethers for highly advanced telemedicine, most are custom-designed and do not interoperate with other commercial offerings. Users are limited to a set of functionality that a single vendor provides and must often pay high prices to obtain this functionality, since vendors in this marketplace must deliver entire systems in order to compete. Besides increasing corporate research and development costs, this inhibits the ability of the user to make intelligent purchasing decisions regarding best-of-breed technologies. This paper proposes a reference architecture for plug-and-play telemedicine systems that addresses these issues.

  5. A Proposed Information Architecture for Telehealth System Interoperability

    Warren, S.; Craft, R.L.; Parks, R.C.; Gallagher, L.K.; Garcia, R.J.; Funkhouser, D.R.

    1999-04-07

    Telemedicine technology is rapidly evolving. Whereas early telemedicine consultations relied primarily on video conferencing, consultations today may utilize video conferencing, medical peripherals, store-and-forward capabilities, electronic patient record management software, and/or a host of other emerging technologies. These remote care systems rely increasingly on distributed, collaborative information technology during the care delivery process, in its many forms. While these leading-edge systems are bellwethers for highly advanced telemedicine, the remote care market today is still immature. Most telemedicine systems are custom-designed and do not interoperate with other commercial offerings. Users are limited to a set of functionality that a single vendor provides and must often pay high prices to obtain this functionality, since vendors in this marketplace must deliver entire systems in order to compete. Besides increasing corporate research and development costs, this inhibits the ability of the user to make intelligent purchasing decisions regarding best-of-breed technologies. We propose a secure, object-oriented information architecture for telemedicine systems that promotes plug-and-play interaction between system components through standardized interfaces, communication protocols, messaging formats, and data definitions. In this architecture, each component functions as a black box, and components plug together in a lego-like fashion to achieve the desired device or system functionality. The architecture will support various ongoing standards work in the medical device arena.

  6. Development of modular scalable pulsed power systems for high power magnetized plasma experiments

    Bean, I. A.; Weber, T. E.; Adams, C. S.; Henderson, B. R.; Klim, A. J.

    2017-10-01

    New pulsed power switches and trigger drivers are being developed in order to explore higher energy regimes in the Magnetic Shock Experiment (MSX) at Los Alamos National Laboratory. To achieve the required plasma velocities, high-power (approx. 100 kV, 100s of kA), high-charge-transfer (approx. 1 C), low-jitter (few ns) gas switches are needed. A study has been conducted on the effects of various electrode geometries and materials, dielectric media, and triggering strategies, resulting in the design of a low-inductance annular field-distortion switch, optimized for use with dry air at 90 psig and triggered by a low-jitter, rapid-rise-time solid-state Linear Transformer Driver. The switch geometry and electrical characteristics are designed to be compatible with Syllac-style capacitors, and are intended to be deployed in modular configurations. The scalable nature of this approach will enable the rapid design and implementation of a wide variety of high-power magnetized plasma experiments. This work is supported by the U.S. Department of Energy, National Nuclear Security Administration. Approved for unlimited release, LA-UR-17-2578.

  7. Flexible Architecture for FPGAs in Embedded Systems

    Clark, Duane I.; Lim, Chester N.

    2012-01-01

    Commonly, field-programmable gate arrays (FPGAs) being developed for cPCI embedded systems include the bus interface in the FPGA. This complicates development because the interface is complex and requires a lot of development time and FPGA resources. In addition, flight qualification requires that a substantial amount of time be devoted to just this interface. Another complication of putting the cPCI interface into the FPGA being developed is that configuration information loaded into the device by the cPCI microprocessor is lost when a new bit file is loaded, requiring cumbersome operations to return the system to an operational state. Finally, SRAM-based FPGAs are typically programmed via specialized cables and software, with programming files being loaded either directly into the FPGA or into PROM devices. This can be cumbersome when doing FPGA development in an embedded environment, and does not have an easy path to flight. Currently, FPGAs used in space applications are usually programmed via multiple space-qualified PROM devices that are physically large and require extra circuitry (typically including a separate one-time programmable FPGA) to enable them to be used for this application. This technology adds a cPCI interface device with a simple, flexible, high-performance backend interface supporting multiple backend FPGAs. It includes a mechanism for programming the FPGAs directly via the microprocessor in the embedded system, eliminating specialized hardware, software, and PROM devices and their associated circuitry. It has a direct path to flight, and no extra hardware and minimal software are required to support reprogramming in flight. The device added is currently a small FPGA, but an advantage of this technology is that the design of the device does not change, regardless of the application in which it is being used. This means that it needs to be qualified for flight only once, and is suitable for one-time programmable devices or an application

  8. Robotic control architecture development for automated nuclear material handling systems

    Merrill, R.D.; Hurd, R.; Couture, S.; Wilhelmsen, K.

    1995-02-01

    Lawrence Livermore National Laboratory (LLNL) is engaged in developing automated systems for handling materials for mixed waste treatment, nuclear pyrochemical processing, and weapon components disassembly. In support of these application areas there is an extensive robotic development program. This paper will describe the portion of this effort at LLNL devoted to control system architecture development, and review two applications currently being implemented which incorporate these technologies

  9. A new architecture for Fermilab's cryogenic control system

    Smolucha, J.; Frank, A.; Seino, K.; Lackey, S.

    1992-01-01

    In order to achieve design energy in the Tevatron, the magnet system will be operated at lower temperatures. The increased requirements of operating the Tevatron at lower temperatures necessitated a major upgrade to both the hardware and software components of the cryogenic control system. The new architecture is based on a distributed topology which couples Fermilab-designed I/O subsystems to high-performance 80386 execution processors via a variety of networks including: Arcnet, iPSB, and token ring. (author)

  10. System Architecture Modeling for Technology Portfolio Management using ATLAS

    Thompson, Robert W.; O'Neil, Daniel A.

    2006-01-01

    Strategic planners and technology portfolio managers have traditionally relied on consensus-based tools, such as Analytical Hierarchy Process (AHP) and Quality Function Deployment (QFD), in planning the funding of technology development. While useful to a certain extent, these tools are limited in their ability to fully quantify the impact of a technology choice on system mass, system reliability, project schedule, and lifecycle cost. The Advanced Technology Lifecycle Analysis System (ATLAS) aims to provide strategic planners a decision support tool for analyzing technology selections within a Space Exploration Architecture (SEA). Using ATLAS, strategic planners can select physics-based system models from a library, configure the systems with technologies and performance parameters, and plan the deployment of a SEA. Key parameters for current and future technologies have been collected from subject-matter experts and other documented sources in the Technology Tool Box (TTB). ATLAS can be used to compare the technical feasibility and economic viability of a set of technology choices for one SEA, and compare it against another set of technology choices or another SEA. System architecture modeling in ATLAS is a multi-step process. First, the modeler defines the system-level requirements. Second, the modeler identifies technologies of interest whose impact on the SEA is to be assessed. Third, the system modeling team creates models of architecture elements (e.g. launch vehicles, in-space transfer vehicles, crew vehicles) if they are not already in the model library. Finally, the architecture modeler develops a script for the ATLAS tool to run, and the results for comparison are generated.
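
    To make the roll-up step concrete, the following is a minimal sketch of the kind of figure-of-merit comparison such a tool performs. The class, the technology names, and all numbers are invented for illustration; this is not the ATLAS tool, its script format, or Technology Tool Box data.

        # Hypothetical ATLAS-style comparison of two technology sets.
        from dataclasses import dataclass

        @dataclass
        class Technology:
            name: str
            mass_kg: float      # contribution to system mass
            cost_musd: float    # contribution to lifecycle cost ($M)
            reliability: float  # probability of functioning over the mission

        def score(techs):
            """Roll up simple figures of merit for one set of choices."""
            r = 1.0
            for t in techs:
                r *= t.reliability
            return {"mass_kg": sum(t.mass_kg for t in techs),
                    "cost_musd": sum(t.cost_musd for t in techs),
                    "reliability": r}

        baseline = [Technology("NiH2 battery", 120, 4.0, 0.995),
                    Technology("storable-propellant stage", 900, 60.0, 0.990)]
        advanced = [Technology("Li-ion battery", 70, 6.5, 0.990),
                    Technology("cryogenic stage", 750, 85.0, 0.985)]

        for label, techs in (("baseline", baseline), ("advanced", advanced)):
            print(label, score(techs))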

  11. Web based system architecture for long pulse remote experimentation

    Heras, E. de las; Lastra, D.; Vega, J.; Castro, R.; Ruiz, M.; Barrera, E.

    2010-01-01

    Remote experimentation (RE) methods will be essential in next generation fusion devices. Requirements for long pulse RE will be: on-line data visualization, on-line monitoring of data acquisition processes, and on-line interaction with data acquisition systems (start, stop, or set-up modifications). Note that these methods are not oriented to real-time control of fusion plant devices. INDRA Sistemas S.A., CIEMAT (Centro de Investigaciones Energeticas Medioambientales y Tecnologicas) and UPM (Universidad Politecnica de Madrid) have designed a specific software architecture for these purposes. The architecture can be supported on the BeansNet platform, whose integration with an application server provides an adequate solution to the requirements. BeansNet is a JINI-based framework developed by INDRA which simplifies the implementation of a remote experimentation model based on a Service Oriented Architecture. The new software architecture has been designed on the basis of the experience acquired in the development of an upgrade of the TJ-II remote experimentation system.

  12. Modular Integrated Monitoring System (MIMS). Architecture and implementation

    Funkhouser, D.R.; Davidson, G.W.; Deland, S.M.

    1999-01-01

    The MIMS is being developed as a cost-effective means of performing safeguards in unattended remote monitoring applications. Based on industry standards and an open systems approach, the MIMS architecture supports both data acquisition and data review subsystems. Data includes images as well as discrete and analog sensor outputs. The MIMS uses an Echelon LonWorks network as a standard means and method of data acquisition from the sensor. A common data base not only stores sensor and image data but also provides a structure by which dynamic changes to the sensor system can be reflected in the data acquisition and data review subsystems without affecting the execution software. The architecture includes standards for wide area communications between data acquisition systems and data review systems. Data authentication is provided as an integral part of the design. The MIMS also provides a generic set of tools for analyzing both system behavior and observed events. The MIMS software implements this architecture by combining the use of commercial applications with a set of custom 16 and 32 bit Microsoft Windows applications which are run under Windows NT and Windows 95 operating systems. (author)

  13. MOLNs: A Cloud Platform for Interactive, Reproducible, and Scalable Spatial Stochastic Computational Experiments in Systems Biology Using PyURDME.

    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas

    2016-01-01

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
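
    To give a flavor of what such packages automate, the sketch below runs a toy spatial stochastic (RDME-style) simulation: molecules of one species hop between the voxels of a 1D domain and degrade, driven by a direct-method Gillespie loop. It is an illustration of the underlying technique only, with invented rates; it is not the PyURDME API.

        # Toy RDME: diffusion (rate D per neighbour) plus degradation (k_deg).
        import random

        n_vox, D, k_deg, t_end = 20, 1.0, 0.1, 10.0
        state = [0] * n_vox
        state[n_vox // 2] = 100           # initial bolus in the centre voxel
        t = 0.0
        while t < t_end:
            # per-voxel propensity: two diffusion channels plus degradation
            a = [n * (2 * D + k_deg) for n in state]
            a_total = sum(a)
            if a_total == 0:
                break
            t += random.expovariate(a_total)      # time to the next event
            r, i = random.uniform(0, a_total), 0  # pick a voxel by propensity
            while r > a[i]:
                r -= a[i]
                i += 1
            state[i] -= 1
            if random.uniform(0, 2 * D + k_deg) < 2 * D:   # diffusion event
                j = i + random.choice((-1, 1))
                state[j if 0 <= j < n_vox else i] += 1     # reflect at walls
        print(f"t = {t:.2f}, molecules left = {sum(state)}")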

  14. Architecture-driven Migration of Legacy Systems to Cloud-enabled Software

    Ahmad, Aakash; Babar, Muhammad Ali

    2014-01-01

    of legacy systems to cloud computing. The framework leverages the software reengineering concepts that aim to recover the architecture from legacy source code. Then the framework exploits the software evolution concepts to support architecture-driven migration of legacy systems to cloud-based architectures.... The Legacy-to-Cloud Migration Horseshoe comprises four processes: (i) architecture migration planning, (ii) architecture recovery and consistency, (iii) architecture transformation and (iv) architecture-based development of cloud-enabled software. We aim to discover, document and apply the migration...

  15. Architectural Analysis of Complex Evolving Systems of Systems

    Lindvall, Mikael; Stratton, William C.; Sibol, Deane E.; Ray, Arnab; Ackemann, Chris; Yonkwa, Lyly; Ganesan, Dharma

    2009-01-01

    The goal of this collaborative project between FC-MD, APL, and GSFC, supported by the NASA IV&V Software Assurance Research Program (SARP), was to develop a tool, Dynamic SAVE, or Dyn-SAVE for short, for analyzing architectures of systems of systems. The project team comprised the principal investigator (PI) from FC-MD and four other FC-MD scientists (part time) and several FC-MD students (full time), as well as two APL software architects (part time) and one NASA POC (part time). The PI and FC-MD scientists together with APL architects were responsible for requirements analysis, and for applying and evaluating the Dyn-SAVE tool and method. The PI and a group of FC-MD scientists were responsible for improving the method and conducting outreach activities, while another group of FC-MD scientists were responsible for development and improvement of the tool. Oversight and reporting were conducted by the PI and NASA POC. The project team produced many results including several prototypes of the Dyn-SAVE tool and method, several case studies documenting how the tool and method were applied to APL's software systems, and several published papers in highly respected conferences and journals. Dyn-SAVE, as developed and enhanced throughout this research period, is a software tool intended for software developers and architects, software integration testers, and persons who need to analyze software systems from the point of view of how they communicate with other systems. Using the tool, the user specifies the planned communication behavior of the system modeled as a sequence diagram. The user then captures and imports the actual communication behavior of the system, which is then converted and visualized as a sequence diagram by Dyn-SAVE. After mapping the planned to the actual and specifying parameter and timing constraints, Dyn-SAVE detects and highlights deviations between the planned and the actual behavior. Requirements based on the need to analyze two inter-system

  16. Integrating Environmental and Information Systems Management: An Enterprise Architecture Approach

    Noran, Ovidiu

    Environmental responsibility is fast becoming an important aspect of strategic management as the reality of climate change settles in and relevant regulations are expected to tighten significantly in the near future. Many businesses react to this challenge by implementing environmental reporting and management systems. However, the environmental initiative is often not properly integrated in the overall business strategy and its information system (IS) and as a result the management does not have timely access to (appropriately aggregated) environmental information. This chapter argues for the benefit of integrating the environmental management (EM) project into the ongoing enterprise architecture (EA) initiative present in all successful companies. This is done by demonstrating how a reference architecture framework and a meta-methodology using EA artefacts can be used to co-design the EM system, the organisation and its IS in order to achieve a much needed synergy.

  17. Wireless Power Transfer System Architectures for Portable or Implantable Applications

    Yan Lu

    2016-12-01

    This paper discusses near-field inductive coupling wireless power transfer (WPT) at the system level, with detailed analyses of each state-of-the-art WPT output voltage regulation topology. For device miniaturization and power loss reduction, several novel architectures for efficient WPT were proposed in recent years to reduce the number of passive components as well as to improve system efficiency or flexibility. These schemes are systematically studied and discussed in this paper. The main contribution of this paper is to provide design guidelines for WPT system design. In addition, possible combinations of the WPT building block configurations are summarized, compared, and investigated for potential new architectures.

  18. A modeling process to understand complex system architectures

    Robinson, Santiago Balestrini

    2009-12-01

    In recent decades, several tools have been developed by the armed forces and their contractors to test the capability of a force. These campaign-level analysis tools, oftentimes characterized as constructive simulations, are generally expensive to create and execute, and at best they are extremely difficult to verify and validate. This central observation, that analysts are relying more and more on constructive simulations to predict the performance of future networks of systems, leads to the two central objectives of this thesis: (1) to enable the quantitative comparison of architectures in terms of their ability to satisfy a capability without resorting to constructive simulations, and (2) when constructive simulations must be created, to quantitatively determine how to spend the modeling effort amongst the different system classes. The first objective led to Hypothesis A, the first main hypothesis, which states that by studying the relationships between the entities that compose an architecture, one can infer how well it will perform a given capability. The method used to test the hypothesis is based on two assumptions: (1) the capability can be defined as a cycle of functions, and (2) it must be possible to estimate the probability that a function-based relationship occurs between any two types of entities. If these two requirements are met, then by creating random functional networks, different architectures can be compared in terms of their ability to satisfy a capability. In order to test this hypothesis, a novel process for creating representative functional networks of large-scale system architectures was developed. The process, named Digraph Modeling for Architectures (DiMA), was tested by comparing its results to those of complex constructive simulations. Results indicate that if the inputs assigned to DiMA are correct (in the tests they were based on time-averaged data obtained from the ABM), DiMA is able to identify which of any two
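
    The random-functional-network idea lends itself to a compact Monte Carlo illustration: if each function-to-function relationship in the capability cycle is realised with some probability, architectures can be ranked by how often the full cycle closes. The function names and probabilities below are invented for illustration and are not DiMA itself.

        # Estimate the probability that a find->decide->engage->assess cycle
        # closes, given per-link realisation probabilities for two candidate
        # architectures (invented numbers).
        import random

        arch_a = {"find": 0.9, "decide": 0.8, "engage": 0.7, "assess": 0.6}
        arch_b = {"find": 0.7, "decide": 0.9, "engage": 0.9, "assess": 0.5}

        def p_cycle(arch, trials=100_000):
            ok = sum(all(random.random() < p for p in arch.values())
                     for _ in range(trials))
            return ok / trials

        print("A:", p_cycle(arch_a), "B:", p_cycle(arch_b))

    With independent links the answer is simply the product of the link probabilities; the Monte Carlo form is shown because realistic link models are correlated and no longer admit a closed form.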

  19. Modular Integrated Monitoring System (MIMS) - architecture and implementation

    Funkhouser, D.R.; Davidson, G.W.; Deland, S.M.

    1997-01-01

    The MIMS is being developed as a cost-effective means of performing safeguards in unattended remote monitoring applications. Based on industry standards and an open systems approach, the MIMS architecture supports both data acquisition and data review subsystems. Data includes images as well as discrete and analog sensor outputs. The MIMS uses an Echelon LonWorks network as a standard means and method of data acquisition from the sensor. A common data base not only stores sensor and image data but also provides a structure by which dynamic changes to the sensor system can be reflected in the data acquisition and data review subsystems without affecting the execution software. The architecture includes standards for wide area communications between data acquisition systems and data review systems. Data authentication is provided as an integral part of the design. The MIMS software implements this architecture by combining the use of commercial applications with a set of custom 16 and 32 bit Microsoft Windows applications which are run under Windows NT and Windows 95 operating systems.

  20. Architecture for WSN Nodes Integration in Context Aware Systems Using Semantic Messages

    Larizgoitia, Iker; Muguira, Leire; Vazquez, Juan Ignacio

    Wireless sensor networks (WSN) are becoming extremely popular in the development of context aware systems. Traditionally WSN have been focused on capturing data, which was later analyzed and interpreted in a server with more computational power. In this kind of scenario the problem of representing the sensor information needs to be addressed. Every node in the network might have different sensors attached; therefore the corresponding packet structures will differ. The server has to be aware of the meaning of every single structure and data item in order to be able to interpret them. Multiple sensors, multiple nodes, and multiple packet structures, none following a standard format, are neither scalable nor interoperable. Context aware systems have solved this problem with the use of semantic technologies. They provide a common framework to achieve a standard definition of any domain. Nevertheless, these representations are computationally expensive, so a WSN cannot afford them. The work presented in this paper tries to bridge the gap between the sensor information and its semantic representation, by defining a simple architecture that enables the definition of this information natively in a semantic way, achieving the integration of the semantic information in the network packets. This will have several benefits, the most important being the possibility of promoting every WSN node to a real semantic information source.
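
    The packet-level idea can be sketched in a few lines: instead of an ad hoc binary layout that the server must know per node, each packet carries a compact, ontology-coded triple. The code tables and field sizes below are assumptions for illustration only.

        # Encode/decode a semantic triple (subject, predicate, value) using
        # small integer codes that stand in for URIs in a shared ontology.
        import struct

        SUBJECT = {1: "node17/thermometer"}
        PREDICATE = {1: "obs:hasTemperatureC"}

        def encode(subject_id, predicate_id, value):
            # 1-byte subject, 1-byte predicate, 4-byte float, big-endian
            return struct.pack(">BBf", subject_id, predicate_id, value)

        def decode(packet):
            s, p, v = struct.unpack(">BBf", packet)
            return SUBJECT[s], PREDICATE[p], v

        pkt = encode(1, 1, 21.5)
        print(len(pkt), "bytes ->", decode(pkt))   # 6 bytes -> triple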

  1. A programmable display layer for virtual reality system architectures.

    Smit, Ferdi Alexander; van Liere, Robert; Froehlich, Bernd

    2010-01-01

    Display systems typically operate at a minimum rate of 60 Hz. However, existing VR-architectures generally produce application updates at a lower rate. Consequently, the display is not updated by the application every display frame. This causes a number of undesirable perceptual artifacts. We describe an architecture that provides a programmable display layer (PDL) in order to generate updated display frames. This replaces the default display behavior of repeating application frames until an update is available. We will show three benefits of the architecture typical to VR. First, smooth motion is provided by generating intermediate display frames by per-pixel depth-image warping using 3D motion fields. Smooth motion eliminates various perceptual artifacts due to judder. Second, we implement fine-grained latency reduction at the display frame level using a synchronized prediction of simulation objects and the viewpoint. This improves the average quality and consistency of latency reduction. Third, a crosstalk reduction algorithm for consecutive display frames is implemented, which improves the quality of stereoscopic images. To evaluate the architecture, we compare image quality and latency to that of a classic level-of-detail approach.
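
    The first benefit, per-pixel depth-image warping, reduces to back-projecting each pixel with its depth, shifting by the new viewpoint, and re-projecting. The sketch below is a deliberately simplified forward warp (pinhole camera, pure translation, no z-buffering or hole filling), not the paper's implementation.

        # Naive forward warp of an application frame to a shifted viewpoint.
        import numpy as np

        H, W, f = 120, 160, 100.0            # image size, focal length (px)
        cx, cy = W / 2, H / 2
        depth = np.full((H, W), 5.0)         # toy depth map (metres)
        color = np.random.rand(H, W, 3)      # toy application frame
        t = np.array([0.05, 0.0, 0.0])       # viewpoint moved 5 cm right

        v, u = np.mgrid[0:H, 0:W]
        X = (u - cx) / f * depth - t[0]      # back-project, shift by -t
        Y = (v - cy) / f * depth - t[1]
        Z = depth - t[2]
        u2 = np.round(X / Z * f + cx).astype(int)   # re-project
        v2 = np.round(Y / Z * f + cy).astype(int)

        warped = np.zeros_like(color)
        ok = (0 <= u2) & (u2 < W) & (0 <= v2) & (v2 < H)
        warped[v2[ok], u2[ok]] = color[ok]   # splat (no depth test)
        print("filled pixels:", int(ok.sum()))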

  2. New optical architecture for holographic data storage system compatible with Blu-ray Disc™ system

    Shimada, Ken-ichi; Ide, Tatsuro; Shimano, Takeshi; Anderson, Ken; Curtis, Kevin

    2014-02-01

    A new optical architecture for a holographic data storage system which is compatible with a Blu-ray Disc™ (BD) system is proposed. In the architecture, both signal and reference beams pass through a single objective lens with numerical aperture (NA) 0.85 for realizing angularly multiplexed recording. The geometry of the architecture brings a high affinity with the optical architecture of the BD system because the objective lens can be placed parallel to a holographic medium. Through the comparison of experimental results with theory, the validity of the optical architecture was verified, and it was demonstrated that the conventional objective lens motion technique in the BD system is available for angularly multiplexed recording. A test-bed composed of a blue laser system and an objective lens with NA 0.85 was designed. The feasibility of its compatibility with BD is examined through the designed test-bed.

  3. Distributed expert system architecture using a dedicated knowledge server

    Trovato, S.A.; Lindgren, B.M.; Touchton, R.A.

    1991-01-01

    This paper presents an up-to-date look at REALM, the Reactor Emergency Action Level Monitor Expert Advisor System, including recent innovations in the system architecture and our approach to Verification and Validation (V and V). The emergency classification domain is reviewed and the problem, solution and benefits are outlined. A REALM system description is then presented, followed by a description of the REALM V and V approach. The paper concludes with a look at how REALM is being generalized to embrace plant sensor interpretation beyond emergency classification (e.g. On-line Tech Spec or thermal performance monitoring) under the name of OASYS, for On-line Advisory System

  4. Developments in architecture for real-time data systems

    Heath, R.L.; Myers, W.R.

    1975-01-01

    Real-time data systems typically operate at two levels: a fast-response instrument-oriented level for data acquisition and control, and a slow human-oriented level for interaction and computation. Traditional minicomputer data systems support real-time applications by implementation of background/foreground software. Recent developments in computer technology including microprocessors enable the functional organization of hardware in distributed or hierarchical form to provide new system structures for real-time requirements. Examples of systems with distributed architecture will be discussed in detail

  5. Highly Adjustable Systems: An Architecture for Future Space Observatories

    Arenberg, Jonathan; Conti, Alberto; Redding, David; Lawrence, Charles R.; Hachkowski, Roman; Laskin, Robert; Steeves, John

    2017-06-01

    Mission costs for groundbreaking space astronomical observatories are increasing to the point of unsustainability. We are investigating the use of adjustable or correctable systems as a means to reduce development and therefore mission costs. The poster introduces the promise and possibility of realizing a “net zero CTE” system for the general problem of observatory design and introduces the basic systems architecture we are considering. This poster concludes with an overview of our planned study and demonstrations for proving the value and worth of highly adjustable telescopes and systems ahead of the upcoming decadal survey.

  6. Wireless coordinated multicell systems architectures and precoding designs

    Nguyen, Duy H N

    2014-01-01

    This SpringerBrief discusses the current research on coordinated multipoint transmission/reception (CoMP) in wireless multi-cell systems. This book analyzes the structure of the CoMP precoders and the message exchange mechanism in the CoMP system in order to reveal the advantage of CoMP. Topics include interference management in wireless cellular networks, joint signal processing, interference coordination, uplink and downlink precoding and system models. After an exploration of the motivations and concepts of CoMP, the authors present the architectures of a CoMP system. Practical implementati

  7. Symmetry in quantum system theory: Rules for quantum architecture design

    Schulte-Herbrueggen, Thomas; Sander, Uwe [Technical University of Munich, Garching (Germany). Dept. Chem.]

    2010-07-01

    We investigate universality, in the sense of controllability and observability, of multi-qubit systems in architectures of various symmetries of coupling type and topology. By determining the respective dynamic system Lie algebras, explicit reachability sets under symmetry constraints are provided. Thus for a given (possibly symmetric) experimental coupling architecture several decision problems can be solved in a unified way: (i) can a target Hamiltonian be simulated? (ii) can a target gate be synthesised? (iii) to which extent is the system observable by a given set of detection operators? and, as a special case of the latter, (iv) can an underlying system Hamiltonian be identified with a given set of detection operators? Finally, in turn, the absence of symmetry provides a convenient necessary condition for full controllability. Though often easier to assess than the well-established Lie-algebra rank condition, this is not sufficient unless the candidate dynamic simple Lie algebra can be pre-identified uniquely. Thus for architectures with various Ising and Heisenberg coupling types we give design rules sufficient to ensure full controllability. In view of follow-up studies, we relate the unification of necessary and sufficient conditions for universality to filtering simple Lie subalgebras of su(N) comprising classical and exceptional types.
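
    For reference, the well-established Lie-algebra rank condition invoked above can be stated compactly; the formulation below is the standard textbook one, not taken from this paper.

        % Full controllability of H(t) = H_d + \sum_{j=1}^{m} u_j(t) H_j
        % on n qubits (N = 2^n) holds iff the Lie closure of the drift and
        % control Hamiltonians exhausts su(N):
        \bigl\langle\, iH_d,\; iH_1,\; \dots,\; iH_m \,\bigr\rangle_{\mathrm{Lie}} \;=\; \mathfrak{su}(N)

    Here the angle bracket denotes repeated commutators and real linear combinations; the symmetry argument above supplies a necessary test that is often cheaper than computing this closure explicitly.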

  8. Architectural frameworks: defining the structures for implementing learning health systems.

    Lessard, Lysanne; Michalowski, Wojtek; Fung-Kee-Fung, Michael; Jones, Lori; Grudniewicz, Agnes

    2017-06-23

    The vision of transforming health systems into learning health systems (LHSs) that rapidly and continuously transform knowledge into improved health outcomes at lower cost is generating increased interest in government agencies, health organizations, and health research communities. While existing initiatives demonstrate that different approaches can succeed in making the LHS vision a reality, they are too varied in their goals, focus, and scale to be reproduced without undue effort. Indeed, the structures necessary to effectively design and implement LHSs on a larger scale are lacking. In this paper, we propose the use of architectural frameworks to develop LHSs that adhere to a recognized vision while being adapted to their specific organizational context. Architectural frameworks are high-level descriptions of an organization as a system; they capture the structure of its main components at varied levels, the interrelationships among these components, and the principles that guide their evolution. Because these frameworks support the analysis of LHSs and allow their outcomes to be simulated, they act as pre-implementation decision-support tools that identify potential barriers and enablers of system development. They thus increase the chances of successful LHS deployment. We present an architectural framework for LHSs that incorporates five dimensions (goals, scientific, social, technical, and ethical) commonly found in the LHS literature. The proposed architectural framework comprises six decision layers that model these dimensions. The performance layer models goals, the scientific layer models the scientific dimension, the organizational layer models the social dimension, the data layer and information technology layer model the technical dimension, and the ethics and security layer models the ethical dimension. We describe the types of decisions that must be made within each layer and identify methods to support decision-making. In this paper, we outline

  9. Smart architecture for stable multipoint fiber Bragg grating sensor system

    Yeh, Chien-Hung; Tsai, Ning; Zhuang, Yuan-Hong; Huang, Tzu-Jung; Chow, Chi-Wai; Chen, Jing-Heng; Liu, Wen-Fung

    2017-12-01

    In this work, we propose and investigate an intelligent fiber Bragg grating (FBG)-based sensor system in which the proposed stabilized and wavelength-tunable single-longitudinal-mode erbium-doped fiber laser can improve the sensing accuracy of wavelength-division-multiplexing multiple FBG sensors in a longer fiber transmission distance. Moreover, we also demonstrate the proposed sensor architecture to enhance the FBG capacity for sensing strain and temperature, simultaneously.
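
    For context, FBG sensing rests on the textbook Bragg condition and its first-order strain and temperature response; the relations below are standard results, not specific to this paper.

        \lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda, \qquad
        \frac{\Delta\lambda_B}{\lambda_B} = (1 - p_e)\,\varepsilon
            + (\alpha_\Lambda + \alpha_n)\,\Delta T

    Here n_eff is the effective index, Λ the grating period, p_e the effective photoelastic coefficient, α_Λ the thermal-expansion coefficient, and α_n the thermo-optic coefficient; sensing strain and temperature simultaneously requires two readings with distinct sensitivity pairs.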

  10. Modular open RF architecture: extending VICTORY to RF systems

    Melber, Adam; Dirner, Jason; Johnson, Michael

    2015-05-01

    Radio frequency products spanning multiple functions have become increasingly critical to the warfighter. Military use of the electromagnetic spectrum now includes communications, electronic warfare (EW), intelligence, and mission command systems. Due to the urgent needs of counterinsurgency operations, various quick reaction capabilities (QRCs) have been fielded to enhance warfighter capability. Although these QRCs were highly successful in their respective missions, they were designed independently, resulting in significant challenges when integrated on a common platform. This paper discusses how the Modular Open RF Architecture (MORA) addresses these challenges by defining an open architecture for multifunction missions that decomposes monolithic radio systems into high-level components with well-defined functions and interfaces. The functional decomposition maximizes hardware sharing while minimizing added complexity and cost due to modularization. MORA achieves significant size, weight and power (SWaP) savings by allowing hardware such as power amplifiers and antennas to be shared across systems. By separating signal conditioning from the processing that implements the actual radio application, MORA exposes previously inaccessible architecture points, providing system integrators with the flexibility to insert third-party capabilities to address technical challenges and emerging requirements. MORA leverages the Vehicular Integration for Command, Control, Communication, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR)/EW Interoperability (VICTORY) framework. This paper concludes by discussing how MORA, VICTORY and other standards such as OpenVPX are being leveraged by the U.S. Army Research, Development, and Engineering Command (RDECOM) Communications Electronics Research, Development, and Engineering Center (CERDEC) to define a converged architecture enabling rapid technology insertion, interoperability and reduced SWaP.

  11. Future Manufacturing Systems in Norway – Strategy, Architecture and Framework

    Kolla, Sri Sudha Vijay Keshav

    2016-01-01

    This study investigates the suitability of Cyber Physical Systems (CPS) for Norwegian manufacturing industries and their implementation. It explores the research and innovation needs in Norway, which will be given as input to the European Commission's Strategic Research Agenda (SRA) 2030 to shape future manufacturing strategies in Norway. The objectives of the research are to identify the opportunities and challenges of CPS and to develop a feasible reference architecture of CPS which benef...

  12. NUClear: A Loosely Coupled Software Architecture for Humanoid Robot Systems

    Trent eHouliston

    2016-04-01

    This paper discusses the design and interface of NUClear, a new hybrid message-passing architecture for embodied humanoid robotics. NUClear is modular, low latency, and promotes functional and expandable software design. It greatly reduces the latency of messages passed between modules, as message routes are established at compile time. It also reduces the number of functions that must be written, using a system called co-messages that aids in dealing with multiple simultaneous data. NUClear has primarily been evaluated on a humanoid robotic soccer platform and on a robotic boat platform, with evaluations showing that NUClear requires fewer callbacks and cache variables than existing message-passing architectures. NUClear does have limitations when applying these techniques to multi-processed systems. It performs best in lower power systems where computational resources are limited. Future work will focus on applying the architecture to new platforms, including a larger form humanoid platform and a virtual reality platform, and further evaluating the impact of the novel techniques introduced.

  13. A multi-agent system architecture for sensor networks.

    Fuentes-Fernández, Rubén; Guijarro, María; Pajares, Gonzalo

    2009-01-01

    The design of the control systems for sensor networks presents important challenges. Besides the traditional problems about how to process the sensor data to obtain the target information, engineers need to consider additional aspects such as the heterogeneity and high number of sensors, and the flexibility of these networks regarding topologies and the sensors in them. Although there are partial approaches for resolving these issues, their integration relies on ad hoc solutions requiring important development efforts. In order to provide an effective approach for this integration, this paper proposes an architecture based on the multi-agent system paradigm with a clear separation of concerns. The architecture considers sensors as devices used by an upper layer of manager agents. These agents are able to communicate and negotiate services to achieve the required functionality. Activities are organized according to roles related with the different aspects to integrate, mainly sensor management, data processing, communication and adaptation to changes in the available devices and their capabilities. This organization largely isolates and decouples the data management from the changing network, while encouraging reuse of solutions. The use of the architecture is facilitated by a specific modelling language developed through metamodelling. A case study concerning a generic distributed system for fire fighting illustrates the approach and the comparison with related work.

  14. A Multi-Agent System Architecture for Sensor Networks

    María Guijarro

    2009-12-01

    The design of the control systems for sensor networks presents important challenges. Besides the traditional problems about how to process the sensor data to obtain the target information, engineers need to consider additional aspects such as the heterogeneity and high number of sensors, and the flexibility of these networks regarding topologies and the sensors in them. Although there are partial approaches for resolving these issues, their integration relies on ad hoc solutions requiring important development efforts. In order to provide an effective approach for this integration, this paper proposes an architecture based on the multi-agent system paradigm with a clear separation of concerns. The architecture considers sensors as devices used by an upper layer of manager agents. These agents are able to communicate and negotiate services to achieve the required functionality. Activities are organized according to roles related with the different aspects to integrate, mainly sensor management, data processing, communication and adaptation to changes in the available devices and their capabilities. This organization largely isolates and decouples the data management from the changing network, while encouraging reuse of solutions. The use of the architecture is facilitated by a specific modelling language developed through metamodelling. A case study concerning a generic distributed system for fire fighting illustrates the approach and the comparison with related work.

  15. Assessment of the integration capability of system architectures from a complex and distributed software systems perspective

    Leuchter, S.; Reinert, F.; Müller, W.

    2014-06-01

    Procurement and design of system architectures capable of network centric operations demand an assessment scheme in order to compare different alternative realizations. In this contribution an assessment method for system architectures targeted at the C4ISR domain is presented. The method addresses the integration capability of software systems from a complex and distributed software system perspective, focusing on communication, interfaces, and software. The aim is to evaluate the capability to integrate a system or its functions within a system-of-systems network. This method uses approaches from software architecture quality assessment and applies them on the system architecture level. It features a specific goal tree of several dimensions that are relevant for enterprise integration. These dimensions have to be weighed against each other and totalized using methods from normative decision theory in order to reflect the intention of the particular enterprise integration effort. The indicators and measurements for many of the considered quality features rely on a model-based view of systems, networks, and the enterprise. That means the method is applicable to system-of-systems specifications based on enterprise architectural frameworks relying on defined meta-models or domain ontologies for defining views and viewpoints. In the defense context we use the NATO Architecture Framework (NAF) to ground the respective system models. The proposed assessment method allows evaluating and comparing competing system designs regarding their future integration potential. It is a contribution to the system-of-systems engineering methodology.
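
    The totalisation step has a compact form: leaf scores per integration dimension are combined by a weighted sum, one simple choice from normative decision theory. The dimensions, weights, and scores below are invented for illustration.

        # Weighted goal-tree roll-up for two competing system designs.
        weights = {"communication": 0.40, "interfaces": 0.35, "software": 0.25}
        candidates = {
            "design_A": {"communication": 0.8, "interfaces": 0.6, "software": 0.7},
            "design_B": {"communication": 0.6, "interfaces": 0.9, "software": 0.8},
        }
        for name, scores in candidates.items():
            total = sum(weights[d] * scores[d] for d in weights)
            print(f"{name}: {total:.3f}")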

  16. The message architecture of the LEP control system

    Altaber, J.; van der Stok, P.; Frammery, V.; Gareyte, C.; Rausch, R.

    1985-01-01

    The LEP control system will be constructed as a global communication system in which microprocessors will be used everywhere, from the management of the communication mechanisms to the execution of complex control procedures and the supervision of the equipment. To achieve this, the global control problem has been cut into sizeable functions which will be encapsulated into microprocessor modules containing enough hardware for the function to be mostly self-contained. This leads to a function architecture where messages are exchanged between the functions over miscellaneous media. It is shown how these message exchanges can be organized into a uniform flow of data all through the system.

  17. Scalable error correction in distributed ion trap computers

    Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.

    2006-01-01

    A major challenge for quantum computation in ion trap systems is scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps, which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiments.

  18. Multimedia architectures: from desktop systems to portable appliances

    Bhaskaran, Vasudev; Konstantinides, Konstantinos; Natarajan, Balas R.

    1997-01-01

    Future desktop and portable computing systems will have as their core an integrated multimedia system. Such a system will seamlessly combine digital video, digital audio, computer animation, text, and graphics. Furthermore, such a system will allow for mixed-media creation, dissemination, and interactive access in real time. Multimedia architectures that need to support these functions have traditionally required special display and processing units for the different media types. This approach tends to be expensive and is inefficient in its use of silicon. Furthermore, such media-specific processing units are unable to cope with the fluid nature of the multimedia market wherein the needs and standards are changing and system manufacturers may demand a single component media engine across a range of products. This constraint has led to a shift towards providing a single-component multimedia specific computing engine that can be integrated easily within desktop systems, tethered consumer appliances, or portable appliances. In this paper, we review some of the recent architectural efforts in developing integrated media systems. We primarily focus on two efforts, namely the evolution of multimedia-capable general purpose processors and a more recent effort in developing single component mixed media co-processors. Design considerations that could facilitate the migration of these technologies to a portable integrated media system also are presented.

  19. The Use of Open Source Software for Open Architecture System on CNC Milling Machine

    Dalmasius Ganjar Subagio

    2012-03-01

    A computer numerical control (CNC) milling machine system cannot be separated from the software required to satisfy the provisions of open architecture: portability, extendability, interoperability, and scalability. When the prescribed service period of a CNC milling machine has passed and the manufacturer has decided to discontinue it, the user will have problems maintaining the performance of the machine. This paper aims to show that using open source software (OSS) is the way to maintain machine performance. With the use of OSS, users no longer depend on the software built by the manufacturer, because OSS is open and can be developed independently. In this paper, USBCNC V.3.42 is used as an alternative OSS. The test results show that the workpiece matches the desired pattern and that machines using OSS perform similarly to machines using the manufacturer's software.

  20. Towards an inline reconstruction architecture for micro-CT systems

    Brasse, David; Humbert, Bernard; Mathelin, Carole; Rio, Marie-Christine; Guyonnet, Jean-Louis

    2005-01-01

    Recent developments in micro-CT have revolutionized the ability to examine in vivo living experimental animal models such as the mouse with a spatial resolution of less than 50 μm. The main requirements of in vivo imaging for biological researchers are good spatial resolution, a low dose delivered to the animal during the full examination, and reduced acquisition and reconstruction time for screening purposes. We introduce an inline acquisition and reconstruction architecture to obtain in real time the 3D attenuation map of the animal, fulfilling the three previous requirements. The micro-CT system is based on a commercially available x-ray detector and micro-focus x-ray source. The reconstruction architecture is based on a cluster of PCs where a dedicated communication scheme combining serial and parallel treatments is implemented. In order to obtain a high-performance transmission rate between the detector and the reconstruction architecture, a dedicated data acquisition system is also developed. With the proposed solution, the time required to filter and backproject a projection of 2048 x 2048 pixels into a volume of 140 megavoxels using the Feldkamp algorithm is approximately 500 ms, the time needed to acquire the same projection.
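
    The per-projection work being distributed across the cluster is filter-plus-backprojection. The sketch below shows a deliberately simplified parallel-beam version of that step (the real system uses the cone-beam Feldkamp weighting); it is an illustration, not the system's code.

        # Ramp-filter one projection in Fourier space, then smear it back
        # across an n x n slice at the projection angle.
        import numpy as np

        n = 256
        projection = np.random.rand(n)       # stand-in for one detector row
        theta = np.deg2rad(30.0)

        freqs = np.fft.fftfreq(n)            # 1) ramp filter |f|
        filtered = np.real(np.fft.ifft(np.fft.fft(projection) * np.abs(freqs)))

        xs = np.arange(n) - n / 2            # 2) backproject into the slice
        X, Y = np.meshgrid(xs, xs)
        s = X * np.cos(theta) + Y * np.sin(theta)  # detector coord per pixel
        idx = np.clip(np.round(s + n / 2).astype(int), 0, n - 1)
        contribution = filtered[idx]         # accumulate this over all angles
        print(contribution.shape)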

  1. Development of the network architecture of the Canadian MSAT system

    Davies, N. George; Shoamanesh, Alireza; Leung, Victor C. M.

    1988-05-01

    A description is given of the present concept for the Canadian Mobile Satellite (MSAT) System and the development of the network architecture which will accommodate the planned family of three categories of service: a mobile radio service (MRS), a mobile telephone service (MTS), and a mobile data service (MDS). The MSAT satellite will have cross-strapped L-band and Ku-band transponders to provide communications services between L-band mobile terminals and fixed base stations supporting dispatcher-type MRS, gateway stations supporting MTS interconnections to the public telephone network, data hub stations supporting the MDS, and the network control center. The currently perceived centralized architecture with demand assignment multiple access for the circuit switched MRS, MTS and permanently assigned channels for the packet switched MDS is discussed.

  2. Scalable quantum memory in the ultrastrong coupling regime.

    Kyaw, T H; Felicetti, S; Romero, G; Solano, E; Kwek, L-C

    2015-03-02

    Circuit quantum electrodynamics, consisting of superconducting artificial atoms coupled to on-chip resonators, represents a prime candidate to implement a scalable quantum computing architecture because of its good tunability and controllability. Furthermore, recent advances have pushed the technology towards the ultrastrong coupling regime of light-matter interaction, where the qubit-resonator coupling strength reaches a considerable fraction of the resonator frequency. Here, we propose a qubit-resonator system operating in that regime as a quantum memory device and study the storage and retrieval of quantum information in and from the Z2 parity-protected quantum memory, within experimentally feasible schemes. We are also convinced that our proposal might pave the way to realizing a scalable quantum random-access memory due to its fast storage and readout performances.

  3. Neuromorphic Computing – From Materials Research to Systems Architecture Roundtable

    Schuller, Ivan K. [Univ. of California, San Diego, CA (United States); Stevens, Rick [Argonne National Lab. (ANL), Argonne, IL (United States); Univ. of Chicago, IL (United States); Pino, Robinson [Dept. of Energy (DOE) Office of Science, Washington, DC (United States); Pechan, Michael [Dept. of Energy (DOE) Office of Science, Washington, DC (United States)

    2015-10-29

    Computation in its many forms is the engine that fuels our modern civilization. Modern computation, based on the von Neumann architecture, has allowed, until now, the development of continuous improvements, as predicted by Moore’s law. However, computation using current architectures and materials will inevitably, within the next 10 years, reach a limit imposed by fundamental scientific reasons. DOE convened a roundtable of experts in neuromorphic computing systems, materials science, and computer science in Washington on October 29-30, 2015 to address the following basic questions: Can brain-like (“neuromorphic”) computing devices based on new material concepts and systems be developed to dramatically outperform conventional CMOS-based technology? If so, what are the basic research challenges for materials science and computing? The overarching answer that emerged was: the development of novel functional materials and devices incorporated into unique architectures will allow a revolutionary technological leap toward the implementation of a fully “neuromorphic” computer. To address this challenge, the following issues were considered: (i) the main differences between neuromorphic and conventional computing as related to signaling models, timing/clock, non-volatile memory, architecture, fault tolerance, integrated memory and compute, noise tolerance, analog vs. digital, and in situ learning; (ii) the new neuromorphic architectures needed to produce lower energy consumption, potential novel nanostructured materials, and enhanced computation; (iii) the device and materials properties needed to implement functions such as hysteresis, stability, and fault tolerance; and (iv) comparisons of different implementations (spin torque, memristors, resistive switching, phase change, and optical schemes) for enhanced breakthroughs in performance, cost, fault tolerance, and/or manufacturability.

  4. A New Indoor Positioning System Architecture Using GPS Signals.

    Xu, Rui; Chen, Wu; Xu, Ying; Ji, Shengyue

    2015-04-29

    The pseudolite system is a good alternative for indoor positioning systems due to its large coverage area and accurate positioning solution. However, for common Global Positioning System (GPS) receivers, the pseudolite system requires some modifications of the user terminals. To solve the problem, this paper proposes a new pseudolite-based indoor positioning system architecture. The main idea is to receive real-world GPS signals, repeat each satellite signal, and transmit them using indoor transmitting antennas. The transmitted GPS-like signal can be processed (signal acquisition and tracking, navigation data decoding) by a general receiver, and thus no hardware-level modification on the receiver is required. In addition, all Tx can be synchronized with each other since a single clock is used for Rx/Tx. The proposed system is simulated using a software GPS receiver. The simulation results show the indoor positioning system is able to provide highly accurate horizontal positioning in both static and dynamic situations.

  5. Information architecture considerations in designing a comprehensive tuberculosis enterprise system in Western Kenya.

    Gichoya, Judy; Pearce, Chris; Wickramasinghe, Nilmini

    2013-01-01

    Kenya ranks among the twenty-two countries that collectively contribute about 80% of the world's Tuberculosis cases, with a 50-200 fold increased risk of tuberculosis in HIV-infected persons versus non-HIV hosts. Contemporaneously, there is an increase in mobile penetration and its use to support healthcare throughout Africa. Many are skeptical that such m-health solutions are sustainable and scalable. We seek to design a scalable, pervasive m-health solution for Tuberculosis care to become a use case for sustainable and scalable health IT in limited resource settings. We combine agile design principles and user-centered design to develop the architecture needed for this initiative. Furthermore, the architecture runs on multiple devices integrated to deliver functionality critical for successful Health IT implementation in limited resource settings. It is anticipated that once fully implemented, the proposed m-health solution will facilitate superior monitoring and management of Tuberculosis and thereby reduce the alarming statistics regarding this disease in this region.

  6. Advances in network systems architectures, security, and applications

    Awad, Ali; Furtak, Janusz; Legierski, Jarosław

    2017-01-01

    This book provides the reader with a comprehensive selection of cutting-edge algorithms, technologies, and applications. The volume offers new insights into a range of fundamentally important topics in network architectures, network security, and network applications. It serves as a reference for researchers and practitioners by featuring contributions that exemplify research done in the field of network systems. In addition, the book highlights several key topics in both theoretical and practical aspects of networking. These include wireless sensor networks, performance of TCP connections in mobile networks, photonic data transport networks, security policies, credentials management, data encryption for network transmission, risk management, live TV services, and multicore energy harvesting in distributed systems.

  7. Extension of an existing control and monitoring system: architecture 7

    Soulabaille, Y.

    1991-01-01

    The Tore Supra tokamak is controlled by Architecture 7. This system comprises three levels: the man-machine interface, automation management, and exchanges with the plant. It nevertheless presents some limitations: its response time of half a second allows it to manage 95% of Tore Supra processes, while the remaining 5% require one millisecond. The first aim is to extend its functionality with a fast automaton providing a one-microsecond cycle. The fast automaton is applied to the poloidal field; of central concern for fusion experiments, it allows the creation of a plasma current. The second aim is the possibility of using commercially available software.

  8. Power system data communication architecture at BC Hydro

    Struyk, E.

    2001-07-01

    Development of a power system data communication architecture (PSDCA) at British Columbia Hydro that enables authorized corporate users to access station intelligent electronic devices (IEDs) for power system data in non real-time, without compromising the reliability and availability of the real-time SCADA systems, is described. Also discussed is the development of major upgrade initiatives for expanding the use of intelligent electronic devices and remote terminal units (RTUs) which report to the main System Control Centre in Burnaby, BC, and to the four Area Control Centres located throughout the province. The network architecture that incorporates industry standards for the PSDCA also provides an opportunity to strengthen existing network security against electronic threats such as hackers and saboteurs, beyond the simple methods of single- or two-level passwords of existing protection, control, and monitoring equipment systems. The virtual private network (VPN) technology built into the PSDCA will allow corporate users to access their own station IED power data in a secure and reliable fashion. 4 figs.

  9. Integrated control and diagnostic system architectures for future installations

    Wood, R.; March-Leuba, J.

    2000-01-01

    Nuclear reactors of the 21st century will employ increasing levels of automation and fault tolerance to increase availability, reduce accident risk, and lower operating costs. Key developments in control algorithms, fault diagnostics, fault tolerance, and distributed communications are needed to implement the fully automated plant. It will be equally challenging to integrate developments in separate information and control fields into a cohesive system, which collectively achieves the overall goals of improved safety, reliability, maintainability, and cost-effectiveness. Under the Nuclear Energy Research Initiative (NERI), the US Department of Energy is sponsoring a project to address some of the technical issues involved in meeting the long-range goal of 21st century reactor control systems. This project involves researchers from Oak Ridge National Laboratory, the University of Tennessee, and North Carolina State University. The research tasks under this project focus on some of the first-level breakthroughs in control design, diagnostic techniques, and information system design that will provide a path to enable the design process to be automated in the future. This paper describes the conceptual development of an integrated nuclear plant control and information system architecture, which incorporates automated control system development that can be traced to a set of technical requirements. The expectation is that an integrated plant architecture with optimal control and efficient use of diagnostic information can reduce the potential for operational errors and minimize challenges to the plant safety systems.

  10. An efficient, scalable, and adaptable framework for solving generic systems of level-set PDEs

    Kishore R. Mosaliganti

    2013-12-01

    In the last decade, level-set methods have been actively developed for applications in image registration, segmentation, tracking, and reconstruction. However, the development of a wide variety of level-set PDEs and their numerical discretization schemes, coupled with hybrid combinations of PDE terms, stopping criteria, and reinitialization strategies, has created a software logistics problem. In the absence of an integrative design, current toolkits support only specific types of level-set implementations, which restricts future algorithm development since extensions require significant code duplication and effort. In the new NIH/NLM Insight Toolkit (ITK v4) architecture, we implemented a level-set software design that is flexible to different numerical (continuous, discrete, and sparse) and grid representations (point-, mesh-, and image-based). Given that a generic PDE is a summation of different terms, we used a set of linked containers to which level-set terms can be added or deleted at any point in the evolution process. This container-based approach allows the user to explore and customize terms in the level-set equation at compile time in a flexible manner. The framework is optimized so that repeated computation of common intensity functions (e.g. gradients and Hessians) across multiple terms is eliminated. The framework further enables the evolution of multiple level-sets for multi-object segmentation and processing of large datasets. To do so, we restrict level-set domains to subsets of the image domain and use multithreading strategies to process groups of subdomains or level-set functions. Users can also select from a variety of reinitialization policies and stopping criteria. Finally, we developed a visualization framework that shows the evolution of a level-set in real-time to help guide algorithm development and parameter optimization. We demonstrate the power of our new framework using confocal microscopy images of cells in a

  11. Open architecture for health care systems: the European RICHE experience.

    Frandji, B

    1997-01-01

    Groupe RICHE brings the open systems approach to the health IT market, allowing a new generation of health information systems to arise with benefits for patients, health care professionals, hospital managers, agencies, and citizens. Groupe RICHE is a forum for exchanging information and expertise around open systems in health care. It is open to any organisation interested in open systems in health care and wanting to participate in and influence the work done by its user, marketing, and technical committees. The Technical Committee is in charge of the maintenance of the architecture and incorporates the results of industrial experience into new releases. Any Groupe RICHE member is entitled to participate in this process. This approach, unique in Europe, allows health care professionals to benefit from applications supporting their business processes, including a cooperative working environment and a shared electronic record, in an integrated system where information is entered only once, customised according to user needs and available to the administrative applications. This allows hospital managers to satisfy their health care professionals, to migrate smoothly from their existing environment (protecting their investment), and to choose products in a competitive environment, being able to mix and match system components and services from different suppliers and being free to change suppliers without having to replace their existing system (minimising risk), in line with national and regional strategies. For suppliers, this means being able to commercialise products well fitted to their field of competence in a large market, reducing investment and increasing returns. The RICHE approach also allows agencies to define a strategy, allowing the creation of a supporting infrastructure, organising the market while leaving enough freedom to health care organisations and suppliers. Such an approach is based on the definition of an open standard architecture. The RICHE esprit project

  12. An Architecture for Cross-Cloud System Management

    Dodda, Ravi Teja; Smith, Chris; van Moorsel, Aad

    The emergence of the cloud computing paradigm promises flexibility and adaptability through on-demand provisioning of compute resources. As the utilization of cloud resources extends beyond a single provider, for business as well as technical reasons, the issue of effectively managing such resources comes to the fore. Different providers expose different interfaces to their compute resources utilizing varied architectures and implementation technologies. This heterogeneity poses a significant system management problem, and can limit the extent to which the benefits of cross-cloud resource utilization can be realized. We address this problem through the definition of an architecture to facilitate the management of compute resources from different cloud providers in a homogeneous manner. This preserves the flexibility and adaptability promised by the cloud computing paradigm, whilst enabling the benefits of cross-cloud resource utilization to be realized. The practical efficacy of the architecture is demonstrated through an implementation utilizing compute resources managed through different interfaces on the Amazon Elastic Compute Cloud (EC2) service. Additionally, we provide empirical results highlighting the performance differential of these different interfaces, and discuss the impact of this performance differential on efficiency and profitability.
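
    A minimal sketch of the homogenisation idea, assuming a hypothetical adapter interface (the paper's actual architecture, and the EC2 interfaces it wraps, differ in detail): each provider's native API is hidden behind a common set of management operations, so placement policy can be written once.

        from abc import ABC, abstractmethod

        class CloudAdapter(ABC):
            """Uniform management interface; one adapter per provider hides
            that provider's native API behind common operations."""
            @abstractmethod
            def start_instance(self, image_id: str) -> str: ...
            @abstractmethod
            def stop_instance(self, instance_id: str) -> None: ...
            @abstractmethod
            def list_instances(self) -> list: ...

        class ProviderA(CloudAdapter):
            """Stand-in for one concrete provider's adapter."""
            def __init__(self):
                self._instances, self._next = {}, 0
            def start_instance(self, image_id):
                self._next += 1
                iid = f"a-{self._next}"
                self._instances[iid] = image_id  # stands in for a native API call
                return iid
            def stop_instance(self, instance_id):
                self._instances.pop(instance_id, None)
            def list_instances(self):
                return list(self._instances)

        class CrossCloudManager:
            """Manages resources from several providers homogeneously."""
            def __init__(self, adapters):
                self.adapters = adapters
            def start_instance(self, image_id):
                # Placement policy is pluggable; here simply the first provider.
                return self.adapters[0].start_instance(image_id)

        mgr = CrossCloudManager([ProviderA()])
        print(mgr.start_instance("ubuntu-22.04"))  # a-1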

  13. Challenges in the Development and Evolution of Secure Open Architecture Command and Control Systems (Briefing Charts)

    2013-06-01

    [Slide-text excerpt: the charts contrast a design-time OA system architecture (browser, email, widget, database, OS) with instance architectures built from functionally similar, substitutable components, e.g. Firefox or Chrome, AbiWord or Google Docs, Gmail, Evolution or Google Calendar, and Fedora, under licenses such as the GPL.]

  14. Scalable optical switches for computing applications

    White, I.H.; Aw, E.T.; Williams, K.A.; Wang, Haibo; Wonfor, A.; Penty, R.V.

    2009-01-01

    A scalable photonic interconnection network architecture is proposed whereby a Clos network is populated with broadcast-and-select stages. This enables the efficient exploitation of an emerging class of photonic integrated switch fabric. A low-distortion space switch technology based on recently...

  15. Schema architecture and their relationships to transaction processing in distributed database systems

    Apers, Peter M.G.; Scheuermann, P.

    1991-01-01

    We discuss the different types of schema architectures which could be supported by distributed database systems, making a clear distinction between logical, physical, and federated distribution. We elaborate on the additional mapping information required in architectures based on logical distribution...

  16. Developing an intelligent transportation systems (ITS) architecture for the KIPDA region : final report.

    2004-08-01

    This report describes the development of a regional Intelligent Transportation Systems (ITS) Architecture for the five-county urban area under the auspices of the Kentuckiana Regional Planning and Development Agency (KIPDA). The architecture developm...

  17. Pilot factory - a Condor-based system for scalable Pilot Job generation in the Panda WMS framework

    Chiu, Po-Hsiang; Potekhin, Maxim

    2010-01-01

    The Panda Workload Management System is designed around the concept of the Pilot Job - a 'smart wrapper' for the payload executable that can probe the environment on the remote worker node before pulling down the payload from the server and executing it. This design allows for improved logging and monitoring capabilities as well as flexibility in workload management. In the Grid environment (such as the Open Science Grid), Panda Pilot Jobs are submitted to remote sites via mechanisms that ultimately rely on Condor-G. As our experience has shown, in cases where a large number of Panda jobs are simultaneously routed to a particular remote site, the increased load on the head node of the cluster caused by the Pilot Job submission may lead to an overall lack of scalability. We have developed a Condor-inspired solution to this problem, which uses a schedd-based glidein whose mission is to redirect pilots to the native batch system. Once a glidein schedd is installed and running, it can be utilized exactly the same way as local schedds; therefore, from the user's perspective, Pilots thus submitted are quite similar to jobs submitted to the local Condor pool.
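
    The 'smart wrapper' behaviour can be sketched as follows; this is schematic only, with a hypothetical server URL and endpoints, and the real Panda pilot and schedd-based glidein machinery are considerably more involved.

        import os, shutil, subprocess, urllib.request

        SERVER = "https://panda.example.org"  # hypothetical server URL

        def probe_environment():
            """Inspect the worker node before requesting a payload."""
            return {
                "cpu_count": os.cpu_count(),
                "disk_free": shutil.disk_usage("/tmp").free,
                "has_python": shutil.which("python3") is not None,
            }

        def report(state, data):
            print(state, data)  # a real pilot posts status back to the server

        def run_pilot():
            env = probe_environment()
            if env["disk_free"] < 1 << 30:  # refuse nodes with under ~1 GiB free
                report("unsuitable-node", env)
                return
            # Pull the payload only after the node has passed the checks.
            payload = urllib.request.urlopen(f"{SERVER}/getjob").read()
            with open("/tmp/payload.sh", "wb") as f:
                f.write(payload)
            result = subprocess.run(["sh", "/tmp/payload.sh"], capture_output=True)
            report("finished", {"rc": result.returncode})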

  18. A development of digital plant protection system architecture

    Seong, S. H.; Park, H. Y.; Kim, D. H.; Seo, Y. S.; Gu, I. S.

    2000-01-01

    The digital plant protection system (DPPS), which has a large number of advantages over current analog protection systems, has been developed in various forms. The major disadvantage of a digital system, however, is its vulnerability to processor and software faults. To overcome this, the concept of segments and partitions within a channel has been developed. Each segment in a channel spans from the sensors to the reactor trip and engineered safety features, and is based on the functional diversity of input signals with respect to the various plant transient phenomena. Each partition allocates a function module to an independent processing module in order to contain and isolate the faults of each module of a segment. A communication system based on a deterministic protocol with predictable, hard real-time characteristics has been developed to link the various modules within a segment. Self-diagnostics, including online test and periodic test procedures, have been developed to increase the safety, reliability and availability of the DPPS. The developed DPPS uses off-the-shelf DSPs (digital signal processors) and adopts a VME bus architecture, both of which have substantial operating experience in industry. Software verification and validation and quality assurance procedures have been developed, and the architecture and protocol of the deterministic communication system have been researched.
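
    The deterministic, hard real-time character of such a protocol can be illustrated with a static time-slot schedule. This is a generic TDMA-style sketch with made-up module names, not the actual DPPS protocol:

        # Each module in a segment transmits only in its pre-assigned slot,
        # so the worst-case message latency is known at design time.
        SLOT_MS = 5
        SCHEDULE = ["bistable_logic", "coincidence_logic", "trip_output", "self_test"]

        def slot_owner(t_ms):
            """Return the module allowed to transmit at time t_ms."""
            return SCHEDULE[(t_ms // SLOT_MS) % len(SCHEDULE)]

        def worst_case_latency_ms():
            # A message waits at most one full cycle before its slot recurs.
            return SLOT_MS * len(SCHEDULE)

        print(slot_owner(12), worst_case_latency_ms())  # trip_output 20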

  19. Instrumentation Standard Architectures for Future High Availability Control Systems

    Larsen, R.S.

    2005-01-01

    Architectures for next-generation modular instrumentation standards should aim to meet a requirement of High Availability, or robustness against system failure. This is particularly important for experiments both large and small mounted on production accelerators and light sources. New standards should be based on architectures that (1) are modular in both hardware and software for ease in repair and upgrade; (2) include inherent redundancy at internal module, module assembly and system levels; (3) include modern high speed serial inter-module communications with robust noise-immune protocols; and (4) include highly intelligent diagnostics and board-management subsystems that can predict impending failure and invoke evasive strategies. The simple design principles lead to fail-soft systems that can be applied to any type of electronics system, from modular instruments to large power supplies to pulsed power modulators to entire accelerator systems. The existing standards in use are briefly reviewed and compared against a new commercial standard which suggests a powerful model for future laboratory standard developments. The past successes of undertaking such projects through inter-laboratory engineering-physics collaborations will be briefly summarized

  20. System Engineering Strategy for Distributed Multi-Purpose Simulation Architectures

    Bhula, Dlilpkumar; Kurt, Cindy Marie; Luty, Roger

    2007-01-01

    This paper describes the system engineering approach used to develop distributed multi-purpose simulations. The multi-purpose simulation architecture focuses on user needs, operations, flexibility, cost and maintenance. This approach was used to develop an International Space Station (ISS) simulator, which is called the International Space Station Integrated Simulation (ISIS). The ISIS runs unmodified ISS flight software, system models, and the astronaut command and control interface in an open system design that allows for rapid integration of multiple ISS models. The initial intent of ISIS was to provide a distributed system that allows access to ISS flight software and models for the creation, test, and validation of crew and ground controller procedures. This capability reduces the cost and scheduling issues associated with utilizing standalone simulators in fixed locations, and facilitates discovering unknowns and errors earlier in the development lifecycle. Since its inception, the flexible architecture of the ISIS has allowed its purpose to evolve to include ground operator system and display training, flight software modification testing, and as a realistic test bed for Exploration automation technology research and development.

  1. An architecture model for multiple disease management information systems.

    Chen, Lichin; Yu, Hui-Chu; Li, Hao-Chun; Wang, Yi-Van; Chen, Huang-Jen; Wang, I-Ching; Wang, Chiou-Shiang; Peng, Hui-Yu; Hsu, Yu-Ling; Chen, Chi-Huang; Chuang, Lee-Ming; Lee, Hung-Chang; Chung, Yufang; Lai, Feipei

    2013-04-01

    Disease management is a program which attempts to overcome the fragmentation of the healthcare system and improve the quality of care. Many studies have proven the effectiveness of disease management. However, case managers spend the majority of their time on documentation and on coordinating the members of the care team. They need a tool to support their daily practice and to optimize the inefficient workflow. Several discussions have indicated that information technology plays an important role in the era of disease management, but although applications have been developed, it is inefficient to develop an information system for each disease management program individually. The aim of this research is to support the work of disease management, reform the inefficient workflow, and propose an architecture model that enhances reusability and saves time in information system development. The proposed architecture model has been successfully implemented in two disease management information systems, and the result was evaluated through reusability analysis, time-consumption analysis, pre- and post-implementation workflow analysis, and a user questionnaire survey. The reusability of the proposed model was high, less than half of the development time was consumed, and the workflow was improved. Overall user feedback was positive, and the system's supportiveness during the daily workflow was rated high. The system empowers case managers with better information and leads to better decision making.

  2. Stability Assessment of a System Comprising a Single Machine and Inverter with Scalable Ratings

    Johnson, Brian B [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Lin, Yashen [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gevorgian, Vahan [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Purba, Victor [University of Minnesota; Dhople, Sairaj [University of Minnesota

    2017-09-28

    From the inception of power systems, synchronous machines have acted as the foundation of large-scale electrical infrastructures and their physical properties have formed the cornerstone of system operations. However, power electronics interfaces are playing a growing role as they are the primary interface for several types of renewable energy sources and storage technologies. As the role of power electronics in systems continues to grow, it is crucial to investigate the properties of bulk power systems in low inertia settings. In this paper, we assess the properties of coupled machine-inverter systems by studying an elementary system composed of a synchronous generator, a three-phase inverter, and a load. Furthermore, the inverter model is formulated such that its power rating can be scaled continuously across power levels while preserving its closed-loop response. Accordingly, the properties of the machine-inverter system can be assessed for varying ratios of machine-to-inverter power ratings and, hence, differing levels of inertia. After linearizing the model and assessing its eigenvalues, we show that system stability is highly dependent on the interaction between the inverter current controller and machine exciter, thus uncovering a key concern with mixed machine-inverter systems and motivating the need for next-generation grid-stabilizing inverter controls.
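
    The rating-scaling idea can be sketched in per-unit terms: if every impedance and controller gain is held constant in per-unit while the power base changes, the closed-loop dynamics are unchanged. A minimal sketch under that assumption (the parameter names are illustrative, not the paper's model):

        def scale_inverter(base, P_new, P_base):
            """Scale SI-valued filter/controller parameters so that the
            per-unit dynamics, and hence the closed-loop response, are
            preserved when the rating moves from P_base to P_new."""
            k = P_new / P_base
            return {
                "L_filter": base["L_filter"] / k,  # impedances scale as 1/k
                "R_filter": base["R_filter"] / k,
                "C_filter": base["C_filter"] * k,  # admittances scale as k
                "i_max":    base["i_max"] * k,     # current limits scale as k
            }

        base = {"L_filter": 1e-3, "R_filter": 0.1, "C_filter": 50e-6, "i_max": 20.0}
        print(scale_inverter(base, P_new=500e3, P_base=50e3))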

  3. 36th International Conference on Information Systems Architecture and Technology

    Grzech, Adam; Świątek, Jerzy; Wilimowska, Zofia

    2016-01-01

    This four volume set of books constitutes the proceedings of the 36th International Conference Information Systems Architecture and Technology 2015, or ISAT 2015 for short, held on September 20–22, 2015 in Karpacz, Poland. The conference was organized by the Computer Science and Management Systems Departments, Faculty of Computer Science and Management, Wroclaw University of Technology, Poland. The papers included in the proceedings have been subject to a thorough review process by highly qualified peer reviewers. The accepted papers have been grouped into four parts: Part I—addressing topics including, but not limited to, systems analysis and modeling, methods for managing complex planning environment and insights from Big Data research projects. Part II—discoursing about topics including, but not limited to, Web systems, computer networks, distributed computing, and multi-agent systems and Internet of Things. Part III—discussing topics including, but not limited to, mobile and Service Oriented Archi...

  4. 2016 37th International Conference Information Systems Architecture and Technology

    Grzech, Adam; Świątek, Jerzy; Wilimowska, Zofia

    2017-01-01

    This four volume set of books constitutes the proceedings of the 2016 37th International Conference Information Systems Architecture and Technology (ISAT), or ISAT 2016 for short, held on September 18–20, 2016 in Karpacz, Poland. The conference was organized by the Department of Management Systems and the Department of Computer Science, Wrocław University of Science and Technology, Poland. The papers included in the proceedings have been subject to a thorough review process by highly qualified peer reviewers. The accepted papers have been grouped into four parts: Part I—addressing topics including, but not limited to, systems analysis and modeling, methods for managing complex planning environment and insights from Big Data research projects. Part II—discoursing about topics including, but not limited to, Web systems, computer networks, distributed computing, and multi-agent systems and Internet of Things. Part III—discussing topics including, but not limited to, mobile and Service Oriented Architect...

  5. Architectural slicing

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2013-01-01

    Architectural prototyping is a widely used practice, concerned with taking architectural decisions through experiments with lightweight implementations. However, many architectural decisions are only taken when systems are already (partially) implemented. This is problematic in the context of architectural prototyping since experiments with full systems are complex and expensive and thus architectural learning is hindered. In this paper, we propose a novel technique for harvesting architectural prototypes from existing systems, "architectural slicing", based on dynamic program slicing. Given a system and a slicing criterion, architectural slicing produces an architectural prototype that contains the elements in the architecture that are dependent on the elements in the slicing criterion. Furthermore, we present an initial design and implementation of an architectural slicer for Java.
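
    Independent of the dynamic analysis used to recover dependencies, the slice itself is a reachability closure over the dependency graph. A minimal sketch (illustrative Python, not the authors' Java slicer):

        from collections import defaultdict, deque

        def architectural_slice(dependencies, criterion):
            """dependencies maps each architectural element to the elements it
            depends on (edges recovered by dynamic program slicing). Following
            the definition above, the slice keeps every element that
            transitively depends on an element of the slicing criterion."""
            dependents = defaultdict(list)  # reverse the dependency edges
            for elem, deps in dependencies.items():
                for d in deps:
                    dependents[d].append(elem)
            keep, queue = set(criterion), deque(criterion)
            while queue:
                for e in dependents[queue.popleft()]:
                    if e not in keep:
                        keep.add(e)
                        queue.append(e)
            return keep

        deps = {"ui": ["service"], "service": ["db", "cache"], "audit": ["db"]}
        print(architectural_slice(deps, ["db"]))  # {'db', 'service', 'audit', 'ui'}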

  6. ACME: A scalable parallel system for extracting frequent patterns from a very long sequence

    Sahli, Majed; Mansour, Essam; Kalnis, Panos

    2014-01-01

    ...-long sequences and is the first to support supermaximal motifs. ACME is a versatile parallel system that can be deployed on desktop multi-core systems, or on thousands of CPUs in the cloud. However, merely using more compute nodes does not guarantee efficiency...

  7. Measuring whole-plant transpiration gravimetrically: a scalable automated system built from components

    Damian Cirelli; Victor J. Lieffers; Melvin T. Tyree

    2012-01-01

    Measuring whole-plant transpiration is highly relevant considering the increasing interest in understanding and improving plant water use at the whole-plant level. We present an original software package (Amalthea) and a design to create a system for measuring transpiration using laboratory balances based on readily available commodity hardware. The system is...

  8. Architectural elements of hybrid navigation systems for future space transportation

    Trigo, Guilherme F.; Theil, Stephan

    2017-12-01

    The fundamental limitations of inertial navigation, currently employed by most launchers, have raised interest in GNSS-aided solutions. Combining inertial measurements and GNSS outputs allows inertial calibration online, solving the issue of inertial drift. However, many challenges and design options unfold. In this work we analyse several architectural elements and design aspects of a hybrid GNSS/INS navigation system conceived for space transportation. The most fundamental architectural features, such as coupling depth, modularity between filter and inertial propagation, and the open-/closed-loop nature of the configuration, are discussed in the light of the envisaged application. The importance of the inertial propagation algorithm and sensor class in the overall system is investigated, and the handling of sensor errors and uncertainties that arise with lower-grade sensors is also considered. In terms of GNSS outputs we consider receiver solutions (position and velocity) and raw measurements (pseudorange, pseudorange-rate and time-differenced carrier phase). Receiver clock error handling options and atmospheric error correction schemes for these measurements are analysed under flight conditions. System performance with different GNSS measurements is estimated through covariance analysis, with the differences between loose and tight coupling emphasized through partial outage simulation. Finally, we discuss options for filter algorithm robustness against non-linearities and system/measurement errors. A possible scheme for fault detection, isolation and recovery is also proposed.
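
    The covariance analysis mentioned above propagates the estimation-error covariance through the filter equations without needing actual measurement data. A generic one-cycle sketch with a toy two-state model (position and velocity errors, position-only GNSS aiding as in a loosely coupled scheme), not the authors' full hybrid navigation filter:

        import numpy as np

        def covariance_step(P, F, Q, H, R):
            """One predict/update cycle of the covariance alone: propagate
            through the INS error dynamics F, then condition on a GNSS
            measurement with model H and noise covariance R."""
            P = F @ P @ F.T + Q                      # inertial error growth
            S = H @ P @ H.T + R                      # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
            return (np.eye(P.shape[0]) - K @ H) @ P  # posterior covariance

        dt = 1.0
        F = np.array([[1.0, dt], [0.0, 1.0]])
        Q = np.diag([1e-4, 1e-3])
        H = np.array([[1.0, 0.0]])                   # position-only measurement
        R = np.array([[25.0]])                       # (5 m)^2 GNSS position noise
        P = np.diag([100.0, 1.0])
        for _ in range(10):
            P = covariance_step(P, F, Q, H, R)
        print(np.sqrt(np.diag(P)))                   # 1-sigma position/velocity errors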

  9. Component-Level Electronic-Assembly Repair (CLEAR) System Architecture

    Oeftering, Richard C.; Bradish, Martin A.; Juergens, Jeffrey R.; Lewis, Michael J.; Vrnak, Daniel R.

    2011-01-01

    This document captures the system architecture for a Component-Level Electronic-Assembly Repair (CLEAR) capability needed for electronics maintenance and repair in the Constellation Program (CxP). CLEAR is intended to improve flight system supportability and reduce the mass of spares required to maintain the electronics of human-rated spacecraft on long-duration missions. By necessity, it allows the crew to make repairs that would otherwise be performed by Earth-based repair depots. Because of the practical knowledge and skill limitations of small spaceflight crews, the crews must be augmented by Earth-based support teams and automated repair equipment. This system architecture covers the complete system from ground user to flight hardware and flight crew, and defines an Earth segment and a Space segment. The Earth segment involves database management, operational planning, and remote equipment programming and validation processes. The Space segment involves the automated diagnostic, test and repair equipment required for a complete repair process. This document defines three major subsystems: tele-operations that link the flight hardware to ground support, highly reconfigurable diagnostics and test instruments, and a CLEAR Repair Apparatus that automates the physical repair process.

  10. RoboSmith: Wireless Networked Architecture for Multiagent Robotic System

    Florin Moldoveanu

    2010-11-01

    This paper presents an architecture for a flexible mini robot for a multiagent robotic system. In a multiagent system the value of an individual agent is negligible, since it is the goal of the system that is essential. Thus, the agents (robots) need to be small, low cost and cooperative. RoboSmith robots are designed based on these conditions. The proposed architecture divides a robot into functional modules such as locomotion, control, sensors, communication, and actuation. Any mobile robot can be constructed by combining these functional modules for a specific application. Embedded software with dynamic task uploading and multi-tasking abilities is developed in order to create a better interface between the robots and the command center and among the robots. Dynamic task uploading allows the robots to change their behaviors at runtime. The flexibility of the robots is shown by the fact that they can work in a multiagent system, in master-slave mode, or in hybrid mode, can be equipped with different modules, and can possibly be used in other applications such as mobile sensor networks, remote sensing, and plant monitoring.

  11. Control system devices : architectures and supply channels overview.

    Trent, Jason; Atkins, William Dee; Schwartz, Moses Daniel; Mulder, John C.

    2010-08-01

    This report describes a research project to examine the hardware used in automated control systems like those that control the electric grid. This report provides an overview of the vendors, architectures, and supply channels for a number of control system devices. The research itself represents an attempt to probe more deeply into the area of programmable logic controllers (PLCs) - the specialized digital computers that control individual processes within supervisory control and data acquisition (SCADA) systems. The report (1) provides an overview of control system networks and PLC architecture, (2) furnishes profiles for the top eight vendors in the PLC industry, (3) discusses the communications protocols used in different industries, and (4) analyzes the hardware used in several PLC devices. As part of the project, several PLCs were disassembled to identify constituent components. That information will direct the next step of the research, which will greatly increase our understanding of PLC security in both the hardware and software areas. Such an understanding is vital for discerning the potential national security impact of security flaws in these devices, as well as for developing proactive countermeasures.

  12. Advanced Exploration Systems Water Architecture Study Interim Results

    Sargusingh, Miriam J.

    2013-01-01

    The mission of the Advanced Exploration System (AES) Water Recovery Project (WRP) is to develop advanced water recovery systems that enable NASA human exploration missions beyond low Earth orbit (LEO). The primary objective of the AES WRP is to develop water recovery technologies critical to near-term missions beyond LEO. The secondary objective is to continue to advance mid-readiness-level technologies to support future NASA missions. An effort is being undertaken to establish the architecture for the AES Water Recovery System (WRS) that meets both near- and long-term objectives. The resultant architecture will be used to guide future technical planning, establish a baseline development roadmap for technology infusion, and establish baseline assumptions for integrated ground and on-orbit Environmental Control and Life Support Systems definition. This study is being performed in three phases. Phase I established the scope of the study through definition of the mission requirements and constraints, as well as identifying all possible WRS configurations that meet the mission requirements. Phase II focused on the near-term space exploration objectives by establishing an International Space Station-derived reference schematic for long-duration (>180 day) in-space habitation. Phase III will focus on the long-term space exploration objectives, trading the viable WRS configurations identified in Phase I to identify the ideal exploration WRS. The results of Phases I and II are discussed in this paper.

  14. Rapid architecture alternative modeling (RAAM): A framework for capability-based analysis of system of systems architectures

    Iacobucci, Joseph V.

    The research objective of this manuscript is to develop a Rapid Architecture Alternative Modeling (RAAM) methodology to enable traceable pre-Milestone A decision making during the conceptual phase of design of a system of systems. Rather than following current trends that place an emphasis on adding more analysis, which tends to increase the complexity of the decision-making problem, RAAM improves on current methods by reducing both runtime and model creation complexity. RAAM draws upon principles from computer science, system architecting, and domain-specific languages to enable the automatic generation and evaluation of architecture alternatives. For example, both mission-dependent and mission-independent metrics are considered. Mission-dependent metrics are determined by the performance of systems accomplishing a task, such as probability of success. In contrast, mission-independent metrics, such as acquisition cost, are solely determined and influenced by the other systems in the portfolio. RAAM also leverages advances in parallel computing to significantly reduce runtime by defining executable models that are readily amenable to parallelization. This allows the use of cloud computing infrastructures such as Amazon's Elastic Compute Cloud and the PASTEC cluster operated by the Georgia Institute of Technology Research Institute (GTRI). Also, the amount of data that can be generated when fully exploring the design space can quickly exceed the typical capacity of computational resources at the analyst's disposal. To counter this, specific algorithms and techniques are employed: streaming algorithms and recursive architecture alternative evaluation algorithms that reduce computer memory requirements. Lastly, a domain-specific language is created to provide a reduction in the computational time of executing the system-of-systems models. A domain-specific language is a small, usually declarative language that offers expressive power focused on a particular...
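
    The memory-bounding idea can be sketched as lazily enumerating the alternative space while retaining only the current best candidates. This is illustrative only; RAAM's domain-specific language and evaluation algorithms are more elaborate, and the cost table below is a stand-in metric.

        import heapq
        from itertools import product

        def best_architectures(choices, score, k=3):
            """Lazily enumerate every combination of per-function system
            choices and keep the top-k by score; memory stays O(k) even
            though the design space grows combinatorially."""
            top = []
            for alt in product(*choices.values()):
                candidate = dict(zip(choices, alt))
                heapq.heappush(top, (score(candidate), tuple(candidate.items())))
                if len(top) > k:
                    heapq.heappop(top)  # discard the current worst candidate
            return sorted(top, reverse=True)

        choices = {"sensor": ["radar", "eo"], "shooter": ["ship", "air"],
                   "c2": ["local", "reachback"]}
        COST = {"radar": 3, "eo": 1, "ship": 5, "air": 4, "local": 1, "reachback": 2}
        score = lambda c: -sum(COST[v] for v in c.values())  # cheaper is better
        print(best_architectures(choices, score))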

  15. Parallelism and Scalability in an Image Processing Application

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2008-01-01

    The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor systems-on-chip. This means that parallel processing is required in application areas that traditionally have not used parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...

  17. A highly scalable information system as extendable framework solution for medical R&D projects.

    Holzmüller-Laue, Silke; Göde, Bernd; Stoll, Regina; Thurow, Kerstin

    2009-01-01

    For research projects in preventive medicine, flexible information management is needed that offers free planning and documentation of project-specific examinations. The system should allow simple, preferably automated data acquisition from several distributed sources (e.g., mobile sensors, stationary diagnostic systems, questionnaires, manual inputs) as well as effective data management, data use and analysis. An information system fulfilling these requirements has been developed at the Center for Life Science Automation (celisca). This system combines data from multiple investigations and multiple devices and displays them on a single screen. The integration of mobile sensor systems for comfortable, location-independent capture of time-based physiological parameters, and the possibility of observing these measurements directly in the system, enable new scenarios. The web-based information system presented in this paper is configurable by user interfaces. It covers medical process descriptions, operative process data visualizations, user-friendly process data processing, modern online interfaces (databases, web services, XML) as well as comfortable support for extended data analysis with third-party applications.

  18. A Microwave Photonic Interference Canceller: Architectures, Systems, and Integration

    Chang, Matthew P.

    This thesis is a comprehensive portfolio of work on a Microwave Photonic Self-Interference Canceller (MPC), a specialized optical system designed to eliminate interference from radio-frequency (RF) receivers. The novelty and value of the microwave photonic system lie in its ability to operate over bandwidths and frequencies that are orders of magnitude larger than what is possible using existing RF technology. The work begins, in 2012, with a discrete fiber-optic microwave photonic canceller, which prior work had demonstrated as a proof-of-concept, and culminates, in 2017, with the first ever monolithically integrated microwave photonic canceller. With an eye towards practical implementation, the thesis establishes novelty through three major project thrusts: (1) Extensive RF and system analysis to develop a full understanding of how, and through what mechanisms, MPCs affect an RF receiver. The first investigations of how a microwave photonic canceller performs in an actual wireless environment and a digital radio are also presented. (2) New architectures to improve the performance and functionality of MPCs, based on the analysis performed in Thrust 1. A novel balanced microwave photonic canceller architecture is developed and experimentally demonstrated. The balanced architecture shows significant improvements in link gain, noise figure, and dynamic range. Its main advantage is its ability to suppress common-mode noise and reduce noise figure by increasing the optical power. (3) Monolithic integration of the microwave photonic canceller into a photonic integrated circuit. This thrust presents the progression of integrating individual discrete devices into their semiconductor equivalent, as well as a full functional and RF analysis of the first ever integrated microwave photonic canceller.
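
    Functionally, self-interference cancellation subtracts a weighted (and, in hardware, delayed) copy of the known transmitted signal from the received signal; in the photonic canceller the tap weighting is performed optically. A baseband software sketch of the concept using a single LMS-adapted tap (illustrative, not the thesis hardware):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 5000
        tx = rng.standard_normal(n)          # known transmitted signal
        soi = 0.05 * rng.standard_normal(n)  # weak signal of interest
        rx = 0.9 * tx + soi                  # receiver swamped by self-interference

        w, mu = 0.0, 0.01                    # single adaptive tap and step size
        out = np.empty(n)
        for i in range(n):
            out[i] = rx[i] - w * tx[i]       # subtract estimated interference
            w += mu * out[i] * tx[i]         # LMS weight update toward w = 0.9

        print(f"residual power after cancellation: {np.var(out[-1000:]):.4f}")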

  19. Architecture of high reliable control systems using complex software

    Tallec, M.

    1990-01-01

    The problems involved in the use of complex software in control systems that must ensure a very high level of safety are examined. The first part gives a brief description of the PROSPER prototype. PROSPER stands for protection system for nuclear reactors with high performance. It was installed on a French nuclear power plant at the beginning of 1987 and has been working continually since that time. The prototype is realized on a multi-processor system. The processors communicate among themselves using interrupts and protected shared memories. On each processor, one or more protection algorithms are implemented. Those algorithms use data coming directly from the plant and, possibly, data computed by the other protection algorithms. Each processor makes its own acquisitions from the process and sends warning messages if an operating anomaly is detected. All algorithms are activated concurrently and asynchronously. The results are presented and the safety-related problems are detailed. The second part concerns measurement validation. First, we describe how the sensors' measurements will be used in a protection system. Then, a method based on artificial intelligence techniques (expert systems and neural networks) is proposed. The last part concerns the architecture of systems including both hardware and software: the different types of redundancy used so far are detailed, and a multi-processor architecture is proposed whose operating system can manage several tasks implemented on different processors, verify the correct operation of each of those tasks and of the related processors, and allow the system to carry on operating, even in a degraded manner, when a failure has been detected. [fr]

  20. Framework for developing a regional system architecture for intelligent transportation systems

    1997-01-01

    Defining an architecture for intelligent transportation systems (ITS) at the regional level, where most ITS deployment occurs, is constrained by jurisdictional, institutional, financial, political, and regulatory factors. These constraints provide op...

  1. Zero Suppression with Scalable Readout System (SRS) and APV25 FE Chip

    Goentoro, Steven Lukas

    2015-01-01

    Zero suppression is a very useful algorithm in data acquisition and transfer. In this report, I present the basic procedure for applying zero suppression in the standard DAQ system that we have (DATE and AMORE).
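
    The idea itself is simple: transmit only the channels whose amplitude stands significantly above the per-channel pedestal, rather than the full readout. A generic sketch (the SRS/APV25 implementation operates on pedestal-subtracted strip data with its own threshold logic):

        def zero_suppress(samples, pedestals, threshold):
            """Keep only (channel, amplitude) pairs significantly above the
            per-channel pedestal; everything else is dropped before transfer."""
            return [(ch, adc - pedestals[ch])
                    for ch, adc in enumerate(samples)
                    if adc - pedestals[ch] > threshold]

        samples   = [101, 100, 180, 99, 240, 102]  # raw ADC values per channel
        pedestals = [100, 100, 100, 100, 100, 100]
        print(zero_suppress(samples, pedestals, threshold=20))
        # [(2, 80), (4, 140)] -- six channels reduced to two entries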

  2. A Scalable Semantics-Based Verification System for Flight Critical Software, Phase II

    National Aeronautics and Space Administration — Flight-critical systems rely on an ever-increasing amount of software: the Boeing 777 contains over 2 million lines of code. Most of this code is written in the C...

  3. System Architecture and Mobility Management for Mobile Immersive Communications

    Mehran Dowlatshahi

    2007-01-01

    We propose a system design for delivery of immersive communications to mobile wireless devices based on a distributed proxy model. It is demonstrated that this architecture addresses key technical challenges for the delivery of these services, that is, constraints on link capacity and power consumption in mobile devices. However, additional complexity is introduced with respect to application layer mobility management. The paper proposes three possible methods for updating proxy assignments in response to mobility management and compares the performance of these methods.

  4. Indoor and Outdoor Mobile Mapping Systems for Architectural Surveys

    Campi, M.; di Luggo, A.; Monaco, S.; Siconolfi, M.; Palomba, D.

    2018-05-01

    This paper presents the results of architectural surveys carried out with mobile mapping systems. The data acquired through different instruments for both indoor and outdoor surveying are analyzed and compared. The study sample shows what is required for an acquisition in a dynamic mode indicating the criteria for the creation of a georeferenced network for indoor spaces, as well as the operational processes concerning data capture, processing, and management. The differences between a dynamic and static scan have been evaluated, with a comparison being made with the aerial photogrammetric survey of the same sample.

  5. System on chip module configured for event-driven architecture

    Robbins, Kevin; Brady, Charles E.; Ashlock, Tad A.

    2017-10-17

    A system on chip (SoC) module is described herein; the SoC module comprises a processor subsystem and a hardware logic subsystem. The processor subsystem and hardware logic subsystem are in communication with one another and transmit event messages between one another. The processor subsystem executes software actors, while the hardware logic subsystem includes hardware actors; the software actors and hardware actors conform to an event-driven architecture, such that the software actors receive and generate event messages and the hardware actors receive and generate event messages.
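
    The actor pattern described here can be sketched in software terms; on the SoC the hardware actors are realized in logic rather than code, and the names below are illustrative:

        import queue

        class Actor:
            """An actor reacts only to event messages on its inbox and emits
            new event messages; it shares no state with other actors."""
            def __init__(self, name, router):
                self.name, self.router, self.inbox = name, router, queue.Queue()
            def send(self, target, event):
                self.router[target].inbox.put((self.name, event))
            def step(self):
                sender, event = self.inbox.get()
                self.on_event(sender, event)
            def on_event(self, sender, event):
                raise NotImplementedError

        class Doubler(Actor):  # stands in for a hardware-logic actor
            def on_event(self, sender, event):
                self.send(sender, event * 2)

        class Printer(Actor):  # stands in for a software actor
            def on_event(self, sender, event):
                print(f"{self.name} got {event} from {sender}")

        router = {}
        router["hw"] = Doubler("hw", router)
        router["sw"] = Printer("sw", router)
        router["sw"].send("hw", 21)  # software actor emits an event
        router["hw"].step()          # hardware actor reacts, replies
        router["sw"].step()          # prints: sw got 42 from hw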

  6. Scalable High-Performance Parallel Design for Network Intrusion Detection Systems on Many-Core Processors

    Jiang, Hayang; Xie, Gaogang; Salamatian, Kavé; Mathy, Laurent

    2013-01-01

    Network Intrusion Detection Systems (NIDSes) face significant challenges coming from the relentless growth in network link speeds and the increasing complexity of threats. Both hardware-accelerated and parallel software-based NIDS solutions, based on commodity multi-core and GPU processors, have been proposed to overcome these challenges. ...

  7. Investigating the Role of Biogeochemical Processes in the Northern High Latitudes on Global Climate Feedbacks Using an Efficient Scalable Earth System Model

    Jain, Atul K. [Univ. of Illinois, Urbana-Champaign, IL (United States)

    2016-09-14

    The overall objective of this DOE-funded project is to combine scientific and computational challenges in climate modeling by expanding our understanding of the biogeophysical-biogeochemical processes and their interactions in the northern high latitudes (NHLs) using an earth system modeling (ESM) approach, and by adopting an adaptive parallel runtime system in an ESM to achieve efficient and scalable climate simulations through improved load-balancing algorithms.

  8. A proposed scalable design and simulation of wireless sensor network-based long-distance water pipeline leakage monitoring system.

    Almazyad, Abdulaziz S; Seddiq, Yasser M; Alotaibi, Ahmed M; Al-Nasheri, Ahmed Y; BenSaleh, Mohammed S; Obeid, Abdulfattah M; Qasim, Syed Manzoor

    2014-02-20

    Anomalies such as leakage and bursts in water pipelines have severe consequences for the environment and the economy. To ensure the reliability of water pipelines, they must be monitored effectively. Wireless Sensor Networks (WSNs) have emerged as an effective technology for monitoring critical infrastructure such as water, oil and gas pipelines. In this paper, we present a scalable design and simulation of a water pipeline leakage monitoring system using Radio Frequency IDentification (RFID) and WSN technology. The proposed design targets long-distance aboveground water pipelines that have special considerations for maintenance, energy consumption and cost. The design is based on deploying a group of mobile wireless sensor nodes inside the pipeline and allowing them to work cooperatively according to a prescheduled order. Under this mechanism, only one node is active at a time, while the other nodes are sleeping. The node whose turn is next wakes up according to one of three wakeup techniques: location-based, time-based and interrupt-driven. In this paper, mathematical models are derived for each technique to estimate the corresponding energy consumption and memory size requirements. The proposed equations are analyzed and the results are validated using simulation.
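
    The flavour of such per-technique models can be shown with a generic duty-cycling energy estimate. This is illustrative only; the paper derives separate equations for the location-based, time-based and interrupt-driven wakeup techniques.

        def node_energy_J(t_total_s, duty_cycle, p_active_W, p_sleep_W,
                          wakeups, e_wakeup_J):
            """Energy drawn by one mobile node over a deployment interval:
            active and sleep phases plus a fixed cost per wakeup event."""
            t_active = t_total_s * duty_cycle
            t_sleep = t_total_s - t_active
            return t_active * p_active_W + t_sleep * p_sleep_W + wakeups * e_wakeup_J

        # One of eight nodes is active at a time under the cooperative schedule:
        print(node_energy_J(t_total_s=86400, duty_cycle=1/8,
                            p_active_W=0.06, p_sleep_W=60e-6,
                            wakeups=1, e_wakeup_J=0.02))  # joules per day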

  9. Development and Validation of a Scalable Next-Generation Sequencing System for Assessing Relevant Somatic Variants in Solid Tumors

    Hovelson, Daniel H.; McDaniel, Andrew S.; Cani, Andi K.; Johnson, Bryan; Rhodes, Kate; Williams, Paul D.; Bandla, Santhoshi; Bien, Geoffrey; Choppa, Paul; Hyland, Fiona; Gottimukkala, Rajesh; Liu, Guoying; Manivannan, Manimozhi; Schageman, Jeoffrey; Ballesteros-Villagrana, Efren; Grasso, Catherine S.; Quist, Michael J.; Yadati, Venkata; Amin, Anmol; Siddiqui, Javed; Betz, Bryan L.; Knudsen, Karen E.; Cooney, Kathleen A.; Feng, Felix Y.; Roh, Michael H.; Nelson, Peter S.; Liu, Chia-Jen; Beer, David G.; Wyngaard, Peter; Chinnaiyan, Arul M.; Sadis, Seth; Rhodes, Daniel R.; Tomlins, Scott A.

    2015-01-01

    Next-generation sequencing (NGS) has enabled genome-wide personalized oncology efforts at centers and companies with the specialty expertise and infrastructure required to identify and prioritize actionable variants. Such approaches are not scalable, preventing widespread adoption. Likewise, most targeted NGS approaches fail to assess key relevant genomic alteration classes. To address these challenges, we predefined the catalog of relevant solid tumor somatic genome variants (gain-of-function or loss-of-function mutations, high-level copy number alterations, and gene fusions) through comprehensive bioinformatics analysis of >700,000 samples. To detect these variants, we developed the Oncomine Comprehensive Panel (OCP), an integrative NGS-based assay [...] with 95% accuracy for KRAS, epidermal growth factor receptor, and BRAF mutation detection as well as for ALK and TMPRSS2:ERG gene fusions. Associating positive variants with potential targeted treatments demonstrated that 6% to 42% of profiled samples (depending on cancer type) harbored alterations beyond routine molecular testing that were associated with approved or guideline-referenced therapies. As a translational research tool, OCP identified adaptive CTNNB1 amplifications/mutations in treated prostate cancers. Through predefining somatic variants in solid tumors and compiling associated potential treatment strategies, OCP represents a simplified, broadly applicable targeted NGS system with the potential to advance precision oncology efforts. PMID:25925381

  11. Critical early mission design considerations for lunar data systems architecture

    Hei, Donald J., Jr.; Stephens, Elaine

    1992-01-01

    This paper outlines recent early mission design activities for a lunar data systems architecture. Each major functional element is shown to be strikingly similar when viewed in a common reference system. While this similarity probably diminishes at lower levels of decomposition, the sub-functions can always be arranged into similar and dissimilar categories. Similar functions can be implemented as objects, implemented once and reused several times like today's advanced integrated circuits. This approach to mission data systems, applied to other NASA programs, may result in substantial agency implementation and maintenance savings. In today's zero-sum-game budgetary environment, this approach could help to enable a lunar exploration program in the next decade. Several early mission studies leading to such an object-oriented data systems design are recommended.

  12. Computing Architecture of the ALICE Detector Control System

    Augustinus, A; Moreno, A; Kurepin, A N; De Cataldo, G; Pinazza, O; Rosinský, P; Lechman, M; Jirdén, L S

    2011-01-01

    The ALICE Detector Control System (DCS) is based on a commercial SCADA product, running on a large Windows computer cluster. It communicates with about 1200 network attached devices to assure safe and stable operation of the experiment. In the presentation we focus on the design of the ALICE DCS computer systems. We describe the management of data flow, mechanisms for handling the large data amounts and information exchange with external systems. One of the key operational requirements is an intuitive, error proof and robust user interface allowing for simple operation of the experiment. At the same time the typical operator task, like trending or routine checks of the devices, must be decoupled from the automated operation in order to prevent overload of critical parts of the system. All these requirements must be implemented in an environment with strict security requirements. In the presentation we explain how these demands affected the architecture of the ALICE DCS.

  13. Considering Intermittent Dormancy in an Advanced Life Support Systems Architecture

    Sargusingh, Miriam J.; Perry, Jay L.

    2017-01-01

    Many advanced human space exploration missions being considered by the National Aeronautics and Space Administration (NASA) include concepts in which in-space systems cycle between inhabited and uninhabited states. Managing the life support system (LSS) may be particularly challenging during these periods of intermittent dormancy. A study to identify LSS management challenges and considerations relating to dormancy is described. The study seeks to define concepts suitable for addressing intermittent dormancy states and to evaluate whether the reference LSS architectures being considered by the Advanced Exploration Systems (AES) Life Support Systems Project (LSSP) are sufficient to support this operational state. The primary focus of the study is the mission concept considered to be the most challenging: a crewed Mars mission with an extensive surface stay. Results from this study are presented and discussed.

  14. Architecture and Fault Identification of Wide-area Protection System

    Yuxue Wang

    2012-09-01

    Wide-area protection systems (WAPS) are widely studied for the purpose of improving the performance of conventional backup protection. In this paper, the system architecture of WAPS is proposed and its key technologies are discussed in view of engineering projects. A mixed centralized-distributed structure, which is more suitable for WAPS in a limited power grid region, is obtained based on the advantages of the centralized and distributed structures. Furthermore, a regional distance protection algorithm is taken as an example to illustrate the functions of the constituent units. Faulted components can be detected based on multi-source information fusion in the algorithm. The algorithm not only improves the selectivity, rapidity, and reliability of relay protection but also has high fault-tolerance capability. A simulation of a 220 kV grid system in eastern Hubei province shows the effectiveness of the wide-area protection system presented in this paper.

  15. SCOS 2: A distributed architecture for ground system control

    Keyte, Karl P.

    The current generation of spacecraft ground control systems in use at the European Space Agency/European Space Operations Centre (ESA/ESOC) is based on SCOS 1. Such systems have become difficult to manage in both functional and financial terms. The next generation of spacecraft demands more flexibility in the use, configuration and distribution of control facilities, as well as functional capabilities matching those being planned for future missions. SCOS 2 is more than a successor to SCOS 1. Many of the shortcomings of the existing system were carefully analyzed by the user and technical communities, and a complete redesign was made. Different technologies were used in many areas, including the hardware platform, network architecture, user interfaces, and implementation techniques, methodologies and language. As far as possible, a flexible design approach was taken, using popular industry standards to provide vendor independence in both the hardware and software areas. This paper describes many of the new approaches taken in the architectural design of SCOS 2.

  16. Designing for scale: optimising the health information system architecture for mobile maternal health messaging in South Africa (MomConnect).

    Seebregts, Christopher; Dane, Pierre; Parsons, Annie Neo; Fogwill, Thomas; Rogers, Debbie; Bekker, Marcha; Shaw, Vincent; Barron, Peter

    2018-01-01

    MomConnect is a national initiative coordinated by the South African National Department of Health that sends text-based mobile phone messages free of charge to pregnant women who voluntarily register at any public healthcare facility in South Africa. We describe the system design and architecture of the MomConnect technical platform, planned as a nationally scalable and extensible initiative. It uses a health information exchange that can connect any standards-compliant electronic front-end application to any standards-compliant electronic back-end database. The implementation of the MomConnect technical platform, in turn, is a national reference application for electronic interoperability in line with the South African National Health Normative Standards Framework. The use of open content and messaging standards enables the architecture to include any application adhering to the selected standards. Its national implementation at scale demonstrates both the use of this technology and a key objective of global health information systems, which is to achieve implementation scale. The system's limited clinical information, initially, allowed the architecture to focus on the base standards and profiles for interoperability in a resource-constrained environment with limited connectivity and infrastructural capacity. Maintenance of the system requires mobilisation of national resources. Future work aims to use the standard interfaces to include data from additional applications as well as to extend and interface the framework with other public health information systems in South Africa. The development of this platform has also shown the benefits of interoperability at both an organisational and technical level in South Africa.

  17. The diversity of planetary system architectures: contrasting theory with observations

    Miguel, Y.; Guilera, O. M.; Brunini, A.

    2011-10-01

    In order to explain the observed diversity of planetary system architectures and relate this primordial diversity to the initial properties of the discs where they were born, we develop a semi-analytical model for computing planetary system formation. The model is based on the core instability model for the gas accretion of the embryos and the oligarchic growth regime for the accretion of the solid cores. Two regimes of planetary migration are also included. With this model, we consider different initial conditions based on recent results of protoplanetary disc observations to generate a variety of planetary systems. These systems are analysed statistically, exploring the importance of several factors that define the planetary system birth environment. We explore the relevance of the mass and size of the disc, metallicity, mass of the central star and time-scale of gaseous disc dissipation in defining the architecture of the planetary system. We also test different values of some key parameters of our model to find out which factors best reproduce the diverse sample of observed planetary systems. We assume different migration rates and initial disc profiles, in the context of a surface density profile motivated by similarity solutions. According to this, and based on recent protoplanetary disc observational data, we predict which systems are the most common in the solar neighbourhood. We intend to unveil whether our Solar system is a rarity or whether more planetary systems like our own are expected to be found in the near future. We also analyse which is the more favourable environment for the formation of habitable planets. Our results show that planetary systems with only terrestrial planets are the most common, being the only planetary systems formed when considering low-metallicity discs, which also represent the best environment for the development of rocky, potentially habitable planets. We also found that planetary systems like our own are not rare in the...

  18. Dynamic pricing by scalable energy management systems: Field experiences and simulation results using PowerMatcher

    Kok, J.K.; Roossien, B.; MacDougall, P.; Pruissen, O. van; Venekamp, G.; Kamphuis, R.; Laarakkers, J.; Warmer, C.

    2012-01-01

    Response of demand, distributed generation and electricity storage (e.g. vehicle-to-grid) will be crucial for power systems management in the future smart electricity grid. In this paper, we describe a smart grid technology that integrates demand and supply flexibility in the operation of the...

  19. PERC 2 High-End Computer System Performance: Scalable Science and Engineering

    Daniel Reed

    2006-10-15

    During the two years of SciDAC PERC-2, our activities centered largely on the development of new performance analysis techniques to enable efficient use of systems containing thousands or tens of thousands of processors. In addition, we continued our application engagement efforts and utilized our tools to study the performance of various SciDAC applications on a variety of HPC platforms.
