WorldWideScience

Sample records for high availability middleware

  1. Cloud object store for checkpoints of high performance computing applications using decoupling middleware

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-04-19

    Cloud object storage is enabled for checkpoints of high performance computing applications using a middleware process. A plurality of files, such as checkpoint files, generated by a plurality of processes in a parallel computing system are stored by obtaining said plurality of files from said parallel computing system; converting said plurality of files to objects using a log structured file system middleware process; and providing said objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
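The decoupling described above, files in, objects out, with an index recording how to reassemble them, can be sketched briefly. The function name, key layout, and the dict standing in for the cloud object store are illustrative assumptions, not the patented PLFS implementation:

```python
import os

def checkpoint_files_to_objects(paths, object_store, prefix="ckpt"):
    # Read each checkpoint file produced by the parallel job and store its
    # bytes as an object; an index object records how to map objects back
    # to the original files (the log-structured idea, greatly simplified).
    index = {}
    for i, path in enumerate(sorted(paths)):
        with open(path, "rb") as f:
            data = f.read()
        key = "%s/data/%06d" % (prefix, i)
        object_store[key] = data                       # PUT into object store
        index[os.path.basename(path)] = (key, len(data))
    # The index itself is stored as one more object.
    object_store[prefix + "/index"] = repr(index).encode()
    return index
```

In a real deployment the dict would be replaced by PUT calls to a cloud object storage service, and the middleware would run on a burst buffer node between the compute nodes and the store.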

  2. The Design of a Lightweight RFID Middleware

    Directory of Open Access Journals (Sweden)

    Fengqun Lin

    2009-10-01

    Radio Frequency Identification (RFID) middleware is often regarded as the central nervous system of RFID systems. In this paper, a lightweight RFID middleware is designed and implemented without the need for an Application Level Events (ALE) structure, and its implementation process is described using a typical commercial enterprise. A short review of current RFID middleware research and development is also included. The characteristics of RFID middleware are presented within a two-centric framework. Scenarios of RFID data integration based on the simplified structure are provided to illuminate the design and implementation of the lightweight middleware structure and its development process. The lightweight middleware is easy to maintain and extend because of its simplified, streamlined structure and short development cycle.
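One core function of such a lightweight middleware, smoothing the raw read stream so that applications see each tag once per time window rather than hundreds of times per second, can be sketched as follows (the class and method names are illustrative, not from the paper):

```python
class ReadFilter:
    """Deduplicate raw RFID reads: a reader reports the same tag many
    times per second, so the middleware forwards a tag at most once per
    time window. A sketch of one typical middleware filtering function."""

    def __init__(self, window=1.0):
        self.window = window
        self.last_seen = {}   # tag id -> timestamp of last forwarded read

    def accept(self, tag_id, timestamp):
        last = self.last_seen.get(tag_id)
        if last is not None and timestamp - last < self.window:
            return False      # duplicate within the window: drop it
        self.last_seen[tag_id] = timestamp
        return True           # first read in this window: forward it
```

Filtering close to the reader is what keeps the structure lightweight: applications receive clean events without needing a full ALE stack in between.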

  3. Cloud object store for archive storage of high performance computing data using decoupling middleware

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-06-30

    Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.

  4. An Access Control Framework for Reflective Middleware

    Institute of Scientific and Technical Information of China (English)

    Gang Huang; Lian-Shan Sun

    2008-01-01

    Reflective middleware opens up the implementation details of the middleware platform and applications at runtime to improve the adaptability of middleware-based systems. However, such openness brings new challenges to access control of middleware-based systems. Some users can access the system via reflective entities, which sometimes cannot be protected by the access control mechanisms of traditional middleware. To deliver high adaptability securely, reflective middleware should be equipped with proper access control mechanisms for the potential access control holes induced by reflection. One reason for integrating these mechanisms in the reflective middleware itself is that a goal of reflective middleware is to equip applications with reflection capabilities as transparently as possible. This paper studies how to design a reflective J2EE middleware, PKUAS, with access control in mind. First, a computation model of the reflective system is built to identify all possible access control points induced by reflection. Then a set of access control mechanisms, including a wrapper for MBeans and a hierarchy of Java class loaders, is provided to control the identified access control points. These mechanisms, together with the J2EE access control mechanism, form the access control framework for PKUAS. The paper evaluates the security and performance overheads of the framework qualitatively and quantitatively.
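The MBean-wrapper idea can be illustrated with a generic proxy that consults a policy before forwarding a reflective access. This is a minimal Python sketch of the pattern, not the PKUAS API; the policy format and names are assumptions:

```python
class AccessDenied(Exception):
    pass

class GuardedMBean:
    """Wrap a management object so that every attribute access is checked
    against a policy first: the same idea as wrapping MBeans to close the
    access control holes that reflection opens, reduced to a generic proxy."""

    def __init__(self, target, policy, principal):
        # Bypass __getattr__ while setting up the proxy's own fields.
        object.__setattr__(self, "_target", target)
        object.__setattr__(self, "_policy", policy)     # {principal: {attr, ...}}
        object.__setattr__(self, "_principal", principal)

    def __getattr__(self, name):
        allowed = self._policy.get(self._principal, set())
        if name not in allowed:
            raise AccessDenied(f"{self._principal} may not access {name}")
        return getattr(self._target, name)              # forward the call
```

Because every reflective access goes through the wrapper, the check cannot be bypassed even when the caller reaches the component through reflection rather than the normal API.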

  5. Context Aware Middleware Architectures: Survey and Challenges

    Directory of Open Access Journals (Sweden)

    Xin Li

    2015-08-01

    Context aware applications, which can adapt their behaviors to changing environments, are attracting more and more attention. To simplify the complexity of developing applications, context aware middleware, which introduces context awareness into the traditional middleware, is highlighted to provide a homogeneous interface involving generic context management solutions. This paper provides a survey of state-of-the-art context aware middleware architectures proposed during the period from 2009 through 2015. First, a preliminary background, such as the principles of context, context awareness, context modelling, and context reasoning, is provided for a comprehensive understanding of context aware middleware. On this basis, an overview of eleven carefully selected middleware architectures is presented and their main features explained. Then, thorough comparisons and analysis of the presented middleware architectures are performed based on technical parameters including architectural style, context abstraction, context reasoning, scalability, fault tolerance, interoperability, service discovery, storage, security & privacy, context awareness level, and cloud-based big data analytics. The analysis shows that there is actually no context aware middleware architecture that complies with all requirements. Finally, challenges are pointed out as open issues for future work.

  6. Consolidation and development roadmap of the EMI middleware

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Scientific research communities have benefited recently from the increasing availability of computing and data infrastructures with unprecedented capabilities for large scale distributed initiatives. These infrastructures are largely defined and enabled by the middleware they deploy. One of the major issues in the current usage of research infrastructures is the need to use similar but often incompatible middleware solutions. The European Middleware Initiative (EMI) is a collaboration of the major European middleware providers ARC, dCache, gLite and UNICORE. EMI aims to: deliver a consolidated set of middleware components for deployment in EGI, PRACE and other Distributed Computing Infrastructures; extend the interoperability between grids and other computing infrastructures; strengthen the reliability of the services; establish a sustainable model to maintain and evolve the middleware; fulfill the requirements of the user communities. This paper presents the consolidation and development objectives of the ...

  7. Development of DAQ-Middleware

    International Nuclear Information System (INIS)

    Yasu, Y; Nakayoshi, K; Sendai, H; Inoue, E; Tanaka, M; Suzuki, S; Satoh, S; Muto, S; Otomo, T; Nakatani, T; Uchida, T; Ando, N; Kotoku, T; Hirano, S

    2010-01-01

    DAQ-Middleware is a software framework for network-distributed DAQ systems based on Robot Technology Middleware, an international standard of the Object Management Group (OMG) in robotics whose implementation was developed by AIST. A DAQ-Component is the software unit of DAQ-Middleware. Basic components have already been developed. For example, Gatherer is a readout component, Logger is a data-logging component, Monitor is an analysis component, and Dispatcher connects to Gatherer as the input of the data path and to Logger/Monitor as the output. DAQ operator is a special component which controls those components over the control/status path. The control/status path and data path, as well as the XML-based system configuration and XML/HTTP-based system interface, are well defined in the DAQ-Middleware framework. DAQ-Middleware was adopted by experiments at J-PARC, and commissioning at first beam was successfully carried out. The functionality of DAQ-Middleware and its status at J-PARC are presented.
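The data path described above (Gatherer feeding Dispatcher, which fans events out to Logger and Monitor) can be mimicked with a few plain functions. The component names follow the abstract, but the code is an illustrative sketch, not the DAQ-Middleware API:

```python
def gatherer(raw_events):
    # Readout component: yields events from the (simulated) front-end.
    for ev in raw_events:
        yield ev

def dispatcher(source, sinks):
    # Fans each event out from the Gatherer to every downstream
    # component (Logger, Monitor) on the data path.
    for ev in source:
        for sink in sinks:
            sink(ev)

def make_logger(log):
    # Data-logging component: record every event as-is.
    return log.append

def make_monitor(hist):
    # Analysis component: histogram a (hypothetical) "adc" field.
    def monitor(ev):
        hist[ev["adc"]] = hist.get(ev["adc"], 0) + 1
    return monitor
```

In the real framework each component is a separate process connected over the network, with the DAQ operator driving state transitions over the control/status path; the fan-out structure, however, is the same.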

  8. A survey of secure middleware for the Internet of Things

    Directory of Open Access Journals (Sweden)

    Paul Fremantle

    2017-05-01

    The rapid growth of small Internet-connected devices, known as the Internet of Things (IoT), is creating a new set of challenges for building secure, private infrastructures. This paper reviews the current literature on the challenges and approaches to security and privacy in the Internet of Things, with a strong focus on how these aspects are handled in IoT middleware. We focus on IoT middleware because many systems are built from existing middleware and inherit the underlying security properties of the middleware framework. The paper is composed of three main sections. First, we propose a matrix of security and privacy threats for IoT. This matrix is used as the basis of a widespread literature review aimed at identifying requirements on IoT platforms and middleware. Second, we present a structured literature review of the available middleware and how security is handled in these middleware approaches, using the requirements from the first phase for evaluation. Finally, we draw a set of conclusions and identify further work in this area.

  9. Research of the Communication Middleware of the Yacht Supervision Management System Based on DDS

    Directory of Open Access Journals (Sweden)

    Wang Yan-Ru

    2016-01-01

    The communication middleware of the yacht supervision management system (YSM), which is based on DDS, is the communication management software connecting the yacht supervision center system and the ship monitoring system. To ensure high efficiency and high reliability of the communication middleware, this paper introduces, for the first time, a DDS communication framework into the yacht supervision system during software design and implementation, and designs and implements a more flexible and reliable communication management interface on top of the DDS communication framework. Practical tests show that every performance index of the DDS communication middleware of the yacht supervision management system meets the design requirements.
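The topic-based publish/subscribe pattern that DDS provides, and that such a communication management interface wraps, can be sketched in process. Real DDS adds QoS policies (reliability, deadlines, history); the class below is a generic illustration of the interface shape, not any vendor's DDS API:

```python
class TopicBus:
    """Minimal topic-based publish/subscribe, the communication pattern
    underlying DDS. Subscribers register a callback per topic; publishers
    send data samples to a topic without knowing who is listening."""

    def __init__(self):
        self._subs = {}   # topic name -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self._subs.setdefault(topic, []).append(callback)

    def publish(self, topic, sample):
        # Deliver the sample to every subscriber of the topic and
        # report how many subscribers received it.
        delivered = 0
        for cb in self._subs.get(topic, []):
            cb(sample)
            delivered += 1
        return delivered
```

The decoupling is what makes the middleware flexible: the supervision center and the ship monitoring system only agree on topic names and sample formats, never on each other's addresses.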

  10. A DAQ system for CAMAC controller CC/NET using DAQ-Middleware

    International Nuclear Information System (INIS)

    Inoue, E; Yasu, Y; Nakayoshi, K; Sendai, H

    2010-01-01

    DAQ-Middleware is a framework for DAQ systems which is based on RT-Middleware (Robot Technology Middleware) and dedicated to building DAQ systems. In recent years DAQ-Middleware has come into use as one of the DAQ system frameworks for next-generation particle physics experiments at KEK. DAQ-Middleware comprises DAQ-Components providing all the necessary basic functions of a DAQ and is easily extensible, so using DAQ-Middleware you can easily construct your own DAQ system by combining these components. As an example, we have developed a DAQ system for the CC/NET [1] using DAQ-Middleware by adding a GUI part and a CAMAC readout part. CC/NET, a CAMAC controller, was developed to achieve high-speed readout of CAMAC data. The basic design concept of CC/NET is to realize data taking through networks, so it is consistent with the DAQ-Middleware concept. We show how convenient it is to use DAQ-Middleware.

  11. Middleware Evaluation and Benchmarking for Use in Mission Operations Centers

    Science.gov (United States)

    Antonucci, Rob; Waktola, Waka

    2005-01-01

    Middleware technologies have been promoted as timesaving, cost-cutting alternatives to the point-to-point communication used in traditional mission operations systems. However, missions have been slow to adopt the new technology. The lack of existing middleware-based missions has given rise to uncertainty about middleware's ability to perform in an operational setting. Most mission architects are also unfamiliar with the technology and do not know the benefits and detriments to architectural choices - or even what choices are available. We will present the findings of a study that evaluated several middleware options specifically for use in a mission operations system. We will address some common misconceptions regarding the applicability of middleware-based architectures, and we will identify the design decisions and tradeoffs that must be made when choosing a middleware solution. The Middleware Comparison and Benchmark Study was conducted at NASA Goddard Space Flight Center to comprehensively evaluate candidate middleware products, compare and contrast the performance of middleware solutions with the traditional point-to-point socket approach, and assess data delivery and reliability strategies. The study focused on requirements of the Global Precipitation Measurement (GPM) mission, validating the potential use of middleware in the GPM mission ground system. The study was jointly funded by GPM and the Goddard Mission Services Evolution Center (GMSEC), a virtual organization for providing mission enabling solutions and promoting the use of appropriate new technologies for mission support. The study was broken into two phases. To perform the generic middleware benchmarking and performance analysis, a network was created with data producers and consumers passing data between themselves. The benchmark monitored the delay, throughput, and reliability of the data as the characteristics were changed.
Measurements were taken under a variety of topologies, data demands
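A benchmark of the kind described reduces to timestamping messages at the producer and the consumer. In this sketch an in-process queue stands in for the middleware channel, and the function and field names are illustrative:

```python
import time

def benchmark(transport, n_messages, payload=b"x" * 1024):
    # The producer timestamps each message as it is sent; the consumer
    # measures per-message delay and overall throughput. 'transport' is
    # any object with put()/get(), standing in for a middleware channel.
    # (A sketch: all sends happen before all receives, which a real
    # benchmark with concurrent producers and consumers would not do.)
    start = time.perf_counter()
    for _ in range(n_messages):
        transport.put((time.perf_counter(), payload))
    delays = []
    for _ in range(n_messages):
        sent, _ = transport.get()
        delays.append(time.perf_counter() - sent)
    elapsed = time.perf_counter() - start
    return {
        "messages": n_messages,
        "mean_delay_s": sum(delays) / n_messages,
        "throughput_msg_per_s": n_messages / elapsed,
    }
```

Varying the payload size, message rate, and topology while recording these metrics is exactly the parameter sweep the study describes, with the queue replaced by each candidate middleware product.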

  12. A principled approach to grid middleware

    DEFF Research Database (Denmark)

    Berthold, Jost; Bardino, Jonas; Vinter, Brian

    2011-01-01

    This paper provides an overview of MiG, a Grid middleware for advanced job execution, data storage and group collaboration in an integrated, yet lightweight solution using standard software. In contrast to most other Grid middlewares, MiG is developed with a particular focus on usability and minimal system requirements, applying strict principles to keep the middleware free of legacy burdens and overly complicated design. We provide an overview of MiG and describe its features in view of the Grid vision and its relation to more recent cloud computing trends.

  13. The next-generation ARC middleware

    DEFF Research Database (Denmark)

    Appleton, O.; Cameron, D.; Cernak, J.

    2010-01-01

    The Advanced Resource Connector (ARC) is a light-weight, non-intrusive, simple yet powerful Grid middleware capable of connecting highly heterogeneous computing and storage resources. ARC aims at providing general purpose, flexible, collaborative computing environments suitable for a range of use...

  14. Consolidation and development roadmap of the EMI middleware

    International Nuclear Information System (INIS)

    Kónya, B; Aiftimiei, C; Cecchi, M; Field, L; Fuhrmann, P; Nilsen, J K; White, J

    2012-01-01

    Scientific research communities have benefited recently from the increasing availability of computing and data infrastructures with unprecedented capabilities for large scale distributed initiatives. These infrastructures are largely defined and enabled by the middleware they deploy. One of the major issues in the current usage of research infrastructures is the need to use similar but often incompatible middleware solutions. The European Middleware Initiative (EMI) is a collaboration of the major European middleware providers ARC, dCache, gLite and UNICORE. EMI aims to: deliver a consolidated set of middleware components for deployment in EGI, PRACE and other Distributed Computing Infrastructures; extend the interoperability between grids and other computing infrastructures; strengthen the reliability of the services; establish a sustainable model to maintain and evolve the middleware; fulfil the requirements of the user communities. This paper presents the consolidation and development objectives of the EMI software stack covering the last two years. The EMI development roadmap is introduced along the four technical areas of compute, data, security and infrastructure. The compute area plan focuses on consolidation of standards and agreements through a unified interface for job submission and management, a common format for accounting, the wide adoption of GLUE schema version 2.0 and the provision of a common framework for the execution of parallel jobs. The security area is working towards a unified security model and lowering the barriers to Grid usage by allowing users to gain access with their own credentials. The data area is focusing on implementing standards to ensure interoperability with other grids and industry components and to reuse already existing clients in operating systems and open source distributions. 
One of the highlights of the infrastructure area is the consolidation of the information system services via the creation of a common information

  15. Consolidation and development roadmap of the EMI middleware

    Science.gov (United States)

    Kónya, B.; Aiftimiei, C.; Cecchi, M.; Field, L.; Fuhrmann, P.; Nilsen, J. K.; White, J.

    2012-12-01

    Scientific research communities have benefited recently from the increasing availability of computing and data infrastructures with unprecedented capabilities for large scale distributed initiatives. These infrastructures are largely defined and enabled by the middleware they deploy. One of the major issues in the current usage of research infrastructures is the need to use similar but often incompatible middleware solutions. The European Middleware Initiative (EMI) is a collaboration of the major European middleware providers ARC, dCache, gLite and UNICORE. EMI aims to: deliver a consolidated set of middleware components for deployment in EGI, PRACE and other Distributed Computing Infrastructures; extend the interoperability between grids and other computing infrastructures; strengthen the reliability of the services; establish a sustainable model to maintain and evolve the middleware; fulfil the requirements of the user communities. This paper presents the consolidation and development objectives of the EMI software stack covering the last two years. The EMI development roadmap is introduced along the four technical areas of compute, data, security and infrastructure. The compute area plan focuses on consolidation of standards and agreements through a unified interface for job submission and management, a common format for accounting, the wide adoption of GLUE schema version 2.0 and the provision of a common framework for the execution of parallel jobs. The security area is working towards a unified security model and lowering the barriers to Grid usage by allowing users to gain access with their own credentials. The data area is focusing on implementing standards to ensure interoperability with other grids and industry components and to reuse already existing clients in operating systems and open source distributions. 
One of the highlights of the infrastructure area is the consolidation of the information system services via the creation of a common information

  16. Multicast middleware for performance and topology analysis of multimedia grids

    Directory of Open Access Journals (Sweden)

    Jerry Z. Xie

    2017-04-01

    Since multicast reduces bandwidth consumption in multimedia grid computing, middleware for monitoring the performance and topology of multicast communications is important to the design and management of multimedia grid applications. However, current middleware technologies for multicast performance monitoring are still far from mature, and consistent approaches for obtaining multicast evaluation data are lacking. In this study, to serve as a clear guide for the design and implementation of multicast middleware, two algorithms are developed for organising all constituents in multicast communications and analysing multicast performance in two topologies, the 'multicast distribution tree' and the 'clusters distribution', and a definitive set of corresponding metrics that are comprehensive yet viable for evaluating multicast communications is also presented. Instead of using inference data from unicast measurements, the proposed middleware obtains measurement data for multicast traffic directly from multicast protocols in real time. Moreover, this study presents a middleware implementation integrated into a real Access Grid multicast communication infrastructure. The results of the implementation demonstrate substantial improvements in accuracy and timeliness in evaluating the performance and topology of a multicast network.
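Given a multicast distribution tree, topology metrics of the kind such middleware reports can be computed directly from the tree structure. The representation (a map from each node to the nodes it forwards to) and the metric choice here are illustrative assumptions, not the paper's metric set:

```python
def tree_metrics(children, root):
    """Compute simple topology metrics of a multicast distribution tree:
    depth (worst-case hop count from the source) and maximum fan-out
    (the replication load on the busiest forwarding node)."""
    max_depth = 0
    max_fanout = 0
    stack = [(root, 0)]                 # depth-first walk from the source
    while stack:
        node, depth = stack.pop()
        max_depth = max(max_depth, depth)
        kids = children.get(node, [])
        max_fanout = max(max_fanout, len(kids))
        for k in kids:
            stack.append((k, depth + 1))
    return {"depth": max_depth, "max_fanout": max_fanout}
```

With per-node traffic counters taken directly from the multicast protocol, the same walk can aggregate loss and delay along each branch rather than inferring them from unicast probes.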

  17. Evaluating ITER remote handling middleware concepts

    International Nuclear Information System (INIS)

    Koning, J.F.; Heemskerk, C.J.M.; Schoen, P.; Smedinga, D.; Boode, A.H.; Hamilton, D.T.

    2013-01-01

    Highlights: ► Remote Handling Study Centre: middleware system setup and modules built. ► Aligning to ITER RH Control System Layout: prototype of database, VR and simulator. ► OpenSplice DDS, ZeroC ICE messaging and object oriented middlewares reviewed. ► Windows network latency found problematic for semi-realtime control over the network. -- Abstract: Remote maintenance activities in ITER will be performed by a unique set of hardware systems, supported by an extensive software kit. A layer of middleware will manage and control a complex set of interconnections between teams of operators, hardware devices in various operating theatres, and databases managing tool and task logistics. The middleware is driven by constraints on amounts and timing of data like real-time control loops, camera images, and database access. The Remote Handling Study Centre (RHSC), located at FOM institute DIFFER, has a 4-operator work cell in an ITER relevant RH Control Room setup which connects to a virtual hot cell back-end. The centre is developing and testing flexible integration of the Control Room components, resulting in proof-of-concept tests of this middleware layer. SW components studied include generic human-machine interface software, a prototype of a RH operations management system, and a distributed virtual reality system supporting multi-screen, multi-actor, and multiple independent views. Real-time rigid body dynamics and contact interaction simulation software supports simulation of structural deformation, “augmented reality” operations and operator training. The paper presents generic requirements and conceptual design of middleware components and Operations Management System in the context of a RH Control Room work cell. The simulation software is analyzed for real-time performance and it is argued that it is critical for middleware to have complete control over the physical network to be able to guarantee bandwidth and latency to the components

  18. Evaluating ITER remote handling middleware concepts

    Energy Technology Data Exchange (ETDEWEB)

    Koning, J.F., E-mail: j.f.koning@differ.nl [FOM Institute DIFFER, Association EURATOM-FOM, Partner in the Trilateral Euregio Cluster and ITER-NL, PO Box 1207, 3430 BE Nieuwegein (Netherlands); Heemskerk, C.J.M.; Schoen, P.; Smedinga, D. [Heemskerk Innovative Technology, Noordwijk (Netherlands); Boode, A.H. [University of Applied Sciences InHolland, Alkmaar (Netherlands); Hamilton, D.T. [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France)

    2013-10-15

    Highlights: ► Remote Handling Study Centre: middleware system setup and modules built. ► Aligning to ITER RH Control System Layout: prototype of database, VR and simulator. ► OpenSplice DDS, ZeroC ICE messaging and object oriented middlewares reviewed. ► Windows network latency found problematic for semi-realtime control over the network. -- Abstract: Remote maintenance activities in ITER will be performed by a unique set of hardware systems, supported by an extensive software kit. A layer of middleware will manage and control a complex set of interconnections between teams of operators, hardware devices in various operating theatres, and databases managing tool and task logistics. The middleware is driven by constraints on amounts and timing of data like real-time control loops, camera images, and database access. The Remote Handling Study Centre (RHSC), located at FOM institute DIFFER, has a 4-operator work cell in an ITER relevant RH Control Room setup which connects to a virtual hot cell back-end. The centre is developing and testing flexible integration of the Control Room components, resulting in proof-of-concept tests of this middleware layer. SW components studied include generic human-machine interface software, a prototype of a RH operations management system, and a distributed virtual reality system supporting multi-screen, multi-actor, and multiple independent views. Real-time rigid body dynamics and contact interaction simulation software supports simulation of structural deformation, “augmented reality” operations and operator training. The paper presents generic requirements and conceptual design of middleware components and Operations Management System in the context of a RH Control Room work cell. The simulation software is analyzed for real-time performance and it is argued that it is critical for middleware to have complete control over the physical network to be able to guarantee bandwidth and latency to the components.

  19. High availability using virtualization

    International Nuclear Information System (INIS)

    Calzolari, Federico; Arezzini, Silvia; Ciampa, Alberto; Mazzoni, Enrico; Domenici, Andrea; Vaglini, Gigliola

    2010-01-01

    High availability has always been one of the main problems for a data center. Until now, high availability was achieved by host-per-host redundancy, a highly expensive method in terms of hardware and human costs. A new approach to the problem is offered by virtualization. Using virtualization, it is possible to achieve redundancy for all the services running in a data center. This new approach to high availability allows the running virtual machines to be distributed over a small number of servers by exploiting the features of the virtualization layer: starting, stopping and moving virtual machines between physical hosts. The 3RC system is based on a finite state machine, providing the ability to restart each virtual machine on any physical host, or reinstall it from scratch. A complete infrastructure has been developed to install the operating system and middleware in a few minutes. To virtualize the main servers of a data center, a new procedure has been developed to migrate physical hosts to virtual hosts. The whole Grid data center SNS-PISA is currently running in a virtual environment under the high availability system.
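The finite-state-machine approach can be sketched as follows. The states, the host-selection rule, and the class name are illustrative assumptions in the spirit of 3RC, not its actual implementation:

```python
class VMRecovery:
    """Finite state machine for VM-based high availability: each VM is
    RUNNING on some physical host; when that host fails, the VM moves to
    FAILED and is restarted on any surviving host."""

    def __init__(self, hosts):
        self.hosts = set(hosts)          # physical hosts currently alive
        self.vms = {}                    # vm name -> (state, host)

    def start(self, vm, host):
        assert host in self.hosts
        self.vms[vm] = ("RUNNING", host)

    def host_failed(self, host):
        # Transition every VM on the failed host and restart it elsewhere.
        self.hosts.discard(host)
        recovered = []
        for vm, (state, h) in list(self.vms.items()):
            if h == host and state == "RUNNING":
                self.vms[vm] = ("FAILED", None)
                if self.hosts:                       # any survivor will do
                    new_host = sorted(self.hosts)[0]
                    self.vms[vm] = ("RUNNING", new_host)
                    recovered.append((vm, new_host))
        return recovered
```

A real system would add a reinstall-from-scratch transition for VMs whose images are lost, plus heartbeat monitoring to drive host_failed automatically; the state machine above only captures the restart-elsewhere path.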

  20. Roadmap for the ARC Grid Middleware

    DEFF Research Database (Denmark)

    Kleist, Josva; Eerola, Paula; Ekelöf, Tord

    2006-01-01

    The Advanced Resource Connector (ARC) or the NorduGrid middleware is an open source software solution enabling production quality computational and data Grids, with special emphasis on scalability, stability, reliability and performance. Since its first release in May 2002, the middleware is depl...

  1. Study on the Context-Aware Middleware for Ubiquitous Greenhouses Using Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jeonghwang Hwang

    2011-04-01

    Wireless Sensor Network (WSN) technology is one of the key technologies for realising the ubiquitous society, and it could increase the productivity of agricultural and livestock products and secure transparency of distribution channels if successfully applied to the agricultural sector. Middleware, which connects WSN hardware, applications, and enterprise systems, is required to construct a ubiquitous agriculture environment combining WSN technology with agricultural applications, but studies of WSN middleware in the agricultural environment have been insufficient compared to other industries. This paper proposes a context-aware middleware to efficiently process data collected from ubiquitous greenhouses by applying WSN technology, used to implement combined services through the organic connectivity of data. The proposed middleware abstracts heterogeneous sensor nodes to integrate different forms of data, and provides intelligent context-aware, event service, and filtering functions to maximize the operability and scalability of the middleware. To evaluate the performance of the middleware, an integrated management system for ubiquitous greenhouses was implemented by applying the proposed middleware to an existing greenhouse, and it was tested by measuring the load (through CPU usage) and the response time to users' requests while the system is working.

  2. An RFID middleware for supply chain management

    Directory of Open Access Journals (Sweden)

    Artur Pinto Carneiro

    2011-12-01

    RFID (Radio Frequency Identification) systems for the identification and tracking of products and equipment have been progressively adopted as an essential tool for supply chain management, a production environment whose members usually share their logistic and management systems with each other. The development of supply chain RFID systems can therefore be strongly simplified through the inclusion of an intermediate software layer, the middleware, responsible for creating interfaces to integrate all the heterogeneous software components. In this article we present a case study developed at IPT (Instituto de Pesquisas Tecnológicas do Estado de São Paulo) which gave rise to a middleware prototype able to implement the required software integration on the supply chain of an electric power distribution company. The developed middleware manages the interactions with a heterogeneous group of mobile devices (cell phones, handhelds and data collectors) operated by different supply chain agents, which capture data associated with the various processes executed by a given piece of electric power distribution equipment during its life cycle and transfer those data to a central database in order to share them with all the corporate logistic and management systems.

  3. Wireless sensors in heterogeneous networked systems configuration and operation middleware

    CERN Document Server

    Cecilio, José

    2014-01-01

    This book presents an examination of the middleware that can be used to configure and operate heterogeneous node platforms and sensor networks. The middleware requirements for a range of application scenarios are compared and analysed. The text then defines a middleware architecture that has been integrated in an approach demonstrated live in a refinery. Features: presents a thorough introduction to the major concepts behind wireless sensor networks (WSNs); reviews the various application scenarios and existing middleware solutions for WSNs; discusses the middleware mechanisms necessary for heterogeneous...

  4. Middleware for big data processing: test results

    Science.gov (United States)

    Gankevich, I.; Gaiduchok, V.; Korkhov, V.; Degtyarev, A.; Bogdanov, A.

    2017-12-01

    Dealing with large volumes of data is resource-consuming work which is more and more often delegated not only to a single computer but to a whole distributed computing system at once. As the number of computers in a distributed system increases, the amount of effort put into effective management of the system grows. When the system reaches some critical size, much effort should be put into improving its fault tolerance. It is difficult to estimate when some particular distributed system needs such facilities for a given workload, so instead they should be implemented in a middleware which works efficiently with a distributed system of any size. It is also difficult to estimate whether a volume of data is large or not, so the middleware should also work with data of any volume. In other words, the purpose of the middleware is to provide facilities that adapt a distributed computing system to a given workload. In this paper we introduce such a middleware appliance. Tests show that this middleware is well-suited for typical HPC and big data workloads and that its performance is comparable with well-known alternatives.

  5. Sensor Network Middleware for Cyber-Physical Systems: Opportunities and Challenges

    Science.gov (United States)

    Singh, G.

    2015-12-01

    Wireless Sensor Network middleware typically provides abstractions for common tasks such as atomicity, synchronization and communication, with the intention of isolating the developers of distributed applications from lower-level details of the underlying platforms. Developing middleware that meets the performance constraints of applications is an important challenge. Although one would like to develop generic middleware services that can be used in a variety of different applications, efficiency considerations often force developers to design middleware and algorithms customized to specific operational contexts. This presentation will discuss techniques for designing middleware that is customizable to suit the performance needs of specific applications. We also discuss the challenges posed in designing middleware for pervasive sensor networks and cyber-physical systems, with specific focus on environmental monitoring.

  6. A Middleware Architecture for RFID-enabled traceability of air baggage

    Directory of Open Access Journals (Sweden)

    Bouhouche T.

    2017-01-01

    Full Text Available 1980 marked the start of a boom in radio-frequency identification (RFID) technology, initially associated with a growing need for traceability. In view of technological progress and lower costs, RFID's area of application became much broader and, today, multiple business sectors take advantage of this technology. However, in order to achieve the maximum benefits of RFID technology, the collected data should be delivered, under the best possible conditions, to all the applications that need to exploit them. For that, a dedicated middleware solution is required to ensure the collection of RFID data and their integration into information systems. The issues and key points of this integration, as well as a description of RFID technology, are summarized in the present paper, together with a new middleware architecture. We focus mainly on the components and design of our middleware MedRFID, a solution developed in our lab, which integrates mobility and provides extensibility, scalability, abstraction, ease of deployment and compatibility with IATA standards and EPCglobal standards. Moreover, we have developed an application (FindLuggage) allowing real-time tracking of luggage in the airport, based on the proposed middleware MedRFID.

  7. Middleware for the next generation Grid infrastructure

    CERN Document Server

    Laure, E; Prelz, F; Beco, S; Fisher, S; Livny, M; Guy, L; Barroso, M; Buncic, P; Kunszt, Peter Z; Di Meglio, A; Aimar, A; Edlund, A; Groep, D; Pacini, F; Sgaravatto, M; Mulmo, O

    2005-01-01

    The aim of the EGEE (Enabling Grids for E-Science in Europe) project is to create a reliable and dependable European Grid infrastructure for e-Science. The objective of the EGEE Middleware Re-engineering and Integration Research Activity is to provide robust middleware components, deployable on several platforms and operating systems, corresponding to the core Grid services for resource access, data management, information collection, authentication & authorization, resource matchmaking and brokering, and monitoring and accounting. For achieving this objective, we developed an architecture and design of the next generation Grid middleware leveraging experiences and existing components essentially from AliEn, EDG, and VDT. The architecture follows the service breakdown developed by the LCG ARDA group. Our strategy is to do as little original development as possible but rather re-engineer and harden existing Grid services. The evolution of these middleware components towards a Service Oriented Architecture ...

  8. Using Context Awareness for Self Management in Pervasive Service Middleware

    DEFF Research Database (Denmark)

    Zhang, Weishan; Hansen, Klaus Marius

    2011-01-01

    Context-awareness is an important feature in Ambient Intelligence environments, including in pervasive middleware. In addition, there is a growing trend and demand for self-management capabilities in pervasive middleware in order to provide high-level dependability for services. In this chapter, we propose to make use of context-awareness features to facilitate self-management. To achieve self-management, dynamic contexts, for example device and service statuses, are critical for taking self-management actions. Therefore, we consider dynamic contexts in context modeling, specifically as a set of OWL/SWRL ontologies, called the Self-Management for Pervasive Services (SeMaPS) ontologies. Self-management rules can be developed based on the SeMaPS ontologies to achieve self-management goals. Our approach is demonstrated within the LinkSmart pervasive middleware. Finally, our experiments …
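The kind of rule-driven self-management described above can be sketched as a small rule engine that evaluates dynamic context snapshots (device and service statuses) and returns self-healing actions. This is an illustrative stand-in for the SWRL rules, not the LinkSmart API; all names here are hypothetical.

```python
# Illustrative sketch: rule-based self-management driven by dynamic context.
# Each rule is a (predicate, action) pair evaluated against a context snapshot.

class SelfManager:
    def __init__(self):
        self.rules = []  # list of (predicate, action) pairs

    def add_rule(self, predicate, action):
        self.rules.append((predicate, action))

    def on_context_update(self, context):
        """Evaluate every rule against the new context snapshot."""
        return [action(context) for pred, action in self.rules if pred(context)]

mgr = SelfManager()
# Rule: if a service stops responding, restart it (self-healing).
mgr.add_rule(
    lambda ctx: ctx.get("service_status") == "unresponsive",
    lambda ctx: f"restart:{ctx['service_name']}",
)
# Rule: if battery is low, switch the device to power-saving mode.
mgr.add_rule(
    lambda ctx: ctx.get("battery_pct", 100) < 15,
    lambda ctx: f"power_save:{ctx['device_id']}",
)

print(mgr.on_context_update(
    {"service_name": "storage", "service_status": "unresponsive",
     "device_id": "node-7", "battery_pct": 9}))
# → ['restart:storage', 'power_save:node-7']
```

In an ontology-based system the predicates would be SWRL rules over OWL context individuals rather than Python lambdas, but the control flow (context update, rule evaluation, action) is the same.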

  9. Middleware Interoperability for Robotics: A ROS-YARP Framework

    Directory of Open Access Journals (Sweden)

    Plinio Moreno

    2016-10-01

    Full Text Available Middlewares are fundamental tools for progress in research and applications in robotics. They enable the integration of multiple heterogeneous sensing and actuation devices, as well as providing general purpose modules for key robotics functions (kinematics, navigation, planning). However, no existing middleware yet provides a complete set of functionalities for all robotics applications, and many robots may need to rely on more than one framework. This paper focuses on the interoperability between two of the most prevalent middlewares in robotics: YARP and ROS. Interoperability between middlewares should ideally allow users to execute existing software without the necessity of: (i) changing the existing code, and (ii) writing hand-coded "bridges" for each use-case. We propose a framework enabling the communication between existing YARP modules and ROS nodes for robotics applications in an automated way. Our approach generates the "bridging gap" code from a configuration file, connecting YARP ports and ROS topics through code-generated YARP Bottles. The configuration file must describe: (i) the sender entities, (ii) the way to group and convert the information read from the sender, (iii) the structure of the output message and (iv) the receiving entity. Our choice of many inputs to one output reflects the most common use-case in robotics applications, where examples include filtering, decision making and visualization. We support YARP/ROS and ROS/YARP sender/receiver configurations, which are demonstrated in a humanoid-on-wheels robot that uses YARP for upper body motor control and visual perception, and ROS for mobile base control and navigation algorithms.
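The config-driven, many-inputs-to-one-output bridging idea can be sketched with an in-memory message bus standing in for the YARP and ROS transports. The `Bus`, channel names, and `convert` function below are invented for illustration; the real framework generates the equivalent wiring from its configuration file.

```python
# Minimal sketch of a config-driven bridge: several sender "ports" are
# grouped, converted, and forwarded to one receiver "topic". In-memory
# queues stand in for actual YARP/ROS transports (names are hypothetical).

class Bus:
    """Toy message bus: named channels with subscriber callbacks."""
    def __init__(self):
        self.subs = {}
    def subscribe(self, channel, cb):
        self.subs.setdefault(channel, []).append(cb)
    def publish(self, channel, msg):
        for cb in self.subs.get(channel, []):
            cb(msg)

def make_bridge(bus, config):
    """Wire senders to one receiver according to a declarative config."""
    latest = {}
    def forward(channel, msg):
        latest[channel] = msg
        if len(latest) == len(config["senders"]):   # all inputs seen at least once
            out = config["convert"](latest)         # group + convert
            bus.publish(config["receiver"], out)
    for ch in config["senders"]:
        bus.subscribe(ch, lambda msg, ch=ch: forward(ch, msg))

bus = Bus()
received = []
bus.subscribe("/ros/pose", received.append)
make_bridge(bus, {
    "senders": ["yarp:/head/x", "yarp:/head/y"],   # (i) sender entities
    "receiver": "/ros/pose",                        # (iv) receiving entity
    "convert": lambda m: (m["yarp:/head/x"], m["yarp:/head/y"]),  # (ii)+(iii)
})
bus.publish("yarp:/head/x", 1.0)
bus.publish("yarp:/head/y", 2.0)
print(received)  # → [(1.0, 2.0)]
```

The four config fields mirror the four items the abstract says the configuration file must describe.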

  10. A Distributed Middleware Architecture for Attack-Resilient Communications in Smart Grids

    Energy Technology Data Exchange (ETDEWEB)

    Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Wu, Yifu [University of Akron; Wei, Jin [University of Akron

    2017-07-31

    Distributed Energy Resources (DERs) are being increasingly accepted as an excellent complement to traditional energy sources in smart grids. As most of these generators are geographically dispersed, dedicated communications investments for every generator are capital-cost prohibitive. Real-time distributed communications middleware, which supervises, organizes and schedules tremendous amounts of data traffic in smart grids with high penetrations of DERs, allows for the use of existing network infrastructure. In this paper, we propose a distributed attack-resilient middleware architecture that detects and mitigates congestion attacks by exploiting Quality of Experience (QoE) measures to complement the conventional Quality of Service (QoS) information. The simulation results illustrate the efficiency of our proposed communications middleware architecture.

  11. A survey of middleware for sensor and network virtualization.

    Science.gov (United States)

    Khalid, Zubair; Fisal, Norsheila; Rozaini, Mohd

    2014-12-12

    Wireless Sensor Networks (WSNs) are leading to a new paradigm of the Internet of Everything (IoE). WSNs have a wide range of applications but are usually deployed for a particular application. However, the future of WSNs lies in the aggregation and allocation of resources to serve diverse applications. WSN virtualization by middleware is an emerging concept that enables the aggregation of multiple independent heterogeneous devices, networks, radios and software platforms, and enhances application development. WSN virtualization middleware can further be categorized into sensor virtualization and network virtualization. Middleware for WSN virtualization poses several challenges, such as the efficient decoupling of networks, devices and software. In this paper, efforts have been put forward to provide an overview of previous and current middleware designs for WSN virtualization: the design goals, software architectures, abstracted services, testbeds and programming techniques. Furthermore, the paper also presents a proposed model, challenges and future opportunities for further research in middleware designs for WSN virtualization.

  12. A Survey of Middleware for Sensor and Network Virtualization

    Science.gov (United States)

    Khalid, Zubair; Fisal, Norsheila; Rozaini, Mohd.

    2014-01-01

    Wireless Sensor Networks (WSNs) are leading to a new paradigm of the Internet of Everything (IoE). WSNs have a wide range of applications but are usually deployed for a particular application. However, the future of WSNs lies in the aggregation and allocation of resources to serve diverse applications. WSN virtualization by middleware is an emerging concept that enables the aggregation of multiple independent heterogeneous devices, networks, radios and software platforms, and enhances application development. WSN virtualization middleware can further be categorized into sensor virtualization and network virtualization. Middleware for WSN virtualization poses several challenges, such as the efficient decoupling of networks, devices and software. In this paper, efforts have been put forward to provide an overview of previous and current middleware designs for WSN virtualization: the design goals, software architectures, abstracted services, testbeds and programming techniques. Furthermore, the paper also presents a proposed model, challenges and future opportunities for further research in middleware designs for WSN virtualization. PMID:25615737

  13. MinT: Middleware for Cooperative Interaction of Things.

    Science.gov (United States)

    Jeon, Soobin; Jung, Inbum

    2017-06-20

    This paper proposes an Internet of Things (IoT) middleware called Middleware for Cooperative Interaction of Things (MinT). MinT supports a fully distributed IoT environment in which IoT devices directly connect to peripheral devices, easily construct a local or global network, and share their data in an energy-efficient manner. MinT provides a sensor abstraction layer, a system layer and an interaction layer. These enable integrated sensing-device operation, efficient resource management, and active interconnection between peripheral IoT devices. In addition, MinT provides a high-level API so that developers can build IoT devices easily. We aim to enhance the energy efficiency and performance of IoT devices through the improvements offered by MinT's resource management and request processing. The experimental results show that the average request rate increased by 25% compared to Californium, a high-performance middleware for efficient interaction in IoT environments; average response time decreased by 90% when resource management was used; and power consumption decreased by up to 68%. Finally, the proposed platform can reduce the latency and power consumption of IoT devices.

  14. An Approach for Designing and Implementing Middleware in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Ronald Beaubrun

    2012-03-01

    Full Text Available In this paper, we propose an approach for designing and implementing a middleware for data dissemination in Wireless Sensor Networks (WSNs). The design considers three perspectives: device, network and application. Each application layer is implemented as an independent Component Object Model (COM) project, which offers portability, security, reusability and domain-expertise encapsulation. For the analysis of results, the percentage of success is used as the performance parameter. This analysis reveals that the middleware greatly increases the percentage of success of the messages disseminated in a WSN.

  15. A novel class of laboratory middleware. Promoting information flow and improving computerized provider order entry.

    Science.gov (United States)

    Grisson, Ricky; Kim, Ji Yeon; Brodsky, Victor; Kamis, Irina K; Singh, Balaji; Belkziz, Sidi M; Batra, Shalini; Myers, Harold J; Demyanov, Alexander; Dighe, Anand S

    2010-06-01

    A central duty of the laboratory is to inform clinicians about the availability and usefulness of laboratory testing. In this report, we describe a new class of laboratory middleware that connects the traditional clinical laboratory information system with the rest of the enterprise, facilitating information flow about testing services. We demonstrate the value of this approach in efficiently supporting an inpatient order entry application. We also show that order entry monitoring and iterative middleware updates can enhance ordering efficiency and promote improved ordering practices. Furthermore, we demonstrate the value of algorithmic approaches to improve the accuracy and completeness of laboratory test searches. We conclude with a discussion of design recommendations for middleware applications and discuss the potential role of middleware as a sharable, centralized repository of laboratory test information.
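The algorithmic search idea mentioned above, improving the accuracy and completeness of laboratory test searches, can be sketched as a lookup that combines substring matching with a small synonym map, so clinically common names still find the orderable test. The catalog entries, synonym table, and function name below are invented for illustration and are not from the described system.

```python
# Sketch: synonym-expanded search over a (hypothetical) lab test catalog,
# so searches by common clinical names still resolve to orderable tests.

CATALOG = {"HGB": "Hemoglobin",
           "WBC": "White Blood Cell Count",
           "TSH": "Thyroid Stimulating Hormone"}
SYNONYMS = {"hgb": "hemoglobin",
            "thyroid": "thyroid stimulating hormone"}

def search_tests(query):
    q = query.lower().strip()
    q = SYNONYMS.get(q, q)  # expand a known synonym to its canonical name
    return sorted(code for code, name in CATALOG.items()
                  if q in name.lower() or q == code.lower())

print(search_tests("thyroid"))  # → ['TSH']
print(search_tests("blood"))    # → ['WBC']
```

A production implementation would add tokenization, misspelling tolerance and ranking, but the synonym-expansion step is the core of making searches complete as well as accurate.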

  16. A Distributed Middleware Architecture for Attack-Resilient Communications in Smart Grids: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Yifu; Wei, Jin; Hodge, Bri-Mathias

    2017-05-24

    Distributed energy resources (DERs) are being increasingly accepted as an excellent complement to traditional energy sources in smart grids. Because most of these generators are geographically dispersed, dedicated communications investments for every generator are capital-cost prohibitive. Real-time distributed communications middleware - which supervises, organizes, and schedules tremendous amounts of data traffic in smart grids with high penetrations of DERs - allows for the use of existing network infrastructure. In this paper, we propose a distributed attack-resilient middleware architecture that detects and mitigates the congestion attacks by exploiting the quality of experience measures to complement the conventional quality of service information to effectively detect and mitigate congestion attacks. The simulation results illustrate the efficiency of our proposed communications middleware architecture.
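The detection principle above, using application-level QoE to complement network-level QoS, can be sketched as a check that flags congestion only when both degrade together. The thresholds and the combination rule below are illustrative assumptions, not values from the paper.

```python
# Sketch: flag a suspected congestion attack when network-level QoS metrics
# (delay, loss) and the application-level QoE score degrade together.
# Thresholds here are hypothetical, chosen only for illustration.

def congestion_suspected(delay_ms, loss_rate, qoe_score,
                         max_delay=200.0, max_loss=0.05, min_qoe=3.0):
    qos_degraded = delay_ms > max_delay or loss_rate > max_loss
    qoe_degraded = qoe_score < min_qoe   # e.g. a 1-5 mean-opinion-style score
    return qos_degraded and qoe_degraded  # both signals must agree

print(congestion_suspected(350.0, 0.02, 2.1))  # → True: slow AND poor experience
print(congestion_suspected(350.0, 0.02, 4.5))  # → False: QoE still acceptable
```

Requiring both signals reduces false alarms from transient network jitter that does not actually hurt the application.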

  17. Middleware Trends And Market Leaders 2011

    CERN Document Server

    Dworak, A; Ehm, F; Sliwinski, W; Sobczak, M

    2011-01-01

    The Controls Middleware (CMW) project was launched over ten years ago. Its main goal was to unify the middleware solutions used to operate CERN accelerators. An important part of the project, the equipment access library RDA, was based on CORBA, an unquestioned standard at the time. RDA became an operational and critical part of the infrastructure, yet the demanding run-time environment revealed some shortcomings of the system. The accumulation of fixes and workarounds led to unnecessary complexity. RDA became difficult to maintain and to extend. CORBA proved to be a cumbersome product rather than a panacea. Fortunately, many new transport frameworks have appeared since then. They boast a better design and support concepts that make them easy to use. Willing to profit from the new libraries, the CMW team updated the user requirements and, in those terms, investigated potential CORBA substitutes. The process consisted of several phases: a review of middleware solutions belonging to different categories (e.g. data-centric, o...

  18. Open client/server computing and middleware

    CERN Document Server

    Simon, Alan R

    2014-01-01

    Open Client/Server Computing and Middleware provides a tutorial-oriented overview of open client/server development environments and of how client/server computing is being done. This book analyzes an in-depth set of case studies about two different open client/server development environments - Microsoft Windows and UNIX - describing the architectures, the various product components, and how these environments interrelate. Topics include open systems and client/server computing, next-generation client/server architectures, principles of middleware, and an overview of ProtoGen+. The ViewPaint environment

  19. Towards Self-managed Pervasive Middleware using OWL/SWRL ontologies

    DEFF Research Database (Denmark)

    Zhang, Weishan; Hansen, Klaus Marius

    2008-01-01

    Self-management for pervasive middleware is important to realize the Ambient Intelligence vision. In this paper, we present an OWL/SWRL context-ontologies-based self-management approach for pervasive middleware, where OWL ontology is used as the means for context modeling. The context ontologies … We demonstrate the OWL/SWRL context ontologies based self-management approach with self-diagnosis in the Hydra middleware, using a device state machine and other dynamic context information, for example web service calls. The evaluations in terms of extensibility, performance and scalability show that this approach is effective …

  20. Enhancing Intelligence of a Product Line Enabled Pervasive Middleware

    DEFF Research Database (Denmark)

    Zhang, Weishan; Hansen, Klaus Marius; Kunz, Thomas

    2010-01-01

    To provide good support for user-centered application scenarios in pervasive computing environments, pervasive middleware must react to context changes and prepare services accordingly. At the same time, pervasive middleware should provide extended dependability via self-management capabilities, ...

  1. Experiences in messaging middle-ware for high-level control applications

    International Nuclear Information System (INIS)

    Wanga, N.; Shasharina, S.; Matykiewicz, J.; Rooparani Pundaleeka

    2012-01-01

    Existing high-level applications in accelerator control and modeling systems leverage many different languages, tools and frameworks that do not inter-operate with one another. As a result, the accelerator control community is moving toward the proven Service-Oriented Architecture (SOA) approach to address the inter-operability challenges among heterogeneous high-level application modules. Such an SOA approach enables developers to package various control subsystems and activities into 'services' with well-defined 'interfaces', and makes leveraging heterogeneous high-level applications via flexible composition possible. Examples of such applications include presentation panel clients based on Control System Studio (CSS) and middle-layer applications such as model/data servers. This paper presents our experiences in developing a demonstrative high-level application environment using emerging messaging middle-ware standards. In particular, we utilize new features in EPICS v4 and other emerging standards such as the Data Distribution Service (DDS) and the Extensible Type Interface by the Object Management Group. We first briefly review examples we developed previously. We then present our current effort to integrate DDS into such an SOA environment for control systems. Specifically, we illustrate how we are integrating DDS into CSS and developing a Python DDS mapping. (authors)
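The data-centric publish-subscribe model behind DDS, which the Python mapping mentioned above would expose, can be illustrated with a minimal in-memory sketch: typed topics, multiple readers, and delivery of the last sample to late joiners (in the spirit of a transient-local durability QoS). The `Topic` class and its methods are hypothetical and do not reproduce the OMG DDS API.

```python
# Toy data-centric publish-subscribe: typed topics with late-joiner delivery
# of the last written sample. Illustrative only; NOT the OMG DDS API.

class Topic:
    def __init__(self, name, sample_type):
        self.name, self.sample_type = name, sample_type
        self.readers, self.last = [], None

    def write(self, sample):
        if not isinstance(sample, self.sample_type):   # enforce the topic's type
            raise TypeError(f"{self.name} expects {self.sample_type.__name__}")
        self.last = sample
        for reader in self.readers:
            reader(sample)

    def subscribe(self, reader):
        self.readers.append(reader)
        if self.last is not None:   # late joiners get the last sample, akin to
            reader(self.last)       # a TRANSIENT_LOCAL durability QoS in DDS

setpoints = Topic("magnet/current", float)
setpoints.write(4.2)
seen = []
setpoints.subscribe(seen.append)  # subscribes after the write, still sees 4.2
setpoints.write(5.0)
print(seen)  # → [4.2, 5.0]
```

In real DDS, type enforcement comes from IDL-defined data types and late-joiner delivery from durability QoS policies; the sketch only shows why a data-centric model simplifies loosely coupled SOA components.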

  2. CROWN: A service grid middleware with trust management mechanism

    Institute of Scientific and Technical Information of China (English)

    HUAI Jinpeng; HU Chunming; LI Jianxin; SUN Hailong; WO Tianyu

    2006-01-01

    Based on a proposed Web service-based grid architecture, a service grid middleware system called CROWN is designed in this paper. As the two kernel components of the middleware, an overlay-based distributed grid resource management mechanism is proposed, and a policy-based distributed access control mechanism, with the capability of automatic negotiation of access control policies as well as trust management and negotiation, is also discussed. Experience with CROWN testbed deployment and application development shows that the middleware can support typical scenarios such as computing-intensive applications, data-intensive applications and mass information processing applications.

  3. PerPos: a Translucent Positioning Middleware Supporting Adaptation of Internal Positioning Processes

    DEFF Research Database (Denmark)

    Jensen, Jakob Langdal; Schougaard, Kari Rye; Kjærgaard, Mikkel Baun

    2010-01-01

    A positioning middleware benefits the development of location aware applications. Traditionally, positioning middleware provides position transparency in the sense that it hides low-level details. However, many applications require access to specific details of the usually hidden positioning … of application specific features that can be applied to the internal position processing of the middleware. To evaluate these capabilities we extend the internal position processing of the middleware with functionality supporting probabilistic position tracking and strategies for minimization of the energy …

  4. Middleware for multi-client and multi-server mobile applications

    NARCIS (Netherlands)

    Rocha, B.P.S.; Rezende, C.G.; Loureiro, A.A.F.

    2007-01-01

    With the popularization of mobile computing, many developers have faced problems due to the great heterogeneity of devices. To address this issue, we present in this work a middleware for multi-client and multi-server mobile applications. We assume that the middleware at the server side has no resource

  5. The role of scientific middleware in the future of HEP computing

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    In the 18 months since the CHEP03 meeting in San Diego, the HEP community deployed the current generation of grid technologies in a variety of settings. Legacy software as well as recently developed applications were interfaced with middleware tools to deliver end-to-end capabilities to HEP experiments in different stages of their life cycles. In a series of data challenges, reprocessing efforts and data distribution activities, the community demonstrated the benefits distributed computing can offer and the power a range of middleware tools can deliver. After running millions of jobs, moving terabytes of data, creating millions of files and resolving hundreds of bug reports, the community also exposed the limitations of these middleware tools. As we move to the next level of challenges, requirements and expectations, we must also examine the methods and procedures we employ to develop, implement and maintain our common suite of middleware tools. The talk will focus on the role common middleware ...

  6. Middleware Proxy: A Request-Driven Messaging Broker For High Volume Data Distribution

    CERN Document Server

    Sliwinski, W; Dworak, A

    2014-01-01

    Nowadays, all major infrastructures and data centres (commercial and scientific) make extensive use of the publish-subscribe messaging paradigm, which helps to decouple the message sender (publisher) from the message receiver (consumer). This paradigm is also heavily used in the CERN Accelerator Control system, in the Proxy broker, a critical part of the Controls Middleware (CMW) project. Proxy provides the aforementioned publish-subscribe facility and also supports the execution of synchronous read and write operations. Moreover, it enables service scalability and dramatically reduces the network resources and overhead (CPU and memory) on the publisher machine required to serve all subscriptions. Proxy was developed in modern C++, using state-of-the-art programming techniques (e.g. Boost) and following recommended software patterns for achieving low latency and high concurrency. The outstanding performance of the Proxy infrastructure was confirmed during the last 3 years by delivering the high volume of LHC equipment...
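The fan-out idea behind such a broker can be sketched simply: the proxy holds a single subscription towards the publisher and re-publishes each update to every consumer, so publisher-side load stays constant as the number of consumers grows. This is an illustrative sketch only; the real CMW Proxy is a C++ service that additionally supports synchronous read/write operations.

```python
# Sketch: a proxy broker that keeps exactly one upstream subscription and
# fans updates out to N consumers, shielding the publisher from N-fold load.

class Publisher:
    def __init__(self):
        self.subscribers = []
    def subscribe(self, cb):
        self.subscribers.append(cb)
    def publish(self, value):
        for cb in self.subscribers:
            cb(value)

class Proxy:
    def __init__(self, upstream):
        self.consumers = []
        upstream.subscribe(self._on_update)  # exactly ONE upstream subscription
    def subscribe(self, cb):
        self.consumers.append(cb)            # cheap: adds no load upstream
    def _on_update(self, value):
        for cb in self.consumers:            # fan out to all consumers
            cb(value)

device = Publisher()
proxy = Proxy(device)
inboxes = [[] for _ in range(3)]
for inbox in inboxes:
    proxy.subscribe(inbox.append)

device.publish(42)
print(len(device.subscribers), [box[0] for box in inboxes])  # → 1 [42, 42, 42]
```

However many consumers subscribe, the device still serves one subscription, which is the scalability property the abstract describes.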

  7. Middleware Support for Quality of Context in Pervasive Context-Aware Systems

    NARCIS (Netherlands)

    Sheikh, K.; Wegdam, M.; van Sinderen, Marten J.

    Middleware support for pervasive context-aware systems relieves context-aware applications from dealing with the complexity of context-specific operations such as context acquisition, aggregation, reasoning and distribution. The middleware decouples applications from the underlying heterogeneous

  8. CRAVE: a database, middleware and visualization system for phenotype ontologies.

    Science.gov (United States)

    Gkoutos, Georgios V; Green, Eain C J; Greenaway, Simon; Blake, Andrew; Mallon, Ann-Marie; Hancock, John M

    2005-04-01

    A major challenge in modern biology is to link genome sequence information to organismal function. In many organisms this is being done by characterizing phenotypes resulting from mutations. Efficiently expressing phenotypic information requires the combinatorial use of ontologies. However, tools are not currently available to visualize combinations of ontologies. Here we describe CRAVE (Concept Relation Assay Value Explorer), a package allowing storage, active updating and visualization of multiple ontologies. CRAVE is a web-accessible Java application that accesses an underlying MySQL database of ontologies via a Java persistent middleware layer (Chameleon). This maps the database tables into discrete Java classes and creates memory-resident, interlinked objects corresponding to the ontology data. These Java objects are accessed via calls through the middleware's application programming interface. CRAVE allows the simultaneous display and linking of multiple ontologies and searching using Boolean and advanced searches.

  9. XRFID: Design of an XML Based Efficient Middleware for RFID Systems

    Directory of Open Access Journals (Sweden)

    Indrajit Bhattacharya

    2012-11-01

    Existing middleware solutions show dramatic degradation in their performance when the number of simultaneously working readers increases. Our proposed solution tries to recover from that situation as well. One of the major issues for large-scale deployment of RFID systems is the design of a robust and flexible middleware system to interface various applications with the RFID readers. Most of the existing RFID middleware systems are costly, bulky, non-portable and heavily dependent on support software. Our work also provides flexibility for the easy addition and removal of applications and hardware.

  10. ECHO Services: Foundational Middleware for a Science Cyberinfrastructure

    Science.gov (United States)

    Burnett, Michael

    2005-01-01

    This viewgraph presentation describes ECHO, an interoperability middleware solution. It uses open, XML-based APIs and supports net-centric architectures and solutions. ECHO has a set of interoperable registries for both data (metadata) and services, and provides user accounts and a common infrastructure for the registries. It is built upon a layered architecture with an extensible infrastructure for supporting community-unique protocols. It has been operational since November 2002 and is available as open source.

  11. Real-world Bluetooth MANET Java Middleware

    DEFF Research Database (Denmark)

    Glenstrup, Arne John; Nielsen, Michael; Skytte, Frederik

    We present BEDnet, a Java based middleware for creating and maintaining a Bluetooth based mobile ad-hoc network (MANET). MANETs are key to nomadic computing: mobile units can set up spontaneous local networks when needed, removing the need for fixed network infrastructure, either as wireless access … Based on the Java JSR-82 specification, BEDnet is portable to a wide selection of mobile phones, and is publicly available as open source software. Experiments show that e.g. media streaming over Bluetooth is feasible, and that BEDnet is able to set up a scatternet within a couple of minutes …

  12. ROLE OF MIDDLEWARE FOR INTERNET OF THINGS: A STUDY

    OpenAIRE

    Soma Bandyopadhyay; Munmun Sengupta; Souvik Maiti; Subhajit Dutta

    2011-01-01

    Internet of Things (IoT) has been recognized as a part of the future internet and ubiquitous computing. It creates a true ubiquitous or smart environment. It demands a complex distributed architecture with numerous diverse components, including the end devices and applications, and association with their context. This article provides the significance of middleware systems for IoT. The middleware for IoT acts as a bond joining the heterogeneous domains of applications communicating over heterogeneous i...

  13. A Middleware with Comprehensive Quality of Context Support for the Internet of Things Applications.

    Science.gov (United States)

    Gomes, Berto de Tácio Pereira; Muniz, Luiz Carlos Melo; da Silva E Silva, Francisco José; Dos Santos, Davi Viana; Lopes, Rafael Fernandes; Coutinho, Luciano Reis; Carvalho, Felipe Oliveira; Endler, Markus

    2017-12-08

    Context aware systems are able to adapt their behavior according to the environment in which the user is. They can be integrated into an Internet of Things (IoT) infrastructure, allowing a better perception of the user's physical environment by collecting context data from sensors embedded in devices known as smart objects. An IoT extension called the Internet of Mobile Things (IoMT) suggests new scenarios in which smart objects and IoT gateways can move autonomously or be moved easily. In a comprehensive view, Quality of Context (QoC) is a term that can express the quality requirements of context aware applications. These requirements can be those related to the quality of the information provided by the sensors (e.g., accuracy, resolution, age, validity time) or those referring to the quality of the data distribution service (e.g., reliability, delay, delivery time). Some functionalities of context aware applications and/or the decision-making processes of these applications and their users depend on the level of quality of context available, which tends to vary over time for various reasons. Reviewing the literature, it is possible to verify that the quality of context support provided by IoT-oriented middleware systems still has limitations in relation to at least four relevant aspects: (i) quality of context provisioning; (ii) quality of context monitoring; (iii) support for heterogeneous device and technology management; (iv) support for reliable data delivery in mobility scenarios. This paper presents two main contributions: (i) a state-of-the-art survey specifically aimed at analyzing middleware with quality of context support and; (ii) a new middleware with comprehensive quality of context support for Internet of Things applications. The proposed middleware was evaluated and the results are presented and discussed in this article, which also shows a case study involving the development of a mobile remote patient monitoring application that was developed using the
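Quality-of-context filtering of the kind described above can be sketched by attaching QoC metadata (accuracy, timestamp, validity time) to each context reading and letting consumers drop samples whose quality is insufficient. The field names and thresholds below are illustrative assumptions, not the middleware's actual data model.

```python
# Sketch: context readings carry QoC metadata, and a consumer-side check
# accepts only readings that are both fresh and sufficiently accurate.
# Field names and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class ContextReading:
    value: float
    accuracy: float       # sensor error bound (smaller is better)
    timestamp: float      # seconds since epoch when the reading was taken
    validity_time: float  # seconds the reading remains usable (its "age" limit)

def usable(reading, now, max_accuracy):
    fresh = (now - reading.timestamp) <= reading.validity_time
    precise = reading.accuracy <= max_accuracy
    return fresh and precise

r = ContextReading(value=36.8, accuracy=0.1, timestamp=100.0, validity_time=30.0)
print(usable(r, now=110.0, max_accuracy=0.5))  # → True: fresh and precise
print(usable(r, now=200.0, max_accuracy=0.5))  # → False: reading has expired
```

In a patient-monitoring scenario like the case study mentioned, such a check lets the application ignore stale vital-sign readings rather than acting on them.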

  14. Exposing Position Uncertainty in Middleware

    DEFF Research Database (Denmark)

    Langdal, Jakob; Kjærgaard, Mikkel Baun; Toftkjær, Thomas

    2010-01-01

    Traditionally, the goal for positioning middleware is to provide developers with seamless position transparency, i.e., providing a connection between the application domain and the positioning sensors while hiding the complexity of the positioning technologies in use. A key part of the hidden com...

  15. Experience Supporting the Integration of LHC Experiments Software Framework with the LCG Middleware

    CERN Document Server

    Santinelli, Roberto

    2006-01-01

    The LHC experiments are currently preparing for data acquisition in 2007 and, because of the large amount of required computing and storage resources, they have decided to embrace the grid paradigm. The LHC Computing Grid project (LCG) provides and operates a computing infrastructure suitable for data handling, Monte Carlo production and analysis. While LCG offers a set of high level services, intended to be generic enough to accommodate the needs of different Virtual Organizations, the LHC experiments' software frameworks and applications are very specific and focused on their computing and data models. The LCG Experiment Integration Support (EIS) team works in close contact with the experiments, the middleware developers and the LCG certification and operations teams to integrate the underlying grid middleware with the experiment specific components. This strategic position between the experiments and the middleware suppliers allows the EIS team to play a key role at the communications level between the customers and the service provi...

  16. Interoperability of remote handling control system software modules at Divertor Test Platform 2 using middleware

    International Nuclear Information System (INIS)

    Tuominen, Janne; Rasi, Teemu; Mattila, Jouni; Siuko, Mikko; Esque, Salvador; Hamilton, David

    2013-01-01

    Highlights: ► The prototype DTP2 remote handling control system is a heterogeneous collection of subsystems, each realizing a functional area of responsibility. ► Middleware provides well-known, reusable solutions to problems, such as heterogeneity, interoperability, security and dependability. ► A middleware solution was selected and integrated with the DTP2 RH control system. The middleware was successfully used to integrate all relevant subsystems and functionality was demonstrated. -- Abstract: This paper focuses on the inter-subsystem communication channels in a prototype distributed remote handling control system at Divertor Test Platform 2 (DTP2). The subsystems are responsible for specific tasks and, over the years, their development has been carried out using various platforms and programming languages. The communication channels between subsystems have different priorities, e.g. very high messaging rate and deterministic timing or high reliability in terms of individual messages. Generally, a control system's communication infrastructure should provide interoperability, scalability, performance and maintainability. An attractive approach to accomplish this is to use a standardized and proven middleware implementation. The selection of a middleware can have a major cost impact in future integration efforts. In this paper we present development done at DTP2 using the Object Management Group's (OMG) standard specification for Data Distribution Service (DDS) for ensuring communications interoperability. DDS has gained a stable foothold especially in the military field. It lacks a centralized broker, thereby avoiding a single-point-of-failure. It also includes an extensive set of Quality of Service (QoS) policies. The standard defines a platform- and programming language independent model and an interoperability wire protocol that enables DDS vendor interoperability, allowing software developers to avoid vendor lock-in situations
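    The broker-less publish/subscribe pattern and per-subscriber QoS policies described above can be sketched in miniature. The "reliable" flag below is a toy stand-in for DDS reliability QoS (reliable subscribers buffer messages while disconnected, best-effort ones simply miss them); class names and semantics are simplified assumptions, not the DDS API.

```python
class Subscriber:
    def __init__(self, reliable=False):
        self.reliable = reliable   # reliable QoS: buffer while disconnected
        self.connected = True
        self.received = []
        self.backlog = []

    def deliver(self, msg):
        if self.connected:
            self.received.append(msg)
        elif self.reliable:
            self.backlog.append(msg)   # best-effort subscribers just miss it

    def reconnect(self):
        self.connected = True
        self.received.extend(self.backlog)
        self.backlog.clear()

class Topic:
    """Publishers and subscribers meet at a topic; there is no central broker."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, sub):
        self.subscribers.append(sub)

    def publish(self, msg):
        for sub in self.subscribers:
            sub.deliver(msg)
```

    The absence of a broker means a topic failure affects only its own participants, which is the single-point-of-failure avoidance the abstract highlights.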

  17. ProCom middleware

    OpenAIRE

    Kunčar, Jiří

    2013-01-01

    The goal of this thesis is to develop and implement parts of a middleware that provides necessary support for the execution of ProCom components on top of the real-time operating system FreeRTOS. The ProCom is a component model for embedded systems developed at Mälardalen University. The primary problem is finding an appropriate balance between the level of abstraction and thoughtful utilization of system resources in embedded devices. The defined target platform has limitations in comparison...

  18. ProCom middleware

    OpenAIRE

    Kuncar, Jiri

    2011-01-01

    The goal of this thesis is to develop and implement parts of a middleware that provides necessary support for the execution of ProCom components on top of the real-time operating system FreeRTOS. ProCom is a component model for embedded systems developed at Mälardalen University. The primary problem is finding an appropriate balance between the level of abstraction and thoughtful utilization of system resources in embedded devices. The defined target platform has limitations in comparison to ...

  19. A Cloud-Based Car Parking Middleware for IoT-Based Smart Cities: Design and Implementation

    Directory of Open Access Journals (Sweden)

    Zhanlin Ji

    2014-11-01

    Full Text Available This paper presents the generic concept of using cloud-based intelligent car parking services in smart cities as an important application of the Internet of Things (IoT) paradigm. This type of service will become an integral part of a generic IoT operational platform for smart cities due to its purely business-oriented features. A high-level view of the proposed middleware is outlined and the corresponding operational platform is illustrated. To demonstrate the provision of car parking services based on the proposed middleware, a cloud-based intelligent car parking system for use within a university campus is described along with details of its design, implementation, and operation. A number of software solutions, including Kafka/Storm/HBase clusters, OSGi web applications with distributed NoSQL, a rule engine, and mobile applications, are proposed to provide the ‘best’ car parking service experience to mobile users, following the Always Best Connected and best Served (ABC&S) paradigm.
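    An ABC&S-style selection rule of the kind such a system might apply can be sketched as a weighted score over free capacity and proximity. The weights, field names, and formula below are assumptions for illustration, not the paper's actual rule engine.

```python
def score(lot, w_free=0.6, w_dist=0.4):
    """Score a parking lot: fuller lots and farther lots score lower."""
    free_ratio = lot["free"] / lot["capacity"]
    proximity = 1.0 / (1.0 + lot["distance_m"] / 100.0)  # decays with distance
    return w_free * free_ratio + w_dist * proximity

def best_lot(lots):
    """Pick the 'best served' lot for the current request."""
    return max(lots, key=score)
```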

  20. PAQ: Persistent Adaptive Query Middleware for Dynamic Environments

    Science.gov (United States)

    Rajamani, Vasanth; Julien, Christine; Payton, Jamie; Roman, Gruia-Catalin

    Pervasive computing applications often entail continuous monitoring tasks, issuing persistent queries that return continuously updated views of the operational environment. We present PAQ, a middleware that supports applications' needs by approximating a persistent query as a sequence of one-time queries. PAQ introduces an integration strategy abstraction that allows composition of one-time query responses into streams representing sophisticated spatio-temporal phenomena of interest. A distinguishing feature of our middleware is the realization that the suitability of a persistent query's result is a function of the application's tolerance for accuracy weighed against the associated overhead costs. In PAQ, programmers can specify an inquiry strategy that dictates how information is gathered. Since network dynamics impact the suitability of a particular inquiry strategy, PAQ associates an introspection strategy with a persistent query, that evaluates the quality of the query's results. The result of introspection can trigger application-defined adaptation strategies that alter the nature of the query. PAQ's simple API makes developing adaptive querying systems easily realizable. We present the key abstractions, describe their implementations, and demonstrate the middleware's usefulness through application examples and evaluation.
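    The core PAQ idea, a persistent query approximated as a sequence of one-time queries, composed by an integration strategy and monitored by an introspection strategy that can trigger adaptation, can be sketched as follows. All names are illustrative assumptions, not PAQ's actual API.

```python
from statistics import mean

class PersistentQuery:
    def __init__(self, integrate, introspect, on_poor_quality):
        self.integrate = integrate            # composes snapshots into a stream view
        self.introspect = introspect          # judges the quality of the result
        self.on_poor_quality = on_poor_quality  # application-defined adaptation
        self.history = []
        self.adaptations = 0

    def step(self, environment):
        """Issue one one-time query (a snapshot) and integrate it into the view."""
        self.history.append(dict(environment))
        view = self.integrate(self.history)
        if not self.introspect(view, self.history):
            self.adaptations += 1
            self.on_poor_quality(self)
        return view

# Example strategies: average temperature over the last 3 snapshots;
# results are deemed poor until at least 2 snapshots have contributed.
query = PersistentQuery(
    integrate=lambda h: mean(s["temp"] for s in h[-3:]),
    introspect=lambda view, h: len(h) >= 2,
    on_poor_quality=lambda q: None,  # e.g. switch to a more aggressive inquiry strategy
)
```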

  1. An Attack-Resilient Middleware Architecture for Grid Integration of Distributed Energy Resources

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Yifu; Mendis, Gihan J.; He, Youbiao; Wei, Jin; Hodge, Bri-Mathias

    2017-05-04

    In recent years, the increasing penetration of Distributed Energy Resources (DERs) has made an impact on the operation of electric power systems. In the grid integration of DERs, data acquisition systems and communications infrastructure are crucial technologies for maintaining system economic efficiency and reliability. Since most of these generators are relatively small, dedicated communications investments for every generator are capital cost prohibitive. Combining real-time attack-resilient communications middleware with Internet of Things (IoT) technologies allows for the use of existing infrastructure. In this paper, we propose an intelligent communication middleware that utilizes Quality of Experience (QoE) metrics to complement the conventional Quality of Service (QoS) evaluation. Furthermore, our middleware employs deep learning techniques to detect and defend against congestion attacks. The simulation results illustrate the efficiency of our proposed communications middleware architecture.

  2. Smart grid communication comparison: Distributed control middleware and serialization comparison for the Internet of Things

    DEFF Research Database (Denmark)

    Petersen, Bo Søborg; Bindner, Henrik W.; Poulsen, Bjarne

    2017-01-01

    To solve the problems caused by intermittent renewable energy production, communication between Distributed Energy Resources (DERs) and system operators is necessary. The communication middleware and serialization used for communication are essential to ensure delivery of the messages within...... communication middleware and serialization format/library, aided by the authors' earlier work, which investigates the performance and characteristics of communication middleware and serialization independently. Given the performance criteria of the paper, ZeroMQ, YAMI4, and ICE are the middleware that performs...

  3. Toward a Comprehensive Framework for Evaluating the Core Integration Features of Enterprise Integration Middleware Technologies

    Directory of Open Access Journals (Sweden)

    Hossein Moradi

    2013-01-01

    Full Text Available To achieve greater automation of their business processes, organizations face the challenge of integrating disparate systems. In attempting to overcome this problem, organizations are turning to different kinds of enterprise integration. Implementing enterprise integration is a complex task involving both technological and business challenges, and it requires appropriate middleware technologies. Different enterprise integration solutions provide various functions and features, which complicates their evaluation process. To manage this complexity, appropriate tools for evaluating the core integration features of enterprise integration solutions are required. This paper proposes a new comprehensive framework for evaluating the core integration features of the enabling technologies of both intra-enterprise and inter-enterprise integration, which simplifies the process of evaluating the requirements met by enterprise integration middleware technologies. The proposed framework was enhanced using the structural and conceptual aspects of previous frameworks. It offers a new schema in which various enterprise integration middleware technologies are categorized into different classifications and are evaluated based on their level of support for the core integration features' criteria. These criteria include functional and supporting features. The proposed framework, a revised version of our previous framework in this area, extends the scope, structure and content of that framework.

  4. Interoperability of remote handling control system software modules at Divertor Test Platform 2 using middleware

    Energy Technology Data Exchange (ETDEWEB)

    Tuominen, Janne, E-mail: janne.m.tuominen@tut.fi [Tampere University of Technology, Department of Intelligent Hydraulics and Automation, Tampere (Finland); Rasi, Teemu; Mattila, Jouni [Tampere University of Technology, Department of Intelligent Hydraulics and Automation, Tampere (Finland); Siuko, Mikko [VTT, Technical Research Centre of Finland, Tampere (Finland); Esque, Salvador [F4E, Fusion for Energy, Torres Diagonal Litoral B3, Josep Pla2, 08019, Barcelona (Spain); Hamilton, David [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France)

    2013-10-15

    Highlights: ► The prototype DTP2 remote handling control system is a heterogeneous collection of subsystems, each realizing a functional area of responsibility. ► Middleware provides well-known, reusable solutions to problems, such as heterogeneity, interoperability, security and dependability. ► A middleware solution was selected and integrated with the DTP2 RH control system. The middleware was successfully used to integrate all relevant subsystems and functionality was demonstrated. -- Abstract: This paper focuses on the inter-subsystem communication channels in a prototype distributed remote handling control system at Divertor Test Platform 2 (DTP2). The subsystems are responsible for specific tasks and, over the years, their development has been carried out using various platforms and programming languages. The communication channels between subsystems have different priorities, e.g. very high messaging rate and deterministic timing or high reliability in terms of individual messages. Generally, a control system's communication infrastructure should provide interoperability, scalability, performance and maintainability. An attractive approach to accomplish this is to use a standardized and proven middleware implementation. The selection of a middleware can have a major cost impact in future integration efforts. In this paper we present development done at DTP2 using the Object Management Group's (OMG) standard specification for Data Distribution Service (DDS) for ensuring communications interoperability. DDS has gained a stable foothold especially in the military field. It lacks a centralized broker, thereby avoiding a single-point-of-failure. It also includes an extensive set of Quality of Service (QoS) policies. The standard defines a platform- and programming language independent model and an interoperability wire protocol that enables DDS vendor interoperability, allowing software developers to avoid vendor lock-in situations.

  5. Reconfiguration Service for Publish/Subscribe Middleware

    NARCIS (Netherlands)

    Zieba, Bogumil; Glandrup, Maurice; van Sinderen, Marten J.; Wegdam, M.

    2006-01-01

    Mission-critical, distributed systems are often designed as a set of distributed, components that interact using publish/subscribe middleware. Currently, in these systems, software components are usually statically allocated to the nodes to fulfil predictability, reliability requirements. However, a

  6. A middleware-based platform for the integration of bioinformatic services

    Directory of Open Access Journals (Sweden)

    Guzmán Llambías

    2015-08-01

    Full Text Available Performing bioinformatics experiments involves intensive access to distributed services and information resources through the Internet. Although existing tools facilitate the implementation of workflow-oriented applications, they lack capabilities to integrate services beyond low-scale applications, particularly services with heterogeneous interaction patterns and at a larger scale. This is particularly required to enable large-scale distributed processing of the biological data generated by massive sequencing technologies. On the other hand, such integration mechanisms are provided by middleware products like Enterprise Service Buses (ESBs), which enable the integration of distributed systems following a Service Oriented Architecture. This paper proposes an integration platform, based on enterprise middleware, to integrate bioinformatics services. It presents a multi-level reference architecture and focuses on ESB-based mechanisms to provide asynchronous communications, event-based interactions and data transformation capabilities. The paper presents a formal specification of the platform using the Event-B model.

  7. Managing RFID Sensors Networks with a General Purpose RFID Middleware

    Directory of Open Access Journals (Sweden)

    Enrique Valero

    2012-06-01

    Full Text Available RFID middleware is anticipated to be one of the main research areas in the field of RFID applications in the near future. The Data EPC Acquisition System (DEPCAS) is an original proposal designed by our group to transfer and apply fundamental ideas from Supervisory Control And Data Acquisition (SCADA) systems to the areas of RFID acquisition, processing and distribution systems. In this paper we focus on how to organize and manage generic RFID sensors (edge readers, readers, PLCs, etc.) inside the DEPCAS middleware. We denote by RFID Sensors Networks Management (RSNM) this part of DEPCAS, which is built on top of two new concepts introduced and developed in this work: MARC (Minimum Access Reader Command) and RRTL (RFID Reader Topology Language). MARC is an abstraction layer used to hide heterogeneous devices inside a homogeneous acquisition network. RRTL is a language to define RFID reader networks and to describe the relationships between them (concentrator, peer to peer, master/submaster).
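    A MARC-like abstraction layer is essentially the adapter pattern: each vendor-specific driver is wrapped behind one minimal read command so the acquisition network sees a homogeneous device. The class and method names below are assumptions for illustration, not the DEPCAS API.

```python
from abc import ABC, abstractmethod

class Reader(ABC):
    """The minimal, homogeneous command every wrapped device must expose."""
    @abstractmethod
    def read_tags(self) -> list: ...

class EdgeReaderDriver:
    """A vendor driver with its own interface."""
    def scan(self):
        return ["EPC-0001", "EPC-0002"]

class PlcDriver:
    """A PLC exposing tag data through polled registers."""
    def poll_registers(self):
        return {"tags": ["EPC-0003"]}

class EdgeReaderAdapter(Reader):
    def __init__(self, driver):
        self.driver = driver
    def read_tags(self):
        return self.driver.scan()

class PlcAdapter(Reader):
    def __init__(self, driver):
        self.driver = driver
    def read_tags(self):
        return self.driver.poll_registers()["tags"]

def collect(readers):
    """The middleware sees only the uniform read_tags() command."""
    return sorted(tag for r in readers for tag in r.read_tags())
```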

  8. Middleware Challenges for Cyber-Physical Systems

    DEFF Research Database (Denmark)

    Mohamed, Nader; Al-Jaroodi, Jameela; Lazarova-Molnar, Sanja

    2017-01-01

    enhancements for improving physical processes, the development of such complex systems composed of many distributed and heterogeneous components is extremely difficult. This is due to the many communication, computing, and networking challenges. Using an appropriate middleware that provides a framework...

  9. A Data Processing Middleware Based on SOA for the Internet of Things

    Directory of Open Access Journals (Sweden)

    Feng Wang

    2015-01-01

    Full Text Available The Internet of Things (IoT) emphasizes connecting every object around us by leveraging a variety of wireless communication technologies. Heterogeneous data fusion is widely considered to be a promising and urgent challenge in the data processing of the IoT. In this study, we first discuss the development of the concept of the IoT and give a detailed description of its architecture. We then design a middleware platform based on service-oriented architecture (SOA) for the integration of multisource heterogeneous information. A new research angle, flexible heterogeneous information fusion architecture for the IoT, is the theme of this paper. Experiments using environmental monitoring sensor data derived from an indoor environment are performed for system validation. Through theoretical analysis and experimental verification, the data processing middleware architecture shows better adaptation to multisensor and multistream application scenarios in the IoT, which improves the utilization value of heterogeneous data. The data processing middleware based on SOA for the IoT establishes a solid foundation for integration and interaction of diverse network data among heterogeneous systems in the future, which simplifies the complexity of the integration process and improves the reusability of components in the system.
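    The first step of heterogeneous data fusion is normalization: payloads with different shapes are mapped into one common record format before being fused. A minimal sketch, with illustrative field names and two hypothetical source formats:

```python
def normalize_json_sensor(msg):
    """Map a JSON-style payload into the common record format."""
    return {"sensor": msg["id"], "kind": msg["type"], "value": float(msg["val"])}

def normalize_csv_sensor(line):
    """Map a CSV-style payload into the same common record format."""
    sid, kind, value = line.split(",")
    return {"sensor": sid, "kind": kind, "value": float(value)}

def fuse(records, kind):
    """Fuse normalized records of one kind; here, a simple average."""
    values = [r["value"] for r in records if r["kind"] == kind]
    return sum(values) / len(values) if values else None
```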

  10. Synergy between Software Product Line and Intelligent Mobile Middleware

    DEFF Research Database (Denmark)

    Zhang, Weishan; Hansen, Klaus Marius

    2007-01-01

    with OWL ontology reasoning enhanced BDI (belief-desire-intention) agents in an ongoing research project called PLIMM (product line enabled intelligent mobile middleware), in which Frame based software product line techniques are applied. Besides the advantages of a software product line, our approach can...... handle ontology evolution and keep all related assets in a consistent state. Ontology evolution is a problem that has not been addressed by current mobile middleware. Another advantage is the ability to configure Jadex BDI agents for different purpose and enhance agent intelligence by adding logic...

  11. Evaluating ITER remote handling middleware concepts

    NARCIS (Netherlands)

    Koning, J. F.; Heemskerk, C. J. M.; Schoen, P.; Smedinga, D.; Boode, A. H.; Hamilton, D. T.

    2013-01-01

    Remote maintenance activities in ITER will be performed by a unique set of hardware systems, supported by an extensive software kit. A layer of middleware will manage and control a complex set of interconnections between teams of operators, hardware devices in various operating theatres, and

  12. Context-Aware Middleware for Pervasive Elderly Homecare

    DEFF Research Database (Denmark)

    Pung, Hung Keng; Gu, Tao; Xue, Wenwei

    2009-01-01

    The growing aging population faces a number of challenges, including rising medical cost, inadequate number of medical doctors and healthcare professionals, as well as higher incidence of misdiagnosis. There is an increasing demand for a better healthcare support for the elderly and one promising......-aware service management. It can be used to support the development and deployment of various homecare services for the elderly such as patient monitoring, location-based emergency response, anomalous daily activity detection, pervasive access to medical data and social networking. We have developed a prototype...... of the middleware and demonstrated the concept of providing a continuing-care to an elderly with the collaborative interactions spanning multiple physical spaces: person, home, office and clinic. The results of the prototype show that our middleware approach achieves good efficiency of context query processing...

  13. Distributed Data Service for Data Management in Internet of Things Middleware

    Directory of Open Access Journals (Sweden)

    Ruben Cruz Huacarpuma

    2017-04-01

    Full Text Available The development of the Internet of Things (IoT) is closely related to a considerable increase in the number and variety of devices connected to the Internet. Sensors have become a regular component of our environment, as have smart phones and other devices that continuously collect data about our lives even without our intervention. With such connected devices, a broad range of applications has been developed and deployed, including those dealing with massive volumes of data. In this paper, we introduce a Distributed Data Service (DDS) to collect and process data for IoT environments. One central goal of this DDS is to enable multiple and distinct IoT middleware systems to share common data services from a loosely-coupled provider. In this context, we propose a new specification of functionalities for a DDS and the conception of the corresponding techniques for collecting, filtering and storing data conveniently and efficiently in this environment. Another contribution is a data aggregation component that is proposed to support efficient real-time data querying. To validate its data collecting and querying functionalities and performance, the proposed DDS is evaluated in two case studies regarding a simulated smart home system, the first devoted to evaluating data collection and aggregation when the DDS interacts with the UIoT middleware, and the second aimed at comparing the DDS data collection with the same functionality implemented within the Kaa middleware.
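    The aggregation component's role, pre-computing per-window summaries so real-time queries avoid scanning raw streams, can be sketched as time-window bucketing. The bucketing scheme and field names are assumptions, not the DDS's actual design.

```python
from collections import defaultdict

class Aggregator:
    def __init__(self, window_seconds=60):
        self.window = window_seconds
        self.buckets = defaultdict(list)  # (sensor, window_start) -> raw values

    def ingest(self, sensor, timestamp, value):
        """Route a raw reading into its sensor/time-window bucket."""
        start = int(timestamp // self.window) * self.window
        self.buckets[(sensor, start)].append(value)

    def query(self, sensor, window_start):
        """Answer a query from the pre-bucketed values for one window."""
        values = self.buckets.get((sensor, window_start), [])
        if not values:
            return None
        return {"min": min(values), "max": max(values),
                "avg": sum(values) / len(values)}
```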

  14. Design of a Mobile Agent-Based Adaptive Communication Middleware for Federations of Critical Infrastructure Simulations

    Science.gov (United States)

    Görbil, Gökçe; Gelenbe, Erol

    The simulation of critical infrastructures (CI) can involve the use of diverse domain specific simulators that run on geographically distant sites. These diverse simulators must then be coordinated to run concurrently in order to evaluate the performance of critical infrastructures which influence each other, especially in emergency or resource-critical situations. We therefore describe the design of an adaptive communication middleware that provides reliable and real-time one-to-one and group communications for federations of CI simulators over a wide-area network (WAN). The proposed middleware is composed of mobile agent-based peer-to-peer (P2P) overlays, called virtual networks (VNets), to enable resilient, adaptive and real-time communications over unreliable and dynamic physical networks (PNets). The autonomous software agents comprising the communication middleware monitor their performance and the underlying PNet, and dynamically adapt the P2P overlay and migrate over the PNet in order to optimize communications according to the requirements of the federation and the current conditions of the PNet. Reliable communications is provided via redundancy within the communication middleware and intelligent migration of agents over the PNet. The proposed middleware integrates security methods in order to protect the communication infrastructure against attacks and provide privacy and anonymity to the participants of the federation. Experiments with an initial version of the communication middleware over a real-life networking testbed show that promising improvements can be obtained for unicast and group communications via the agent migration capability of our middleware.

  15. Middleware Platform Management based on Portable Interceptors

    NARCIS (Netherlands)

    Kath, O.; van Halteren, Aart; Stoinski, F.; Wegdam, M.; Fisher, M.; Ambler, A.; Calo, S.B.; Kar, G.

    2000-01-01

    Object middleware is an enabling technology for distributed applications that are required to operate in heterogeneous computing and communication environments. Although hiding distribution aspects to application designers proves beneficial, in an operational environment system managers may need

  16. Semantic Agent-Based Service Middleware and Simulation for Smart Cities

    Directory of Open Access Journals (Sweden)

    Ming Liu

    2016-12-01

    Full Text Available With the development of Machine-to-Machine (M2M) technology, a variety of embedded and mobile devices are integrated to interact via the platform of the Internet of Things, especially in the domain of smart cities. One of the primary challenges is that selecting the appropriate services or service combinations for upper layer applications is hard, due to the absence of a unified semantic service description pattern as well as a service selection mechanism. In this paper, we define a semantic service representation model with four key properties: Capability (C), Deployment (D), Resource (R) and IOData (IO). Based on this model, an agent-based middleware is built to support semantic service enablement. In this middleware, we present an efficient semantic service discovery and matching approach for the service combination process, which calculates the semantic similarity between services, and a heuristic algorithm to search the service candidates for a specific service request. Based on this design, we propose a simulation of virtual urban fire fighting, and the experimental results demonstrate the feasibility and efficiency of our design.
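    Matching a request against candidates over the four properties (C, D, R, IO) can be sketched as a weighted similarity. The per-property measure here is plain set overlap (Jaccard) and the weights are invented for illustration; the paper's middleware uses richer semantic measures.

```python
WEIGHTS = {"C": 0.4, "D": 0.2, "R": 0.2, "IO": 0.2}

def jaccard(a, b):
    """Set-overlap similarity of two term sets (1.0 when both are empty)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def similarity(request, candidate):
    """Weighted similarity over the C/D/R/IO properties."""
    return sum(w * jaccard(request[p], candidate[p]) for p, w in WEIGHTS.items())

def best_match(request, candidates):
    """Return the candidate service whose description best fits the request."""
    return max(candidates, key=lambda c: similarity(request, c["desc"]))
```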

  17. Supporting Seamful Development of Positioning Applications through Model Based Translucent Middleware

    DEFF Research Database (Denmark)

    Jensen, Jakob Langdal

    middleware, and how that middleware can provide developers with methods for controlling application qualities that are related to the positioning process. One key challenge is to understand how to support application development in a heterogeneous domain like that of positioning. Recent trends in application...... middleware used to support application development. We transfer the concept of tactics from the field of software architecture and apply it to specific qualities related to position based applications. We further argue that many of these tactics can be implemented as policies that can be enforced......Positioning technologies are becoming ever more pervasive, and they are used for a growing number of applications in a broad range of fields. We aim to support software developers who create position based applications. More specifically, how support can be provided through the use of specialized...

  18. A lightweight high availability strategy for Atlas LCG File Catalogs

    International Nuclear Information System (INIS)

    Martelli, Barbara; Salvo, Alessandro de; Anzellotti, Daniela; Rinaldi, Lorenzo; Cavalli, Alessandro; Pra, Stefano dal; Dell'Agnello, Luca; Gregori, Daniele; Prosperini, Andrea; Ricci, Pier Paolo; Sapunenko, Vladimir

    2010-01-01

    The LCG File Catalog is a key component of the LHC Computing Grid middleware [1], as it contains the mapping between Logical File Names and Physical File Names on the Grid. The Atlas computing model foresees multiple local LFCs housed in each Tier-1 and Tier-0, containing all information about files stored in the regional cloud. As the local LFC contents are presently not replicated anywhere, this turns into a dangerous single point of failure for all of the Atlas regional clouds. In order to solve this problem we propose a novel solution for high availability (HA) of Oracle based Grid services, obtained by combining an Oracle Data Guard deployment with a series of application level scripts. This approach has the advantage of being very easy to deploy and maintain, and represents a good candidate solution for all Tier-2s, which are usually small centres with little manpower dedicated to service operations. We also present the results of a wide range of functionality and performance tests run on a test-bed with characteristics similar to those required for production. The test-bed consists of a failover deployment between the Italian LHC Tier-1 (INFN - CNAF) and an Atlas Tier-2 located at INFN - Roma1. Moreover, we explain how the proposed strategy can be deployed on the present Grid infrastructure, without requiring any change to the middleware and in a way that is totally transparent to end users and applications.
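    The application-level side of such a failover, a watchdog that probes the primary catalog endpoint and repoints clients at the standby after repeated failures, can be sketched as follows. Endpoint names, thresholds, and the probe function are illustrative; in the real deployment such scripts sit alongside Oracle Data Guard, which handles the database replication itself.

```python
class CatalogFailover:
    def __init__(self, primary, standby, probe, max_failures=3):
        self.endpoints = [primary, standby]
        self.active = 0                 # index of the endpoint clients use
        self.probe = probe              # returns True if an endpoint is healthy
        self.failures = 0
        self.max_failures = max_failures

    def current_endpoint(self):
        return self.endpoints[self.active]

    def check(self):
        """One watchdog cycle: probe the active endpoint, fail over if needed."""
        if self.probe(self.current_endpoint()):
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.max_failures and self.active == 0:
                self.active = 1         # switch clients over to the standby
                self.failures = 0
        return self.current_endpoint()
```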

  19. Repeated Results Analysis for Middleware Regression Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Bulej, Lubomír; Kalibera, T.; Tůma, P.

    2005-01-01

    Roč. 60, - (2005), s. 345-358 ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005

  20. Middleware Reflexivo para la gestión de Aprendizajes Conectivistas en Ecologías de Conocimientos (eco-conectivismo)

    Directory of Open Access Journals (Sweden)

    Jose Aguilar

    2015-11-01

    Full Text Available This article proposes the architecture of a Reflective Middleware based on autonomic computing, whose objective is to manage a connectivist learning environment modeled under the knowledge-ecologies paradigm. The middleware is able to monitor the environment, which consists of a set of Personal Learning Environments perceived as self-organized objects that form ecosystems. The evolution of the learning process depends on the analysis of the learners' Web behavior and on an ecological survival scheme that promotes social relations, diversity and tolerance in a socialized knowledge domain. The middleware uses Web usage mining to characterize learner behavior, clustering techniques for the learning ecosystems, and a cognitive-collaborative recommender system for the self-adaptation of the learning strategies.

  1. Template Driven Code Generator for HLA Middleware

    NARCIS (Netherlands)

    Jansen, R.E.J.; Prins, L.M.; Huiskamp, W.

    2007-01-01

    HLA is the accepted standard for simulation interoperability. However, the HLA services and the API that is provided for these services are relatively complex from the user's point of view. Since the early days of HLA, federate developers have attempted to simplify their task by using middleware that

  2. Towards Service-Oriented Middleware for Fog and Cloud Integrated Cyber Physical Systems

    DEFF Research Database (Denmark)

    Mohamed, Nader; Lazarova-Molnar, Sanja; Jawhar, Imad

    2017-01-01

    An appropriate middleware is needed to provide infrastructural support and assist the development and operations of diverse CPS applications. This paper studies utilizing the service-oriented middleware (SOM) approach for CPS and discusses the advantages and requirements for such utilization. In addition, it proposes an SOM for CPS, called CPSWare. This middleware views all CPS components as a set of services and provides a service-based infrastructure to develop and operate CPS applications. This approach provides systemic solutions for solving many computing and networking issues in CPS. It also enables the integration of CPS with other systems such as Cloud and Fog Computing. Furthermore, as CPS can be developed for various applications at different scales, this paper provides a classification for CPS applications and discusses how CPSWare can effectively deal with the different issues in each of the applications.

  3. Technologies, Development Tools, and Patterns for Automatic Generation and Customization of Adaptable Distributed Real-Time and Embedded (DRE) Middleware

    National Research Council Canada - National Science Library

    Hatcliff, John; Dwyer, Matthew; Mizuno, Masaaki; Singh, Gurdip; Daugherty, Gary

    2005-01-01

    .... PCES work has shown how model-integrated computing and adaptive and flexible middleware frameworks can be applied for defining, analyzing, generating, and customizing large-scale high-assurance, high...

  4. The design and implementation of multi-source application middleware based on service bus

    Science.gov (United States)

    Li, Yichun; Jiang, Ningkang

    2017-06-01

    With the rapid development of the Internet of Things (IoT), real-time monitoring data are growing in both variety and volume. Aiming to take full advantage of these data, we designed and implemented an application middleware that not only supports the three-layer architecture of an IoT information system but also enables flexible configuration of multiple resource-access and other auxiliary modules. The middleware platform is lightweight, secure, aspect-oriented (AoP), distributed and real-time, allowing application developers to construct information processing systems for related areas in a short period. Its functions include, but are not limited to: pre-processing of data formats, definition of data entities, calling and handling of distributed services, and massive data processing. Experimental results show that the middleware outperforms some message queue frameworks to a degree, and that its throughput scales as the number of distributed nodes increases, while the code remains simple. Currently, the middleware is deployed in the system of the Shanghai Pudong environmental protection agency, where it has been a great success.

  5. Laboratory Automation and Middleware.

    Science.gov (United States)

    Riben, Michael

    2015-06-01

    The practice of surgical pathology is under constant pressure to deliver the highest quality of service, reduce errors, increase throughput, and decrease turnaround time while at the same time dealing with an aging workforce, increasing financial constraints, and economic uncertainty. Although not able to implement total laboratory automation, great progress continues to be made in workstation automation in all areas of the pathology laboratory. This report highlights the benefits and challenges of pathology automation, reviews middleware and its use to facilitate automation, and reviews the progress so far in the anatomic pathology laboratory. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. FTT-MA: A Flexible Time-Triggered Middleware Architecture for Time Sensitive, Resource-Aware AmI Systems

    Directory of Open Access Journals (Sweden)

    Federico Pérez

    2013-05-01

    Full Text Available There is an increasing number of Ambient Intelligence (AmI) systems that are time-sensitive and resource-aware. From healthcare to building and even home/office automation, it is now common to find systems combining interactive and sensing multimedia traffic with relatively simple sensors and actuators (door locks, presence detectors, RFIDs, HVAC, information panels, etc.). Many of these are today known as Cyber-Physical Systems (CPS). Quite frequently, these systems must be capable of (1) prioritizing different traffic flows (process data, alarms, non-critical data, etc.), (2) synchronizing actions in several distributed devices and, to a certain degree, (3) easing resource management (e.g., detecting faulty nodes, managing battery levels, handling overloads, etc.). This work presents FTT-MA, a high-level middleware architecture aimed at easing the design, deployment and operation of such AmI systems. FTT-MA ensures that both functional and non-functional aspects of the applications are met even during reconfiguration stages. The paper also proposes a methodology, together with a design tool, to create this kind of system. Finally, a sample case study is presented that illustrates the use of the middleware and the methodology proposed in the paper.

  7. Middleware and Web Services for the Collaborative Information Portal of NASA's Mars Exploration Rovers Mission

    Science.gov (United States)

    Sinderson, Elias; Magapu, Vish; Mak, Ronald

    2004-01-01

    We describe the design and deployment of the middleware for the Collaborative Information Portal (CIP), a mission critical J2EE application developed for NASA's 2003 Mars Exploration Rover mission. CIP enabled mission personnel to access data and images sent back from Mars, staff and event schedules, broadcast messages and clocks displaying various Earth and Mars time zones. We developed the CIP middleware in less than two years' time using cutting-edge technologies, including EJBs, servlets, JDBC, JNDI and JMS. The middleware was designed as a collection of independent, hot-deployable web services, providing secure access to back end file systems and databases. Throughout the middleware we enabled crosscutting capabilities such as runtime service configuration, security, logging and remote monitoring. This paper presents our approach to mitigating the challenges we faced, concluding with a review of the lessons we learned from this project and noting what we'd do differently and why.

  8. Middleware enabling computational self-reflection: exploring the need for and some costs of selfreflecting networks with application to homeland defense

    Science.gov (United States)

    Kramer, Michael J.; Bellman, Kirstie L.; Landauer, Christopher

    2002-07-01

    This paper will review and examine the definitions of Self-Reflection and Active Middleware. It will then illustrate a conceptual framework for understanding and enumerating the costs of Self-Reflection and Active Middleware at increasing levels of application, and review some applications of Self-Reflection and Active Middleware to simulations. Finally, it will consider the application of Self-Reflection and Active Middleware to sharing information among the organizations expected to participate in Homeland Defense, and the additional kinds of costs this entails.

  9. Use of Open Architecture Middleware for Autonomous Platforms

    Science.gov (United States)

    Naranjo, Hector; Diez, Sergio; Ferrero, Francisco

    2011-08-01

    Network Enabled Capabilities (NEC) is the vision for next-generation systems in the defence domain formulated by governments, the European Defence Agency (EDA) and the North Atlantic Treaty Organization (NATO). It involves the federation of military information systems, rather than just a simple interconnection, to provide each user with the "right information, right place, right time - and not too much". It defines openness, standardization and flexibility principles in military systems that are likewise applicable in civilian space applications. This paper provides the conclusions drawn from the "Architecture for Embarked Middleware" (EMWARE) study, funded by the European Defence Agency (EDA). The aim of the EMWARE project was to provide the information and understanding to facilitate the adoption of informed decisions regarding the specification and implementation of Open Architecture Middleware in future distributed systems, linking it with the NEC goal. The EMWARE project included the definition of four business cases, each devoted to a different field of application (Unmanned Aerial Vehicles, Helicopters, Unmanned Ground Vehicles and the Satellite Ground Segment).

  10. A Sensor Middleware for integration of heterogeneous medical devices.

    Science.gov (United States)

    Brito, M; Vale, L; Carvalho, P; Henriques, J

    2010-01-01

    In this paper, the architecture of a modular, service-oriented Sensor Middleware for data acquisition and processing is presented. The described solution was developed with the purpose of solving two increasingly relevant problems in the context of modern pHealth systems: i) to aggregate a number of heterogeneous, off-the-shelf devices from which clinical measurements can be acquired and ii) to provide access to and integration with an 802.15.4 network of wearable sensors. The modular nature of the Middleware provides the means to easily integrate pre-processing algorithms into processing pipelines, as well as new drivers for adding support for new sensor devices or communication technologies. Tests performed with both real and artificially generated data streams show that the presented solution is suitable for use on both a Windows PC and a Windows Mobile PDA with minimal overhead.
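The processing-pipeline idea described above (chaining pre-processing algorithms applied to acquired samples) can be sketched as follows; the stage functions and the fluent API are invented illustrations, not the middleware's actual interface:

```python
class Pipeline:
    """Chain of pre-processing stages applied in order to each sensor sample."""

    def __init__(self):
        self.stages = []

    def add_stage(self, fn):
        self.stages.append(fn)
        return self  # allow fluent chaining

    def process(self, sample):
        for stage in self.stages:
            sample = stage(sample)
        return sample

# Hypothetical stages: convert a raw millivolt reading to volts, then quantize.
ecg_pipeline = (Pipeline()
                .add_stage(lambda mv: mv / 1000.0)
                .add_stage(lambda v: round(v, 2)))
```

A new sensor driver would only need to push raw samples into such a pipeline, keeping device support and signal processing independently pluggable.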

  11. Application Requirements for Middleware for Mobile and Pervasive Systems

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Raatikainen, Kimmo; Nakajim, Tatsuo

    2002-01-01

    In this paper, we examine the requirements for future middleware to support mobile and pervasive applications and identify key research areas. We illustrate the research areas with requirements identified in two specific research projects concerning pervasive healthcare and home entertainment.

  12. Research on the Development and Trends of RFID Middleware

    Institute of Scientific and Technical Information of China (English)

    陈阳

    2015-01-01

    This paper describes the concepts and characteristics of RFID middleware, reviews the development of RFID middleware, and analyzes RFID middleware products developed by domestic and foreign research institutions and corporations. This paper also gives an overview of the current research status and development trends of RFID middleware.

  13. Smart TV-Smartphone Multiscreen Interactive Middleware for Public Displays

    Directory of Open Access Journals (Sweden)

    Francisco Martinez-Pabon

    2015-01-01

    Full Text Available A new generation of public displays demands highly interactive and multiscreen features to enrich people's experience in new pervasive environments. Traditionally, research on public display interaction has involved mobile devices as the main characters during the use of personal area network technologies such as Bluetooth or NFC. However, the emergent Smart TV model arises as an interesting alternative for the implementation of a new generation of public displays. This is due to its intrinsic connection capabilities with surrounding devices like smartphones or tablets. Nonetheless, the different approaches proposed by the most important vendors are still underdeveloped to support multiscreen and interaction capabilities for modern public displays, because most of them are intended for domestic environments. This research proposes multiscreen interactive middleware for public displays, which was developed from the principles of a loosely coupled interaction model, simplicity, stability, concurrency, low latency, and the usage of open standards and technologies. Moreover, a validation prototype is proposed in one of the most interesting public display scenarios: advertising.

  14. Demystifying embedded systems middleware understanding file systems, databases, virtual machines, networking and more

    CERN Document Server

    Noergaard, Tammy

    2010-01-01

    This practical technical guide to embedded middleware implementation offers a coherent framework that guides readers through all the key concepts necessary to gain an understanding of this broad topic. Big picture theoretical discussion is integrated with down-to-earth advice on successful real-world use via step-by-step examples of each type of middleware implementation. Technically detailed case studies bring it all together, by providing insight into typical engineering situations readers are likely to encounter. Expert author Tammy Noergaard keeps explanations as simple and readable as

  15. Model-based Translucency in Middleware: Supporting Seamful Development

    DEFF Research Database (Denmark)

    Schougaard, Kari Rye; Langdal, Jakob

    2010-01-01

    Traditionally, extensibility and adaptability in middleware is achieved through thorough design of the problem domain. The key variability points are modeled at design time to allow plugin of new functionality at different points in the system. Unfortunately, it is historically shown that the act...

  16. Mobile Phone Middleware Architecture for Energy and Context Awareness in Location-Based Services

    Directory of Open Access Journals (Sweden)

    Hiram Galeana-Zapién

    2014-12-01

    Full Text Available The disruptive innovation of smartphone technology has enabled the development of mobile sensing applications leveraged on specialized sensors embedded in the device. These novel mobile phone applications rely on advanced sensor information processes, which mainly involve raw data acquisition, feature extraction, data interpretation and transmission. However, the continuous accessing of sensing resources to acquire sensor data in smartphones is still very expensive in terms of energy, particularly due to the periodic use of power-intensive sensors, such as the Global Positioning System (GPS receiver. The key underlying idea to design energy-efficient schemes is to control the duty cycle of the GPS receiver. However, adapting the sensing rate based on dynamic context changes through a flexible middleware has received little attention in the literature. In this paper, we propose a novel modular middleware architecture and runtime environment to directly interface with application programming interfaces (APIs and embedded sensors in order to manage the duty cycle process based on energy and context aspects. The proposed solution has been implemented in the Android software stack. It allows continuous location tracking in a timely manner and in a transparent way to the user. It also enables the deployment of sensing policies to appropriately control the sampling rate based on both energy and perceived context. We validate the proposed solution taking into account a reference location-based service (LBS architecture. A cloud-based storage service along with online mobility analysis tools have been used to store and access sensed data. Experimental measurements demonstrate the feasibility and efficiency of our middleware, in terms of energy and location resolution.
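The key idea above, controlling the GPS duty cycle through sensing policies that react to energy level and perceived context, can be sketched as a simple policy function; the thresholds and scaling factors are illustrative assumptions, not the paper's actual policy:

```python
def next_sampling_interval(battery_pct, speed_mps, base_interval_s=30):
    """Stretch the GPS duty cycle when the user is static or battery is low.

    All thresholds and multipliers here are hypothetical examples of a
    sensing policy, not values taken from the described middleware.
    """
    interval = base_interval_s
    if speed_mps < 0.5:      # context: user appears stationary
        interval *= 4        # sample location far less often
    if battery_pct < 20:     # energy: conserve the remaining charge
        interval *= 2
    return interval
```

Deployed as a pluggable policy in the middleware's runtime, such a function would decide, after each fix, how long the GPS receiver may stay powered down before the next acquisition.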

  17. Grid Interoperation with ARC Middleware for the CMS Experiment

    CERN Document Server

    Edelmann, Erik; Frey, Jaime; Gronager, Michael; Happonen, Kalle; Johansson, Daniel; Kleist, Josva; Klem, Jukka; Koivumaki, Jesper; Linden, Tomas; Pirinen, Antti; Qing, Di

    2010-01-01

    The Compact Muon Solenoid (CMS) is one of the general purpose experiments at the CERN Large Hadron Collider (LHC). CMS computing relies on different grid infrastructures to provide computational and storage resources. The major grid middleware stacks used for CMS computing are gLite, Open Science Grid (OSG) and ARC (Advanced Resource Connector). Helsinki Institute of Physics (HIP) hosts one of the Tier-2 centers for CMS computing. CMS Tier-2 centers operate software systems for data transfers (PhEDEx), Monte Carlo production (ProdAgent) and data analysis (CRAB). In order to provide the Tier-2 services for CMS, HIP uses tools and components from both ARC and gLite grid middleware stacks. Interoperation between grid systems is a challenging problem and HIP uses two different solutions to provide the needed services. The first solution is based on gLite-ARC grid level interoperability. This allows to use ARC resources in CMS without modifying the CMS application software. The second solution is based on developing specific ARC plugins in CMS software.

  18. A microkernel middleware architecture for distributed embedded real-time systems

    OpenAIRE

    Pfeffer, Matthias

    2001-01-01

    A microkernel middleware architecture for distributed embedded real-time systems / T. Ungerer ... - In: Symposium on Reliable Distributed Systems : Proceedings : October 28 - 31, 2001, New Orleans, Louisiana, USA. - Los Alamitos, Calif. [u.a.] : IEEE Computer Soc., 2001. - S. 218-226

  19. Enterprise Middleware for Scientific Data

    Energy Technology Data Exchange (ETDEWEB)

    Thomson, Judi; Chappell, Alan R.; Almquist, Justin P.

    2003-02-27

    We describe an enterprise middleware system that integrates, from a user's perspective, data located on disparate data storage devices without imposing additional requirements upon those storage mechanisms. The system provides advanced search capabilities by exploiting a repository of metadata that describes the integrated data. This search mechanism integrates information from a collection of XML metadata documents with diverse schemas. Users construct queries using familiar search terms, and the enterprise system uses domain representations and vocabulary mappings to translate the user's query, expanding the search to include other potentially relevant data. The enterprise architecture allows flexibility with respect to domain-dependent processing of user data and metadata.
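The vocabulary-mapping query expansion described above can be sketched as follows; the mapping table and function names are hypothetical illustrations, not the system's actual domain representations:

```python
# Hypothetical vocabulary mapping from a user's search term to equivalent
# terms used in other metadata schemas; the real system derives such
# mappings from its domain representations.
VOCABULARY = {
    "lfn": ["logical file name"],
    "checkpoint": ["snapshot", "restart file"],
}

def expand_query(terms, mapping):
    """Expand each search term with its mapped equivalents, keeping order."""
    expanded = []
    for term in terms:
        expanded.append(term)
        expanded.extend(mapping.get(term.lower(), []))
    return expanded
```

The expanded term list would then be matched against the XML metadata repository, letting a query phrased in one schema's vocabulary retrieve data described under another.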

  20. Product Line Enabled Intelligent Mobile Middleware

    DEFF Research Database (Denmark)

    Zhang, Weishan; Kunz, Thomas; Hansen, Klaus Marius

    2007-01-01

    ...research project called PLIMM that focuses on user-centered application scenarios. PLIMM is designed based on software product line ideas, which make specialized customization and optimization possible for different purposes and hardware/software platforms. To enable intelligence, the middleware needs access to a range of context models. We model these contexts with OWL, focusing on user-centered concepts. The basic building block of PLIMM is the enhanced BDI agent, where OWL context ontology logic reasoning will add indirect beliefs to the belief sets. Our approach also addresses the handling...

  1. Executable Design Models for a Pervasive Healthcare Middleware System

    DEFF Research Database (Denmark)

    Jørgensen, Jens Bæk; Christensen, Søren

    2002-01-01

     UML is applied in the design of a pervasive healthcare middleware system for the hospitals in Aarhus County, Denmark. It works well for the modelling of static aspects of the system, but with respect to describing the behaviour, UML is not sufficient. This paper explains why and, as a remedy, su...

  2. A Cloud-Based Car Parking Middleware for IoT-Based Smart Cities: Design and Implementation

    Science.gov (United States)

    Ji, Zhanlin; Ganchev, Ivan; O'Droma, Máirtín; Zhao, Li; Zhang, Xueji

    2014-01-01

    This paper presents the generic concept of using cloud-based intelligent car parking services in smart cities as an important application of the Internet of Things (IoT) paradigm. This type of service will become an integral part of a generic IoT operational platform for smart cities due to its pure business-oriented features. A high-level view of the proposed middleware is outlined and the corresponding operational platform is illustrated. To demonstrate the provision of car parking services, based on the proposed middleware, a cloud-based intelligent car parking system for use within a university campus is described along with details of its design, implementation, and operation. A number of software solutions, including Kafka/Storm/Hbase clusters, OSGi web applications with distributed NoSQL, a rule engine, and mobile applications, are proposed to provide 'best' car parking service experience to mobile users, following the Always Best Connected and best Served (ABC&S) paradigm. PMID:25429416

  3. A cloud-based car parking middleware for IoT-based smart cities: design and implementation.

    Science.gov (United States)

    Ji, Zhanlin; Ganchev, Ivan; O'Droma, Máirtín; Zhao, Li; Zhang, Xueji

    2014-11-25

    This paper presents the generic concept of using cloud-based intelligent car parking services in smart cities as an important application of the Internet of Things (IoT) paradigm. This type of service will become an integral part of a generic IoT operational platform for smart cities due to its pure business-oriented features. A high-level view of the proposed middleware is outlined and the corresponding operational platform is illustrated. To demonstrate the provision of car parking services, based on the proposed middleware, a cloud-based intelligent car parking system for use within a university campus is described along with details of its design, implementation, and operation. A number of software solutions, including Kafka/Storm/Hbase clusters, OSGi web applications with distributed NoSQL, a rule engine, and mobile applications, are proposed to provide 'best' car parking service experience to mobile users, following the Always Best Connected and best Served (ABC&S) paradigm.

  4. A collaborative network middleware project by Lambda Station, TeraPaths, and Phoebus

    International Nuclear Information System (INIS)

    Bobyshev, A.; Bradley, S.; Crawford, M.; DeMar, P.; Katramatos, D.; Shroff, K.; Swany, M.; Yu, D.

    2010-01-01

    The TeraPaths, Lambda Station, and Phoebus projects, funded by the US Department of Energy, have successfully developed network middleware services that establish on demand and manage true end-to-end, Quality-of-Service (QoS) aware, virtual network paths across multiple administrative network domains, select network paths and gracefully reroute traffic over these dynamic paths, and streamline traffic between packet and circuit networks using transparent gateways. These services improve network QoS and performance for applications, playing a critical role in the effective use of emerging dynamic circuit network services. They provide interfaces to applications, such as dCache SRM, translate network service requests into network device configurations, and coordinate with each other to set up end-to-end network paths. The End Site Control Plane Subsystem (ESCPS) builds upon the success of the three projects by combining their individual capabilities into the next generation of network middleware. ESCPS addresses challenges such as cross-domain control plane signalling and interoperability, authentication and authorization in a Grid environment, topology discovery, and dynamic status tracking. The new network middleware will take full advantage of the perfSONAR monitoring infrastructure and the Inter-Domain Control plane efforts and will be deployed and fully vetted in the Large Hadron Collider data movement environment.

  5. Grid Interoperation with ARC middleware for the CMS experiment

    International Nuclear Information System (INIS)

    Edelmann, Erik; Groenager, Michael; Johansson, Daniel; Kleist, Josva; Field, Laurence; Qing, Di; Frey, Jaime; Happonen, Kalle; Klem, Jukka; Koivumaeki, Jesper; Linden, Tomas; Pirinen, Antti

    2010-01-01

    The Compact Muon Solenoid (CMS) is one of the general purpose experiments at the CERN Large Hadron Collider (LHC). CMS computing relies on different grid infrastructures to provide computational and storage resources. The major grid middleware stacks used for CMS computing are gLite, Open Science Grid (OSG) and ARC (Advanced Resource Connector). Helsinki Institute of Physics (HIP) hosts one of the Tier-2 centers for CMS computing. CMS Tier-2 centers operate software systems for data transfers (PhEDEx), Monte Carlo production (ProdAgent) and data analysis (CRAB). In order to provide the Tier-2 services for CMS, HIP uses tools and components from both ARC and gLite grid middleware stacks. Interoperation between grid systems is a challenging problem and HIP uses two different solutions to provide the needed services. The first solution is based on gLite-ARC grid level interoperability. This allows to use ARC resources in CMS without modifying the CMS application software. The second solution is based on developing specific ARC plugins in CMS software.

  6. Grid Interoperation with ARC middleware for the CMS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Edelmann, Erik; Groenager, Michael; Johansson, Daniel; Kleist, Josva [Nordic DataGrid Facility, Kastruplundgade 22, 1., DK-2770 Kastrup (Denmark); Field, Laurence; Qing, Di [CERN, CH-1211 Geneve 23 (Switzerland); Frey, Jaime [University of Wisconsin-Madison, 1210 W. Dayton St., Madison, WI (United States); Happonen, Kalle; Klem, Jukka; Koivumaeki, Jesper; Linden, Tomas; Pirinen, Antti, E-mail: Jukka.Klem@cern.c [Helsinki Institute of Physics, PO Box 64, FIN-00014 University of Helsinki (Finland)

    2010-04-01

    The Compact Muon Solenoid (CMS) is one of the general purpose experiments at the CERN Large Hadron Collider (LHC). CMS computing relies on different grid infrastructures to provide computational and storage resources. The major grid middleware stacks used for CMS computing are gLite, Open Science Grid (OSG) and ARC (Advanced Resource Connector). Helsinki Institute of Physics (HIP) hosts one of the Tier-2 centers for CMS computing. CMS Tier-2 centers operate software systems for data transfers (PhEDEx), Monte Carlo production (ProdAgent) and data analysis (CRAB). In order to provide the Tier-2 services for CMS, HIP uses tools and components from both ARC and gLite grid middleware stacks. Interoperation between grid systems is a challenging problem and HIP uses two different solutions to provide the needed services. The first solution is based on gLite-ARC grid level interoperability. This allows to use ARC resources in CMS without modifying the CMS application software. The second solution is based on developing specific ARC plugins in CMS software.

  7. Design and implementation of distributed multimedia surveillance system based on object-oriented middleware

    Science.gov (United States)

    Cao, Xuesong; Jiang, Ling; Hu, Ruimin

    2006-10-01

    Currently, applications of surveillance systems are increasingly widespread. But there are few surveillance platforms that can meet the requirements of large-scale, cross-regional, and flexible surveillance business. In this paper, we present a distributed surveillance system platform to improve the safety and security of society. The system is constructed with an object-oriented middleware known as the Internet Communications Engine (ICE). This middleware helps our platform to integrate a large amount of surveillance resources and accommodate a diverse range of surveillance industry requirements. In the following sections, we describe in detail the design concepts of the system and introduce the traits of ICE.

  8. Development of Middleware Applied to Microgrids by Means of an Open Source Enterprise Service Bus

    Directory of Open Access Journals (Sweden)

    Jesús Rodríguez-Molina

    2017-02-01

    Full Text Available The success of the smart grid relies heavily on the integration of Distributed Energy Resources (DERs) and interoperability among the hardware elements that are present as part of either the smart grid itself or a smaller size deployment, such as a microgrid. Therefore, establishing an accurate design for software architectures that guarantee interoperability and are able to abstract hardware heterogeneity in this application domain, along with a clearly defined procedure on how to implement and test such a solution, becomes a desirable objective. This paper describes the requirements needed to design a secure, decentralized and semantic middleware architecture for microgrids and the procedures used to develop it, so that the mandatory software components that have to be encased by the solution, as well as the steps that should be followed to make it happen, become clear for any designer, software architect or programmer who has to tackle similar challenges. In order to demonstrate the usability of the ideas put forward here, two successful pilots in which middleware solutions were created according to these principles are described.

  9. Managing RFID Sensors Networks with a General Purpose RFID Middleware

    Science.gov (United States)

    Abad, Ismael; Cerrada, Carlos; Cerrada, Jose A.; Heradio, Rubén; Valero, Enrique

    2012-01-01

    RFID middleware is anticipated to be one of the main research areas in the field of RFID applications in the near future. The Data EPC Acquisition System (DEPCAS) is an original proposal designed by our group to transfer and apply fundamental ideas from Supervisory Control and Data Acquisition (SCADA) systems to the areas of RFID acquisition, processing and distribution systems. In this paper we focus on how to organize and manage generic RFID sensors (edge readers, readers, PLCs, etc.) inside the DEPCAS middleware. We denote by RFID Sensors Networks Management (RSNM) this part of DEPCAS, which is built on top of two new concepts introduced and developed in this work: MARC (Minimum Access Reader Command) and RRTL (RFID Reader Topology Language). MARC is an abstraction layer used to hide heterogeneous devices inside a homogeneous acquisition network. RRTL is a language to define RFID Reader networks and to describe the relationships between them (concentrator, peer to peer, master/submaster). PMID:22969370
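The MARC idea described above, an abstraction layer hiding heterogeneous readers behind one homogeneous acquisition interface, can be sketched as follows; the vendor classes and method names are hypothetical stand-ins, not DEPCAS's actual command set:

```python
class Reader:
    """Driver interface that each vendor-specific adapter implements."""

    def read_tags(self):
        raise NotImplementedError

class VendorAReader(Reader):
    def read_tags(self):
        return ["EPC:0001"]              # stands in for a vendor A protocol call

class VendorBReader(Reader):
    def read_tags(self):
        return ["EPC:0002", "EPC:0003"]  # stands in for a vendor B protocol call

class MarcLayer:
    """MARC-style facade: one homogeneous acquisition call over all readers."""

    def __init__(self, readers):
        self.readers = readers

    def acquire(self):
        tags = []
        for reader in self.readers:
            tags.extend(reader.read_tags())
        return tags
```

Upper layers of the middleware would then see only the uniform `acquire` call, regardless of which edge readers, readers, or PLCs sit behind it.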

  10. Distributed Data Service for Data Management in Internet of Things Middleware

    Science.gov (United States)

    Cruz Huacarpuma, Ruben; de Sousa Junior, Rafael Timoteo; de Holanda, Maristela Terto; de Oliveira Albuquerque, Robson; García Villalba, Luis Javier; Kim, Tai-Hoon

    2017-01-01

    The development of the Internet of Things (IoT) is closely related to a considerable increase in the number and variety of devices connected to the Internet. Sensors have become a regular component of our environment, as well as smart phones and other devices that continuously collect data about our lives even without our intervention. With such connected devices, a broad range of applications has been developed and deployed, including those dealing with massive volumes of data. In this paper, we introduce a Distributed Data Service (DDS) to collect and process data for IoT environments. One central goal of this DDS is to enable multiple and distinct IoT middleware systems to share common data services from a loosely-coupled provider. In this context, we propose a new specification of functionalities for a DDS and the conception of the corresponding techniques for collecting, filtering and storing data conveniently and efficiently in this environment. Another contribution is a data aggregation component that is proposed to support efficient real-time data querying. To validate its data collecting and querying functionalities and performance, the proposed DDS is evaluated in two case studies regarding a simulated smart home system, the first case devoted to evaluating data collection and aggregation when the DDS is interacting with the UIoT middleware, and the second aimed at comparing the DDS data collection with this same functionality implemented within the Kaa middleware. PMID:28448469

  12. Skinware 2.0: A real-time middleware for robot skin

    Directory of Open Access Journals (Sweden)

    S. Youssefi

    2015-12-01

    Robot skins have emerged recently as products of research from various institutes worldwide. Each robot skin is designed with different applications in mind. As a result, they differ in many aspects from transduction technology and structure to communication protocols and timing requirements. These differences create a barrier for researchers interested in developing tactile processing algorithms for robots using the sense of touch; supporting multiple robot skin technologies is non-trivial and committing to a single technology is not as useful, especially as the field is still in its infancy. The Skinware middleware has been created to mitigate these issues by providing abstractions and real-time acquisition mechanisms. This article describes the second revision of Skinware, discussing the differences with respect to the first version.

  13. Middleware solutions for the Internet of Things

    CERN Document Server

    Delicato, Flávia C; Batista, Thais

    2013-01-01

    After a brief introduction and contextualization on the Internet of Things (IoT) and Web of Things (WoT) paradigms, this timely new book describes one of the first research initiatives aimed at tackling the several challenges involved in building a middleware-layer infrastructure capable of realizing the WoT vision: the SmartSensor infrastructure. It is based on current standardization efforts and designed to manage a specific type of physical devices, those organized to shape a Wireless Sensor Network (WSN), where sensors work collaboratively, extracting data and transmitting it to external n

  14. The Next Generation ARC Middleware and ATLAS Computing Model

    International Nuclear Information System (INIS)

    Filipčič, Andrej; Cameron, David; Konstantinov, Aleksandr; Karpenko, Dmytro; Smirnova, Oxana

    2012-01-01

    The distributed NDGF Tier-1 and associated NorduGrid clusters are well integrated into the ATLAS computing environment but follow a slightly different paradigm than other ATLAS resources. The current paradigm does not divide the sites as in the commonly used hierarchical model, but rather treats them as a single storage endpoint and a pool of distributed computing nodes. The next generation ARC middleware with its several new technologies provides new possibilities in development of the ATLAS computing model, such as pilot jobs with pre-cached input files, automatic job migration between the sites, integration of remote sites without connected storage elements, and automatic brokering for jobs with non-standard resource requirements. ARC's data transfer model provides an automatic way for the computing sites to participate in ATLAS’ global task management system without requiring centralised brokering or data transfer services. The powerful API combined with Python and Java bindings can easily be used to build new services for job control and data transfer. Integration of the ARC core into the EMI middleware provides a natural way to implement the new services using the ARC components

  15. Phoebus: Network Middleware for Next-Generation Network Computing

    Energy Technology Data Exchange (ETDEWEB)

    Martin Swany

    2012-06-16

    The Phoebus project investigated algorithms, protocols, and middleware infrastructure to improve end-to-end performance in high speed, dynamic networks. The Phoebus system essentially serves as an adaptation point for networks with disparate capabilities or provisioning. This adaptation can take a variety of forms including acting as a provisioning agent across multiple signaling domains, providing transport protocol adaptation points, and mapping between distributed resource reservation paradigms and the optical network control plane. We have successfully developed the system and demonstrated benefits. The Phoebus system was deployed in Internet2 and in ESnet, as well as in GEANT2, RNP in Brazil and over international links to Korea and Japan. Phoebus is a system that implements a new protocol and associated forwarding infrastructure for improving throughput in high-speed dynamic networks. It was developed to serve the needs of large DOE applications on high-performance networks. The idea underlying the Phoebus model is to embed Phoebus Gateways (PGs) in the network as on-ramps to dynamic circuit networks. The gateways act as protocol translators that allow legacy applications to use dedicated paths with high performance.

  16. MyHealthAssistant: an event-driven middleware for multiple medical applications on a smartphone-mediated body sensor network.

    Science.gov (United States)

    Seeger, Christian; Van Laerhoven, Kristof; Buchmann, Alejandro

    2015-03-01

    An ever-growing range of wireless sensors for medical monitoring has shown that there is significant interest in monitoring patients in their everyday surroundings. It however remains a challenge to merge information from several wireless sensors and applications are commonly built from scratch. This paper presents a middleware targeted for medical applications on smartphone-like platforms that relies on an event-based design to enable flexible coupling with changing sets of wireless sensor units, while posing only a minor overhead on the resources and battery capacity of the interconnected devices. We illustrate the requirements for such middleware with three different healthcare applications that were deployed with our middleware solution, and characterize the performance with energy consumption, overhead caused for the smartphone, and processing time under real-world circumstances. Results show that with sensing-intensive applications, our solution only minimally impacts the phone's resources, with an added CPU utilization of 3% and a memory usage under 7 MB. Furthermore, for a minimum message delivery ratio of 99.9%, up to 12 sensor readings per second are guaranteed to be handled, regardless of the number of applications using our middleware.
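
The event-based decoupling described here can be sketched with a small broker; the EventBroker class and event names are assumptions, not MyHealthAssistant's actual interface:

```python
# Minimal event-driven coupling: sensor units publish typed events,
# applications subscribe by event type, and the broker decouples the two so
# sensors can be added or removed without changing the applications.

from collections import defaultdict

class EventBroker:
    def __init__(self):
        self._subs = defaultdict(list)
        self.delivered = 0

    def subscribe(self, event_type, handler):
        self._subs[event_type].append(handler)

    def publish(self, event_type, payload):
        # Deliver to every subscriber of this event type only.
        for handler in self._subs[event_type]:
            handler(payload)
            self.delivered += 1

broker = EventBroker()
heart_rates = []
broker.subscribe("heart_rate", heart_rates.append)
broker.publish("heart_rate", 72)
broker.publish("step_count", 120)   # no subscriber: dropped, no cost to apps
broker.publish("heart_rate", 75)
```

Because applications never hold references to sensor units, swapping a wireless sensor only changes who publishes, not who consumes.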

  17. Semantic Agent-Based Service Middleware and Simulation for Smart Cities.

    Science.gov (United States)

    Liu, Ming; Xu, Yang; Hu, Haixiao; Mohammed, Abdul-Wahid

    2016-12-21

    With the development of Machine-to-Machine (M2M) technology, a variety of embedded and mobile devices is integrated to interact via the platform of the Internet of Things, especially in the domain of smart cities. One of the primary challenges is that selecting the appropriate services or service combination for upper layer applications is hard, which is due to the absence of a unified semantic service description pattern, as well as the service selection mechanism. In this paper, we define a semantic service representation model from four key properties: Capability (C), Deployment (D), Resource (R) and IOData (IO). Based on this model, an agent-based middleware is built to support semantic service enablement. In this middleware, we present an efficient semantic service discovery and matching approach for a service combination process, which calculates the semantic similarity between services, and a heuristic algorithm to search the service candidates for a specific service request. Based on this design, we propose a simulation of virtual urban fire fighting, and the experimental results manifest the feasibility and efficiency of our design.
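
The similarity-based matching over the four properties can be illustrated with a set-overlap measure; the Jaccard index and the service descriptions below are assumptions for illustration, as the paper's exact similarity calculation is not reproduced here:

```python
# Sketch: each service is described by four property sets (Capability,
# Deployment, Resource, IOData); similarity between a request and a candidate
# is the average of the per-property set overlaps.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def service_similarity(req, svc):
    return sum(jaccard(req[k], svc[k]) for k in ("C", "D", "R", "IO")) / 4

request = {"C": {"detect_fire"}, "D": {"rooftop"}, "R": {"camera"}, "IO": {"video"}}
candidates = {
    "cam_svc":  {"C": {"detect_fire"}, "D": {"rooftop"}, "R": {"camera"}, "IO": {"video"}},
    "temp_svc": {"C": {"sense_temp"},  "D": {"indoor"},  "R": {"thermo"}, "IO": {"scalar"}},
}
# Pick the candidate with the highest similarity to the request.
best = max(candidates, key=lambda name: service_similarity(request, candidates[name]))
```

A heuristic search, as in the paper, would prune this comparison rather than scoring every candidate exhaustively.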

  19. The realization of the storage of XML and middleware-based data of electronic medical records

    International Nuclear Information System (INIS)

    Liu Shuzhen; Gu Peidi; Luo Yanlin

    2007-01-01

    In this paper, the technologies of XML and middleware are used to design and implement a unified electronic medical records storage and archive management system, and a common storage management model is given. XML is used to describe the structure of electronic medical records, transforming the medical data from traditional 'business-centered' medical information into unified 'patient-centered' XML documents; middleware technology is used to shield the types of the databases at the different departments of the hospital and to complete the integration of the medical data scattered across different databases, which is conducive to information sharing between different hospitals. (authors)
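
The 'business-centered' to 'patient-centered' transformation can be sketched with Python's standard XML tooling; the element names and record fields below are invented for illustration, not the paper's schema:

```python
# Sketch: department-level rows scattered across databases are merged into one
# patient-centered XML document that any hospital system could consume.

import xml.etree.ElementTree as ET

def to_patient_xml(patient_id, dept_rows):
    """Merge rows from different departmental databases into one document."""
    root = ET.Element("patient", id=patient_id)
    for row in dept_rows:
        rec = ET.SubElement(root, "record", department=row["dept"])
        rec.text = row["finding"]
    return ET.tostring(root, encoding="unicode")

rows = [{"dept": "radiology", "finding": "chest X-ray normal"},
        {"dept": "cardiology", "finding": "ECG sinus rhythm"}]
doc = to_patient_xml("P-001", rows)
```

In such a design the middleware layer would produce these rows from each department's native database, so the XML step never needs to know the underlying database types.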

  20. Intelligent Multi-Agent Middleware for Ubiquitous Home Networking Environments

    OpenAIRE

    Minwoo Son; Seung-Hun Lee; Dongkyoo Shin; Dongil Shin

    2008-01-01

    The next stage of the home networking environment is supposed to be ubiquitous, where each piece of material is equipped with an RFID (Radio Frequency Identification) tag. To fully support the ubiquitous environment, home networking middleware should be able to recommend home services based on a user's interests and efficiently manage information on service usage profiles for the users. Therefore, USN (Ubiquitous Sensor Network) technology, which recognizes and manages a ...

  1. The Open Source DataTurbine Initiative: Streaming Data Middleware for Environmental Observing Systems

    Science.gov (United States)

    Fountain, T.; Tilak, S.; Shin, P.; Hubbard, P.; Freudinger, L.

    2009-01-01

    The Open Source DataTurbine Initiative is an international community of scientists and engineers sharing a common interest in real-time streaming data middleware and applications. The technology base of the OSDT Initiative is the DataTurbine open source middleware. Key applications of DataTurbine include coral reef monitoring, lake monitoring and limnology, biodiversity and animal tracking, structural health monitoring and earthquake engineering, airborne environmental monitoring, and environmental sustainability. DataTurbine software emerged as a commercial product in the 1990s from collaborations between NASA and private industry. In October 2007, a grant from the USA National Science Foundation (NSF) Office of Cyberinfrastructure allowed us to transition DataTurbine from a proprietary software product into an open source software initiative. This paper describes the DataTurbine software and highlights key applications in environmental monitoring.

  2. Exploring virtualisation tools with a new virtualisation provisioning method to test dynamic grid environments for ALICE grid jobs over ARC grid middleware

    International Nuclear Information System (INIS)

    Wagner, B; Kileng, B

    2014-01-01

    The Nordic Tier-1 centre for LHC is distributed over several computing centres. It uses ARC as the internal computing grid middleware. ALICE uses its own grid middleware AliEn to distribute jobs and the necessary software application stack. To make use of most of the AliEn infrastructure and software deployment methods for running ALICE grid jobs on ARC, we are investigating different possible virtualisation technologies. For this a testbed and possible framework for bridging different middleware systems is under development. It allows us to test a variety of virtualisation methods and software deployment technologies in the form of different virtual machines.

  3. System on Mobile Devices Middleware: Thinking beyond Basic Phones and PDAs

    Science.gov (United States)

    Prasad, Sushil K.

    Several classes of emerging applications, spanning domains such as medical informatics, homeland security, mobile commerce, and scientific applications, are collaborative, and a significant portion of these will harness the capabilities of both the stable and mobile infrastructures (the “mobile grid”). Currently, it is possible to develop a collaborative application running on a collection of heterogeneous, possibly mobile, devices, each potentially hosting data stores, using existing middleware technologies such as JXTA, BREW, Compact .NET and J2ME. However, they require too many ad-hoc techniques as well as cumbersome and time-consuming programming. Our System on Mobile Devices (SyD) middleware, on the other hand, has a modular architecture that makes such application development very systematic and streamlined. The architecture supports transactions over mobile data stores, with a range of remote group invocation options and embedded interdependencies among such data store objects. The architecture further provides a persistent uniform object view, group transaction with Quality of Service (QoS) specifications, and XML vocabulary for inter-device communication. I will present the basic SyD concepts, introduce the architecture and the design of the SyD middleware and its components. We will discuss the basic performance figures of SyD components and a few SyD applications on PDAs. The SyD platform has led to developments in distributed web service coordination and workflow technologies, which we will briefly discuss. There is a vital need to develop methodologies and systems to empower common users, such as computational scientists, for rapid development of such applications. Our BondFlow system enables rapid configuration and execution of workflows over web services. The small footprint of the system enables it to reside on Java-enabled handheld devices.

  4. E-CITY KNOWWARE: KNOWLEDGE MIDDLEWARE FOR COORDINATED MANAGEMENT OF SUSTAINABLE CITIES

    Directory of Open Access Journals (Sweden)

    Tamer E. El-Diraby

    2009-12-01

    The realization of the e-city is a necessary component for achieving the green city. This paper outlines a vision for an e-city platform that is based on knowledge brokerage in the green city. The proposed platform will be a venue for creating dynamic virtual organizations to harness the collective intelligence of knowledge hubs to analyze and manage sustainability knowledge in urban areas. Knowledge assets of participating organizations will be presented in three dimensions: process structures, human profile and software systems. These three facets of knowledge will be accessible and viewable through a self-describing mechanism. Cities can post their geospatial and real-time data on the net. Relevant environmental and energy-use data will be extracted using topic maps and data extraction services. Local decision makers can synchronize work processes (from participating hubs) to create an integrated workflow for a new ad hoc virtual organization to collaboratively analyze the multifaceted nature of sustainable decision making. An e-city platform is envisioned in this paper that will be realized through intelligent, agent-like, domain-specific middleware (KnowWare). Through triangulation between people, software and processes, these KnowWare will discover, negotiate, integrate, reason about and communicate knowledge (related to energy and environment) from across organizations to the right person at the right time. KnowWare is, fundamentally, a portal of social semantic services that resides on a cloud computing infrastructure. KnowWare exploits three main tools: (1) existing ontologies to represent knowledge in a semantic manner; (2) topic maps to profile sources of knowledge and match these to the complex needs of sustainability analysis; (3) domain-specific middleware for knowledge integration and reasoning.

  5. Analyzing and completing middleware designs for enterprise integration using coloured Petri nets

    NARCIS (Netherlands)

    Fahland, D.; Gierds, C.; Salinesi, C.; Norrie, M.C.; Pastor, O.

    2013-01-01

    Enterprise Integration Patterns allow us to design a middleware system conceptually before actually implementing it. So far, the in-depth analysis of such a design was not feasible, as these patterns are only described informally. We introduce a translation of each of these patterns into a Coloured Petri net.

  6. Smart Grid communication middleware comparison distributed control comparison for the internet of things

    DEFF Research Database (Denmark)

    Petersen, Bo Søborg; Bindner, Henrik W.; Poulsen, Bjarne

    2017-01-01

    Communication between Distributed Energy Resources (DERs) is necessary to efficiently solve the intermittency issues caused by renewable energy, using DER power grid auxiliary services, primarily load shifting and shedding. The middleware used for communication determines which services are possi...

  7. Middleware-based Security for Hyperconnected Applications in Future In-Car Networks

    Directory of Open Access Journals (Sweden)

    Alexandre Bouard

    2013-12-01

    Today’s cars take advantage of powerful electronic platforms and provide more and more sophisticated connected services. More than just fulfilling the role of a safe means of transportation, they process private information and industrial secrets, communicate with our smartphones and the Internet, and will soon host third-party applications. Their pervasive computerization makes them vulnerable to common security attacks, against which automotive technologies cannot protect. The transition toward Ethernet/IP-based on-board communication could be a first step to respond to these security and privacy issues. In this paper, we present a security framework leveraging local and distributed information flow techniques in order to secure the on-board network against internal and external untrusted components. We describe the implementation and integration of such a framework within an IP-based automotive middleware and provide its evaluation.

  8. Communications Middleware for Tactical Environments: Observations, Experiences, and Lessons Learned

    Science.gov (United States)

    2009-12-12

    Author biography fragments from the report: a position at the Engineering Department of the University of Ferrara, Italy, with research interests including distributed and mobile computing and QoS; a degree in science engineering from the University of Padova, Italy, in 2005, followed by further studies at the University of Ferrara, where she gained a Master’s degree. Contributors listed: Stefanelli, University of Ferrara; Jesse Kovach, U.S. Army Research Laboratory; James Hanna, U.S. Air Force Research Laboratory.

  9. Smart Grid communication middleware comparison distributed control comparison for the internet of things

    DEFF Research Database (Denmark)

    Petersen, Bo Søborg; Bindner, Henrik W.; Poulsen, Bjarne

    2017-01-01

    are possible by their performance, which is limited by the middleware characteristics, primarily interchangeable serialization and the Publish-Subscribe messaging pattern. The earlier paper "Smart Grid Serialization Comparison" (Petersen et al. 2017) aids in the choice of serialization, which has a big impact...
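
The two middleware traits named here, interchangeable serialization and Publish-Subscribe, can be sketched together; the serializers and Topic class below are illustrative stand-ins, not the middlewares compared in the paper:

```python
# Sketch: a topic with a pluggable serializer and simple publish-subscribe
# delivery. Payload size after serialization is one performance dimension such
# a comparison would measure.

import json

def json_serialize(msg):
    return json.dumps(msg, sort_keys=True).encode()

def csv_serialize(msg):
    # A terser, CSV-like wire format for the same message.
    return ",".join(f"{k}={msg[k]}" for k in sorted(msg)).encode()

class Topic:
    def __init__(self, serializer):
        self.serializer = serializer      # serialization is interchangeable
        self.subscribers = []

    def publish(self, msg):
        wire = self.serializer(msg)
        for deliver in self.subscribers:  # fan out to all subscribers
            deliver(wire)
        return len(wire)                  # bytes on the wire

topic = Topic(csv_serialize)
received = []
topic.subscribers.append(received.append)
size_csv = topic.publish({"der": "battery1", "setpoint": 5})
size_json = Topic(json_serialize).publish({"der": "battery1", "setpoint": 5})
```

Swapping the serializer changes the wire size without touching publishers or subscribers, which is the kind of trade-off the serialization comparison quantifies.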

  10. FTT-MA: A Flexible Time-Triggered Middleware Architecture for Time Sensitive, Resource-Aware AmI Systems

    Science.gov (United States)

    Noguero, Adrián; Calvo, Isidro; Pérez, Federico; Almeida, Luis

    2013-01-01

    There is an increasing number of Ambient Intelligence (AmI) systems that are time-sensitive and resource-aware. From healthcare to building and even home/office automation, it is now common to find systems combining interactive and sensing multimedia traffic with relatively simple sensors and actuators (door locks, presence detectors, RFIDs, HVAC, information panels, etc.). Many of these are today known as Cyber-Physical Systems (CPS). Quite frequently, these systems must be capable of (1) prioritizing different traffic flows (process data, alarms, non-critical data, etc.), (2) synchronizing actions in several distributed devices and, to a certain degree, (3) easing resource management (e.g., detecting faulty nodes, managing battery levels, handling overloads, etc.). This work presents FTT-MA, a high-level middleware architecture aimed at easing the design, deployment and operation of such AmI systems. FTT-MA ensures that both functional and non-functional aspects of the applications are met even during reconfiguration stages. The paper also proposes a methodology, together with a design tool, to create this kind of systems. Finally, a sample case study is presented that illustrates the use of the middleware and the methodology proposed in the paper. PMID:23669711
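
Capability (1), prioritizing traffic flows, can be sketched as a priority queue over flow classes; the flow names and priority values below are assumptions for illustration, not FTT-MA's actual traffic model:

```python
# Sketch: messages are ordered by flow priority so alarms preempt process data,
# which in turn preempts non-critical data; ties keep FIFO order.

import heapq
import itertools

PRIORITY = {"alarm": 0, "process_data": 1, "non_critical": 2}

class FlowQueue:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # FIFO tie-break within a priority level

    def push(self, flow, payload):
        heapq.heappush(self._heap, (PRIORITY[flow], next(self._seq), payload))

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = FlowQueue()
q.push("process_data", "temp=21C")
q.push("non_critical", "log entry")
q.push("alarm", "door forced open")
order = [q.pop(), q.pop(), q.pop()]
```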

  11. Mobile phone middleware architecture for energy and context awareness in location-based services.

    Science.gov (United States)

    Galeana-Zapién, Hiram; Torres-Huitzil, César; Rubio-Loyola, Javier

    2014-12-10

    The disruptive innovation of smartphone technology has enabled the development of mobile sensing applications leveraged on specialized sensors embedded in the device. These novel mobile phone applications rely on advanced sensor information processes, which mainly involve raw data acquisition, feature extraction, data interpretation and transmission. However, the continuous accessing of sensing resources to acquire sensor data in smartphones is still very expensive in terms of energy, particularly due to the periodic use of power-intensive sensors, such as the Global Positioning System (GPS) receiver. The key underlying idea to design energy-efficient schemes is to control the duty cycle of the GPS receiver. However, adapting the sensing rate based on dynamic context changes through a flexible middleware has received little attention in the literature. In this paper, we propose a novel modular middleware architecture and runtime environment to directly interface with application programming interfaces (APIs) and embedded sensors in order to manage the duty cycle process based on energy and context aspects. The proposed solution has been implemented in the Android software stack. It allows continuous location tracking in a timely manner and in a transparent way to the user. It also enables the deployment of sensing policies to appropriately control the sampling rate based on both energy and perceived context. We validate the proposed solution taking into account a reference location-based service (LBS) architecture. A cloud-based storage service along with online mobility analysis tools have been used to store and access sensed data. Experimental measurements demonstrate the feasibility and efficiency of our middleware, in terms of energy and location resolution.
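
The duty-cycle control that such a sensing policy applies can be sketched as a function of context and battery state; the thresholds and multipliers below are invented for illustration, not the paper's actual policy:

```python
# Sketch: stretch the GPS sampling interval when the user is stationary and
# the battery is low; tighten it when movement is detected.

def next_gps_interval(base_s, moving, battery_pct):
    """Return seconds to wait before requesting the next GPS fix."""
    interval = base_s
    if not moving:
        interval *= 4          # stationary: sample rarely
    if battery_pct < 20:
        interval *= 2          # low battery: back off further
    return min(interval, 600)  # cap the sleep at 10 minutes

walk = next_gps_interval(30, moving=True, battery_pct=80)
idle = next_gps_interval(30, moving=False, battery_pct=80)
idle_low = next_gps_interval(30, moving=False, battery_pct=15)
```

A middleware layer between the application and the GPS API can apply such a policy transparently, which is the transparency to the user that the paper emphasizes.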

  12. Participatory development of a middleware for AAL solutions: requirements and approach – the case of SOPRANO

    Directory of Open Access Journals (Sweden)

    Schmidt, Andreas

    2008-10-01

    This paper describes the main features of a middleware for Ambient Assisted Living (AAL) applications, exemplified along the SOPRANO research project. The contribution outlines the main requirements towards the technical system and the elicitation methodology. The presented middleware allows for personalisation and flexible, extendible configuration of AAL solutions with low effort. Concerning the technical concept, the design approach as well as the components, qualities and functionality of the AAL platform are depicted. Furthermore, the methodology of requirements elicitation is discussed. It is explained how SOPRANO addressed the problem of eliciting socio-technical system requirements in a user-centred manner, although the addressed target group is not expected to be able to express precise guidelines. SOPRANO („Service oriented programmable smart environments for older Europeans“, http://www.soprano-ip.org/) is a research project funded by the European Commission, which aims at the provision of a technical (AAL) infrastructure to help elderly people keep their independence and stay in their familiar environment as long as possible. SOPRANO focuses on in-house support and emphasises well-being. A main goal is to secure situation-aware assistance and help not only in case of emergencies but particularly also in activities of daily living.

  13. A Genetic Algorithms-based Approach for Optimized Self-protection in a Pervasive Service Middleware

    DEFF Research Database (Denmark)

    Zhang, Weishan; Ingstrup, Mads; Hansen, Klaus Marius

    2009-01-01

    With increasingly complex and heterogeneous systems in pervasive service computing, it becomes more and more important to provide self-protected services to end users. In order to achieve self-protection, the corresponding security should be provided in an optimized manner considering the constraints of heterogeneous devices and networks. In this paper, we present a Genetic Algorithms-based approach for obtaining optimized security configurations at run time, supported by a set of security OWL ontologies and an event-driven framework. This approach has been realized as a prototype for self-protection in the Hydra middleware, and is integrated with a framework for enforcing the computed solution at run time using security obligations. The experiments with the prototype on configuring security strategies for a pervasive service middleware show that this approach has acceptable performance, and could be used

  14. Smart TV-Smartphone Multiscreen Interactive Middleware for Public Displays.

    Science.gov (United States)

    Martinez-Pabon, Francisco; Caicedo-Guerrero, Jaime; Ibarra-Samboni, Jhon Jairo; Ramirez-Gonzalez, Gustavo; Hernández-Leo, Davinia

    2015-01-01

    A new generation of public displays demands high interactive and multiscreen features to enrich people's experience in new pervasive environments. Traditionally, research on public display interaction has involved mobile devices as the main actors, using personal area network technologies such as Bluetooth or NFC. However, the emergent Smart TV model arises as an interesting alternative for the implementation of a new generation of public displays. This is due to its intrinsic connection capabilities with surrounding devices like smartphones or tablets. Nonetheless, the different approaches proposed by the most important vendors are still underdeveloped with respect to multiscreen and interaction capabilities for modern public displays, because most of them are intended for domestic environments. This research proposes multiscreen interactive middleware for public displays, which was developed from the principles of a loosely coupled interaction model, simplicity, stability, concurrency, low latency, and the usage of open standards and technologies. Moreover, a validation prototype is proposed in one of the most interesting public display scenarios: advertising.

  15. Running CMS remote analysis builder jobs on advanced resource connector middleware

    International Nuclear Information System (INIS)

    Edelmann, E; Happonen, K; Koivumäki, J; Lindén, T; Välimaa, J

    2011-01-01

    CMS user analysis jobs are distributed over the grid with the CMS Remote Analysis Builder application (CRAB). According to the CMS computing model the applications should run transparently on the different grid flavours in use. In CRAB this is handled with different plugins that are able to submit to different grids. Recently a CRAB plugin for submitting to the Advanced Resource Connector (ARC) middleware has been developed. The CRAB ARC plugin enables simple and fast job submission with full job status information available. CRAB can be used with a server which manages and monitors the grid jobs on behalf of the user. In the presentation we will report on the CRAB ARC plugin and on the status of integrating it with the CRAB server and compare this with using the gLite ARC interoperability method for job submission.

  16. Adaptive data management in the ARC Grid middleware

    International Nuclear Information System (INIS)

    Cameron, D; Karpenko, D; Konstantinov, A; Gholami, A

    2011-01-01

    The Advanced Resource Connector (ARC) Grid middleware was designed almost 10 years ago, and has proven to be an attractive distributed computing solution and successful in adapting to new data management and storage technologies. However, with an ever-increasing user base and scale of resources to manage, along with the introduction of more advanced data transfer protocols, some limitations in the current architecture have become apparent. The simple first-in first-out approach to data transfer leads to bottlenecks in the system, as does the built-in assumption that all data is immediately available from remote data storage. We present an entirely new data management architecture for ARC which aims to alleviate these problems, by introducing a three-layer structure. The top layer accepts incoming requests for data transfer and directs them to the middle layer, which schedules individual transfers and negotiates with various intermediate catalog and storage systems until the physical file is ready to be transferred. The lower layer performs all operations which use large amounts of bandwidth, i.e. the physical data transfer. Using such a layered structure allows more efficient use of the available bandwidth as well as enabling late-binding of jobs to data transfer slots based on a priority system. Here we describe in full detail the design and implementation of the new system.
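
The three-layer structure can be condensed into a sketch where the layers are methods of one class; the staging model, priorities and file names below are simulated assumptions, not ARC's implementation:

```python
# Sketch of the layered design: the top layer accepts transfer requests, the
# middle layer orders them by priority and holds back files not yet staged
# from remote storage, and the bottom layer performs the bulk transfer.

import heapq

class DataManager:
    def __init__(self, staged_files):
        self._queue = []              # top layer: accepted requests
        self._staged = staged_files   # files already physically available
        self.transferred = []

    def accept(self, priority, filename):          # top layer
        heapq.heappush(self._queue, (priority, filename))

    def schedule(self):                            # middle layer
        ready, waiting = [], []
        while self._queue:
            prio, name = heapq.heappop(self._queue)
            (ready if name in self._staged else waiting).append((prio, name))
        for item in waiting:                       # not yet staged: requeue
            heapq.heappush(self._queue, item)
        return ready

    def transfer(self):                            # bottom layer
        for _, name in self.schedule():
            self.transferred.append(name)

dm = DataManager(staged_files={"a.root", "c.root"})
dm.accept(2, "c.root")
dm.accept(1, "a.root")
dm.accept(1, "b.root")   # not staged yet: stays queued
dm.transfer()
```

Separating scheduling from the bandwidth-heavy bottom layer is what allows late binding of jobs to transfer slots, as the abstract describes.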

  17. Mobile Phone Middleware Architecture for Energy and Context Awareness in Location-Based Services

    Science.gov (United States)

    Galeana-Zapién, Hiram; Torres-Huitzil, César; Rubio-Loyola, Javier

    2014-01-01

    The disruptive innovation of smartphone technology has enabled the development of mobile sensing applications leveraged on specialized sensors embedded in the device. These novel mobile phone applications rely on advanced sensor information processes, which mainly involve raw data acquisition, feature extraction, data interpretation and transmission. However, the continuous accessing of sensing resources to acquire sensor data in smartphones is still very expensive in terms of energy, particularly due to the periodic use of power-intensive sensors, such as the Global Positioning System (GPS) receiver. The key underlying idea to design energy-efficient schemes is to control the duty cycle of the GPS receiver. However, adapting the sensing rate based on dynamic context changes through a flexible middleware has received little attention in the literature. In this paper, we propose a novel modular middleware architecture and runtime environment to directly interface with application programming interfaces (APIs) and embedded sensors in order to manage the duty cycle process based on energy and context aspects. The proposed solution has been implemented in the Android software stack. It allows continuous location tracking in a timely manner and in a transparent way to the user. It also enables the deployment of sensing policies to appropriately control the sampling rate based on both energy and perceived context. We validate the proposed solution taking into account a reference location-based service (LBS) architecture. A cloud-based storage service along with online mobility analysis tools have been used to store and access sensed data. Experimental measurements demonstrate the feasibility and efficiency of our middleware, in terms of energy and location resolution. PMID:25513821
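    A sensing policy of the kind the middleware deploys might, for example, adapt the GPS sampling interval to battery level and perceived mobility context. The thresholds and function below are purely illustrative, not the paper's actual policy:

```python
def next_gps_interval(battery_pct, speed_mps,
                      base_interval=60.0, min_interval=5.0, max_interval=600.0):
    """Hypothetical duty-cycle policy: lengthen the GPS sampling interval
    (in seconds) when the battery is low or the user is stationary,
    and shorten it when the user is moving fast."""
    interval = base_interval
    if speed_mps < 0.5:        # stationary context: sample rarely
        interval *= 4
    elif speed_mps > 10:       # fast movement: sample often
        interval /= 4
    if battery_pct < 20:       # low battery: back off further
        interval *= 2
    # Clamp to sane bounds regardless of context.
    return max(min_interval, min(max_interval, interval))
```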

  18. Interaction systems design and the protocol- and middleware-centred paradigms in distributed application development

    NARCIS (Netherlands)

    Andrade Almeida, João; van Sinderen, Marten J.; Quartel, Dick; Ferreira Pires, Luis

    2003-01-01

    This paper aims at demonstrating the benefits and importance of interaction systems design in the development of distributed applications. We position interaction systems design with respect to two paradigms that have influenced the design of distributed applications: the middleware-centred and the protocol-centred paradigm.

  19. Using CREAM and CEMonitor for job submission and management in the gLite middleware

    Energy Technology Data Exchange (ETDEWEB)

    Aiftimiei, C; Andreetto, P; Bertocco, S; Dalla Fina, S; Dorigo, A; Frizziero, E; Gianelle, A; Mazzucato, M; Sgaravatto, M; Traldi, S; Zangrando, L [INFN Padova, Via Marzolo 8, I-35131 Padova (Italy); Marzolla, M [Dipartimento di Scienze dell' Informazione, Universita di Bologna, Mura A. Zamboni 7, I-40127 Bologna (Italy); Lorenzo, P Mendez; Miccio, V [CERN, BAT. 28-1-019, 1211 Geneve (Switzerland)

    2010-04-01

    In this paper we describe the use of CREAM and CEMonitor services for job submission and management within the gLite Grid middleware. Both CREAM and CEMonitor address one of the most fundamental operations of a Grid middleware, that is, job submission and management. Specifically, CREAM is a job management service used for submitting, managing and monitoring computational jobs. CEMonitor is an event notification framework, which can be coupled with CREAM to provide the users with asynchronous job status change notifications. Both components have been integrated in the gLite Workload Management System by means of ICE (Interface to CREAM Environment). These software components have been released for production in the EGEE Grid infrastructure and, in the case of the CEMonitor service, also in the OSG Grid. In this paper we report the current status of these services, the achieved results, and the issues that still have to be addressed.

  20. Using CREAM and CEMonitor for job submission and management in the gLite middleware

    International Nuclear Information System (INIS)

    Aiftimiei, C; Andreetto, P; Bertocco, S; Dalla Fina, S; Dorigo, A; Frizziero, E; Gianelle, A; Mazzucato, M; Sgaravatto, M; Traldi, S; Zangrando, L; Marzolla, M; Lorenzo, P Mendez; Miccio, V

    2010-01-01

    In this paper we describe the use of CREAM and CEMonitor services for job submission and management within the gLite Grid middleware. Both CREAM and CEMonitor address one of the most fundamental operations of a Grid middleware, that is, job submission and management. Specifically, CREAM is a job management service used for submitting, managing and monitoring computational jobs. CEMonitor is an event notification framework, which can be coupled with CREAM to provide the users with asynchronous job status change notifications. Both components have been integrated in the gLite Workload Management System by means of ICE (Interface to CREAM Environment). These software components have been released for production in the EGEE Grid infrastructure and, in the case of the CEMonitor service, also in the OSG Grid. In this paper we report the current status of these services, the achieved results, and the issues that still have to be addressed.

  1. Combining a Multi-Agent System and Communication Middleware for Smart Home Control: A Universal Control Platform Architecture

    Directory of Open Access Journals (Sweden)

    Song Zheng

    2017-09-01

    Full Text Available In recent years, the smart home field has gained wide attention for its broad application prospects. However, families using smart home systems must usually adopt a variety of heterogeneous smart devices and sensors, which makes their home system more difficult to manage and control. How to design a unified control platform that handles the collaborative control of heterogeneous smart devices is one of the greatest challenges in the current smart home field. The main contribution of this paper is a universal smart home control platform architecture (IAPhome) based on a multi-agent system and communication middleware, which shows significant adaptability and advantages in many aspects, including heterogeneous device connectivity, collaborative control, human-computer interaction and user self-management. The communication middleware is an important foundation for designing and implementing this architecture, as it makes it possible to integrate heterogeneous smart devices in a flexible way. A concrete method of applying the multi-agent software technique to solve the integrated control problem of the smart home system is also presented. The proposed platform architecture has been tested in a real smart home environment, and the results indicate the effectiveness of our approach in solving the collaborative control problem of heterogeneous smart devices.
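    The agents-over-middleware pattern can be illustrated with a minimal publish/subscribe bus and one device agent. This is a toy sketch, not IAPhome's actual interfaces; the topic scheme and class names are invented:

```python
class MessageBus:
    """Minimal publish/subscribe bus standing in for the communication
    middleware; device agents subscribe to command topics."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        # Deliver to every subscriber of the topic, collecting replies.
        return [h(payload) for h in self._subscribers.get(topic, [])]

class LampAgent:
    """Hypothetical agent wrapping one heterogeneous device behind a
    uniform command interface."""

    def __init__(self, bus, name):
        self.name, self.state = name, "off"
        bus.subscribe(f"home/{name}/set", self.handle)

    def handle(self, payload):
        self.state = payload["state"]
        return f"{self.name} -> {self.state}"
```

    Because each device type is hidden behind its own agent, a coordinating agent only ever publishes commands to topics, which is what makes the heterogeneous devices look uniform to the platform.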

  2. Remote control of data acquisition devices by means of message oriented middleware

    International Nuclear Information System (INIS)

    Sanchez, E.; Portas, A.; Pereira, A.; Vega, J.; Kirpitchev, I.

    2007-01-01

    The TJ-II autonomous acquisition systems are computers running dedicated applications for programming and controlling data acquisition channels and also integrating acquired data into the central database. These computers are located in the experimental hall and have to be remotely controlled during plasma discharges. A remote control for these systems has been implemented by taking advantage of the message-oriented middleware recently introduced into the TJ-II data acquisition system. Java Message Service (JMS) is used as the messaging application program interface. All the acquisition actions that are available through the system console of the data acquisition computers (starting or aborting an acquisition, restarting the system or updating the acquisition application) can now be initiated remotely. Command messages are sent to the acquisition systems located in the experimental hall close to the TJ-II device by using the messaging software, without having to use a remote desktop application that produces heavy network traffic and requires manual operation. Action commands can be sent to a single acquisition system or to many at the same time. This software is integrated into the TJ-II remote participation system, and the acquisition systems can be commanded from inside or outside the laboratory. All this software is integrated into the security framework provided by PAPI, thus preventing non-authorized users from commanding the acquisition computers. In order to dimension and distribute messaging services, some performance tests of the message-oriented middleware software have been carried out. Results of the tests are presented. As suggested by the test results, different transport connectors are used: the TCP transport protocol is used for the local environment, while the HTTP protocol is used for remote accesses, thereby allowing the system performance to be optimized.

  3. Remote control of data acquisition devices by means of message oriented middleware

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, E. [Asociacion EURATOM/CIEMAT para Fusion, Madrid (Spain)], E-mail: edi.sanchez@ciemat.es; Portas, A.; Pereira, A.; Vega, J.; Kirpitchev, I. [Asociacion EURATOM/CIEMAT para Fusion, Madrid (Spain)

    2007-10-15

    The TJ-II autonomous acquisition systems are computers running dedicated applications for programming and controlling data acquisition channels and also integrating acquired data into the central database. These computers are located in the experimental hall and have to be remotely controlled during plasma discharges. A remote control for these systems has been implemented by taking advantage of the message-oriented middleware recently introduced into the TJ-II data acquisition system. Java Message Service (JMS) is used as the messaging application program interface. All the acquisition actions that are available through the system console of the data acquisition computers (starting or aborting an acquisition, restarting the system or updating the acquisition application) can now be initiated remotely. Command messages are sent to the acquisition systems located in the experimental hall close to the TJ-II device by using the messaging software, without having to use a remote desktop application that produces heavy network traffic and requires manual operation. Action commands can be sent to a single acquisition system or to many at the same time. This software is integrated into the TJ-II remote participation system, and the acquisition systems can be commanded from inside or outside the laboratory. All this software is integrated into the security framework provided by PAPI, thus preventing non-authorized users from commanding the acquisition computers. In order to dimension and distribute messaging services, some performance tests of the message-oriented middleware software have been carried out. Results of the tests are presented. As suggested by the test results, different transport connectors are used: the TCP transport protocol is used for the local environment, while the HTTP protocol is used for remote accesses, thereby allowing the system performance to be optimized.
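    The command-message fan-out described above, sending one action command to a single acquisition system or to many at once, can be sketched as follows. This is a standard-library stand-in for illustration only; the real system uses JMS topics with TCP and HTTP transport connectors:

```python
import json

def acquisition_system(name):
    """Hypothetical handler for one acquisition computer."""
    def handle(message):
        return f"{name}: {message['cmd']} acknowledged"
    return handle

class CommandChannel:
    """Stand-in for a JMS-style topic: command messages are fanned out
    to the registered acquisition systems."""

    def __init__(self):
        self._systems = {}

    def register(self, name, handler):
        self._systems[name] = handler

    def send(self, command, targets=None):
        # Serialize once, then deliver to all systems or only to `targets`.
        msg = json.dumps({"cmd": command})
        names = targets if targets is not None else list(self._systems)
        return {n: self._systems[n](json.loads(msg)) for n in names}
```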

  4. Smart grid communication comparison: Distributed control middleware and serialization comparison for the Internet of Things

    DEFF Research Database (Denmark)

    Petersen, Bo Søborg; Bindner, Henrik W.; Poulsen, Bjarne

    2017-01-01

    To solve the problems caused by intermittent renewable energy production, communication between Distributed Energy Resources (DERs) and system operators is necessary. The communication middleware and serialization used for communication are essential to ensure delivery of the messages within the ...

  5. Smart TV-Smartphone Multiscreen Interactive Middleware for Public Displays

    Science.gov (United States)

    Martinez-Pabon, Francisco; Caicedo-Guerrero, Jaime; Ibarra-Samboni, Jhon Jairo; Ramirez-Gonzalez, Gustavo; Hernández-Leo, Davinia

    2015-01-01

    A new generation of public displays demands highly interactive and multiscreen features to enrich people's experience in new pervasive environments. Traditionally, research on public display interaction has involved mobile devices as the main actors, using personal area network technologies such as Bluetooth or NFC. However, the emergent Smart TV model arises as an interesting alternative for the implementation of a new generation of public displays, due to its intrinsic connection capabilities with surrounding devices like smartphones or tablets. Nonetheless, the different approaches proposed by the most important vendors still offer insufficient support for the multiscreen and interaction capabilities modern public displays require, because most of them are intended for domestic environments. This research proposes multiscreen interactive middleware for public displays, developed from the principles of a loosely coupled interaction model, simplicity, stability, concurrency, low latency, and the usage of open standards and technologies. Moreover, a validation prototype is proposed in one of the most interesting public display scenarios: advertising. PMID:25950018

  6. Definition and implementation of a SAML-XACML profile for authorization interoperability across grid middleware in OSG and EGEE

    Energy Technology Data Exchange (ETDEWEB)

    Garzoglio, Gabriele; Alderman, Ian; Altunay, Mine; Anathakrishnan, Rachana; Bester, Joe; Chadwick, Keith; Ciaschini, Vincenzo; Demchenko, Yuri; Ferraro, Andrea; Forti, Alberto; Groep, David; /Fermilab /NIKHEF, Amsterdam /Brookhaven /Amsterdam U. /SWITCH, Zurich /Bergen U. /INFN, CNAF /Argonne /Wisconsin U., Madison

    2009-04-01

    In order to ensure interoperability between middleware and authorization infrastructures used in the Open Science Grid (OSG) and the Enabling Grids for E-sciencE (EGEE) projects, an Authorization Interoperability activity was initiated in 2006. The interoperability goal was met in two phases: first, agreeing on a common authorization query interface and protocol with an associated profile that ensures standardized use of attributes and obligations; and second, implementing, testing, and deploying, on OSG and EGEE, middleware that supports the interoperability protocol and profile. The activity has involved people from OSG, EGEE, the Globus Toolkit project, and the Condor project. This paper presents a summary of the agreed-upon protocol, profile and the software components involved.
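    The agreed-upon query pattern, an authorization decision plus obligations (such as the local account to map a user to) returned for a set of subject attributes, might be sketched as below. The attribute and obligation names are invented and are not those of the actual SAML-XACML profile:

```python
def authorize(request, policy):
    """Toy decision function in the spirit of a SAML-XACML authorization
    query: match subject attributes against a policy list and return a
    decision plus any obligations attached to the matching rule."""
    for rule in policy:
        if rule["vo"] == request.get("vo") and rule["action"] == request.get("action"):
            return {"decision": "Permit", "obligations": rule.get("obligations", [])}
    # Default-deny when no rule matches the presented attributes.
    return {"decision": "Deny", "obligations": []}
```

    Standardizing both the attributes in `request` and the obligations in the response is precisely what lets two different middleware stacks interoperate against the same policy service.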

  7. Hardware Middleware for Person Tracking on Embedded Distributed Smart Cameras

    Directory of Open Access Journals (Sweden)

    Ali Akbar Zarezadeh

    2012-01-01

    Full Text Available Tracking individuals is a prominent application in domains such as surveillance or smart environments. This paper presents the development of a multiple-camera setup with a joint view that observes moving persons in a site. It focuses on a geometry-based approach to establish correspondence among the different views. The computationally expensive parts of the tracker are hardware accelerated via a novel system-on-chip (SoC) design. In conjunction with this vision application, a hardware object request broker (ORB) middleware is presented as the underlying communication system. The hardware ORB provides a hardware/software architecture to achieve real-time intercommunication among multiple smart cameras. Via a probing mechanism, a performance analysis is performed to measure network latencies, that is, the time spent traversing the TCP/IP stack, for both the software and the hardware ORB on the same smart camera platform. The empirical results show that using the proposed hardware ORB as client and server in separate smart camera nodes reduces network latency by up to a factor of 100 compared to the software ORB.
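    The probing idea, timing a round trip that traverses the local TCP/IP stack, can be reproduced in miniature with a socket pair. Absolute numbers are of course platform-dependent and this measures only a local loopback path, not the smart-camera network:

```python
import socket
import time

def probe_roundtrip(payload=b"ping", repeats=100):
    """Illustrative latency probe: time request/reply exchanges over a
    local socket pair and return the mean round-trip time in seconds."""
    a, b = socket.socketpair()
    start = time.perf_counter()
    for _ in range(repeats):
        a.sendall(payload)
        echoed = b.recv(len(payload))   # "server" side echoes the payload
        b.sendall(echoed)
        a.recv(len(payload))
    elapsed = time.perf_counter() - start
    a.close()
    b.close()
    return elapsed / repeats
```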

  8. Standardized Access and Processing of Multi-Source Earth Observation Time-Series Data within a Regional Data Middleware

    Science.gov (United States)

    Eberle, J.; Schmullius, C.

    2017-12-01

    The growing archives of global satellite data present a new challenge: handling multi-source satellite data in a user-friendly way. Every user is confronted with different data formats and data access services. In addition, the handling of time-series data is complex, as automated execution of data processing steps is needed to supply the user with the desired product for a specific area of interest. In order to simplify access to the data archives of various satellite missions and to facilitate the subsequent processing, a regional data and processing middleware has been developed. The aim of this system is to provide standardized and web-based interfaces to multi-source time-series data for individual regions on Earth. For further use and analysis, uniform data formats and data access services are provided. Interfaces to the data archives of the MODIS sensor (NASA) as well as the Landsat (USGS) and Sentinel (ESA) satellites have been integrated into the middleware. Various scientific algorithms, such as the calculation of trends and breakpoints in time-series data, can be carried out on the preprocessed data on the basis of uniform data management. Jupyter Notebooks are linked to the data, and further processing can be conducted directly on the server using Python and the statistical language R. In addition to accessing EO data, the middleware is also used as an intermediary between the user and external databases (e.g., Flickr, YouTube). Standardized web services as specified by the OGC are provided for all tools of the middleware. Currently, the use of cloud services is being researched to bring algorithms to the data. As a thematic example, operational monitoring of vegetation phenology is being implemented on the basis of various optical satellite data and validation data from the German Weather Service. Other examples demonstrate the monitoring of wetlands, focusing on automated discovery and access of Landsat and Sentinel data for local areas.
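    A standardized OGC-style request to such a middleware can be illustrated by assembling a WMS GetMap URL. The endpoint and layer names below are examples, not the project's actual services; the query parameters follow the OGC WMS 1.3.0 convention:

```python
from urllib.parse import urlencode

def build_wms_getmap(endpoint, layer, bbox, width, height,
                     crs="EPSG:4326", fmt="image/png", version="1.3.0"):
    """Assemble an OGC WMS GetMap request URL for a given layer and
    bounding box (minx, miny, maxx, maxy in the stated CRS)."""
    params = {
        "SERVICE": "WMS", "VERSION": version, "REQUEST": "GetMap",
        "LAYERS": layer, "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width, "HEIGHT": height, "FORMAT": fmt,
    }
    return f"{endpoint}?{urlencode(params)}"
```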

  9. An XML based middleware for ECG format conversion.

    Science.gov (United States)

    Li, Xuchen; Vojisavljevic, Vuk; Fang, Qiang

    2009-01-01

    With the rapid development of information and communication technologies, various e-health solutions have been proposed. Digitized medical images and one-dimensional medical signals are the two major forms of medical information stored and manipulated within an electronic medical environment. Though a variety of industrial and international standards such as DICOM and HL7 have been proposed, many proprietary formats are still pervasively used by Hospital Information System (HIS) and Picture Archiving and Communication System (PACS) vendors. These proprietary formats are a major hurdle to forming a nationwide or even worldwide e-health network, so there is an imperative need to solve the medical data integration problem. Moreover, many small clinics, many hospitals in developing countries and some regional hospitals in developed countries have been kept from embracing the latest medical information technologies by their high costs and limited budgets. In this paper, we propose an XML based middleware which acts as a translation engine to seamlessly integrate clinical ECG data from a variety of proprietary data formats. Furthermore, this ECG translation engine is designed so that it can be integrated into an existing PACS to provide a low-cost medical information integration and storage solution.
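    The translation-engine idea can be sketched as a function mapping a proprietary record into neutral XML. The input field names and output schema below are invented for illustration; the paper's actual target schema is not reproduced here:

```python
import xml.etree.ElementTree as ET

def ecg_to_xml(record):
    """Translation-engine sketch: normalise a proprietary ECG record
    (here a plain dict with invented fields) into a neutral XML document
    that downstream systems can consume."""
    root = ET.Element("ecg")
    ET.SubElement(root, "patient", id=record["patient_id"])
    wave = ET.SubElement(root, "waveform",
                         lead=record["lead"], rate=str(record["sample_rate"]))
    # Store samples as a space-separated list in the element text.
    wave.text = " ".join(str(s) for s in record["samples"])
    return ET.tostring(root, encoding="unicode")
```

    One such adapter per proprietary format is enough: every source converges on the same XML, which is what lets the middleware sit between heterogeneous HIS/PACS systems.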

  10. OpenBAN: An Open Building ANalytics Middleware for Smart Buildings

    Directory of Open Access Journals (Sweden)

    Pandarasamy Arjunan

    2016-03-01

    Full Text Available Towards the realization of smart building applications, buildings are increasingly instrumented with diverse sensors and actuators. These sensors generate large volumes of data which can be analyzed to optimize building operations. Many building energy management tasks, such as energy forecasting and disaggregation, require complex analytics that leverage the collected sensor data. While several standalone and cloud-based systems for archiving, sharing and visualizing sensor data have emerged, their support for analyzing sensor data streams is primitive and limited to rule-based actions based on thresholds and simple aggregation functions. We develop OpenBAN, an open source sensor data analytics middleware for buildings, to make analytics an integral component of modern smart building applications. OpenBAN provides a framework of extensible sensor data processing elements for identifying various building contexts, which different applications can leverage. We validate the capabilities of OpenBAN by developing three representative real-world applications which are deployed in our test-bed buildings: (i) household energy disaggregation, (ii) detection of sprinkler usage from water meter data, and (iii) electricity demand forecasting. We also provide a preliminary performance evaluation of OpenBAN when deployed in the cloud and locally.
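    The extensible processing-element framework can be sketched as a chain of stream-to-stream functions. The element names below are toy examples, not OpenBAN's actual elements:

```python
def run_pipeline(samples, elements):
    """Apply a chain of sensor-data processing elements in order; each
    element maps a sample stream to a sample stream, so applications can
    compose their own analytics."""
    for element in elements:
        samples = element(samples)
    return samples

def hourly_mean(samples):
    """Toy aggregation element: collapse a window into its mean."""
    return [sum(samples) / len(samples)] if samples else []

def above_threshold(limit):
    """Toy rule-based element, parameterised by a threshold."""
    def element(samples):
        return [s for s in samples if s > limit]
    return element
```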

  11. Interaction systems design and the protocol- and middleware-centred paradigms in distributed application development

    OpenAIRE

    Andrade Almeida, João; van Sinderen, Marten J.; Quartel, Dick; Ferreira Pires, Luis

    2003-01-01

    This paper aims at demonstrating the benefits and importance of interaction systems design in the development of distributed applications. We position interaction systems design with respect to two paradigms that have influenced the design of distributed applications: the middleware-centred and the protocol-centred paradigm. We argue that interaction systems that support application-level interactions should be explicitly designed, using the externally observable behaviour of the interaction ...

  12. Middleware Case Study: MeDICi

    Energy Technology Data Exchange (ETDEWEB)

    Wynne, Adam S.

    2011-05-05

    The MeDICi Integration Framework (MIF) is designed for creating high-performance, scalable and modifiable software pipelines. MIF exploits a low-friction, robust, open source middleware platform and extends it with component- and service-based programmatic interfaces that make implementing complex pipelines simple. The MIF run-time automatically handles queues between pipeline elements in order to absorb request bursts, and automatically executes multiple instances of pipeline elements to increase pipeline throughput. Distributed pipeline elements are supported using a range of configurable communications protocols, and the MIF interfaces provide efficient mechanisms for moving data directly between two distributed pipeline elements.
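    The queueing and multi-instance behaviour described above can be sketched with a pool of worker threads fed from a queue. This is a minimal illustration of the pattern, not MIF's implementation:

```python
import queue
import threading

def run_stage(handler, workers, inbox, outbox):
    """Pipeline-stage sketch: the inbox queue decouples this element from
    its upstream neighbour (absorbing bursts), and several worker
    instances of the same element raise throughput."""
    def loop():
        while True:
            item = inbox.get()
            if item is None:        # poison pill: shut this worker down
                inbox.task_done()
                break
            outbox.put(handler(item))
            inbox.task_done()
    threads = [threading.Thread(target=loop) for _ in range(workers)]
    for t in threads:
        t.start()
    return threads
```

    Chaining stages is then a matter of wiring one stage's outbox to the next stage's inbox; bursts pile up harmlessly in the queues instead of overwhelming a downstream element.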

  13. Verification of Generic Ubiquitous Middleware for Smart Home Using Coloured Petri Nets

    OpenAIRE

    Madhusudanan. J; Anand. P; Hariharan. S; V. Prasanna Venkatesan

    2014-01-01

    Smart home is a relatively new technology in which pervasive computing is applied in all aspects, so as to make the things we normally do inside the home much easier. Originally, smart home technology was used to control environmental systems such as lighting and heating; but recently the use of smart technology has developed so that almost any electrical component within the home can be included in the system. Usually in pervasive computing, a middleware is de...

  14. STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Fox, Geoffrey [Indiana Univ., Bloomington, IN (United States); Jha, Shantenu [Rutgers Univ., New Brunswick, NJ (United States); Ramakrishnan, Lavanya [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-10-01

    The Department of Energy (DOE) Office of Science (SC) facilities, including accelerators, light sources and neutron sources, and sensors that study the environment and the atmosphere, are producing streaming data that need to be analyzed for next-generation scientific discoveries. There has been an explosion of new research and technologies for stream analytics arising from the academic and private sectors. However, there has been no corresponding effort in either documenting the critical research opportunities or building a community that can create and foster productive collaborations. The two-part workshop series, STREAM: Streaming Requirements, Experience, Applications and Middleware Workshop (STREAM2015 and STREAM2016), was conducted to bring the community together and identify gaps and future efforts needed by both NSF and DOE. This report describes the discussions, outcomes and conclusions from STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop, the second of these workshops, held on March 22-23, 2016 in Tysons, VA. STREAM2016 focused on Department of Energy (DOE) applications and computational and experimental facilities, as well as software systems. Thus, the role of “streaming and steering” as a critical mode of connecting the experimental and computing facilities was pervasive throughout the workshop. Given the overlap in interests and challenges with industry, the workshop had a significant presence from several innovative companies and major contributors. The requirements that drive the proposed research directions, identified in this report, show an important opportunity for building a competitive research and development program around streaming data. These findings and recommendations are consistent with the vision outlined in the NRC Frontiers of Data report and the National Strategic Computing Initiative (NSCI) [1, 2]. The discussions from the workshop are captured as the topic areas covered in this report's sections. The report

  15. Authorization & security aspects in the middleware-based healthcare information system.

    Science.gov (United States)

    Andany, J; Bjorkendal, C; Ferrara, F M; Scherrer, J R; Spahni, S

    1999-01-01

    The integration and evolution of existing systems represent one of the most urgent priorities of health care information systems, in order to allow the whole organisation to meet increasing clinical, organisational and managerial needs. The CEN ENV 12967-1 'Healthcare Information Systems Architecture' (HISA) standard defines an architectural approach based on a middleware of business-specific common services, enabling all parts of the local and geographical system to operate on the common information heritage of the organisation and to exploit a set of common business-oriented functionalities. After an overview of the key aspects of HISA, this paper discusses the positioning of the authorization and security aspects in the overall architecture. A global security framework is finally proposed.

  16. A DVE Time Management Simulation and Verification Platform Based on Causality Consistency Middleware

    Science.gov (United States)

    Zhou, Hangjun; Zhang, Wei; Peng, Yuxing; Li, Sikun

    During the course of designing a time management algorithm for DVEs, researchers are often distracted from the algorithm itself by having to implement the trivial but fundamental details of simulation and verification, which makes their work inefficient. A platform that already realizes these details is therefore desirable; however, to our knowledge this has not been achieved in any published work. In this paper, we are the first to design and realize a DVE time management simulation and verification platform providing exactly the same interfaces as those defined by the HLA Interface Specification. Moreover, our platform is based on a newly designed causality consistency middleware and offers a comparison of three kinds of time management services: CO, RO and TSO. The experimental results show that the implementation of the platform incurs only a small overhead, and that its efficient performance allows researchers to focus solely on improving their algorithm designs.
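    Of the three services, TSO (Time Stamp Order) delivery can be illustrated in a few lines: events are released strictly by timestamp regardless of arrival order, whereas RO (Receive Order) would deliver them as they arrive. This is a generic sketch of the ordering discipline, not the platform's HLA-compliant implementation:

```python
import heapq

def tso_deliver(events):
    """Deliver (timestamp, payload) events in Time Stamp Order.
    The insertion index breaks timestamp ties deterministically."""
    heap = [(ts, i, payload) for i, (ts, payload) in enumerate(events)]
    heapq.heapify(heap)
    out = []
    while heap:
        ts, _, payload = heapq.heappop(heap)
        out.append((ts, payload))
    return out
```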

  17. Cross-Platform Android/iOS-Based Smart Switch Control Middleware in a Digital Home

    Directory of Open Access Journals (Sweden)

    Guo Jie

    2015-01-01

    Full Text Available With technological and economic development, people’s lives have improved substantially, especially their home environments. One of the key aspects of these improvements is home intellectualization, whose core is the smart home control system. Furthermore, as smart phones have become increasingly popular, we can use them to control the home system through Wi-Fi, Bluetooth, and GSM; control via the phone is convenient and fast, and the phone has become the primary terminal controller in the smart home. In this paper, we propose middleware for developing a cross-platform Android/iOS-based solution for smart switch control software. We focus on the Wi-Fi based communication protocols between the cellphone and the smart switch, achieve a plugin-based smart switch function, define and implement the JavaScript interface, and then implement the cross-platform Android/iOS-based smart switch control software; usage scenarios are also illustrated. Finally, tests were performed after the completed realization of the smart switch control system.
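    A Wi-Fi command exchange of the kind described might look as follows. The JSON field names are invented, since the paper's wire format is not reproduced here; only the request/acknowledge shape is illustrated:

```python
import json

def make_switch_command(device_id, state, seq):
    """Phone side: encode a command for one smart switch as JSON bytes.
    (Field names are hypothetical, not the paper's actual schema.)"""
    return json.dumps({"device": device_id, "state": state, "seq": seq}).encode()

def handle_switch_command(raw):
    """Switch side: decode the command and acknowledge, echoing the
    sequence number so the phone can match acks to requests."""
    msg = json.loads(raw.decode())
    return json.dumps({"ack": msg["seq"], "state": msg["state"]}).encode()
```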

  18. Integrating Distributed Interactive Simulations With the Project Darkstar Open-Source Massively Multiplayer Online Game (MMOG) Middleware

    Science.gov (United States)

    2009-09-01

    Complete MMOG solutions such as Multiverse are not within the scope of this thesis, though it is recommended that readers compare this type of software to the middleware described here (Multiverse, 2009). 1. University of Munster: Real-Time Framework. The Real-Time Framework (RTF) project is ... Retrieved ... 10, 2009, from http://wiki.secondlife.com/wiki/MMOX. Multiverse. (2009). Multiverse platform architecture. Retrieved September 9, 2009, from http

  19. Semantic Web based Self-management for a Pervasive Service Middleware

    DEFF Research Database (Denmark)

    Zhang, Weishan; Hansen, Klaus Marius

    2008-01-01

    Self-management is one of the challenges for realizing ambient intelligence in pervasive computing. In this paper, we propose and present a semantic Web based self-management approach for a pervasive service middleware, in which dynamic context information is encoded in a set of self-management context ontologies. The proposed approach is justified by the characteristics of pervasive computing and by the open world assumption and reasoning potential of the semantic Web and its rule language. To enable real-time self-management, application-level and network-level state reporting is employed in our approach. State changes trigger the execution of self-management rules for adaptation, monitoring, diagnosis, and so on. Evaluations of self-diagnosis in terms of extensibility, performance and scalability show that the semantic Web based self-management approach is effective in achieving the self-diagnosis goals...
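    The idea that state reports trigger self-management rules can be sketched without any semantic Web machinery: here rules are plain predicate/action pairs, whereas the paper encodes them in ontologies and a rule language. Rule conditions and action names below are illustrative:

```python
def apply_rules(state_report, rules):
    """Self-management sketch: match a state report (from application- or
    network-level reporting) against adaptation rules; every matching rule
    contributes its action."""
    actions = []
    for condition, action in rules:
        if condition(state_report):
            actions.append(action)
    return actions
```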

  20. A Gossip-Based Optimistic Replication for Efficient Delay-Sensitive Streaming Using an Interactive Middleware Support System

    Science.gov (United States)

    Mavromoustakis, Constandinos X.; Karatza, Helen D.

    2010-06-01

    When resources are shared among many clients, efficiency is substantially degraded because the requested resources are scarce, a scarcity often aggravated by factors such as temporal constraints on availability or node flooding by the requested replicated file chunks. Replicated file chunks should therefore be disseminated efficiently in order to make resources available on demand to mobile users. This work considers a cross-layered middleware support system for efficient delay-sensitive streaming that uses each device's connectivity and social interactions in a cross-layered manner. Collaborative streaming is achieved through an epidemic file-chunk replication policy that uses a transition-based approach modelled on a chained infectious disease with susceptible, infected, recovered and dead states. The Gossip-based stateful model determines whether a mobile node should host a file chunk and, when a chunk is no longer needed, purges it. The proposed model is thoroughly evaluated through experimental simulation, measuring the effective throughput Eff as a function of the packet loss parameter together with the effectiveness of the Gossip-based replication policy.
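    The chunk life cycle can be sketched as a per-node state machine over the susceptible-infected-recovered-dead chain; the single transition probability p below stands in for the gossip dynamics, which in the paper depend on connectivity and social interactions:

```python
import random

# Epidemic states for one replicated file chunk on one node, following the
# susceptible -> infected -> recovered -> dead chain sketched in the abstract.
TRANSITIONS = {
    "susceptible": "infected",   # node accepts and starts hosting the chunk
    "infected": "recovered",     # chunk has been served; spreading winds down
    "recovered": "dead",         # chunk no longer needed: purge it
}

def gossip_step(state, p, rng):
    """Advance one gossip round: with probability p the node moves to the
    next state in the chain, otherwise it keeps its current state."""
    nxt = TRANSITIONS.get(state)
    return nxt if nxt is not None and rng.random() < p else state
```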

  1. The Next Generation ARC Middleware and ATLAS Computing Model

    CERN Document Server

    Filipcic, A; The ATLAS collaboration; Smirnova, O; Konstantinov, A; Karpenko, D

    2012-01-01

    The distributed NDGF Tier-1 and associated NorduGrid clusters are well integrated into the ATLAS computing model but follow a slightly different paradigm than other ATLAS resources. The current strategy does not divide the sites as in the commonly used hierarchical model, but rather treats them as a single storage endpoint and a pool of distributed computing nodes. The next generation ARC middleware, with its several new technologies, provides new possibilities for developing the ATLAS computing model, such as pilot jobs with pre-cached input files, automatic job migration between sites, integration of remote sites without connected storage elements, and automatic brokering for jobs with non-standard resource requirements. ARC's data transfer model provides an automatic way for computing sites to participate in ATLAS' global task management system without requiring centralised brokering or data transfer services. The powerful API combined with Python and Java bindings can easily be used to build new ...

  2. Experiences with Software Quality Metrics in the EMI Middleware

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality standard to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc., a set of associated metrics and KPIs (Key Performance Indicators) are identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project t...

  3. A Semantic Middleware Architecture Focused on Data and Heterogeneity Management within the Smart Grid

    Directory of Open Access Journals (Sweden)

    Rubén de Diego

    2014-09-01

    Full Text Available There is an increasing tendency of turning the current power grid, essentially unaware of variations in electricity demand and scattered energy sources, into something capable of bringing a degree of intelligence by using tools strongly related to information and communication technologies, thus turning it into the so-called Smart Grid. In fact, the Smart Grid can be considered an extensive smart system that spreads throughout any area where power is required, providing significant optimization in energy generation, storage and consumption. However, the information that must be treated to accomplish these tasks is challenging both in terms of complexity (semantic features, distributed systems, suitable hardware) and quantity (consumption data, generation data, forecasting functionalities, service reporting), since the different energy beneficiaries are prone to be heterogeneous, as the nature of their own activities is. This paper presents a proposal on how to deal with these issues by using a semantic middleware architecture that integrates different components focused on specific tasks, and how it is used to handle information at every level and satisfy end user requests.

  4. A Middleware Solution for Wireless IoT Applications in Sparse Smart Cities

    Directory of Open Access Journals (Sweden)

    Paolo Bellavista

    2017-11-01

    Full Text Available The spread of off-the-shelf mobile devices equipped with multiple wireless interfaces together with sophisticated sensors is paving the way to novel wireless Internet of Things (IoT) environments, characterized by multi-hop infrastructure-less wireless networks where devices carried by users act as sensors/actuators as well as network nodes. In particular, the paper presents Real Ad-hoc Multi-hop Peer-to-peer Wireless IoT Application (RAMP-WIA), a novel solution that facilitates the development, deployment, and management of applications in sparse Smart City environments, characterized by users willing to collaborate by allowing new applications to be deployed on their smartphones to remotely monitor and control fixed/mobile devices. RAMP-WIA allows users to dynamically configure single-hop wireless links, to manage opportunistically multi-hop packet dispatching considering that the network topology (together with the availability of sensors and actuators) may abruptly change, to actuate reliably sensor nodes specifically considering that only part of them could be actually reachable in a timely manner, and to upgrade dynamically the nodes through over-the-air distribution of new software components. The paper also reports the performance of RAMP-WIA on simple but realistic cases of small-scale deployment scenarios with off-the-shelf Android smartphones and Raspberry Pi devices; these results show not only the feasibility and soundness of the proposed approach, but also the efficiency of the middleware implemented when deployed on real testbeds.

  5. The new inter process communication middle-ware for the ATLAS Trigger and Data Acquisition system

    CERN Document Server

    Kolos, Serguei; The ATLAS collaboration

    2016-01-01

    The ATLAS Trigger & Data Acquisition (TDAQ) project was started almost twenty years ago with the aim of providing a scalable distributed data collection system for the experiment. While the software dealing with physics data flow was implemented directly on top of low-level communication protocols, like TCP and UDP, the control and monitoring infrastructure services for the TDAQ system were implemented on top of the CORBA communication middleware. CORBA provides a high-level object oriented abstraction for inter-process communication, hiding communication complexity from the developers. This approach speeds up and simplifies development of communication services but incurs some extra cost in terms of performance and resource overhead. Our experience of using CORBA for control and monitoring data exchange in the distributed TDAQ system was very successful, mostly due to the outstanding quality of the CORBA brokers which have been used in the project: omniORB for C++ and JacORB for Java. However, du...

  6. Combining a Multi-Agent System and Communication Middleware for Smart Home Control: A Universal Control Platform Architecture.

    Science.gov (United States)

    Zheng, Song; Zhang, Qi; Zheng, Rong; Huang, Bi-Qin; Song, Yi-Lin; Chen, Xin-Chu

    2017-09-16

    In recent years, the smart home field has gained wide attention for its broad application prospects. However, families using smart home systems must usually adopt various heterogeneous smart devices, including sensors and other devices, which makes it more difficult to manage and control their home system. How to design a unified control platform to deal with the collaborative control problem of heterogeneous smart devices is one of the greatest challenges in the current smart home field. The main contribution of this paper is to propose a universal smart home control platform architecture (IAPhome) based on a multi-agent system and communication middleware, which shows significant adaptability and advantages in many aspects, including heterogeneous device connectivity, collaborative control, human-computer interaction and user self-management. The communication middleware is an important foundation for designing and implementing this architecture, making it possible to integrate heterogeneous smart devices in a flexible way. A concrete method of applying the multi-agent software technique to solve the integrated control problem of the smart home system is also presented. The proposed platform architecture has been tested in a real smart home environment, and the results indicate the effectiveness of our approach for solving the collaborative control problem of different smart devices.
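
    The collaborative-control idea of wrapping heterogeneous devices behind a uniform agent interface can be hedged into a minimal sketch. Everything here (class names, capability strings, the dispatch rule) is hypothetical and merely stands in for IAPhome's actual multi-agent design, which the abstract does not describe at code level.

```python
# Hedged sketch: each heterogeneous device is wrapped in an agent sharing one
# message interface, and a platform agent routes user commands to whichever
# agent advertises the matching capability. All names are invented.
class DeviceAgent:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)

    def handle(self, command, arg):
        # A real agent would translate this into a device-specific protocol.
        return f"{self.name}: {command}={arg}"

class PlatformAgent:
    def __init__(self):
        self.agents = []

    def register(self, agent):
        self.agents.append(agent)

    def dispatch(self, command, arg):
        for agent in self.agents:
            if command in agent.capabilities:
                return agent.handle(command, arg)
        raise LookupError(f"no agent offers {command!r}")

platform = PlatformAgent()
platform.register(DeviceAgent("thermostat", {"set_temperature"}))
platform.register(DeviceAgent("lamp", {"set_brightness"}))
print(platform.dispatch("set_brightness", 70))  # → lamp: set_brightness=70
```

    The point of the pattern is that the user-facing control surface stays uniform while each agent hides one device's protocol behind the shared interface.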

  7. Combining a Multi-Agent System and Communication Middleware for Smart Home Control: A Universal Control Platform Architecture

    Science.gov (United States)

    Zheng, Song; Zhang, Qi; Zheng, Rong; Huang, Bi-Qin; Song, Yi-Lin; Chen, Xin-Chu

    2017-01-01

    In recent years, the smart home field has gained wide attention for its broad application prospects. However, families using smart home systems must usually adopt various heterogeneous smart devices, including sensors and other devices, which makes it more difficult to manage and control their home system. How to design a unified control platform to deal with the collaborative control problem of heterogeneous smart devices is one of the greatest challenges in the current smart home field. The main contribution of this paper is to propose a universal smart home control platform architecture (IAPhome) based on a multi-agent system and communication middleware, which shows significant adaptability and advantages in many aspects, including heterogeneous device connectivity, collaborative control, human-computer interaction and user self-management. The communication middleware is an important foundation for designing and implementing this architecture, making it possible to integrate heterogeneous smart devices in a flexible way. A concrete method of applying the multi-agent software technique to solve the integrated control problem of the smart home system is also presented. The proposed platform architecture has been tested in a real smart home environment, and the results indicate the effectiveness of our approach for solving the collaborative control problem of different smart devices. PMID:28926957

  8. Bio-inspired Autonomic Structures: a middleware for Telecommunications Ecosystems

    Science.gov (United States)

    Manzalini, Antonio; Minerva, Roberto; Moiso, Corrado

    Today, people are making use of several devices for communications, for accessing multi-media content services, for data/information retrieval, for processing, computing, etc.: examples are laptops, PDAs, mobile phones, digital cameras, mp3 players, smart cards and smart appliances. One of the most attractive service scenarios for the future Telecommunications and Internet is the one where people will be able to browse any object in the environment they live in: communications, sensing and processing of data and services will be highly pervasive. In this vision, people, machines, artifacts and the surrounding space will create a kind of computational environment and, at the same time, the interfaces to the network resources. A challenging technological issue will be the interconnection and management of heterogeneous systems and a huge amount of small devices tied together in networks of networks. Moreover, future network and service infrastructures should be able to provide Users and Application Developers (at different levels, e.g., residential Users but also SMEs, LEs, ASPs/Web2.0 Service Providers, ISPs, Content Providers, etc.) with the most appropriate "environment" according to their context and specific needs. Operators must be ready to manage such a level of complexity, enabling their platforms with technological advances allowing network and service self-supervision and self-adaptation capabilities. Autonomic software solutions, enhanced with innovative bio-inspired mechanisms and algorithms, are promising areas of long term research to face such challenges. This chapter proposes a bio-inspired autonomic middleware capable of leveraging the assets of the underlying network infrastructure whilst, at the same time, supporting the development of future Telecommunications and Internet Ecosystems.

  9. Communication tools between Grid virtual organisations, middleware deployers and sites

    CERN Document Server

    Dimou, Maria

    2008-01-01

    Grid Deployment suffers today from the difficulty of reaching users and site administrators when a package or a configuration parameter changes. Release notes, twiki pages and news broadcasts are not efficient enough. The message to the user community presented here is the interest of using GGUS as an efficient and effective intra-project communication tool. The purpose of GGUS is to bring together End Users and Supporters in the Regions where the Grid is deployed and in operation. Today's Grid usage is still very far from the simplicity and functionality of the web. While pressing for middleware usability, we try to turn the Global Grid User Support (GGUS) into the central tool for identifying areas in the support environment that need attention. To do this, we exploit GGUS' capacity to expand, by including new Support Units that follow the project's operational structure. Using tailored GGUS database searches we obtain concrete results that prove where we need to improve procedures, Service Level Agreemen...

  10. Interactive access to LP DAAC satellite data archives through a combination of open-source and custom middleware web services

    Science.gov (United States)

    Davis, Brian N.; Werpy, Jason; Friesz, Aaron M.; Impecoven, Kevin; Quenzer, Robert; Maiersperger, Tom; Meyer, David J.

    2015-01-01

    Current methods of searching for and retrieving data from satellite land remote sensing archives do not allow for interactive information extraction. Instead, Earth science data users are required to download files over low-bandwidth networks to local workstations and process data before science questions can be addressed. New methods of extracting information from data archives need to become more interactive to meet user demands for deriving increasingly complex information from rapidly expanding archives. Moving the tools required for processing data to computer systems of data providers, and away from systems of the data consumer, can improve turnaround times for data processing workflows. The implementation of middleware services was used to provide interactive access to archive data. The goal of this middleware services development is to enable Earth science data users to access remote sensing archives for immediate answers to science questions instead of links to large volumes of data to download and process. Exposing data and metadata to web-based services enables machine-driven queries and data interaction. Also, product quality information can be integrated to enable additional filtering and sub-setting. Only the reduced content required to complete an analysis is then transferred to the user.

  11. Proxmox high availability

    CERN Document Server

    Cheng, Simon MC

    2014-01-01

    If you want to know the secrets of virtualization and how to implement high availability on your services, this is the book for you. For those of you who are already using Proxmox, this book offers you the chance to build a high availability cluster with a distributed filesystem to further protect your system from failure.

  12. QoS-aware self-adaptation of communication protocols in a pervasive service middleware

    DEFF Research Database (Denmark)

    Zhang, Weishan; Hansen, Klaus Marius; Fernandes, João

    2010-01-01

    Pervasive computing is characterized by heterogeneous devices that usually have scarce resources requiring optimized usage. These devices may use different communication protocols, which can be switched at runtime. As different communication protocols have different quality of service (QoS) properties, this motivates optimized self-adaptation of protocols for devices, considering power consumption and other QoS requirements such as round trip time (RTT) for service invocations, throughput, and reliability. In this paper, we present an extensible approach for self-adaptation of communication protocols for pervasive web services, where protocols are designed as reusable connectors and our middleware infrastructure hides the complexity of using different communication protocols from upper layers. We also propose to use Genetic Algorithms (GAs) to find optimized configurations at runtime...
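
    A minimal sketch of the GA-based configuration search, under invented assumptions: the protocol table and QoS weights below are illustrative values, not measurements from the middleware. The point is only the mechanism, a genetic search picking one protocol per device so that a weighted power/RTT cost is minimized.

```python
import random

# Illustrative GA sketch (not the paper's actual implementation): choose one
# communication protocol per device to minimize a weighted QoS cost.
# The protocol cost table below is hypothetical.
PROTOCOLS = {
    "udp":  {"power": 1.0, "rtt": 5.0},
    "tcp":  {"power": 2.0, "rtt": 3.0},
    "http": {"power": 3.0, "rtt": 8.0},
}
NAMES = list(PROTOCOLS)

def cost(genome, w_power=0.5, w_rtt=0.5):
    # Weighted sum of power draw and round-trip time across all devices.
    return sum(w_power * PROTOCOLS[g]["power"] + w_rtt * PROTOCOLS[g]["rtt"]
               for g in genome)

def evolve(n_devices=6, pop_size=20, generations=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.choice(NAMES) for _ in range(n_devices)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                      # lower cost is fitter
        survivors = pop[: pop_size // 2]        # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_devices)
            child = a[:cut] + b[cut:]           # one-point crossover
            if rng.random() < 0.2:              # point mutation
                child[rng.randrange(n_devices)] = rng.choice(NAMES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
print(best, cost(best))
```

    With equal weights, "tcp" has the lowest per-device cost in this toy table, so the search converges toward a tcp-heavy configuration; changing the weights shifts the optimum, which is the runtime-adaptation knob the paper's approach exploits.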

  13. Internet of Things based on smart objects technology, middleware and applications

    CERN Document Server

    Trunfio, Paolo

    2014-01-01

    The Internet of Things (IoT) usually refers to a world-wide network of interconnected heterogeneous objects (sensors, actuators, smart devices, smart objects, RFID, embedded computers, etc.) that are uniquely addressable and based on standard communication protocols. Beyond such a definition, a new definition of the IoT is emerging: a loosely coupled, decentralized system of cooperating smart objects (SOs). An SO is an autonomous, physical digital object augmented with sensing/actuating, processing, storing, and networking capabilities. SOs are able to sense/actuate, store, and interpret information created within themselves and in the neighbouring external world where they are situated, act on their own, cooperate with each other, and exchange information with other kinds of electronic devices and human users. However, such an SO-oriented IoT raises many in-the-small and in-the-large issues involving SO programming, IoT system architecture/middleware and methods/methodologies for the development of SO-based applica...

  14. Scientific component framework for W7-X using service oriented GRID middleware

    International Nuclear Information System (INIS)

    Werner, A.; Svensson, J.; Kuehner, G.; Bluhm, T.; Heimann, P.; Jakob, S.; Hennig, C.; Kroiss, H.; Laqua, H.; Lewerentz, M.; Riemann, H.; Schacht, J.; Spring, A.; Zilker, M.; Maier, J.

    2010-01-01

    Future fusion experiments, aiming to demonstrate steady state reactor operation, require physics driven plasma control based on increasingly complex plasma models. A precondition for establishing such control systems is widely automated data analysis, which can provide integration of multiple diagnostics on a large scale. Even high quality online data evaluation, which is essential for the scientific documentation of the experiment, has to be performed automatically due to the huge data sets being recorded in long discharge runs. An automated system that can handle these requirements will have to be built on reusable software components that can be maintained by the domain experts: diagnosticians, theorists, engineers and others. For Wendelstein 7-X a service oriented architecture seems appropriate, in which software components can be exposed as services with well defined interface contracts. Although grid computing has up to now mainly been used for remote job execution, a more promising service oriented middleware has emerged from the recent grid specification, the Open Grid Services Architecture (OGSA). It is based on stateful web services defined by the Web Services Resource Framework (WSRF) standard. In particular, the statefulness of services allows complex models to be set up without unnecessary performance losses from frequent transmission of large and complex data sets. At present, the usability of this technology in the W7-X CoDaC context is under evaluation through first service implementations.

  15. A Middleware Solution for Wireless IoT Applications in Sparse Smart Cities

    Science.gov (United States)

    Lanzone, Stefano; Riberto, Giulio; Stefanelli, Cesare; Tortonesi, Mauro

    2017-01-01

    The spread of off-the-shelf mobile devices equipped with multiple wireless interfaces together with sophisticated sensors is paving the way to novel wireless Internet of Things (IoT) environments, characterized by multi-hop infrastructure-less wireless networks where devices carried by users act as sensors/actuators as well as network nodes. In particular, the paper presents Real Ad-hoc Multi-hop Peer-to-peer Wireless IoT Application (RAMP-WIA), a novel solution that facilitates the development, deployment, and management of applications in sparse Smart City environments, characterized by users willing to collaborate by allowing new applications to be deployed on their smartphones to remotely monitor and control fixed/mobile devices. RAMP-WIA allows users to dynamically configure single-hop wireless links, to manage opportunistically multi-hop packet dispatching considering that the network topology (together with the availability of sensors and actuators) may abruptly change, to actuate reliably sensor nodes specifically considering that only part of them could be actually reachable in a timely manner, and to upgrade dynamically the nodes through over-the-air distribution of new software components. The paper also reports the performance of RAMP-WIA on simple but realistic cases of small-scale deployment scenarios with off-the-shelf Android smartphones and Raspberry Pi devices; these results show not only the feasibility and soundness of the proposed approach, but also the efficiency of the middleware implemented when deployed on real testbeds. PMID:29099745

  16. A Middleware Solution for Wireless IoT Applications in Sparse Smart Cities.

    Science.gov (United States)

    Bellavista, Paolo; Giannelli, Carlo; Lanzone, Stefano; Riberto, Giulio; Stefanelli, Cesare; Tortonesi, Mauro

    2017-11-03

    The spread of off-the-shelf mobile devices equipped with multiple wireless interfaces together with sophisticated sensors is paving the way to novel wireless Internet of Things (IoT) environments, characterized by multi-hop infrastructure-less wireless networks where devices carried by users act as sensors/actuators as well as network nodes. In particular, the paper presents Real Ad-hoc Multi-hop Peer-to-peer Wireless IoT Application (RAMP-WIA), a novel solution that facilitates the development, deployment, and management of applications in sparse Smart City environments, characterized by users willing to collaborate by allowing new applications to be deployed on their smartphones to remotely monitor and control fixed/mobile devices. RAMP-WIA allows users to dynamically configure single-hop wireless links, to manage opportunistically multi-hop packet dispatching considering that the network topology (together with the availability of sensors and actuators) may abruptly change, to actuate reliably sensor nodes specifically considering that only part of them could be actually reachable in a timely manner, and to upgrade dynamically the nodes through over-the-air distribution of new software components. The paper also reports the performance of RAMP-WIA on simple but realistic cases of small-scale deployment scenarios with off-the-shelf Android smartphones and Raspberry Pi devices; these results show not only the feasibility and soundness of the proposed approach, but also the efficiency of the middleware implemented when deployed on real testbeds.

  17. JUNOS High Availability

    CERN Document Server

    Sonderegger, James; Milne, Kieran; Palislamovic, Senad

    2009-01-01

    Whether your network is a complex carrier or just a few machines supporting a small enterprise, JUNOS High Availability will help you build reliable and resilient networks that include Juniper Networks devices. With this book's valuable advice on software upgrades, scalability, remote network monitoring and management, high-availability protocols such as VRRP, and more, you'll have your network uptime at the five, six, or even seven nines -- or 99.99999% of the time. Rather than focus on "greenfield" designs, the authors explain how to intelligently modify multi-vendor networks. You'll learn

  18. Experiences with Software Quality Metrics in the EMI middleware

    International Nuclear Information System (INIS)

    Alandes, M; Meneses, D; Pucciani, G; Kenny, E M

    2012-01-01

    The EMI Quality Model has been created to define, and later review, the EMI (European Middleware Initiative) software product and process quality. A quality model is based on a set of software quality metrics and helps to set clear and measurable quality goals for software products and processes. The EMI Quality Model follows the ISO/IEC 9126 Software Engineering – Product Quality standard to identify a set of characteristics that need to be present in the EMI software. For each software characteristic, such as portability, maintainability, compliance, etc., a set of associated metrics and KPIs (Key Performance Indicators) are identified. This article presents how the EMI Quality Model and the EMI Metrics have been defined in the context of the software quality assurance activities carried out in EMI. It also describes the measurement plan and presents some of the metrics reports that have been produced for the EMI releases and updates. It also covers which tools and techniques can be used by any software project to extract “code metrics” on the status of the software products and “process metrics” related to the quality of the development and support process such as reaction time to critical bugs, requirements tracking and delays in product releases.

  19. High availability IT services

    CERN Document Server

    Critchley, Terry

    2014-01-01

    This book starts with the basic premise that a service is comprised of the 3Ps: products, processes, and people. Moreover, these entities and their sub-entities interlink to support the services that end users require to run and support a business. This widens the scope of any availability design far beyond hardware and software. It also increases the potential for service failure for reasons beyond just hardware and software; the concept of logical outages. High Availability IT Services details the considerations for designing and running highly available "services" and not just the systems

  20. Introducing high availability to non high available designed applications

    Energy Technology Data Exchange (ETDEWEB)

    Zelnicek, Pierre; Kebschull, Udo [Kirchhoff Institute of Physics, Ruprecht-Karls-University Heidelberg (Germany); Haaland, Oystein Senneset [Physic Institut, University of Bergen, Bergen (Norway); Lindenstruth, Volker [Frankfurt Institut fuer Advanced Studies, University Frankfurt (Germany)

    2010-07-01

    A common problem in scientific computing environments and compute clusters today is how to apply high availability to legacy applications. These applications are becoming more and more of a problem in increasingly complex environments with business grade availability constraints that require 24 x 7 x 365 operation. For a majority of applications, redesign is not an option, either because they are closed source or because the effort involved would be as great as rewriting the application from scratch. Neither is letting normal operators restart and reconfigure the applications on backup nodes a solution. In addition to the possibility of mistakes by non-experts and the cost of keeping personnel at work 24/7, such operations would require administrator privileges within the compute environment and would therefore be a security risk. Therefore, these legacy applications have to be monitored and, if a failure occurs, autonomously migrated to a working node. The Pacemaker framework is designed for both tasks and ensures the availability of the legacy applications. Distributed redundant block devices are used for fault tolerant distributed data storage. The result is an Availability Environment Classification 2 (AEC-2).
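
    The monitor-and-restart half of this scheme can be hedged into a toy supervisor: treat the legacy command as a black box, restart it when it exits abnormally, and give up after a bounded number of local restarts (the point at which a real cluster would migrate the resource to a backup node). This is a sketch only; Pacemaker's actual resource agents and fencing logic are far more involved.

```python
import subprocess
import sys
import time

# Toy black-box supervisor for a legacy application. The policy shown
# (poll, restart on non-zero exit, bounded restart count) is an invented
# simplification of what a cluster resource manager like Pacemaker does.
def supervise(cmd, max_restarts=3, poll_interval=0.05):
    restarts = 0
    proc = subprocess.Popen(cmd)
    while True:
        ret = proc.poll()
        if ret is None:
            time.sleep(poll_interval)   # still running, keep watching
            continue
        if ret == 0:
            return restarts             # clean exit, nothing to do
        if restarts >= max_restarts:
            # A real cluster would now migrate the service to a backup node.
            return restarts
        restarts += 1                   # crashed: restart the legacy app
        proc = subprocess.Popen(cmd)

# A command that always fails stands in for a crashing legacy binary.
count = supervise([sys.executable, "-c", "import sys; sys.exit(1)"])
print(count)
```

    Because the application is never modified, this style of supervision is exactly what makes high availability retrofittable onto closed-source legacy software.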

  1. Scalable and Resilient Middleware to Handle Information Exchange during Environment Crisis

    Science.gov (United States)

    Tao, R.; Poslad, S.; Moßgraber, J.; Middleton, S.; Hammitzsch, M.

    2012-04-01

    The EU FP7 TRIDEC project focuses on enabling real-time, intelligent information management for collaborative, complex, critical decision processes in earth management. A key challenge is to provide a communication infrastructure that facilitates interoperable environment information services during environment events and crises such as tsunamis and drilling, during which increasing volumes and dimensionality of disparate information sources, including sensor-based and human-based ones, can result and need to be managed. Such a system needs to support: scalable, distributed messaging; asynchronous messaging; open messaging that handles changing clients, such as new and retired automated systems and human information sources coming online or going offline; flexible data filtering; and heterogeneous access networks (e.g., GSM, WLAN and LAN). In addition, the system needs to be resilient to ICT system problems such as failure, degradation and overload during environment events. There are several middleware choices for TRIDEC, based upon a Service-Oriented Architecture (SOA), an Event-Driven Architecture (EDA), Cloud computing, and an Enterprise Service Bus (ESB). In an SOA, everything is a service (e.g. data access, processing and exchange); clients can request on demand or subscribe to services registered by providers; interaction is most often synchronous. In an EDA system, events that represent significant changes in state can be processed individually, as streams, or in more complex combinations. Cloud computing offers a virtualized, interoperable and elastic resource allocation model. An ESB, a fundamental component for enterprise messaging, supports synchronous and asynchronous message exchange models and has inbuilt resilience against ICT failure. Our middleware proposal is an ESB based hybrid architecture model: an SOA extension supports more synchronous workflows; EDA assists the ESB to handle more complex event processing; Cloud computing can be used to increase and
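
    A minimal sketch of the hybrid ESB idea, with invented names throughout: one bus object offering asynchronous publish/subscribe for EDA-style event delivery alongside synchronous request/reply against registered services for SOA-style workflows. This is not TRIDEC's actual API, just the combination pattern the abstract describes.

```python
from collections import defaultdict

# Toy ESB-like bus combining EDA (publish/subscribe) and SOA
# (synchronous request/reply). Topic and service names are illustrative.
class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)
        self._services = {}

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Fire-and-forget delivery to every subscriber of the topic.
        for handler in self._subscribers[topic]:
            handler(event)

    def register_service(self, name, fn):
        self._services[name] = fn

    def request(self, name, payload):
        # Synchronous SOA-style invocation of a registered service.
        return self._services[name](payload)

bus = MessageBus()
alerts = []
bus.subscribe("sensor/tsunami", alerts.append)
bus.register_service("magnitude", lambda reading: reading["height_m"] * 10)

bus.publish("sensor/tsunami", {"height_m": 2})
print(alerts)                                    # → [{'height_m': 2}]
print(bus.request("magnitude", {"height_m": 2}))
```

    The same dispatch core serves both styles, which is why an ESB is a natural base for hybrid SOA/EDA architectures: subscribers come and go without the publishers noticing, while synchronous workflows still get request/reply semantics.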

  2. MongoDB high availability

    CERN Document Server

    Mehrabani, Afshin

    2014-01-01

    This book has a perfect balance of concepts and their practical implementation along with solutions to make a highly available MongoDB server with clear instructions and guidance. If you are using MongoDB in a production environment and need a solution to make a highly available MongoDB server, this book is ideal for you. Familiarity with MongoDB is expected so that you understand the content of this book.

  3. Combining Wireless Sensor Networks and Semantic Middleware for an Internet of Things-Based Sportsman/Woman Monitoring Application

    Directory of Open Access Journals (Sweden)

    Lourdes López

    2013-01-01

    Full Text Available Wireless Sensor Networks (WSNs) are spearheading the efforts taken to build and deploy systems aiming to accomplish the ultimate objectives of the Internet of Things. Due to the sensors that WSN nodes are provided with, and to their ubiquity and pervasive capabilities, these networks become extremely suitable for many applications that so-called conventional cabled or wireless networks are unable to handle. One of these still underdeveloped applications is monitoring physical parameters of a person. This is an especially interesting application considering the person's age or activity, since any detected hazardous parameter can be notified not only to the monitored person as a warning, but also to any third party that may be helpful under critical circumstances, such as relatives or healthcare centers. We propose a system built to monitor a sportsman/woman during a workout session or while performing a sport-related indoor activity. Sensors have been deployed on several nodes acting as the nodes of a WSN, along with a semantic middleware development used to abstract away hardware complexity. The data extracted from the environment, combined with the information obtained from the user, will compose the basis of the services that can be obtained.
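
    The monitoring logic described in the abstract, readings checked against safe ranges with hazards reported both to the monitored person and to third parties, might look like the following sketch. The parameter names, thresholds, and notification targets are all invented for illustration.

```python
# Illustrative alerting sketch: each sensor reading is checked against a
# per-parameter safe range, and out-of-range values are reported to every
# registered contact. Thresholds and contacts are hypothetical.
SAFE_RANGES = {"heart_rate": (50, 180), "body_temp_c": (35.0, 39.0)}

def check(readings, notify):
    alerts = []
    for param, value in readings.items():
        low, high = SAFE_RANGES[param]
        if not (low <= value <= high):
            alerts.append(param)
            for contact in notify:
                # A real system would push this over the WSN/middleware.
                print(f"alert to {contact}: {param}={value}")
    return alerts

alerts = check({"heart_rate": 195, "body_temp_c": 36.5},
               notify=["user", "healthcare-center"])
print(alerts)  # → ['heart_rate']
```

    The semantic middleware layer the paper proposes would sit below this logic, so the check itself never needs to know which physical sensor produced a reading.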

  4. Developing a middleware to support HDF data access in ArcGIS

    Science.gov (United States)

    Sun, M.; Jiang, Y.; Yang, C. P.

    2014-12-01

    Hierarchical Data Format (HDF) is the standard data format for NASA Earth Observing System (EOS) data products, such as the MODIS level-3 data. These data have been widely used in long-term studies of the land surface, biosphere, atmosphere, and oceans of the Earth. Several toolkits have been developed to access HDF data, such as the HDF viewer and the Geospatial Data Abstraction Library (GDAL). ArcGIS integrates GDAL, giving data users a Graphical User Interface (GUI) for reading HDF data. However, several problems remain when using these toolkits: for example, 1) the projection information is not recognized correctly, 2) the image is displayed inverted, and 3) the tools lack the capability to read the third-dimension information stored in the data subsets. Accordingly, in this study we attempt to improve the current HDF toolkits to address the aforementioned issues. Considering the wide usage of ArcGIS, we develop a middleware for ArcGIS based on GDAL to solve the particular data access problems occurring in ArcGIS, so that data users can access HDF data successfully and perform further data analysis with the ArcGIS geoprocessing tools.
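
    One of the listed problems, the inverted image, typically comes down to row order: some HDF grids are stored bottom-up while the GIS client expects a top-down raster. A minimal sketch of the row flip such a middleware could apply, on a plain nested list for illustration (a real implementation would operate on the GDAL band array and also adjust the geotransform's row step):

```python
# Flip a bottom-up 2-D grid so that row 0 becomes the northernmost scan line.

def flip_north_up(grid):
    """Reverse row order of a 2-D grid given as a list of rows."""
    return grid[::-1]

bottom_up = [[1, 2],   # southernmost row first, as stored in the file
             [3, 4],
             [5, 6]]   # northernmost row last

top_down = flip_north_up(bottom_up)
print(top_down[0])  # [5, 6] -- northernmost row now first
```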

  5. HYDRA: A Middleware-Oriented Integrated Architecture for e-Procurement in Supply Chains

    Science.gov (United States)

    Alor-Hernandez, Giner; Aguilar-Lasserre, Alberto; Juarez-Martinez, Ulises; Posada-Gomez, Ruben; Cortes-Robles, Guillermo; Garcia-Martinez, Mario Alberto; Gomez-Berbis, Juan Miguel; Rodriguez-Gonzalez, Alejandro

    The Service-Oriented Architecture (SOA) development paradigm has emerged to improve the critical issues of creating, modifying and extending solutions for business processes integration, incorporating process automation and automated exchange of information between organizations. Web services technology follows the SOA's principles for developing and deploying applications. Besides, Web services are considered as the platform for SOA, for both intra- and inter-enterprise communication. However, an SOA does not incorporate information about occurring events into business processes, which are the main features of supply chain management. These events and information delivery are addressed in an Event-Driven Architecture (EDA). Taking this into account, we propose a middleware-oriented integrated architecture that offers a brokering service for the procurement of products in a Supply Chain Management (SCM) scenario. As salient contributions, our system provides a hybrid architecture combining features of both SOA and EDA and a set of mechanisms for business processes pattern management, monitoring based on UML sequence diagrams, Web services-based management, event publish/subscription and reliable messaging service.
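
    The event-routing core of such a hybrid SOA/EDA broker can be sketched as a topic-based publish/subscribe table; `ProcurementBroker` and the category names below are illustrative only, not the system described in the paper:

```python
from collections import defaultdict

# Toy event-driven brokering sketch: suppliers subscribe to product
# categories, and the broker pushes each procurement event to every
# matching subscriber callback.

class ProcurementBroker:
    def __init__(self):
        self.subscribers = defaultdict(list)  # category -> callbacks

    def subscribe(self, category, callback):
        self.subscribers[category].append(callback)

    def publish(self, category, event):
        # Fan the event out only to subscribers of this category.
        for callback in self.subscribers[category]:
            callback(event)

broker = ProcurementBroker()
offers = []
broker.subscribe("steel", lambda e: offers.append(f"quote for {e}"))
broker.publish("steel", "order-42")
print(offers)  # ['quote for order-42']
```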

  6. The Path to Convergence: Design, Coordination and Social Issues in the Implementation of a Middleware Data Broker.

    Science.gov (United States)

    Slota, S.; Khalsa, S. J. S.

    2015-12-01

    Infrastructures are the result of systems, networks, and inter-networks that accrete, overlay and segment one another over time. As a result, working infrastructures represent a broad heterogeneity of elements - data types, computational resources, material substrates (computing hardware, physical infrastructure, labs, physical information resources, etc.) as well as organizational and social functions which result in divergent outputs and goals. Cyberinfrastructure engineering often defaults to a separation of the social from the technical, with the result that the engineering succeeds only in limited ways or exposes unanticipated points of failure within the system. Studying the development of middleware intended to mediate interactions among systems within an earth systems science infrastructure exposes organizational, technical and standards-focused negotiations endemic to a fundamental trait of infrastructure: its characteristic invisibility in use. Intended to perform a core function within the EarthCube cyberinfrastructure, the development, governance and maintenance of an automated brokering system is a microcosm of large-scale infrastructural efforts. Points of potential system failure, regardless of the extent to which they are more social or more technical in nature, can be considered in terms of the reverse salient: a point of social and material configuration that momentarily lags behind the progress of an emerging or maturing infrastructure. The implementation of the BCube data broker has exposed reverse salients with regard to the overall EarthCube infrastructure (and the role of middleware brokering) in the form of organizational factors such as infrastructural alignment, maintenance and resilience; differing and incompatible practices of data discovery and evaluation among users and stakeholders; and a preponderance of local variations in the implementation of standards and authentication in data access. These issues are characterized by their

  7. GeoSearch: A lightweight broking middleware for geospatial resources discovery

    Science.gov (United States)

    Gui, Z.; Yang, C.; Liu, K.; Xia, J.

    2012-12-01

    With petabytes of geodata and thousands of geospatial web services available over the Internet, it is critical to support geoscience research and applications by finding the best-fit geospatial resources from these massive and heterogeneous resources. The past decades' developments witnessed the operation of many service components to facilitate geospatial resource management and discovery. However, efficient and accurate geospatial resource discovery is still a big challenge for the following reasons: 1) Entry barriers (also called "learning curves") hinder the usability of discovery services for end users. Different portals and catalogues adopt various access protocols, metadata formats and GUI styles to organize, present and publish metadata, and it is hard for end users to learn all these technical details and differences. 2) The cost of federating heterogeneous services is high. To provide sufficient resources and facilitate data discovery, many registries adopt periodic harvesting mechanisms to retrieve metadata from other federated catalogues. These time-consuming processes lead to network and storage burdens, data redundancy, and the overhead of maintaining data consistency. 3) Heterogeneous semantics complicate data discovery. Since keyword matching is still the primary search method in many operational discovery services, search accuracy (precision and recall) is hard to guarantee. Semantic technologies (such as semantic reasoning and similarity evaluation) offer a solution to these issues, but integrating them with existing services is challenging due to the limited expandability of the service frameworks and metadata templates. 4) The capabilities to help users make a final selection are inadequate. Most existing search portals lack intuitive and diverse information-visualization methods and functions (sort, filter) to present, explore and analyze search results. Furthermore, the presentation of the value
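
    As a toy illustration of the semantics issue above, a set-overlap score such as the Jaccard index goes one small step beyond exact keyword matching. The records and terms below are invented, and GeoSearch itself relies on richer semantic reasoning and similarity evaluation:

```python
# Rank metadata records against a query by term-set overlap (Jaccard index).

def jaccard(query_terms, record_terms):
    """Similarity in [0, 1]: |intersection| / |union| of the term sets."""
    a, b = set(query_terms), set(record_terms)
    return len(a & b) / len(a | b) if a | b else 0.0

records = {
    "landsat_scene": ["satellite", "imagery", "land", "cover"],
    "buoy_sst":      ["ocean", "temperature", "buoy"],
}
query = ["satellite", "land", "cover"]
ranked = sorted(records, key=lambda r: jaccard(query, records[r]), reverse=True)
print(ranked[0])  # landsat_scene
```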

  8. Applying a message oriented middleware architecture to the TJ-II remote participation system

    International Nuclear Information System (INIS)

    Sanchez, E.; Portas, A.; Pereira, A.; Vega, J.

    2006-01-01

    A message oriented middleware (MOM) has been introduced into the TJ-II data acquisition system to distribute information on-line. Java message service (JMS) has been chosen as the messaging application program interface (API) in order to ensure multiplatform portability. A library of C++ classes providing an interface to the JMS Java classes has been developed, allowing C++ programs to inter-communicate through JMS. In addition, a set of C wrapper functions has been developed to provide basic messaging functionalities to C or FORTRAN programs. These functions are used in TJ-II LabView data acquisition applications. Several software applications that take advantage of the MOM architecture have been developed. Firstly, a general-user application allows monitoring of the data acquisition systems. Secondly, a simple application permits the visualization of TJ-II monitor signals with on-line data refreshing. These applications are written in the Java language, thereby ensuring their portability. These software tools provide new functionalities to the TJ-II remote participation system and are equally used in the local environment.
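
    The publish/subscribe messaging that JMS provides here can be sketched in-process: each subscriber owns a queue, and publishing to a topic fans the message out to every queue. Class names and the example topic are illustrative, not the JMS API:

```python
import queue

# Minimal topic-based fan-out, mimicking the shape of MOM publish/subscribe.

class Topic:
    def __init__(self, name):
        self.name = name
        self._subscribers = []

    def subscribe(self):
        """Register a new subscriber and return its private message queue."""
        q = queue.Queue()
        self._subscribers.append(q)
        return q

    def publish(self, message):
        """Deliver the message to every subscriber's queue."""
        for q in self._subscribers:
            q.put(message)

monitor = Topic("tj2.monitor.signals")        # hypothetical topic name
viewer = monitor.subscribe()                  # e.g. an on-line visualization client
monitor.publish({"signal": "density", "value": 1.2})
print(viewer.get_nowait()["signal"])          # density
```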

  9. Applying a message oriented middleware architecture to the TJ-II remote participation system

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, E. [Asociacion EURATOM/CIEMAT para Fusion, Avda Complutense 22, 28040 Madrid (Spain)]. E-mail: edi.sanchez@ciemat.es; Portas, A. [Asociacion EURATOM/CIEMAT para Fusion, Avda Complutense 22, 28040 Madrid (Spain); Pereira, A. [Asociacion EURATOM/CIEMAT para Fusion, Avda Complutense 22, 28040 Madrid (Spain); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, Avda Complutense 22, 28040 Madrid (Spain)

    2006-07-15

    A message oriented middleware (MOM) has been introduced into the TJ-II data acquisition system to distribute information on-line. Java message service (JMS) has been chosen as the messaging application program interface (API) in order to ensure multiplatform portability. A library of C++ classes providing an interface to the JMS Java classes has been developed, allowing C++ programs to inter-communicate through JMS. In addition, a set of C wrapper functions has been developed to provide basic messaging functionalities to C or FORTRAN programs. These functions are used in TJ-II LabView data acquisition applications. Several software applications that take advantage of the MOM architecture have been developed. Firstly, a general-user application allows monitoring of the data acquisition systems. Secondly, a simple application permits the visualization of TJ-II monitor signals with on-line data refreshing. These applications are written in the Java language, thereby ensuring their portability. These software tools provide new functionalities to the TJ-II remote participation system and are equally used in the local environment.

  10. High Availability in Optical Networks

    Science.gov (United States)

    Grover, Wayne D.; Wosinska, Lena; Fumagalli, Andrea

    2005-09-01

    Call for Papers: High Availability in Optical Networks Submission Deadline: 1 January 2006 The Journal of Optical Networking (JON) is soliciting papers for a feature Issue pertaining to all aspects of reliable components and systems for optical networks and concepts, techniques, and experience leading to high availability of services provided by optical networks. Most nations now recognize that telecommunications in all its forms -- including voice, Internet, video, and so on -- are "critical infrastructure" for the society, commerce, government, and education. Yet all these services and applications are almost completely dependent on optical networks for their realization. "Always on" or apparently unbreakable communications connectivity is the expectation from most users and for some services is the actual requirement as well. Achieving the desired level of availability of services, and doing so with some elegance and efficiency, is a meritorious goal for current researchers. This requires development and use of high-reliability components and subsystems, but also concepts for active reconfiguration and capacity planning leading to high availability of service through unseen fast-acting survivability mechanisms. The feature issue is also intended to reflect some of the most important current directions and objectives in optical networking research, which include the aspects of integrated design and operation of multilevel survivability and realization of multiple Quality-of-Protection service classes. Dynamic survivable service provisioning, or batch re-provisioning is an important current theme, as well as methods that achieve high availability at far less investment in spare capacity than required by brute force service path duplication or 100% redundant rings, which is still the surprisingly prevalent practice. Papers of several types are envisioned in the feature issue, including outlook and forecasting types of treatments, optimization and analysis, new

  11. Running high availability services in hybrid cloud

    OpenAIRE

    Dzekunskas, Karolis

    2018-01-01

    IT infrastructure is now expanding rapidly. Many enterprises are considering migration to the cloud to increase service availability. High availability services and advanced technologies make it possible to find a flexible and scalable balance between resources and costs. The aim of this work is to prove that high availability services in a hybrid cloud are secure, flexible, optimized and available to anyone. This paper provides a detailed explanation of the imitation of two datacenters with ...

  12. Streamlined sign-out of capillary protein electrophoresis using middleware and an open-source macro application

    Directory of Open Access Journals (Sweden)

    Gagan Mathur

    2014-01-01

    Full Text Available Background: Interfacing of clinical laboratory instruments with the laboratory information system (LIS) via "middleware" software is increasingly common. Our clinical laboratory implemented capillary electrophoresis using a Sebia Capillarys-2™ (Norcross, GA, USA) instrument for serum and urine protein electrophoresis. Using Data Innovations Instrument Manager, an interface was established with the LIS (Cerner) that allowed for bi-directional transmission of numeric data. However, the text of the interpretive pathology report was not properly transferred. To reduce manual effort and the possibility of error in text data transfer, we developed scripts in AutoHotkey, a free, open-source macro-creation and automation software utility. Materials and Methods: Scripts were written to create macros that automated mouse and key strokes. The scripts retrieve the specimen accession number, capture user input text, and insert the text interpretation in the correct patient record in the desired format. Results: The scripts accurately and precisely transfer narrative interpretation into the LIS. Combined with bar-code reading by the electrophoresis instrument, the scripts transfer data efficiently to the correct patient record. In addition, the AutoHotkey scripts automated repetitive key strokes required for manual entry into the LIS, making protein electrophoresis sign-out easier to learn and faster to use by the pathology residents. Scripts allow for either preliminary verification by residents or final sign-out by the attending pathologist. Conclusions: Using the open-source AutoHotkey software, we successfully improved the transfer of text data between the capillary electrophoresis software and the LIS. The use of open-source software tools should not be overlooked as a means to improve the interfacing of laboratory instruments.

  13. High-Level Development of Multiserver Online Games

    Directory of Open Access Journals (Sweden)

    Frank Glinka

    2008-01-01

    Full Text Available Multiplayer online games with support for high user numbers must provide mechanisms to support an increasing number of players by using additional resources. This paper provides a comprehensive analysis of the practically proven multiserver distribution mechanisms, zoning, instancing, and replication, and the tasks for the game developer implied by them. We propose a novel, high-level development approach which integrates the three distribution mechanisms seamlessly into today's online games. As a possible base for this high-level approach, we describe the real-time framework (RTF) middleware system, which liberates the developer from low-level tasks and allows him to stay at a high level of design abstraction. We explain how RTF supports the implementation of single-server online games and how RTF allows the three multiserver distribution mechanisms to be incorporated during the development process. Finally, we describe briefly how RTF provides manageability and maintenance functionality for online games in a grid context with dynamic resource allocation scenarios.
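
    The zoning mechanism analyzed above can be sketched as a position-to-server mapping: the game world is cut into fixed-size zones and an entity's position alone determines the responsible host. Zone size, server names and the hash-based assignment below are invented for illustration and are not the RTF API:

```python
# Zoning sketch: map a world position to a zone, then a zone to a server.

ZONE_SIZE = 100            # world units per square zone (made-up value)
SERVERS = ["srv-a", "srv-b", "srv-c", "srv-d"]

def zone_of(x, y):
    """Map a world position to a (zone_x, zone_y) id."""
    return (int(x) // ZONE_SIZE, int(y) // ZONE_SIZE)

def server_for(x, y):
    """Deterministically assign the enclosing zone to one server."""
    zx, zy = zone_of(x, y)
    return SERVERS[hash((zx, zy)) % len(SERVERS)]

print(zone_of(250, 30))    # (2, 0)
```

Two entities in the same zone always land on the same server, which is exactly what lets that server own the zone's game state.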

  14. Development of a compact mobile robot integrated into the ROS middleware

    Directory of Open Access Journals (Sweden)

    André Araújo

    2014-07-01

    Full Text Available This paper presents the TraxBot robot and its full integration in the Robot Operating System (ROS). The TraxBot is a compact mobile robotic platform developed and assembled at the Institute of Systems and Robotics (ISR) Coimbra. The goal of this work is to drastically decrease development time by providing hardware abstraction and intuitive operation modes, allowing researchers to focus on their main research motivations, e.g., multi-robot search and rescue, surveillance or swarm robotics. The potentialities of the TraxBot are described which, combined with a specifically developed ROS driver, provide several tools for data analysis and ease of interaction between multiple robots, sensors and tele-operation devices. To validate the approach, diverse experimental tests using real and virtual simulated robots were conducted. Keywords: ROS, mobile robot, Arduino, embedded system, design, middleware, assembling and testing.

  15. Hiding the Complexity: Building a Distributed ATLAS Tier-2 with a Single Resource Interface using ARC Middleware

    International Nuclear Information System (INIS)

    Purdie, S; Stewart, G; Skipsey, S; Washbrook, A; Bhimji, W; Filipcic, A; Kenyon, M

    2011-01-01

    Since their inception, Grids for high energy physics have found management of data to be the most challenging aspect of operations. This problem has generally been tackled by the experiment's data management framework controlling in fine detail the distribution of data around the grid and the careful brokering of jobs to sites with co-located data. This approach, however, presents experiments with a difficult and complex system to manage as well as introducing a rigidity into the framework which is very far from the original conception of the grid. In this paper we describe how the ScotGrid distributed Tier-2, which has sites in Glasgow, Edinburgh and Durham, was presented to ATLAS as a single, unified resource using the ARC middleware stack. In this model the ScotGrid 'data store' is hosted at Glasgow and presented as a single ATLAS storage resource. As jobs are taken from the ATLAS PanDA framework, they are dispatched to the computing cluster with the fastest response time. An ARC compute element at each site then asynchronously stages the data from the data store into a local cache hosted at each site. The job is then launched in the batch system and accesses data locally. We discuss the merits of this system compared to other operational models, both from the point of view of the resource providers (sites) and of the resource consumers (experiments), and consider issues involved in the transition to this model.
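
    The dispatch-plus-staging step described above can be sketched as follows; the site names, timings and cache structure are invented for illustration:

```python
# Pick the fastest-responding cluster, then stage the job's input into
# that site's local cache (here just a set per site).

def pick_site(response_times):
    """Choose the site with the lowest measured response time."""
    return min(response_times, key=response_times.get)

def dispatch(job, response_times, caches):
    site = pick_site(response_times)
    caches.setdefault(site, set()).add(job["input"])  # asynchronous stage-in
    return site

sites = {"glasgow": 0.8, "edinburgh": 0.3, "durham": 1.1}   # invented timings
caches = {}
chosen = dispatch({"id": "panda-1", "input": "data.root"}, sites, caches)
print(chosen, caches[chosen])   # edinburgh {'data.root'}
```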

  16. Microsoft Exchange Server 2013 high availability

    CERN Document Server

    Mota, Nuno

    2014-01-01

    This book is a hands-on practical guide that provides the reader with a number of clear scenarios and examples, making it easier to understand and apply the new concepts. Each chapter can be used as a reference, or it can be read from beginning to end, allowing consultants/administrators to build a solid and highly available Exchange 2013 environment. If you are a messaging professional who wants to learn to design a highly available Exchange 2013 environment, this book is for you. Although not a definite requirement, practical experience with Exchange 2010 is expected, without being a subject

  17. Adoption of a SAML-XACML Profile for Authorization Interoperability across Grid Middleware in OSG and EGEE

    International Nuclear Information System (INIS)

    Garzoglio, G; Chadwick, K; Dykstra, D; Hesselroth, T; Levshina, T; Sharma, N; Timm, S; Bester, J; Martin, S; Groep, D; Koeroo, O; Salle, M; Verstegen, A; Gu, J; Sim, A

    2011-01-01

    The Authorization Interoperability activity was initiated in 2006 to foster interoperability between middleware and authorization infrastructures deployed in the Open Science Grid (OSG) and the Enabling Grids for E-sciencE (EGEE) projects. This activity delivered a common authorization protocol and a set of libraries that implement that protocol. In addition, a set of the most common Grid gateways, or Policy Enforcement Points (Globus Toolkit v4 Gatekeeper, GridFTP, dCache, etc.) and site authorization services, or Policy Decision Points (LCAS/LCMAPS, SCAS, GUMS, etc.) have been integrated with these libraries. At this time, various software providers, including the Globus Toolkit v5, BeStMan, and the Site AuthoriZation service (SAZ), are integrating the authorization interoperability protocol with their products. In addition, as more and more software supports the same protocol, the community is converging on LCMAPS as a common module for identity attribute parsing and authorization call-out. This paper presents this effort, discusses the status of adoption of the common protocol and projects the community work on authorization in the near future.
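
    In rough terms, the common protocol lets a Policy Enforcement Point (a gateway such as GridFTP) ask a Policy Decision Point (such as GUMS or SCAS) for a decision plus a local account mapping. The sketch below invents a tiny policy table and attribute names purely to show the request/response shape; it is not the SAML-XACML profile itself:

```python
# Toy PDP: map a subject's VO attributes to Permit/Deny plus a local account.

POLICY = {  # (vo, role) -> local unix account (invented entries)
    ("cms", "production"): "cmsprod",
    ("cms", None):         "cmsuser",
}

def pdp_decide(attributes):
    """Return (decision, local_account) for a subject's VO attributes."""
    key = (attributes.get("vo"), attributes.get("role"))
    # Fall back to the VO-wide rule when no role-specific rule exists.
    account = POLICY.get(key) or POLICY.get((attributes.get("vo"), None))
    return ("Permit", account) if account else ("Deny", None)

print(pdp_decide({"vo": "cms", "role": "production"}))  # ('Permit', 'cmsprod')
print(pdp_decide({"vo": "atlas"}))                      # ('Deny', None)
```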

  18. Enhancing the Internet with the CONVERGENCE system an information-centric network coupled with a standard middleware

    CERN Document Server

    Andrade, Maria; Melazzi, Nicola; Walker, Richard; Hussmann, Heinrich; Venieris, Iakovos

    2014-01-01

    Convergence proposes the enhancement of the Internet with a novel, content-centric, publish–subscribe service model based on the versatile digital item (VDI): a common container for all kinds of digital content, including digital representations of real-world resources. VDIs will serve the needs of the future Internet, providing a homogeneous method for handling structured information and incorporating security and privacy mechanisms. CONVERGENCE subsumes the following areas of research: definition of the VDI as a new fundamental unit of distribution and transaction; content-centric networking functionality to complement or replace IP-address-based routing; security and privacy protection mechanisms; open-source middleware, including a community dictionary service to enable rich semantic searches; and applications, tested under real-life conditions. This book shows how CONVERGENCE allows publishing, searching and subscri...

  19. Adoption of a SAML-XACML profile for authorization interoperability across grid middleware in OSG and EGEE

    Energy Technology Data Exchange (ETDEWEB)

    Garzoglio, G. [Fermilab; Bester, J. [Argonne; Chadwick, K. [Fermilab; Dykstra, D. [Fermilab; Groep, D. [NIKHEF, Amsterdam; Gu, J. [LBL, Berkeley; Hesselroth, T. [Fermilab; Koeroo, O. [NIKHEF, Amsterdam; Levshina, T. [Fermilab; Martin, S. [Argonne; Salle, M. [NIKHEF, Amsterdam; Sharma, N. [Fermilab; Sim, A. [LBL, Berkeley; Timm, S. [Fermilab; Verstegen, A. [NIKHEF, Amsterdam

    2011-01-01

    The Authorization Interoperability activity was initiated in 2006 to foster interoperability between middleware and authorization infrastructures deployed in the Open Science Grid (OSG) and the Enabling Grids for E-sciencE (EGEE) projects. This activity delivered a common authorization protocol and a set of libraries that implement that protocol. In addition, a set of the most common Grid gateways, or Policy Enforcement Points (Globus Toolkit v4 Gatekeeper, GridFTP, dCache, etc.) and site authorization services, or Policy Decision Points (LCAS/LCMAPS, SCAS, GUMS, etc.) have been integrated with these libraries. At this time, various software providers, including the Globus Toolkit v5, BeStMan, and the Site AuthoriZation service (SAZ), are integrating the authorization interoperability protocol with their products. In addition, as more and more software supports the same protocol, the community is converging on LCMAPS as a common module for identity attribute parsing and authorization call-out. This paper presents this effort, discusses the status of adoption of the common protocol and projects the community work on authorization in the near future.

  20. Availability of high quality weather data measurements

    DEFF Research Database (Denmark)

    Andersen, Elsa; Johansen, Jakob Berg; Furbo, Simon

    In the period 2016-2017 the project “Availability of high quality weather data measurements” is carried out at the Department of Civil Engineering at the Technical University of Denmark. The aim of the project is to establish measured high quality weather data which will be easily available... for the building energy branch and the solar energy branch in their efforts to achieve energy savings and for researchers and students carrying out projects where measured high quality weather data are needed....

  1. Realizing IoT service's policy privacy over publish/subscribe-based middleware.

    Science.gov (United States)

    Duan, Li; Zhang, Yang; Chen, Shiping; Wang, Shiyao; Cheng, Bo; Chen, Junliang

    2016-01-01

    The publish/subscribe paradigm makes IoT service collaborations more scalable and flexible, due to the space, time and control decoupling of event producers and consumers. Thus, the paradigm can be used to establish large-scale IoT service communication infrastructures such as Supervisory Control and Data Acquisition systems. However, preserving an IoT service's policy privacy is difficult in this paradigm, because a classical publisher has little control over its own events after they are published, and a subscriber has to accept all events of the subscribed event type with no choice. Few existing publish/subscribe middleware have built-in mechanisms to address these issues. In this paper, we present a novel access control framework that is capable of preserving IoT service policy privacy. In particular, we adopt the publish/subscribe paradigm as the IoT service communication infrastructure to facilitate the protection of IoT service policy privacy. The key idea in our policy-privacy solution is a two-layer cooperating method that matches bi-directional privacy control requirements: (a) a data layer for protecting IoT events; and (b) an application layer for preserving the privacy of service policy. Furthermore, the anonymous-set-based principle is adopted to realize the functionalities of the framework, including policy embedding, policy encoding and policy matching. Our security analysis shows that the policy privacy framework is Chosen-Plaintext Attack secure. We extend the open-source Apache ActiveMQ broker by building in a policy-based authorization mechanism to enforce the privacy policy. The performance evaluation results indicate that our approach is scalable with reasonable overheads.
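
    A highly simplified view of the two-layer matching idea: the broker compares encoded attributes, so an event is forwarded only when the publisher's embedded policy overlaps the subscriber's declared attributes, without either side's plain policy being revealed to the broker. The hash-based encoding and attribute strings below are illustrative, not the paper's construction:

```python
import hashlib

# Encode attribute strings so the broker can test overlap without reading them.

def encode(attrs):
    """Replace each attribute with its SHA-256 digest."""
    return {hashlib.sha256(a.encode()).hexdigest() for a in attrs}

def broker_match(publisher_policy, subscriber_attrs):
    """Forward only when the encoded attribute sets overlap."""
    return bool(encode(subscriber_attrs) & encode(publisher_policy))

print(broker_match({"role:operator"}, {"role:operator", "site:plant-1"}))  # True
print(broker_match({"role:admin"}, {"role:operator"}))                     # False
```

A plain hash is of course not CPA-secure on its own; the paper's anonymous-set construction is considerably more involved.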

  2. High Availability of RAPIENET

    International Nuclear Information System (INIS)

    Yoon, G.; Oh, J. S.; Kwon, D. H.; Kwon, S. C.; Park, Y. O.

    2012-01-01

    Many industrial customers are no longer satisfied with conventional Ethernet-based communications. They require a more accurate, more flexible, and more reliable technology for their control and measurement systems. Hence, Ethernet-based high-availability networks are becoming an important topic in the control and measurement fields. In this paper, we introduce a new redundant programmable logic controller (PLC) concept, based on the real-time automation protocols for industrial Ethernet (RAPIEnet). RAPIEnet has intrinsic redundancy built into its network topology, with hardware-based recovery. We define a redundant PLC system switching model and demonstrate its performance, including the RAPIEnet recovery time.

  3. High performance distributed objects in large hadron collider experiments

    International Nuclear Information System (INIS)

    Gutleber, J.

    1999-11-01

    This dissertation demonstrates how object-oriented technology can support the development of software that has to meet the requirements of high performance distributed data acquisition systems. The environment for this work is a system under planning for the Compact Muon Solenoid experiment at CERN that shall start its operation in the year 2005. The long operational phase of the experiment together with a tight and puzzling interaction with custom devices make the quest for an evolvable architecture that exhibits a high level of abstraction the driving issue. The question arises if an existing approach already fits our needs. The presented work casts light on these problems and as a result comprises the following novel contributions: - Application of object technology at hardware/software boundary. Software components at this level must be characterised by high efficiency and extensibility at the same time. - Identification of limitations when deploying commercial-off-the-shelf middleware for distributed object-oriented computing. - Capturing of software component properties in an efficiency model for ease of comparison and improvement. - Proof of feasibility that the encountered deficiencies in middleware can be avoided and that with the use of software components the imposed requirements can be met. - Design and implementation of an on-line software control system that allows to take into account the ever evolving requirements by avoiding hardwired policies. We conclude that state-of-the-art middleware cannot meet the required efficiency of the planned data acquisition system. Although new tool generations already provide a certain degree of configurability, the obligation to follow standards specifications does not allow the necessary optimisations. We identified the major limiting factors and argue that a custom solution following a component model with narrow interfaces can satisfy our requirements. This approach has been adopted for the current design

  4. Ultrascale collaborative visualization using a display-rich global cyberinfrastructure.

    Science.gov (United States)

    Jeong, Byungil; Leigh, Jason; Johnson, Andrew; Renambot, Luc; Brown, Maxine; Jagodic, Ratko; Nam, Sungwon; Hur, Hyejung

    2010-01-01

    The scalable adaptive graphics environment (SAGE) is high-performance graphics middleware for ultrascale collaborative visualization using a display-rich global cyberinfrastructure. Dozens of sites worldwide use this cyberinfrastructure middleware, which connects high-performance-computing resources over high-speed networks to distributed ultraresolution displays.

  5. SOMM: A new service oriented middleware for generic wireless multimedia sensor networks based on code mobility.

    Science.gov (United States)

    Faghih, Mohammad Mehdi; Moghaddam, Mohsen Ebrahimi

    2011-01-01

    Although much research in the area of Wireless Multimedia Sensor Networks (WMSNs) has been done in recent years, the programming of sensor nodes is still time-consuming and tedious. It requires expertise in low-level programming, mainly because of the use of resource-constrained hardware and the low-level APIs provided by current operating systems. The code of the resulting systems typically has no clear separation between application and system logic. This minimizes the possibility of reusing code and often leads to the necessity of major changes when the underlying platform is changed. In this paper, we present a service-oriented middleware named SOMM to support application development for WMSNs. The main goal of SOMM is to enable the development of modifiable and scalable WMSN applications. A network which uses SOMM is capable of providing multiple services to multiple clients at the same time with the specified Quality of Service (QoS). SOMM uses a virtual machine with the ability to support mobile agents. Services in SOMM are provided by mobile agents, and SOMM also provides a tuple space (t space) on each node which agents can use to communicate with each other.
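    The agent coordination mechanism this record mentions can be illustrated with a toy tuple space: agents communicate indirectly by writing tuples that other agents later match and take. This is a minimal sketch of the general Linda-style idea, not SOMM's actual interface; all names are hypothetical.

```python
# Toy tuple space: agents write tuples with `out` and remove matching
# tuples with `inp`, never addressing each other directly.
class TupleSpace:
    def __init__(self):
        self._tuples = []

    def out(self, tup):
        # Write a tuple into the space.
        self._tuples.append(tup)

    def inp(self, pattern):
        # Take the first tuple matching the pattern (None = wildcard field);
        # return None if nothing matches.
        for i, tup in enumerate(self._tuples):
            if len(tup) == len(pattern) and all(
                p is None or p == v for p, v in zip(pattern, tup)
            ):
                return self._tuples.pop(i)
        return None

space = TupleSpace()
space.out(("temperature", "node-7", 21.5))       # sensing agent publishes
match = space.inp(("temperature", None, None))   # consumer agent takes it
print(match)  # -> ('temperature', 'node-7', 21.5)
```

    The decoupling in time and identity is what makes such a primitive attractive on sensor nodes: a reading survives in the space until some agent consumes it.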

  6. The Program for Climate Model Diagnosis and Intercomparison (PCMDI) Software Development: Applications, Infrastructure, and Middleware/Networks

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-06-30

    The status of and future plans for the Program for Climate Model Diagnosis and Intercomparison (PCMDI) hinge on software that PCMDI is either currently distributing or plans to distribute to the climate community in the near future. These software products include standard conventions, national and international federated infrastructures, and community analysis and visualization tools. This report also mentions other secondary software not necessarily led by or developed at PCMDI, to provide a complete picture of the overarching applications, infrastructures, and middleware/networks. Much of the software described anticipates the use of future technologies envisioned over the next one to 10 years. These technologies, together with the software, will be the catalyst required to address extreme-scale data warehousing, scalability issues, and service-level requirements for a diverse set of well-known projects essential for predicting climate change. Unlike the static analysis tools of the past, these tools will support the co-existence of many users in a productive, shared virtual environment. This advanced technological world, driven by extreme-scale computing and the data it generates, will increase scientists’ productivity, exploit national and international relationships, and push research to new levels of understanding.

  7. An Optimized, Data Distribution Service-Based Solution for Reliable Data Exchange Among Autonomous Underwater Vehicles

    Directory of Open Access Journals (Sweden)

    Jesús Rodríguez-Molina

    2017-08-01

    Full Text Available Major challenges arise when managing a large number of heterogeneous vehicles that have to communicate underwater in order to complete a global mission in a cooperative manner. In this kind of application domain, sending data through the environment presents issues that surpass those found in other overwater, distributed, cyber-physical systems (i.e., low bandwidth, an unreliable transport medium, data representation and high hardware heterogeneity). This manuscript presents a Publish/Subscribe-based semantic middleware solution for unreliable scenarios and vehicle interoperability across cooperative and heterogeneous autonomous vehicles. The middleware relies on different iterations of the Data Distribution Service (DDS) software standard and their combined work between autonomous maritime vehicles and a control entity. It also uses several components with different functionalities deemed mandatory for a semantic middleware architecture oriented to maritime operations (device and service registration, context awareness, access to the application layer), where other technologies are also interwoven with the middleware (wireless communications, acoustic networks). Implementation details and test results, both in a laboratory and in a deployment scenario, are provided as a way to assess the quality of the system and its satisfactory performance.
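    The core property of the Publish/Subscribe pattern this record builds on is that publishers and subscribers never address each other directly; a matching layer routes samples by topic. A minimal sketch of that decoupling (illustrative names only; the real DDS API is far richer, with QoS policies, discovery, and typed topics):

```python
# Minimal topic-based publish/subscribe broker: subscribers register a
# callback per topic, publishers hand samples to the broker, and the
# broker fans each sample out to every matching subscriber.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, sample):
        # Deliver the sample to every subscriber of this topic.
        for callback in self._subscribers[topic]:
            callback(sample)

broker = Broker()
received = []
broker.subscribe("vehicle/position", received.append)
broker.publish("vehicle/position", {"auv": "auv-1", "depth_m": 42.0})
```

    In a DDS deployment the same decoupling is what lets heterogeneous vehicles interoperate: each one only needs to agree on topic names and data types, not on who else is present.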

  8. An architecture for managing knowledge and system dynamism in the worldwide sensor web

    CSIR Research Space (South Africa)

    Moodley, D

    2012-03-01

    Full Text Available Sensor Web researchers are currently investigating middleware to aid in the dynamic discovery, integration and analysis of vast quantities of both high and low quality, but distributed and heterogeneous earth observation data. Key challenges being...

  9. Implementasi Highly Available Website Dengan Distributed Replicated Block Device

    Directory of Open Access Journals (Sweden)

    Mulyanto Mulyanto

    2016-07-01

    Full Text Available As an important piece of IT infrastructure, a website is a system that requires high reliability and availability. A website must provide services to clients in real time, handle a large amount of data, and not lose data during transactions, so it qualifies as a system that must be highly available. A highly available system must be able to run continuously while guaranteeing consistency of data requests. This study designed a website with high availability. The approach was to build a network cluster with failover and replicated block device functions: failover provides service availability, while the replicated block device provides data consistency during a service failure. With the failover cluster and replicated block device approaches, a cluster was built that can handle service failures of the web server and database server on the website. The result of this study was that the services of the website kept running when any member node of the cluster failed. The system was able to provide 99.999% (five nines) availability on the database server services and 99.98% on the web server services.
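    The availability percentages quoted above translate directly into annual downtime budgets, which is how such figures are usually compared. A quick back-of-the-envelope calculation:

```python
# Annual downtime implied by an availability percentage:
# 99.999% ("five nines") allows roughly 5.3 minutes of downtime per year,
# while 99.98% allows roughly 105 minutes.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 (ignoring leap years)

def downtime_minutes_per_year(availability_percent):
    return (1 - availability_percent / 100) * MINUTES_PER_YEAR

print(round(downtime_minutes_per_year(99.999), 2))  # -> 5.26
print(round(downtime_minutes_per_year(99.98), 1))   # -> 105.1
```
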

  10. Combining wireless sensor networks and semantic middleware for an Internet of Things-based sportsman/woman monitoring application.

    Science.gov (United States)

    Rodríguez-Molina, Jesús; Martínez, José-Fernán; Castillejo, Pedro; López, Lourdes

    2013-01-31

    Wireless Sensor Networks (WSNs) are spearheading the efforts taken to build and deploy systems aiming to accomplish the ultimate objectives of the Internet of Things. Due to the sensors WSNs nodes are provided with, and to their ubiquity and pervasive capabilities, these networks become extremely suitable for many applications that so-called conventional cabled or wireless networks are unable to handle. One of these still underdeveloped applications is monitoring physical parameters on a person. This is an especially interesting application regarding their age or activity, for any detected hazardous parameter can be notified not only to the monitored person as a warning, but also to any third party that may be helpful under critical circumstances, such as relatives or healthcare centers. We propose a system built to monitor a sportsman/woman during a workout session or performing a sport-related indoor activity. Sensors have been deployed by means of several nodes acting as the nodes of a WSN, along with a semantic middleware development used for hardware complexity abstraction purposes. The data extracted from the environment, combined with the information obtained from the user, will compose the basis of the services that can be obtained.

  11. Combining Wireless Sensor Networks and Semantic Middleware for an Internet of Things-Based Sportsman/Woman Monitoring Application

    Science.gov (United States)

    Rodríguez-Molina, Jesús; Martínez, José-Fernán; Castillejo, Pedro; López, Lourdes

    2013-01-01

    Wireless Sensor Networks (WSNs) are spearheading the efforts taken to build and deploy systems aiming to accomplish the ultimate objectives of the Internet of Things. Due to the sensors WSNs nodes are provided with, and to their ubiquity and pervasive capabilities, these networks become extremely suitable for many applications that so-called conventional cabled or wireless networks are unable to handle. One of these still underdeveloped applications is monitoring physical parameters on a person. This is an especially interesting application regarding their age or activity, for any detected hazardous parameter can be notified not only to the monitored person as a warning, but also to any third party that may be helpful under critical circumstances, such as relatives or healthcare centers. We propose a system built to monitor a sportsman/woman during a workout session or performing a sport-related indoor activity. Sensors have been deployed by means of several nodes acting as the nodes of a WSN, along with a semantic middleware development used for hardware complexity abstraction purposes. The data extracted from the environment, combined with the information obtained from the user, will compose the basis of the services that can be obtained. PMID:23385405

  12. High plant availability of phosphorus and low availability of cadmium in four biomass combustion ashes

    International Nuclear Information System (INIS)

    Li, Xiaoxi; Rubæk, Gitte H.; Sørensen, Peter

    2016-01-01

    For biomass combustion to become a sustainable energy production system, it is crucial to minimise landfill of biomass ashes, to recycle the nutrients and to minimise the undesirable impact of hazardous substances in the ash. In order to test the plant availability of phosphorus (P) and cadmium (Cd) in four biomass ashes, we conducted two pot experiments on a P-depleted soil and one mini-plot field experiment on a soil with adequate P status. Test plants were spring barley and Italian ryegrass. Ash applications were compared to triple superphosphate (TSP) and a control without P application. Both TSP and ash significantly increased crop yields and P uptake on the P-depleted soil. In contrast, on the adequate-P soil, the barley yield showed little response to soil amendment, even at 300–500 kg P ha⁻¹ application, although the barley took up more P at higher applications. The apparent P use efficiency of the additive was 20% in ryegrass, much higher than that of barley, for which P use efficiencies varied on the two soils. Generally, crop Cd concentrations were little affected by the increasing and high applications of ash, except for relatively high Cd concentrations in barley after applying 25 Mg ha⁻¹ straw ash. Contrarily, even modest increases in the TSP application markedly increased Cd uptake in plants. This might be explained by the low Cd solubility in the ash or by the reduced Cd availability due to the liming effect of ash. High concentrations of resin-extractable P (available P) in the ash-amended soil after harvest indicate that the ash may also contribute to P availability for the following crops. In conclusion, the biomass ashes in this study had P availability similar to the TSP fertiliser and did not contaminate the crop with Cd during the first year. - Highlights: • Effects of four biomass ashes vs. a P fertiliser (TSP) on two crops were studied. • Ashes increased crop yields with P availability similar to TSP on P-depleted soil.

  13. High plant availability of phosphorus and low availability of cadmium in four biomass combustion ashes

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiaoxi, E-mail: Xiaoxi.Li@agro.au.dk; Rubæk, Gitte H.; Sørensen, Peter

    2016-07-01

    For biomass combustion to become a sustainable energy production system, it is crucial to minimise landfill of biomass ashes, to recycle the nutrients and to minimise the undesirable impact of hazardous substances in the ash. In order to test the plant availability of phosphorus (P) and cadmium (Cd) in four biomass ashes, we conducted two pot experiments on a P-depleted soil and one mini-plot field experiment on a soil with adequate P status. Test plants were spring barley and Italian ryegrass. Ash applications were compared to triple superphosphate (TSP) and a control without P application. Both TSP and ash significantly increased crop yields and P uptake on the P-depleted soil. In contrast, on the adequate-P soil, the barley yield showed little response to soil amendment, even at 300–500 kg P ha⁻¹ application, although the barley took up more P at higher applications. The apparent P use efficiency of the additive was 20% in ryegrass, much higher than that of barley, for which P use efficiencies varied on the two soils. Generally, crop Cd concentrations were little affected by the increasing and high applications of ash, except for relatively high Cd concentrations in barley after applying 25 Mg ha⁻¹ straw ash. Contrarily, even modest increases in the TSP application markedly increased Cd uptake in plants. This might be explained by the low Cd solubility in the ash or by the reduced Cd availability due to the liming effect of ash. High concentrations of resin-extractable P (available P) in the ash-amended soil after harvest indicate that the ash may also contribute to P availability for the following crops. In conclusion, the biomass ashes in this study had P availability similar to the TSP fertiliser and did not contaminate the crop with Cd during the first year. - Highlights: • Effects of four biomass ashes vs. a P fertiliser (TSP) on two crops were studied. • Ashes increased crop yields with P availability similar to TSP on P-depleted soil

  14. Distributed Data Management on the Petascale using Heterogeneous Grid Infrastructures with DQ2

    CERN Document Server

    Branco, M; Salgado, P; Lassnig, M

    2008-01-01

    We describe Don Quijote 2 (DQ2), a new approach to the management of large scientific datasets by a dedicated middleware. This middleware is designed to handle the data organisation and data movement on the petascale for the High-Energy Physics Experiment ATLAS at CERN. DQ2 is able to maintain a well-defined quality of service in a scalable way, guarantees data consistency for the collaboration and bridges the gap between EGEE, OSG and NorduGrid infrastructures to enable true interoperability. DQ2 is specifically designed to support the access and management of large scientific datasets produced by the ATLAS experiment using heterogeneous Grid infrastructures. The DQ2 middleware manages those datasets with global services, local site services and enduser interfaces. The global services, or central catalogues, are responsible for the mapping of individual files onto DQ2 datasets. The local site services are responsible for tracking files available on-site, managing data movement and guaranteeing consistency of...

  15. Design and reliability, availability, maintainability, and safety analysis of a high availability quadruple vital computer system

    Institute of Scientific and Technical Information of China (English)

    Ping TAN; Wei-ting HE; Jia LIN; Hong-ming ZHAO; Jian CHU

    2011-01-01

    With the development of high-speed railways in China, more than 2000 high-speed trains will be put into use. Safety and efficiency of railway transportation are increasingly important. We have designed a high availability quadruple vital computer (HAQVC) system based on an analysis of the architecture of the traditional double 2-out-of-2 system and the 2-out-of-3 system. The HAQVC system is a system with high availability and safety, with prominent characteristics such as a brand-new internal architecture, high efficiency, a reliable data interaction mechanism, and an operation state change mechanism. The hardware of the vital CPU is based on ARM7 with a real-time embedded safe operating system (ES-OS). The Markov modeling method is used to evaluate the reliability, availability, maintainability, and safety (RAMS) of the system. In this paper, we demonstrate that the HAQVC system is more reliable than the all voting triple modular redundancy (AVTMR) system and the double 2-out-of-2 system. Thus, the design can be used for specific application systems, such as airplane or high-speed railway systems.
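    In its simplest form, the Markov availability modelling this record refers to reduces to a two-state (up/down) chain with failure rate λ and repair rate μ, whose steady-state availability is A = μ/(λ + μ). The sketch below uses illustrative rates, not the paper's actual HAQVC parameters:

```python
# Steady-state availability of a two-state Markov model (up <-> down).
# lambda_ = failure rate, mu = repair rate, both per hour.
def steady_state_availability(lambda_, mu):
    # Balance equation: A * lambda_ = (1 - A) * mu  =>  A = mu / (lambda_ + mu)
    return mu / (lambda_ + mu)

# Illustrative values: MTBF = 10,000 h (lambda = 1e-4), MTTR = 1 h (mu = 1.0)
a = steady_state_availability(1e-4, 1.0)
print(round(a, 6))  # -> 0.9999
```

    Redundant architectures such as 2-out-of-2 or 2-out-of-3 are modelled the same way, only with more states (one per combination of failed channels) and a larger transition-rate matrix.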

  16. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Weissman, Jon B

    2006-04-30

    High performance computational science and engineering simulations have become an increasingly important part of the scientist's problem solving toolset. A key reason is the development of widely used codes and libraries that support these applications, for example, Netlib, a collection of numerical libraries [33]. The term community codes refers to those libraries or applications that have achieved some critical level of acceptance by a user community. Many of these applications are on the high-end in terms of required resources: computation, storage, and communication. Recently, there has been considerable interest in putting such applications on-line and packaging them as network services to make them available to a wider user base. Applications such as data mining [22], theorem proving and logic [14], parallel numerical computation [8][32] are example services that are all going on-line. Transforming applications into services has been made possible by advances in packaging and interface technologies including component systems [2][6][13][28][37], proposed communication standards [34], and newer Web technologies such as Web Services [38]. Network services allow the user to focus on their application and obtain remote service when needed by simply invoking the service across the network. The user can be assured that the most recent version of the code or service is always provided and they do not need to install, maintain, and manage significant infrastructure to access the service. For high performance applications in particular, the user is still often required to install a code base (e.g. MPI), and therefore become involved with the tedious details of infrastructure management. In the network service model, the service provider is responsible for all of these activities and not the user. The user need not become an expert in high performance computing. An additional advantage of high-end network services is that the user need not have specialized

  17. Article I. Multi-platform Automated Software Building and Packaging

    International Nuclear Information System (INIS)

    Rodriguez, A Abad; Gomes Gouveia, V E; Meneses, D; Capannini, F; Aimar, A; Di Meglio, A

    2012-01-01

    One of the major goals of the EMI (European Middleware Initiative) project is the integration of several components of the pre-existing middleware (ARC, gLite, UNICORE and dCache) into a single consistent set of packages with uniform distributions and repositories. Those individual middleware projects were developed over the last decade by tens of development teams and, before EMI, were all built and tested using different tools and dedicated services. The software, millions of lines of code, is written in several programming languages and supports multiple platforms. Therefore a viable solution ought to be able to build and test applications in multiple programming languages using common dependencies on all selected platforms. It should, in addition, package the resulting software in formats compatible with the popular Linux distributions, such as Fedora and Debian, and store them in repositories from which all EMI software can be accessed and installed in a uniform way. Despite this highly heterogeneous initial situation, a single common solution, with the aim of quickly automating the integration of the middleware products, had to be selected and implemented within a few months of the beginning of the EMI project. Because of the previous knowledge and the short time available in which to provide this common solution, the ETICS service, where the gLite middleware had already been built for years, was selected. This contribution describes how the team in charge of providing a common EMI build and packaging infrastructure to the whole project has developed a homogeneous solution for releasing and packaging the EMI components, starting from the initial set of tools used by the earlier middleware projects. An important element of the presentation is the developers' experience and feedback on converging on ETICS, and the ongoing work to add more widely used and supported build and packaging solutions from the Linux platforms

  18. New developments in the CREAM Computing Element

    International Nuclear Information System (INIS)

    Andreetto, Paolo; Bertocco, Sara; Dorigo, Alvise; Capannini, Fabio; Cecchi, Marco; Zangrando, Luigi

    2012-01-01

    The EU-funded project EMI aims at providing unified, standardized, easy-to-install software for distributed computing infrastructures. CREAM is one of the products in the EMI middleware distribution: it implements a Grid job management service that allows the submission, management and monitoring of computational jobs on local resource management systems. In this paper we discuss some new features being implemented in the CREAM Computing Element. The implementation of the EMI Execution Service (EMI-ES) specification (an agreement in the EMI consortium on the interfaces and protocols to be used to enable computational job submission and management across technologies) is one of the new functions being implemented. New developments also focus on the High Availability (HA) area, to improve performance, scalability, availability and fault tolerance.

  19. Design and Performance Evaluation of an Adaptive Resource Management Framework for Distributed Real-Time and Embedded Systems

    Directory of Open Access Journals (Sweden)

    Chen Yingming

    2008-01-01

    Full Text Available Achieving end-to-end quality of service (QoS) in distributed real-time embedded (DRE) systems requires QoS support and enforcement from underlying operating platforms that integrate many real-time capabilities, such as QoS-enabled network protocols, real-time operating system scheduling mechanisms and policies, and real-time middleware services. As standards-based QoS-enabled component middleware automates integration and configuration activities, it is increasingly being used as a platform for developing open DRE systems that execute in environments where operational conditions, input workload, and resource availability cannot be characterized accurately a priori. Although QoS-enabled component middleware offers many desirable features, it has historically lacked the ability to allocate resources efficiently and enable the system to adapt to fluctuations in input workload, resource availability, and operating conditions. This paper presents three contributions to research on adaptive resource management for component-based open DRE systems. First, we describe the structure and functionality of the resource allocation and control engine (RACE), which is an open-source adaptive resource management framework built atop standards-based QoS-enabled component middleware. Second, we demonstrate and evaluate the effectiveness of RACE in the context of a representative open DRE system: NASA's magnetospheric multiscale mission system. Third, we present an empirical evaluation of RACE's scalability as the number of nodes and applications in a DRE system grows. Our results show that RACE is a scalable adaptive resource management framework that yields a predictable and high-performance system, even in the face of changing operational conditions and input workload.

  20. Design and Performance Evaluation of an Adaptive Resource Management Framework for Distributed Real-Time and Embedded Systems

    Directory of Open Access Journals (Sweden)

    Chenyang Lu

    2008-04-01

    Full Text Available Achieving end-to-end quality of service (QoS) in distributed real-time embedded (DRE) systems requires QoS support and enforcement from underlying operating platforms that integrate many real-time capabilities, such as QoS-enabled network protocols, real-time operating system scheduling mechanisms and policies, and real-time middleware services. As standards-based QoS-enabled component middleware automates integration and configuration activities, it is increasingly being used as a platform for developing open DRE systems that execute in environments where operational conditions, input workload, and resource availability cannot be characterized accurately a priori. Although QoS-enabled component middleware offers many desirable features, it has historically lacked the ability to allocate resources efficiently and enable the system to adapt to fluctuations in input workload, resource availability, and operating conditions. This paper presents three contributions to research on adaptive resource management for component-based open DRE systems. First, we describe the structure and functionality of the resource allocation and control engine (RACE), which is an open-source adaptive resource management framework built atop standards-based QoS-enabled component middleware. Second, we demonstrate and evaluate the effectiveness of RACE in the context of a representative open DRE system: NASA's magnetospheric multiscale mission system. Third, we present an empirical evaluation of RACE's scalability as the number of nodes and applications in a DRE system grows. Our results show that RACE is a scalable adaptive resource management framework that yields a predictable and high-performance system, even in the face of changing operational conditions and input workload.

  1. Módulo Distribuido de Subasta Java sobre CORBA de Tiempo Real

    Directory of Open Access Journals (Sweden)

    P. Basanta-Val

    2012-10-01

    Full Text Available Abstract: Using middleware infrastructures that enable communication among the different nodes of a system can alleviate important aspects of the development and maintenance cost of such systems in flexible environments. In that context, this article presents an architecture developed for distributed auction systems with certain real-time requirements that have to be satisfied. The architecture implements a continuous double auction (CDA) algorithm for an industrial real-time system. The article describes the architecture (its actors and goals) as well as an evaluation on a real platform using existing common-off-the-shelf real-time middleware. Keywords: Auction Systems, Real-time, Middleware, Industrial Informatics, RT-CORBA, Real-time Java

  2. Enabling High Data Availability in a DHT

    NARCIS (Netherlands)

    Knezevic, Predrag; Wombacher, Andreas; Risse, Thomas

    Many decentralized and peer-to-peer applications require some sort of data management. Besides P2P file-sharing, there are already scenarios (e.g. the BRICKS project) that need management of finer-grained objects, including updates, and keeping them highly available in very dynamic communities of peers.

  3. A Middleware Based Approach to Dynamically Deploy Location Based Services onto Heterogeneous Mobile Devices Using Bluetooth in Indoor Environment

    Science.gov (United States)

    Sadhukhan, Pampa; Sen, Rijurekha; Das, Pradip K.

    Several methods for providing location based services (LBS) to mobile devices in indoor environments using wireless technologies like WLAN, RFID and Bluetooth have been proposed, implemented and evaluated. However, most of them do not address the heterogeneity of mobile platforms, the memory constraints of mobile devices, or the adaptability of a client or device to the new services it discovers whenever it reaches a new location. In this paper, we propose a middleware-based approach to LBS provision in indoor environments, where a Bluetooth-enabled Base Station (BS) detects Bluetooth-enabled mobile devices and pushes a proper client application only to those devices that belong to a registered subscriber of the LBS. This dynamic deployment enables mobile clients to access any new service without having a preinstalled interface to that service, thus reducing the client's memory consumption. Our proposed work also addresses other issues, such as authenticating clients before providing them LBSs and introducing paid services. We have evaluated its performance in terms of file transfer time with respect to file size, and throughput with respect to distance. Experimental results on the service consumption time of the mobile client for different services are also presented.

  4. High-level Programming and Symbolic Reasoning on IoT Resource Constrained Devices

    Directory of Open Access Journals (Sweden)

    Salvatore Gaglio

    2015-05-01

    Full Text Available While the vision of the Internet of Things (IoT) is rather inspiring, its practical implementation remains challenging. Conventional programming approaches prove unsuitable to provide IoT resource-constrained devices with the distributed processing capabilities required to implement intelligent, autonomic, and self-organizing behaviors. In our previous work, we had already proposed an alternative programming methodology for such systems, characterized by high-level programming and symbolic expression evaluation, and developed a lightweight middleware to support it. Our approach allows for interactive programming of deployed nodes, and it is based on the simple but effective paradigm of executable code exchange among nodes. In this paper, we show how our methodology can be used to provide IoT resource-constrained devices with reasoning abilities by implementing a Fuzzy Logic symbolic extension on deployed nodes at runtime.
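    The "executable code exchange" paradigm described here can be pictured with a tiny symbolic-expression evaluator: a node receives a small program as data and evaluates it locally. This is a toy stand-in for the paper's approach, not its actual language or middleware; all names are illustrative.

```python
# Minimal prefix-expression evaluator: expressions are nested tuples of the
# form (op, arg1, arg2, ...), shipped to a node and evaluated on arrival.
import operator

OPS = {"+": operator.add, "*": operator.mul, "min": min, "max": max}

def evaluate(expr):
    # A bare number evaluates to itself; a tuple applies its operator
    # to the recursively evaluated arguments.
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    return OPS[op](*(evaluate(a) for a in args))

# A node receives ("max", ("+", 1, 2), 4) over the radio and evaluates it.
print(evaluate(("max", ("+", 1, 2), 4)))  # -> 4
```

    Because programs travel as data, deployed nodes can be reprogrammed interactively without reflashing firmware, which is the key property the record highlights.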

  5. Streamlined sign-out of capillary protein electrophoresis using middleware and an open-source macro application.

    Science.gov (United States)

    Mathur, Gagan; Haugen, Thomas H; Davis, Scott L; Krasowski, Matthew D

    2014-01-01

    Interfacing of clinical laboratory instruments with the laboratory information system (LIS) via "middleware" software is increasingly common. Our clinical laboratory implemented capillary electrophoresis using a Sebia® Capillarys-2™ (Norcross, GA, USA) instrument for serum and urine protein electrophoresis. Using Data Innovations Instrument Manager, an interface was established with the LIS (Cerner) that allowed for bi-directional transmission of numeric data. However, the text of the interpretive pathology report was not properly transferred. To reduce manual effort and the possibility of error in text data transfer, we developed scripts in AutoHotkey, a free, open-source macro-creation and automation software utility. Scripts were written to create macros that automated mouse movements and keystrokes. The scripts retrieve the specimen accession number, capture user input text, and insert the text interpretation into the correct patient record in the desired format. The scripts accurately and precisely transfer narrative interpretation into the LIS. Combined with bar-code reading by the electrophoresis instrument, the scripts transfer data efficiently to the correct patient record. In addition, the AutoHotkey scripts automated the repetitive keystrokes required for manual entry into the LIS, making protein electrophoresis sign-out easier to learn and faster to use for the pathology residents. Scripts allow for either preliminary verification by residents or final sign-out by the attending pathologist. Using the open-source AutoHotkey software, we successfully improved the transfer of text data between the capillary electrophoresis software and the LIS. The use of open-source software tools should not be overlooked as a means to improve the interfacing of laboratory instruments.

  6. RSAM: An enhanced architecture for achieving web services reliability in mobile cloud computing

    Directory of Open Access Journals (Sweden)

    Amr S. Abdelfattah

    2018-04-01

    Full Text Available The evolution of the mobile landscape is coupled with the ubiquitous nature of the internet, with its intermittent wireless connectivity, and with web services. Achieving web service reliability results in low communication overhead and retrieval of the appropriate response. The middleware approach (MA) is a common way to pursue web service reliability. This paper proposes a Reliable Service Architecture using Middleware (RSAM) that achieves reliable web service consumption. The enhanced architecture focuses on ensuring and tracking request execution under communication limitations and temporal service unavailability. It considers key measurement factors, including request size, response size, and consuming time. We conducted experiments to compare the enhanced architecture with the traditional one, covering several cases to prove the achievement of reliability. Results also show that the request size remained constant, the response size was identical to that of the traditional architecture, and the increase in consuming time was less than 5% of the transaction time across the different response sizes. Keywords: Reliable web service, Middleware architecture, Mobile cloud computing
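
A minimal sketch of the tracking behaviour such a middleware layer provides — keep a request alive across transient connectivity loss until a response arrives. The retry policy and exception type here are assumptions for illustration, not RSAM's actual protocol:

```python
import time

def reliable_call(request, send, max_retries=5, backoff=0.0):
    """Retry a request through transient failures; return the response
    together with the attempt count, so execution can be tracked."""
    for attempt in range(1, max_retries + 1):
        try:
            return send(request), attempt
        except ConnectionError:
            time.sleep(backoff * attempt)  # linear backoff between retries
    raise RuntimeError(f"no response after {max_retries} attempts")
```

A flaky transport that fails twice still yields its response on the third attempt, with the attempt count available for bookkeeping.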

  7. High plant availability of phosphorus and low availability of cadmium in four biomass combustion ashes

    DEFF Research Database (Denmark)

    Li, Xiaoxi; Rubæk, Gitte Holton; Sørensen, Peter

    2016-01-01

    and ash significantly increased crop yields and P uptake on the P-depleted soil. In contrast, on the adequate-P soil, the barley yield showed little response to soil amendment, even at 300–500 kg P ha−1 application, although the barley took up more P at higher applications. The apparent P use efficiency...... of the additive was 20% in ryegrass - much higher than that of barley for which P use efficiencies varied on the two soils. Generally, crop Cd concentrations were little affected by the increasing and high applications of ash, except for relatively high Cd concentrations in barley after applying 25 Mg ha−1 straw...... ash. Contrarily, even modest increases in the TSP application markedly increased Cd uptake in plants. This might be explained by the low Cd solubility in the ash or by the reduced Cd availability due to the liming effect of ash. High concentrations of resin-extractable P (available P) in the ash...

  8. Open middleware for robotics

    CSIR Research Space (South Africa)

    Namoshe, M

    2008-12-01

    Full Text Available and their technologies within the field of multi-robot systems to ease the difficulty of realizing robot applications. And lastly, an example of algorithm development for multi-robot co-operation using one of the discussed software architecture is presented...

  9. Runtime Testability in Dynamic High-Availability Component-based Systems

    NARCIS (Netherlands)

    Gonzalez, A.; Piel, E.; Gross, H.G.; Van Gemund, A.J.C.

    2010-01-01

    Runtime testing is emerging as the solution for the integration and assessment of highly dynamic, high-availability software systems where traditional development-time integration testing cannot be performed. A prerequisite for runtime testing is knowledge of the extent to which the system can be

  10. Cluster-based DBMS Management Tool with High-Availability

    Directory of Open Access Journals (Sweden)

    Jae-Woo Chang

    2005-02-01

    Full Text Available A management tool for monitoring and managing cluster-based DBMSs has been little studied. So, we design and implement a cluster-based DBMS management tool with high availability that monitors the status of nodes in a cluster system as well as the status of DBMS instances in a node. The tool enables users to recognize a single virtual system image and provides them with the status of all the nodes and resources in the system by using a graphical user interface (GUI). By using a load balancer, our management tool can increase the performance of a cluster-based DBMS as well as overcome the limitation of the existing parallel DBMSs.
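
The monitor-plus-balancer split described above can be sketched as two small cooperating pieces. The class names and the round-robin policy are illustrative; the abstract does not specify the tool's actual dispatch policy:

```python
class ClusterMonitor:
    """Tracks node status, the raw material for a single system image."""
    def __init__(self, nodes):
        self.status = {n: "up" for n in nodes}

    def mark_down(self, node):
        self.status[node] = "down"

    def healthy(self):
        return [n for n, s in self.status.items() if s == "up"]


class RoundRobinBalancer:
    """Send each query to the next healthy node the monitor reports."""
    def __init__(self, monitor):
        self.monitor, self._i = monitor, 0

    def pick(self):
        nodes = self.monitor.healthy()
        if not nodes:
            raise RuntimeError("no healthy nodes available")
        node = nodes[self._i % len(nodes)]
        self._i += 1
        return node
```

When a node fails, the balancer silently routes around it, which is the high-availability behaviour the tool is after.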

  11. MySQL High Availability Tools for Building Robust Data Centers

    CERN Document Server

    Bell, Charles; Thalmann, Lars

    2010-01-01

    Server bottlenecks and failures are a fact of life in any database deployment, but they don't have to bring everything to a halt. MySQL has several features that can help you protect your system from outages, whether it's running on hardware, virtual machines, or in the cloud. MySQL High Availability explains how to use these replication, cluster, and monitoring features in a wide range of real-life situations. Written by engineers who designed many of the tools covered inside, this book reveals undocumented or hard-to-find aspects of MySQL reliability and high availability -- knowledge that

  12. AVL and Monitoring for Massive Traffic Control System over DDS

    Directory of Open Access Journals (Sweden)

    Basem Almadani

    2015-01-01

    Full Text Available This paper proposes a real-time Automatic Vehicle Location (AVL) and monitoring system for traffic control of pilgrims coming towards the city of Makkah in Saudi Arabia, based on the Data Distribution Service (DDS) specified by the Object Management Group (OMG). DDS-based middleware employs the Real-Time Publish/Subscribe (RTPS) protocol, which implements the many-to-many communication paradigm suitable for massive traffic control applications. Using this middleware approach, we are able to locate and track a huge number of mobile vehicles and identify, in real time, all passengers who are coming to perform the annual Hajj. To validate our proposed framework, various performance metrics are examined over WLAN using DDS. Results show that DDS-based middleware can meet real-time requirements in a large-scale AVL environment.
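
The many-to-many Publish/Subscribe pattern DDS provides can be reduced to a small in-memory sketch: any number of writers and readers per topic. Real DDS adds discovery, QoS policies and the RTPS wire protocol, none of which is modelled here:

```python
from collections import defaultdict

class Bus:
    """Topic-based pub/sub: publishers and subscribers are decoupled,
    matched only by topic name."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, sample):
        # Every subscriber on the topic receives every sample.
        for cb in self._subs[topic]:
            cb(sample)
```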

  13. Availability of high school extracurricular sports programs and high-risk behaviors.

    Science.gov (United States)

    Cohen, Deborah A; Taylor, Stephanie L; Zonta, Michela; Vestal, Katherine D; Schuster, Mark A

    2007-02-01

    The Surgeon General has called for an expansion of school-based extracurricular sports programs to address the obesity epidemic. However, little is known about the availability of and participation in high school extracurricular sports and how participation in these sports is related to high-risk behaviors. We surveyed Los Angeles County public high schools in 2002 to determine the number of extracurricular sports programs offered and the percentage of students participating in those programs. We used community data on rates of arrests, births, and sexually transmitted diseases (STDs) among youth to examine associations between risk behaviors and participation in sports programs. The average school offered 14 sports programs, and the average participation rate was 39% for boys and 30% for girls. Smaller schools and schools with higher percentages of disadvantaged students offered fewer programs. The average school offering 13 or fewer programs had 14% of its students participating, while the average school offering 16 or more programs had 31% of its students participating in sports. Controlling for area-level demographics, juvenile arrest rates and teen birth rates, but not STD rates, were lower in areas where schools offered more extracurricular sports. Opportunities for participation in high school extracurricular sports are limited. Future studies should test whether increased opportunities will increase physical activity and impact the increasing overweight problem in youths.

  14. Measurement and simulation of the performance of high energy physics data grids

    Science.gov (United States)

    Crosby, Paul Andrew

    This thesis describes a study of resource brokering in a computational Grid for high energy physics. Such systems are being devised in order to manage the unprecedented workload of the next generation particle physics experiments such as those at the Large Hadron Collider. A simulation of the European Data Grid has been constructed, and calibrated using logging data from a real Grid testbed. This model is then used to explore the Grid's middleware configuration, and suggest improvements to its scheduling policy. The expansion of the simulation to include data analysis of the type conducted by particle physicists is then described. A variety of job and data management policies are explored, in order to determine how well they meet the needs of physicists, as well as how efficiently they make use of CPU and network resources. Appropriate performance indicators are introduced in order to measure how well jobs and resources are managed from different perspectives. The effects of inefficiencies in Grid middleware are explored, as are methods of compensating for them. It is demonstrated that a scheduling algorithm should alter its weighting on load balancing and data distribution, depending on whether data transfer or CPU requirements dominate, and also on the level of job loading. It is also shown that an economic model for data management and replication can improve the efficiency of network use and job processing.
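
The scheduling trade-off the thesis describes — shifting weight between load balancing and data distribution — can be illustrated with a toy broker score. The fields and weights below are invented for illustration; the actual cost model in the study is more involved:

```python
def site_cost(site, job, w_load=1.0, w_data=1.0):
    """Lower is better: queued work plus the input bytes that would
    have to be staged to this site."""
    missing = sum(size for name, size in job["inputs"].items()
                  if name not in site["replicas"])
    return w_load * site["queued_jobs"] + w_data * missing

def pick_site(sites, job, **weights):
    """Broker decision: send the job to the cheapest candidate site."""
    return min(sites, key=lambda s: site_cost(s, job, **weights))["name"]
```

With data transfer weighted in, the broker keeps jobs near their input files; zeroing the data weight makes it a pure load balancer, which matches the observation that the weighting should depend on whether CPU or data movement dominates.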

  15. Configurable e-commerce-oriented distributed seckill system with high availability

    Science.gov (United States)

    Zhu, Liye

    2018-04-01

    The rapid development of e-commerce prompted the birth of the seckill (flash sale) activity. Seckill activities greatly stimulate public shopping desire because of their significant attraction to customers. In a seckill activity, a limited number of products are sold at varying degrees of discount, which is a huge temptation for customers. The discounted products are usually sold out in seconds, which can be a huge challenge for e-commerce systems. In this case, a seckill system with high concurrency and high availability has very practical significance. This research, in cooperation with Huijin Department Store, designs and implements a seckill system for an e-commerce platform. The seckill system supports highly concurrent network conditions and remains available in unexpected situations. In addition, due to the short life cycle of a seckill activity, the system can be flexibly configured and scaled, which means that it is able to add or remove system resources on demand. Finally, this paper carried out functional and performance testing of the whole system. The test results show that the system meets the functional and performance requirements of suppliers, administrators and users.
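
The core high-concurrency problem in a seckill system is selling exactly the stocked quantity under a flood of parallel requests. A minimal oversell-protection sketch — the lock here stands in for whatever atomic primitive a real deployment would use, such as an atomic counter in a cache tier:

```python
import threading

class SeckillStock:
    """Atomically decremented inventory: no oversell under concurrency."""
    def __init__(self, units):
        self._units = units
        self._lock = threading.Lock()
        self.sold = 0

    def try_buy(self):
        # Check-and-decrement must be one atomic step, or two racing
        # buyers can both see the last unit as available.
        with self._lock:
            if self._units == 0:
                return False
            self._units -= 1
            self.sold += 1
            return True
```

Hammering the counter from many threads sells exactly the stocked amount, however many requests arrive.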

  16. Condom availability in high risk places and condom use

    DEFF Research Database (Denmark)

    Sandøy, Ingvild Fossgard; Blystad, Astrid; Shayo, Elizabeth H.

    2012-01-01

    study findings indicate that substantial further efforts should be made to secure that condoms are easily accessible in places where sexual relationships are initiated. Although condom distribution in drinking places has been pinpointed in the HIV/AIDS prevention strategies of all the three countries......Background A number of studies from countries with severe HIV epidemics have found gaps in condom availability, even in places where there is a substantial potential for HIV transmission. Although reported condom use has increased in many African countries, there are often big differences...... in the availability of condoms in places where people meet new sexual partners in these three African districts. Considering that previous studies have found that improved condom availability and accessibility in high risk places have a potential to increase condom use among people with multiple partners, the present...

  17. Progress in Developing a High-Availability Advanced Tokamak Pilot Plant

    Energy Technology Data Exchange (ETDEWEB)

    Brown, T.; Goldston, R.; Kessel, C.; Neilson, G.; Menard, J.; Prager, S.; Scott, S.; Titus, P.; Zarnstorff, M., E-mail: tbrown@pppl.gov [Princeton University, Princeton Plasma Physics Laboratory, Princeton (United States); Costley, A. [Henley on Thames (United Kingdom); El-Guebaly, L. [University of Wisconsin, Madison (United States); Malang, S. [Fusion Nuclear Technology Consulting, Linkenheim (Germany); Waganer, L. [St. Louis (United States)

    2012-09-15

    Full text: A fusion pilot plant study was initiated to clarify the development needs in moving from ITER to a first of a kind fusion power plant, following a path similar to the approach adopted for the commercialization of fission. The mission of the pilot plant was set to encompass component test and fusion nuclear science missions yet produce net electricity with high availability in a device designed to be prototypical of the commercial device. The objective of the study was to evaluate three different magnetic configuration options, the advanced tokamak (AT), spherical tokamak (ST) and compact stellarator (CS) in an effort to establish component characteristics, maintenance features and the general arrangement of each candidate device. With the move to look beyond ITER the fusion community is now beginning to embark on DEMO reactor studies with an emphasis on defining configuration arrangements that can meet a high availability goal. In this paper the AT pilot plant design will be presented. The selected maintenance approach, the device arrangement and sizing of the in-vessel components and details of interfacing auxiliary systems and services that impact the ability to achieve high availability operations will be discussed. Efforts made to enhance the interaction of in-vessel maintenance activities, the hot cell and the transfer process to develop simplifying solutions will also be addressed. (author)

  18. Grid Technology as a Cyberinfrastructure for Delivering High-End Services to the Earth and Space Science Community

    Science.gov (United States)

    Hinke, Thomas H.

    2004-01-01

    Grid technology consists of middleware that permits distributed computations, data and sensors to be seamlessly integrated into a secure, single-sign-on processing environment. In this environment, a user has to identify and authenticate himself once to the grid middleware, and can then utilize any of the distributed resources to which he has been granted access. Grid technology allows resources that exist in enterprises under different administrative control to be securely integrated into a single processing environment. The grid community has adopted commercial web services technology as a means for implementing persistent, re-usable grid services that sit on top of the basic distributed processing environment that grids provide. These grid services can then form building blocks for even more complex grid services. Each grid service is characterized using the Web Service Description Language, which provides a description of the interface and how other applications can access it. The emerging Semantic Grid work seeks to associate sufficient semantic information with each grid service such that applications will be able to automatically select, compose and, if necessary, substitute available equivalent services in order to assemble collections of services that are most appropriate for a particular application. Grid technology has been used to provide limited support to various Earth and space science applications. Looking to the future, this emerging grid service technology can provide a cyberinfrastructure for both the Earth and space science communities. Groups within these communities could transform those applications that have community-wide applicability into persistent grid services that are made widely available to their respective communities. In concert with grid-enabled data archives, users could easily create complex workflows that extract desired data from one or more archives and process it through an appropriate set of widely distributed grid

  19. A Runtime Testability Metric for Dynamic High-Availability Component-based Systems

    NARCIS (Netherlands)

    Gonzales-Sanchez, A.; Piel, E.A.B.; Gross, H.G.; Van Gemund, A.J.C.

    2011-01-01

    Runtime testing is emerging as the solution for the integration and assessment of highly dynamic, high-availability software systems where traditional development-time integration testing cannot be performed. A prerequisite for runtime testing is knowledge of the extent to which the system can be

  20. A High-Availability, Distributed Hardware Control System Using Java

    Science.gov (United States)

    Niessner, Albert F.

    2011-01-01

    Two independent coronagraph experiments that require 24/7 availability with different optical layouts and different motion control requirements are commanded and controlled with the same Java software system executing on many geographically scattered computer systems interconnected via TCP/IP. High availability of a distributed system requires that the computers have a robust communication messaging system making the mix of TCP/IP (a robust transport), and XML (a robust message) a natural choice. XML also adds the configuration flexibility. Java then adds object-oriented paradigms, exception handling, heavily tested libraries, and many third party tools for implementation robustness. The result is a software system that provides users 24/7 access to two diverse experiments with XML files defining the differences
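
The message layer described — robust XML payloads carried over TCP/IP — can be sketched as a round-trippable command format. The element and attribute names below are invented, not the system's actual schema, and the sketch is in Python rather than the system's Java for brevity:

```python
import xml.etree.ElementTree as ET

def make_command(device: str, action: str, **params) -> str:
    """Serialize one control command as a self-describing XML message."""
    root = ET.Element("command", device=device, action=action)
    for name, value in params.items():
        ET.SubElement(root, "param", name=name).text = str(value)
    return ET.tostring(root, encoding="unicode")

def parse_command(payload: str):
    """Decode a command message back into (device, action, params)."""
    root = ET.fromstring(payload)
    params = {p.get("name"): p.text for p in root.findall("param")}
    return root.get("device"), root.get("action"), params
```

Because the payload is plain text, it survives logging, replay and inspection on any of the geographically scattered hosts, which is part of what makes the TCP-plus-XML mix robust.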

  1. Network Information Management: The Key To Providing High WAN Availability.

    Science.gov (United States)

    Tysdal, Craig

    1996-01-01

    Discusses problems associated with increasing corporate network complexity as a result of the proliferation of client/server applications at remote locations, and suggests the key to providing high WAN (wide area network) availability is relational databases used in an integrated management approach. (LRW)

  2. Autonomic Wireless Sensor Networks: A Systematic Literature Review

    Directory of Open Access Journals (Sweden)

    Jesús M. T. Portocarrero

    2014-01-01

    Full Text Available Autonomic computing (AC) is a promising approach to meet basic requirements in the design of wireless sensor networks (WSNs), and its principles can be applied to efficiently manage node operation and optimize network resources. Middleware for WSNs supports the implementation and basic operation of such networks. In this systematic literature review (SLR) we aim to provide an overview of existing WSN middleware systems that address autonomic properties. The main goal is to identify which development approaches of AC are used for designing WSN middleware systems that allow the self-management of WSNs. Another goal is finding out which interactions and behavior can be automated in WSN components. We drew the following main conclusions from the SLR results: (i) the selected studies address WSN concerns according to the self-* properties of AC, namely, self-configuration, self-healing, self-optimization, and self-protection; (ii) the selected studies use different approaches for managing the dynamic behavior of middleware systems for WSNs, such as policy-based reasoning, context-based reasoning, feedback control loops, mobile agents, model transformations, and code generation. Finally, we identified a lack of comprehensive system architecture designs that support the autonomy of sensor networking.
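
One recurring mechanism the review identifies, the feedback control loop, follows the monitor-analyse-plan-execute shape. A minimal self-optimization pass for one node might look like this (thresholds and field names are invented for illustration):

```python
def mape_step(node, policy):
    """One monitor-analyse-plan-execute pass: lower the sampling rate
    when the battery drops below the policy's floor."""
    battery = node["battery"]                           # Monitor
    low_power = battery < policy["battery_floor"]       # Analyse
    new_rate = (policy["low_rate"] if low_power
                else policy["normal_rate"])             # Plan
    node["rate_hz"] = new_rate                          # Execute
    return node
```

Run periodically on each node, such a loop gives the network the self-optimization behaviour the surveyed middleware systems aim for, with no operator in the loop.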

  3. Exploiting peer group concept for adaptive and highly available services

    CERN Document Server

    Jan, M A; Fraz, M M; Ali, A; Ali, Arshad; Fraz, Mohammad Moazam; Jan, Muhammad Asif; Zahid, Fahd Ali

    2003-01-01

    This paper presents a prototype of a redundant, highly available and fault-tolerant peer-to-peer framework for data management. Peer-to-peer computing is gaining importance due to its flexible organization, lack of central authority, distribution of functionality to participating nodes and ability to utilize unused computational resources. The emergence of GRID computing has provided much-needed infrastructure and an administrative domain for peer-to-peer computing. The components of this framework exploit the peer group concept to scope service and information search, arrange services and information in a coherent manner, provide selective redundancy and ensure availability in the face of failure and high-load conditions. A prototype system has been implemented using JXTA peer-to-peer technology, and XML is used for service descriptions and interfaces, allowing peers to communicate with services implemented on various platforms including web services and JINI services. It utilizes code mobility to achieve role interchange amo...

  4. Availability of Automated External Defibrillators in Public High Schools.

    Science.gov (United States)

    White, Michelle J; Loccoh, Emefah C; Goble, Monica M; Yu, Sunkyung; Duquette, Deb; Davis, Matthew M; Odetola, Folafoluwa O; Russell, Mark W

    2016-05-01

    To assess automated external defibrillator (AED) distribution and cardiac emergency preparedness in Michigan secondary schools and investigate associations with school sociodemographic characteristics. Surveys were sent via electronic mail to representatives from all public high schools in 30 randomly selected Michigan counties, stratified by population. Associations of AED-related factors with school sociodemographic characteristics were evaluated using the Wilcoxon rank sum test and χ² test, as appropriate. Of 188 schools, 133 (71%) responded to the survey and all had AEDs. A larger student population was associated with fewer AEDs per 100 students. Schools with >20% students from racial minority groups had significantly fewer AEDs available per 100 students than schools with less racial diversity (P = .03). Schools with more students eligible for free and reduced lunch were less likely to have a cardiac emergency response plan (P = .02) and demonstrated less frequent AED maintenance (P = .03). Although AEDs are available at public high schools across Michigan, the number of AEDs per student varies inversely with minority student population and school size. Unequal distribution of AEDs and lack of cardiac emergency preparedness may contribute to outcomes of sudden cardiac arrest among youth. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. mdtmFTP

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, L. [Fermilab; Wu, Wenji [Fermilab; DeMar, P. [Fermilab; Katramatos, D. [BNL; Yu, D. [BNL

    2015-01-01

    To address the high-performance challenges of data transfer in the big data era, we research, develop, and implement mdtmFTP, a high-performance data transfer tool. DOE’s Advanced Scientific Computing Research (ASCR) office has funded Fermilab and Brookhaven National Laboratory to collaboratively work on the Multicore-Aware Data Transfer Middleware (MDTM) project. MDTM aims to accelerate data movement toolkits on multicore systems. mdtmFTP, the latest outcome of this continued research effort, is a high-performance data transfer tool that builds upon the MDTM middleware. Initial tests show that mdtmFTP performs better than existing data transfer tools.
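
The multicore angle — spreading transfer work across cores — can be illustrated by checksumming fixed-size chunks of a payload in a worker pool. This is illustrative only; mdtmFTP's actual core-affinity scheduling is considerably more sophisticated:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def chunk_digests(data: bytes, chunk_size: int, workers: int = 4):
    """SHA-256 each chunk in parallel; pool.map preserves chunk order,
    so a transfer can be verified piecewise."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda c: hashlib.sha256(c).hexdigest(), chunks))
```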

  6. Instrumentation Standard Architectures for Future High Availability Control Systems

    International Nuclear Information System (INIS)

    Larsen, R.S.

    2005-01-01

    Architectures for next-generation modular instrumentation standards should aim to meet a requirement of High Availability, or robustness against system failure. This is particularly important for experiments both large and small mounted on production accelerators and light sources. New standards should be based on architectures that (1) are modular in both hardware and software for ease in repair and upgrade; (2) include inherent redundancy at internal module, module assembly and system levels; (3) include modern high speed serial inter-module communications with robust noise-immune protocols; and (4) include highly intelligent diagnostics and board-management subsystems that can predict impending failure and invoke evasive strategies. The simple design principles lead to fail-soft systems that can be applied to any type of electronics system, from modular instruments to large power supplies to pulsed power modulators to entire accelerator systems. The existing standards in use are briefly reviewed and compared against a new commercial standard which suggests a powerful model for future laboratory standard developments. The past successes of undertaking such projects through inter-laboratory engineering-physics collaborations will be briefly summarized

  7. An Optimized, Data Distribution Service-Based Solution for Reliable Data Exchange Among Autonomous Underwater Vehicles.

    Science.gov (United States)

    Rodríguez-Molina, Jesús; Bilbao, Sonia; Martínez, Belén; Frasheri, Mirgita; Cürüklü, Baran

    2017-08-05

    Major challenges are presented when managing a large number of heterogeneous vehicles that have to communicate underwater in order to complete a global mission in a cooperative manner. In this kind of application domain, sending data through the environment presents issues that surpass the ones found in other overwater, distributed, cyber-physical systems (i.e., low bandwidth, unreliable transport medium, data representation and high hardware heterogeneity). This manuscript presents a Publish/Subscribe-based semantic middleware solution for unreliable scenarios and vehicle interoperability across cooperative and heterogeneous autonomous vehicles. The middleware relies on different iterations of the Data Distribution Service (DDS) software standard and their combined work between autonomous maritime vehicles and a control entity. It also uses several components with different functionalities deemed mandatory for a semantic middleware architecture oriented to maritime operations (device and service registration, context awareness, access to the application layer) where other technologies are also interwoven with the middleware (wireless communications, acoustic networks). Implementation details and test results, both in a laboratory and a deployment scenario, have been provided as a way to assess the quality of the system and its satisfactory performance.

  8. bHROS: A New High-Resolution Spectrograph Available on Gemini South

    Science.gov (United States)

    Margheim, S. J.; Gemini bHROS Team

    2005-12-01

    The Gemini bench-mounted High-Resolution Spectrograph (bHROS) is available for science programs beginning in 2006A. bHROS is the highest resolution (R=150,000) optical echelle spectrograph optimized for use on an 8-meter telescope. bHROS is fiber-fed via GMOS-S from the Gemini South focal plane and is available in both a dual-fiber Object/Sky mode and a single (larger) Object-only mode. Instrument characteristics and sample data taken during commissioning will be presented.

  9. Framework for Computation Offloading in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Dejan Kovachev

    2012-12-01

    Full Text Available The inherently limited processing power and battery lifetime of mobile phones hinder the possible execution of computationally intensive applications like content-based video analysis or 3D modeling. Offloading of computationally intensive application parts from the mobile platform into a remote cloud infrastructure or nearby idle computers addresses this problem. This paper presents our Mobile Augmentation Cloud Services (MACS) middleware, which enables adaptive extension of Android application execution from a mobile client into the cloud. Applications are developed by using the standard Android development pattern. The middleware does the heavy lifting of adaptive application partitioning, resource monitoring and computation offloading. These elastic mobile applications can run as usual mobile applications, but they can also use remote computing resources transparently. Two prototype applications using the MACS middleware demonstrate the benefits of the approach. The evaluation shows that applications which involve costly computations can benefit from offloading, with around 95% energy savings and significant performance gains compared to local execution only.
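
The offloading decision at the heart of such middleware is commonly modelled as a simple time (or energy) inequality; a sketch of that reasoning, with all figures illustrative rather than MACS's actual partitioning cost model:

```python
def should_offload(cycles, input_bytes, local_hz, uplink_bps, cloud_hz):
    """Offload when shipping the input plus computing remotely is
    faster than computing locally on the handset."""
    t_local = cycles / local_hz
    t_remote = (input_bytes * 8) / uplink_bps + cycles / cloud_hz
    return t_remote < t_local
```

The same comparison explains the paper's result qualitatively: compute-heavy, input-light parts (video analysis, 3D modeling) favour the cloud, while chatty parts with large inputs stay on the device.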

  10. Research on high availability architecture of SQL and NoSQL

    Science.gov (United States)

    Wang, Zhiguo; Wei, Zhiqiang; Liu, Hao

    2017-03-01

    With the advent of the era of big data, the amount and importance of data have increased dramatically. SQL databases have developed in performance and scalability, but more and more companies tend to use NoSQL databases instead, because a NoSQL database has a simpler data model and stronger extension capacity than a SQL database. Almost all database designers, for SQL and NoSQL databases alike, aim to improve performance and ensure availability through a well-designed architecture that can reduce the effects of software and hardware failures, so that they can provide a better experience for their customers. In this paper, I mainly discuss the architectures of MySQL, MongoDB, and Redis, which are highly available and have been deployed in practical application environments, and design a hybrid architecture.
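
One concrete shape such a hybrid SQL/NoSQL architecture often takes is a key-value cache in front of the relational store. A read-through sketch, with a plain dict standing in for the cache tier and sqlite3 standing in for the SQL database (the schema is invented for illustration):

```python
import sqlite3

class ReadThroughCache:
    """Serve reads from the cache tier; fall back to SQL on a miss and
    populate the cache for next time."""
    def __init__(self, db):
        self.db, self.cache, self.hits = db, {}, 0

    def get_name(self, user_id):
        if user_id in self.cache:
            self.hits += 1
            return self.cache[user_id]
        row = self.db.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
        self.cache[user_id] = row[0] if row else None
        return self.cache[user_id]

# Seed the "SQL" side with one record.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users VALUES (1, 'alice')")
store = ReadThroughCache(db)
```

The first read goes to the relational store; repeats are absorbed by the cache tier, which is the availability-and-performance split the hybrid design aims for.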

  11. EMI Quality Assurance Processes (PS06-4-499)

    International Nuclear Information System (INIS)

    Aimar, A; Alandes-Pradillo, M; Dini, L; Cernak, J; Dongiovanni, D; Kenny, E

    2011-01-01

    The European Middleware Initiative (EMI) is the collaboration of the major European middleware providers, ARC, gLite, UNICORE, and dCache. It aims to deliver a consolidated set of middleware components for deployment in EGI and PRACE, extend the interoperability and integration between grids and other computing infrastructures, strengthen the reliability and manageability of the services and establish a sustainable model to support, harmonise and evolve the middleware, ensuring it responds to the requirements of the scientific communities relying on it. EMI will carry out the collective task of supporting and maintaining the middleware for their user communities. In order to enable the infrastructures to achieve this task, the middleware services must play an important role and mark a clear transition to more sustainable models by adopting best-practice service provision methods such as the ITIL processes or the ISO guidelines for software quality and validation. Repositories of packages, status reports, quality metrics and test and compliance programs are created and maintained to support the project software engineering activities and other providers of applications and services based on the EMI middleware. This article reports on the initial work of the EMI project and the solutions adopted for the software releases, development processes, quality compliance metrics and distribution repositories.

  12. The GLEaMviz computational tool, a publicly available software to explore realistic epidemic spreading scenarios at the global scale

    Directory of Open Access Journals (Sweden)

    Quaggiotto Marco

    2011-02-01

    Full Text Available Abstract Background Computational models play an increasingly important role in the assessment and control of public health crises, as demonstrated during the 2009 H1N1 influenza pandemic. Much research has been done in recent years in the development of sophisticated data-driven models for realistic computer-based simulations of infectious disease spreading. However, only a few computational tools are presently available for assessing scenarios, predicting epidemic evolutions, and managing health emergencies that can benefit a broad audience of users including policy makers and health institutions. Results We present "GLEaMviz", a publicly available software system that simulates the spread of emerging human-to-human infectious diseases across the world. The GLEaMviz tool comprises three components: the client application, the proxy middleware, and the simulation engine. The latter two components constitute the GLEaMviz server. The simulation engine leverages the Global Epidemic and Mobility (GLEaM) framework, a stochastic computational scheme that integrates worldwide high-resolution demographic and mobility data to simulate disease spread on the global scale. The GLEaMviz design aims at maximizing flexibility in defining the disease compartmental model and configuring the simulation scenario; it allows the user to set a variety of parameters including compartment-specific features, transition values, and environmental effects. The output is a dynamic map and a corresponding set of charts that quantitatively describe the geo-temporal evolution of the disease. The software is designed as a client-server system. The multi-platform client, which can be installed on the user's local machine, is used to set up simulations that will be executed on the server, thus avoiding specific requirements for large computational capabilities on the user side. 
Conclusions The user-friendly graphical interface of the GLEaMviz tool, along with its high level
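The stochastic compartmental scheme described above can be illustrated with a minimal chain-binomial SIR simulation; this is an illustrative sketch with invented parameter values, not the GLEaM engine itself:

```python
import random

def stochastic_sir(s, i, r, beta, gamma, steps, seed=42):
    """Chain-binomial stochastic SIR: each day every susceptible is
    infected with prob. 1-(1-beta/N)^I, every infective recovers with
    prob. gamma. A toy stand-in for a GLEaM-style simulation engine."""
    rng = random.Random(seed)
    n = s + i + r
    history = [(s, i, r)]
    for _ in range(steps):
        p_inf = 1 - (1 - beta / n) ** i
        new_inf = sum(rng.random() < p_inf for _ in range(s))
        new_rec = sum(rng.random() < gamma for _ in range(i))
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        history.append((s, i, r))
    return history

# 1000-person toy population, 50 simulated days (invented parameters)
hist = stochastic_sir(s=990, i=10, r=0, beta=0.4, gamma=0.2, steps=50)
```

Real engines such as GLEaM additionally couple many such subpopulations through mobility networks; the per-step binomial draws above are the core stochastic ingredient.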

  13. LHCbDIRAC as Apache Mesos microservices

    OpenAIRE

    Haen, Christophe; Couturier, Benjamin

    2017-01-01

    The LHCb experiment relies on LHCbDIRAC, an extension of DIRAC, to drive its offline computing. This middleware provides a development framework and a complete set of components for building distributed computing systems. These components are currently installed and run on virtual machines (VM) or bare metal hardware. Due to the increased workload, high availability is becoming more and more important for the LHCbDIRAC services, and the current installation model is showing its limitations. A...

  14. The Einstein Genome Gateway using WASP - a high throughput multi-layered life sciences portal for XSEDE.

    Science.gov (United States)

    Golden, Aaron; McLellan, Andrew S; Dubin, Robert A; Jing, Qiang; O Broin, Pilib; Moskowitz, David; Zhang, Zhengdong; Suzuki, Masako; Hargitai, Joseph; Calder, R Brent; Greally, John M

    2012-01-01

Massively-parallel sequencing (MPS) technologies and their diverse applications in genomics and epigenomics research have yielded enormous new insights into the physiology and pathophysiology of the human genome. The biggest hurdle remains the magnitude and diversity of the datasets generated, compromising our ability to manage, organize, process and ultimately analyse data. The Wiki-based Automated Sequence Processor (WASP), developed at the Albert Einstein College of Medicine (hereafter Einstein), uniquely manages to tightly couple the sequencing platform, the sequencing assay, sample metadata and the automated workflows deployed on a heterogeneous high performance computing cluster infrastructure that yield sequenced, quality-controlled and 'mapped' sequence data, all within one operating environment accessible through a web-based GUI. WASP at Einstein processes 4-6 TB of data per week; since its production cycle commenced it has processed ~1 PB of data overall and has revolutionized how users interact with these new genomic technologies, leaving them blissfully unaware of the data storage, management and, most importantly, processing services they request. The abstraction of such computational complexity for the user in effect makes WASP an ideal middleware solution, and an appropriate basis for the development of a grid-enabled resource - the Einstein Genome Gateway - as part of the Extreme Science and Engineering Discovery Environment (XSEDE) program. In this paper we discuss the existing WASP system, its proposed middleware role, and its planned interaction with XSEDE to form the Einstein Genome Gateway.

  15. High School Physics Availability: Results from the 2012-13 Nationwide Survey of High School Physics Teachers. Focus On

    Science.gov (United States)

    White, Susan; Tesfaye, Casey Langer

    2014-01-01

    In this report, the authors share their analysis of the data from over 3,500 high schools in the U.S. beginning with an examination of the availability of physics in U.S. high schools. The schools in their sample are a nationally-representative random sample of the almost 25,000 high schools in forty-nine of the fifty states. Table 1 shows the…

  16. High Available COTS Based Computer for Space

    Science.gov (United States)

    Hartmann, J.; Magistrati, Giorgio

    2015-09-01

The availability and reliability factors of a system are central requirements of a target application. From a simple fuel injection system used in cars up to the flight control system of an autonomously navigating spacecraft, each application defines its specific availability factor under the target application's boundary conditions. Increasing quality requirements on data processing systems used in space flight applications call for new architectures that fulfill the availability and reliability demands as well as the growing need for data processing power. Alongside these increased quality requirements, simplification and the use of COTS components to decrease costs, while keeping interface compatibility with currently used system standards, are clear customer needs. Data processing system design is mostly dominated by strict fulfillment of customer requirements, and reuse of available computer systems has not always been possible, owing to obsolescence of EEE parts, insufficient IO capabilities, or the fact that available data processing systems did not provide the required scalability and performance.

  17. Building highly available control system applications with Advanced Telecom Computing Architecture and open standards

    International Nuclear Information System (INIS)

    Kazakov, Artem; Furukawa, Kazuro

    2010-01-01

Requirements for modern and future control systems for large projects like the International Linear Collider demand high availability of control system components. The telecom industry has recently produced an open hardware specification - the Advanced Telecom Computing Architecture (ATCA) - aimed at better reliability, availability and serviceability. Since its first market appearance in 2004, the ATCA platform has shown tremendous growth, proved to be stable, and is well represented by a number of vendors. ATCA is an industry standard for highly available systems. The Service Availability Forum (SAF), a consortium of leading communications and computing companies, in turn describes the interaction between hardware and software. SAF defines a set of specifications such as the Hardware Platform Interface and the Application Interface Specification. The SAF specifications provide an extensive description of highly available systems, services and their interfaces. Originally aimed at telecom applications, these specifications can be used for accelerator controls software as well. This study describes the benefits of using these specifications and their possible adoption for accelerator control systems. It is demonstrated how the EPICS Redundant IOC was extended using the Hardware Platform Interface specification, which made it possible to utilize the benefits of the ATCA platform.
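The availability mechanism these specifications standardize can be illustrated generically; the sketch below (not the actual SAF HPI C API, and with an invented timeout value) shows an active/standby pair with heartbeat-driven switchover:

```python
HEARTBEAT_TIMEOUT = 3.0  # seconds of silence before switchover (invented value)

class RedundantPair:
    """Toy active/standby supervisor: the standby is promoted when the
    active side stops heartbeating. Exactly one side is active at a time."""
    def __init__(self):
        self.active = "A"
        self.last_beat = {"A": 0.0, "B": 0.0}

    def heartbeat(self, node, now):
        self.last_beat[node] = now

    def supervise(self, now):
        # promote the peer if the active node has gone silent too long
        if now - self.last_beat[self.active] > HEARTBEAT_TIMEOUT:
            self.active = "B" if self.active == "A" else "A"
        return self.active

pair = RedundantPair()
pair.heartbeat("A", 0.0)
pair.heartbeat("B", 0.0)
state1 = pair.supervise(1.0)   # active node still healthy
pair.heartbeat("B", 4.0)
state2 = pair.supervise(5.0)   # "A" silent for 5 s -> failover
```

The real SAF state models are far richer (service units, protection groups, administrative states), but the failure-detection-then-switchover loop is the same idea.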

  18. To Vote Before Decide: A Logless One-Phase Commit Protocol for Highly-Available Datastores

    OpenAIRE

    Zhu, Yuqing; Yu, Philip S.; Yi, Guolei; Ma, Wenlong; Guo, Mengying; Liu, Jianxun

    2017-01-01

Highly-available datastores are widely deployed for online applications. However, many online applications are not content with the simple data access interface currently provided by highly-available datastores. Distributed transaction support is demanded by applications such as the large-scale online payment systems used by Alipay or Paypal. Current solutions to distributed transaction can spend more than half of the whole transaction processing time in distributed commit. An efficient atomic commit p...
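The safety rule that any atomic-commit protocol, one-phase or two-phase, must preserve can be stated in a few lines; this toy version shows only the decision rule, not the paper's protocol:

```python
def commit_decision(votes):
    """Atomic-commit safety rule: commit only on a unanimous 'yes';
    any 'no' vote or an empty vote set forces an abort."""
    return len(votes) > 0 and all(v == "yes" for v in votes)

outcome_ok = commit_decision(["yes", "yes", "yes"])   # unanimous -> commit
outcome_bad = commit_decision(["yes", "no", "yes"])   # one dissent -> abort
```

The engineering work in protocols like the one above lies in reaching this decision with fewer message rounds and without durable logging, while still never committing on a non-unanimous vote.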

  19. Implementation of High-Availability Cloud Storage Using OwnCloud

    Directory of Open Access Journals (Sweden)

    Ikhwan Ar-Razy

    2016-04-01

Full Text Available The implementation of practicum courses in the Department of Computer Engineering, Diponegoro University, has some drawbacks; one of them is that many lab assistants and practitioners experience difficulties with archiving. One solution to the problem is to implement a shared file storage system that is easy to use and can be accessed by both practitioners and lab assistants. The purpose of this research is to build a cloud-based storage system that is resilient to hardware failure and highly available. This purpose is achieved by designing an appropriate methodology. The result is a server-side storage system that uses virtualization and data replication (DRBD) as its storage method. The system is composed of two physical servers and one virtual server. The physical servers run Proxmox VE as the operating system, and the virtual server runs Ubuntu Server. The OwnCloud application and the files are stored in the virtual server. The file storage system has several major functions: upload, download, user management, remove, and restore. These functions are accessed through web pages, a desktop application and an Android application.

  20. The availability of novelty sweets within high school localities.

    Science.gov (United States)

    Aljawad, A; Morgan, M Z; Rees, J S; Fairchild, R

    2016-06-10

Background Reducing sugar consumption is a primary focus of current global public health policy. Achieving 5% of total energy from free sugars will be difficult acknowledging the concentration of free sugars in sugar-sweetened beverages, confectionery and as hidden sugars in many savoury items. The expansion of the novelty sweet market in the UK has significant implications for children and young adults as these products contribute to dental caries, dental erosion and obesity. Objective To identify the most available types of novelty sweets within the high school fringe in Cardiff, UK and to assess their price range and where and how they were displayed in shops. Subjects and methods Shops within a ten-minute walking distance around five purposively selected high schools in the Cardiff area representing different levels of deprivation were visited. Shops in Cardiff city centre and three supermarkets were also visited to identify the most commonly available novelty sweets. Results The ten most popular novelty sweets identified in these scoping visits were (in descending order): Brain Licker, Push Pop, Juicy Drop, Lickedy Lips, Big Baby Pop, Vimto candy spray, Toxic Waste, Tango candy spray, Brain Blasterz Bitz and Mega Mouth candy spray. Novelty sweets were located on low shelves which were accessible to all age-groups in 73% (14 out of 19) of the shops. Novelty sweets were displayed in the checkout area in 37% (seven out of 19) of the shops. The price of the top ten novelty sweets ranged from 39p to £1. Conclusion A wide range of acidic and sugary novelty sweets were easily accessible and priced within pocket money range. Personnel involved in delivering dental and wider health education or health promotion need to be aware of recent developments in children's confectionery. The potential effects of these novelty sweets on both general and dental health require further investigation.

  1. High availability based on production information systems research and practice

    International Nuclear Information System (INIS)

    Lu Weiping

    2010-01-01

Using the deployment of the production information system at Qinshan Nuclear Power Co., Ltd. as an example, combined with the handling of CEAS failures, actual cases of ongoing monitoring and tuning of the server (operating system), the database and the application software are discussed. For the system to maintain high availability, performance and security, it is necessary not only to rationally configure and deploy the server (operating system), but also to carry out well-designed and sustained optimization of the database and application software. (authors)

  2. An Electronic Healthcare Record Server Implemented in PostgreSQL

    Directory of Open Access Journals (Sweden)

    Tony Austin

    2015-01-01

Full Text Available This paper describes the implementation of an Electronic Healthcare Record server inside a PostgreSQL relational database without dependency on any further middleware infrastructure. The five-part international standard for communicating healthcare records (ISO EN 13606) is used as the information basis for the design of the server. We describe some of the features that this standard demands that are provided by the server, and other areas where assumptions about the durability of communications or the presence of middleware lead to a poor fit. Finally, we discuss the use of the server in two real-world scenarios including a commercial application.

  3. The mobile phone in Africa: Providing services to the masses

    CSIR Research Space (South Africa)

    Botha, Adèle

    2010-08-31

    Full Text Available and operational considerations associated with creating a middleware platform for mobile services. The platform should be able to support different mobile paradigms (voice, text, multimedia, mobile web, applications) using a variety of communications protocols...

  4. A Primer on High-Throughput Computing for Genomic Selection

    Directory of Open Access Journals (Sweden)

    Xiao-Lin eWu

    2011-02-01

    Full Text Available High-throughput computing (HTC uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general purpose computation on a graphics processing unit (GPU provide a new-generation approach to massive parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin – Madison, which can be leveraged for genomic selection, in terms of central processing unit (CPU capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of
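The batch/pipelining idea above can be sketched in Python: independent per-trait jobs are farmed out to a worker pool so wall time approaches that of the slowest job rather than the sum of all jobs (the workload below is a hypothetical stand-in, not a real genomic evaluation):

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_trait(trait):
    """Stand-in for one independent genomic-evaluation job (hypothetical)."""
    return trait, sum(m % 7 for m in range(1, 101))

traits = [f"trait_{i}" for i in range(8)]
# Farm the independent jobs out to a pool of workers, HTC-style: throughput
# scales with the number of workers because the jobs do not depend on
# each other.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(evaluate_trait, traits))
```

On a real cluster the "pool" would be a batch scheduler (e.g. HTCondor slots) rather than threads, but the decomposition into independent tasks is the same.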

  5. Main Directions in Ensuring Business Continuity for Information and Telecommunication Systems of High Availability

    Directory of Open Access Journals (Sweden)

    Boris Mikhailovich Egorov

    2014-12-01

Full Text Available The results of an analysis of the main directions for ensuring business continuity of high-availability information and telecommunication systems (such as the enterprise continuity program for the information and telecommunication system of the Bank of Russia) are given, in relation to the expanding range of problems, their intellectualization and modern IT implementations.

  6. High-resolution projections of surface water availability for Tasmania, Australia

    Directory of Open Access Journals (Sweden)

    J. C. Bennett

    2012-05-01

Full Text Available Changes to streamflows caused by climate change may have major impacts on the management of water for hydro-electricity generation and agriculture in Tasmania, Australia. We describe changes to Tasmanian surface water availability from 1961–1990 to 2070–2099 using high-resolution simulations. Six fine-scale (∼10 km²) simulations of daily rainfall and potential evapotranspiration are generated with the CSIRO Conformal Cubic Atmospheric Model (CCAM), a variable-resolution regional climate model (RCM). These variables are bias-corrected with quantile mapping and used as direct inputs to the hydrological models AWBM, IHACRES, Sacramento, SIMHYD and SMAR-G to project streamflows.

The performance of the hydrological models is assessed against 86 streamflow gauges across Tasmania. The SIMHYD model is the least biased (median bias = −3%), while IHACRES has the largest bias (median bias = −22%). We find the hydrological models that best simulate observed streamflows produce similar streamflow projections.

    There is much greater variation in projections between RCM simulations than between hydrological models. Marked decreases of up to 30% are projected for annual runoff in central Tasmania, while runoff is generally projected to increase in the east. Daily streamflow variability is projected to increase for most of Tasmania, consistent with increases in rainfall intensity. Inter-annual variability of streamflows is projected to increase across most of Tasmania.

    This is the first major Australian study to use high-resolution bias-corrected rainfall and potential evapotranspiration projections as direct inputs to hydrological models. Our study shows that these simulations are capable of producing realistic streamflows, allowing for increased confidence in assessing future changes to surface water variability.
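The quantile-mapping bias correction used above replaces each model value with the observed value at the same empirical quantile. A stripped-down sketch with toy samples (not the study's data or its exact estimator):

```python
def quantile_map(value, model_sample, obs_sample):
    """Empirical quantile mapping: locate `value`'s quantile in the model
    distribution, then return the observed value at that same quantile."""
    model = sorted(model_sample)
    obs = sorted(obs_sample)
    rank = sum(m < value for m in model)        # empirical rank in model CDF
    q = rank / len(model)
    idx = min(int(q * len(obs)), len(obs) - 1)  # same quantile in obs CDF
    return obs[idx]

model_sample = [3, 4, 5, 6, 7]   # toy model output, biased +2
obs_sample = [1, 2, 3, 4, 5]     # toy observations
corrected = quantile_map(5, model_sample, obs_sample)   # bias removed -> 3
```

Production implementations interpolate between quantiles and fit the mapping per season and per grid cell, but the CDF-matching principle is as shown.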

  7. High refuge availability on coral reefs increases the vulnerability of reef-associated predators to overexploitation.

    Science.gov (United States)

    Rogers, Alice; Blanchard, Julia L; Newman, Steven P; Dryden, Charlie S; Mumby, Peter J

    2018-02-01

    Refuge availability and fishing alter predator-prey interactions on coral reefs, but our understanding of how they interact to drive food web dynamics, community structure and vulnerability of different trophic groups is unclear. Here, we apply a size-based ecosystem model of coral reefs, parameterized with empirical measures of structural complexity, to predict fish biomass, productivity and community structure in reef ecosystems under a broad range of refuge availability and fishing regimes. In unfished ecosystems, the expected positive correlation between reef structural complexity and biomass emerges, but a non-linear effect of predation refuges is observed for the productivity of predatory fish. Reefs with intermediate complexity have the highest predator productivity, but when refuge availability is high and prey are less available, predator growth rates decrease, with significant implications for fisheries. Specifically, as fishing intensity increases, predators in habitats with high refuge availability exhibit vulnerability to over-exploitation, resulting in communities dominated by herbivores. Our study reveals mechanisms for threshold dynamics in predators living in complex habitats and elucidates how predators can be food-limited when most of their prey are able to hide. We also highlight the importance of nutrient recycling via the detrital pathway, to support high predator biomasses on coral reefs. © 2018 by the Ecological Society of America.

  8. Towards a comfortable, energy-efficient office using a publish-subscribe pattern in an internet of things environment

    CSIR Research Space (South Africa)

    Butgereit, LL

    2014-09-01

Full Text Available an implementation of the pub-sub pattern specifically for an Internet of Things platform which operated at four levels - sensors (and actuators), supervisors, middleware, and applications. This platform was specifically instantiated to control a typical office meeting...
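The pub-sub pattern referred to above decouples sensor-level publishers from application-level subscribers through a broker; a minimal in-process sketch (topic names are invented):

```python
from collections import defaultdict

class Broker:
    """Minimal publish-subscribe broker: publishers and subscribers are
    decoupled and only share topic names."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

readings = []
broker = Broker()
broker.subscribe("office/temperature", readings.append)  # application level
broker.publish("office/temperature", 23.5)               # sensor level
```

An IoT middleware plays the broker role across the network (e.g. via a message protocol such as MQTT) instead of in a single process, but the subscribe/publish contract is the same.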

  9. CMS experience of running glideinWMS in High Availability mode

    CERN Document Server

    Sfiligoi, Igor; Belforte, Stefano; Mc Crea, Alison Jean; Larson, Krista Elaine; Zvada, Marian; Holzman, Burt; P Mhashilkar; Bradley, Daniel Charles; Saiz Santos, Maria Dolores; Fanzago, Federica; Gutsche, Oliver; Martin, Terrence; Wuerthwein, Frank Karl

    2013-01-01

    The CMS experiment at the Large Hadron Collider is relying on the HTCondor-based glideinWMS batch system to handle most of its distributed computing needs. In order to minimize the risk of disruptions due to software and hardware problems, and also to simplify the maintenance procedures, CMS has set up its glideinWMS instance to use most of the attainable High Availability (HA) features. The setup involves running services distributed over multiple nodes, which in turn are located in several physical locations, including Geneva, Switzerland, Chicago, Illinois and San Diego, California. This paper describes the setup used by CMS, the HA limits of this setup, as well as a description of the actual operational experience spanning many months.

  10. Availability and Price of High Quality Day Care and Female Employment

    DEFF Research Database (Denmark)

    Simonsen, Marianne

In this paper I analyse to what degree the availability and price of high-quality publicly subsidised childcare affect female employment for women living in couples following maternity leave. The results show that unrestricted access to day care has a significantly positive effect on female employment. The price effect is significantly negative: an increase in the price of child care of €1 will decrease female employment by 0.08%, corresponding to a price elasticity of −0.17. This effect prevails during the first 12 months after childbirth.

  11. EMI datalib - joining the best of ARC and gLite data libraries

    International Nuclear Information System (INIS)

    Nilsen, J K; Cameron, D; Devresse, A; Molnar, Zs; Salichos, M; Nagy, Zs

    2012-01-01

To manage data in the grid, with its jungle of protocols and enormous amount of data in different storage solutions, it is important to have a strong, versatile and reliable data management library. While there are several data management tools and libraries available, they all have different strengths and weaknesses, and it can be hard to decide which tool to use for which purpose. EMI is a collaboration between the European middleware providers aiming to take the best out of each middleware to create one consolidated, all-purpose grid middleware. When EMI started there were two main tools for managing data - gLite had lcg_util and the GFAL library, ARC had the ARC data tools and libarcdata2. While different in design and purpose, they both have the same goal: to manage data in the grid. The design of the new EMI datalib was ready by the end of 2011, and a first prototype is now implemented and going through a thorough testing phase. This presentation will give the latest results of the consolidated library together with an overview of the design, test plan and roadmap of EMI datalib.

  12. 76 FR 14025 - Guidance for Industry on Planning for the Effects of High Absenteeism To Ensure Availability of...

    Science.gov (United States)

    2011-03-15

    ...] Guidance for Industry on Planning for the Effects of High Absenteeism To Ensure Availability of Medically... entitled ``Planning for the Effects of High Absenteeism to Ensure Availability of Medically Necessary Drug... components to develop production plans in the event of an emergency that results in high absenteeism at one...

  13. Availability of high-magnitude streamflow for groundwater banking in the Central Valley, California

    Science.gov (United States)

    Kocis, Tiffany N.; Dahlke, Helen E.

    2017-08-01

California’s climate is characterized by the largest precipitation and streamflow variability observed within the conterminous US. This, combined with chronic groundwater overdraft of 0.6-3.5 km³ yr⁻¹, creates the need to identify additional surface water sources available for groundwater recharge using methods such as agricultural groundwater banking, aquifer storage and recovery, and spreading basins. High-magnitude streamflow, i.e. flow above the 90th percentile, that exceeds environmental flow requirements and current surface water allocations under California water rights, could be a viable source of surface water for groundwater banking. Here, we present a comprehensive analysis of the magnitude, frequency, duration and timing of high-magnitude streamflow (HMF) for 93 stream gauges covering the Sacramento, San Joaquin and Tulare basins in California. The results show that in an average year with HMF approximately 3.2 km³ of high-magnitude flow is exported from the entire Central Valley to the Sacramento-San Joaquin Delta, often at times when environmental flow requirements of the Delta and major rivers are exceeded. High-magnitude flow occurs, on average, during 7 and 4.7 out of 10 years in the Sacramento River and the San Joaquin-Tulare Basins, respectively, from just a few storm events (5-7 1-day peak events) lasting for 25-30 days between November and April. The results suggest that there is sufficient unmanaged surface water physically available to mitigate long-term groundwater overdraft in the Central Valley.
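The HMF definition used above (flow above the 90th percentile) reduces to a simple empirical-threshold computation; a toy sketch with synthetic daily flows, not the study's gauge data or its exact percentile estimator:

```python
def high_magnitude_flow(flows, pct=0.90):
    """Return the empirical `pct` flow threshold and the days above it
    (HMF in the study's sense: flow above the 90th percentile)."""
    s = sorted(flows)
    threshold = s[int(pct * len(s)) - 1]   # simple empirical percentile
    return threshold, [f for f in flows if f > threshold]

flows = list(range(1, 101))             # toy record of 100 daily flows
thr, hmf = high_magnitude_flow(flows)   # thr = 90, ten HMF days
```

The study then layers environmental flow requirements and water-rights allocations on top of this threshold to isolate the portion actually available for banking.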

  14. Operational Forest Monitoring in Siberia Using Multi-source Earth Observation Data

    Directory of Open Access Journals (Sweden)

    C. Hüttich

    2014-08-01

Full Text Available Forest cover disturbance rates are increasing in the forests of Siberia due to intensification of human activities and climate change. In this paper two satellite data sources were used for automated forest cover change detection. Annual ALOS PALSAR backscatter mosaics (2007–2010) were used for yearly forest loss monitoring. Time series of the Enhanced Vegetation Index (EVI, 2000–2014) from the Moderate Resolution Imaging Spectroradiometer (MODIS) were integrated in a web-based data middleware system to assess the capabilities of near-real-time detection of forest disturbances using breakpoint detection by the Breaks For Additive Season and Trend (BFAST) method. The SAR-based average accuracy of the forest loss detection was 70%, whereas the MODIS-based change assessment using breakpoint detection achieved average accuracies of 50% for trend-based breakpoints and 43.4% for season-based breakpoints. It was demonstrated that SAR remote sensing is a highly accurate tool for up-to-date forest monitoring. Web-based data middleware systems like the Earth Observation Monitor, linked with MODIS time series, provide access and easy-to-use tools for on-demand change monitoring in remote Siberian forests.

  15. The D3 Middleware Architecture

    Science.gov (United States)

    Walton, Joan; Filman, Robert E.; Korsmeyer, David J.; Lee, Diana D.; Mak, Ron; Patel, Tarang

    2002-01-01

DARWIN is a NASA-developed, Internet-based system for enabling aerospace researchers to securely and remotely access and collaborate on the analysis of aerospace vehicle design data, primarily the results of wind-tunnel testing and numeric (e.g., computational fluid-dynamics) model executions. DARWIN captures, stores and indexes data; manages derived knowledge (such as visualizations across multiple datasets); and provides an environment for designers to collaborate in the analysis of test results. DARWIN is an interesting application because it supports high volumes of data, integrates multiple modalities of data display (e.g., images and data visualizations), and provides non-trivial access control mechanisms. DARWIN enables collaboration by allowing users to share not only visualizations of data, but also commentary about and views of data. Here we provide an overview of the architecture of D3, the third generation of DARWIN. Earlier versions of DARWIN were characterized by browser-based interfaces and a hodge-podge of server technologies: CGI scripts, applets, PERL, and so forth. But browsers proved difficult to control, and a proliferation of computational mechanisms proved inefficient and difficult to maintain. D3 substitutes a pure-Java approach for that medley: a Java client communicates (through RMI over HTTPS) with a Java-based application server. Code on the server accesses information from JDBC databases, distributed LDAP security services, and a collaborative information system. D3 is a three-tier architecture, but unlike 'E-commerce' applications, the data usage pattern suggests different strategies than traditional Enterprise Java Beans - we need to move volumes of related data together, considerable processing happens on the client, and the 'business logic' on the server side is primarily data integration and collaboration. With D3, we are extending DARWIN to handle other data domains and to be a distributed system, where a single login allows a user

  16. TECHNICAL SUPPORT AS A BASIS OF HIGH AVAILABILITY LEVEL AND IT SYSTEM SERVICE QUALITY

    Directory of Open Access Journals (Sweden)

    Dejan Vidojevic

    2007-06-01

Full Text Available This work presents the development and implementation methodology of technical support in IT system operation. The methodology is developed and applied in a real system (the information system of the Tax Administration - DIS 2003), which is technically very complex and highly distributed. The results of IT system availability assessment and identification of the critical components are input parameters in the process of establishing the technical support. The importance of technical support for achieving optimal IT system availability and IT service quality is assessed on the basis of one year of its operation. The history of technical support system operation is a basis for further continuous improvement.

  17. A Customizable Platform for High-availability Monitoring, Control and Data Distribution at CERN

    CERN Document Server

    Braeger, M; Lang, A; Suwalska, A

    2011-01-01

In complex operational environments, monitoring and control systems are asked to satisfy ever more stringent requirements. In addition to reliability, the availability of the system has become crucial to accommodate tight planning schedules and increased dependencies on other systems. In this context, adapting a monitoring system to changes in its environment and meeting requests for new functionalities are increasingly challenging. Combining maintainability and high availability within a portable architecture is the focus of this work. To meet these increased requirements, we present a new modular system developed at CERN. Using the experience gained from previous implementations, the new platform uses a multi-server architecture that allows running patches and updates to the application without affecting its availability. The data acquisition can also be reconfigured without any downtime or potential data loss. The modular architecture builds on a core system that aims to be reusable for mu...

  18. Towards Python-based Domain-specific Languages for Self-reconfigurable Modular Robotics Research

    DEFF Research Database (Denmark)

    Moghadam, Mikael; Christensen, David Johan; Brandt, David

    2011-01-01

communication, module identification, easy software transfer and reliable module-to-module communication. The end result is a software platform for modular robots that, where appropriate, builds on existing work in operating systems, virtual machines, middleware and high-level languages.

  19. TinyMAPS : a lightweight Java-based mobile agent system for wireless sensor networks

    NARCIS (Netherlands)

    Aiello, F.; Fortino, G.; Galzarano, S.; Vittorioso, A.; Brazier, F.M.T.; Nieuwenhuis, K.; Pavlin, G.; Warnier, M.; Badica, C.

    2012-01-01

In the context of the development of wireless sensor network (WSN) applications, effective programming frameworks and middleware for rapid and efficient prototyping of resource-constrained applications are much needed. Mobile agents are an effective distributed programming paradigm that is

  20. ENCOURAGEing results on ICT for energy efficient buildings

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Skou, Arne Joachim; Olsen, Petur

    2016-01-01

    This paper presents how the ICT infrastructure developed in the European ENCOURAGE project, centered around a message oriented middleware, enabled energy savings in buildings and households. The components of the middleware, as well as the supervisory control strategy, are overviewed, to support...

  1. Designing high availability systems DFSS and classical reliability techniques with practical real life examples

    CERN Document Server

    Taylor, Zachary

    2014-01-01

    A practical, step-by-step guide to designing world-class, high-availability systems using both classical and DFSS reliability techniques. Whether designing telecom, aerospace, automotive, medical, financial, or public safety systems, every engineer aims for the utmost reliability and availability in the systems he or she designs. But between the dream of world-class performance and reality falls the shadow of complexities that can bedevil even the most rigorous design process. While there is an array of robust predictive engineering tools, there has been no single-source guide to understan

  2. Deeper snow alters soil nutrient availability and leaf nutrient status in high Arctic tundra

    DEFF Research Database (Denmark)

    Semenchuk, Philipp R.; Elberling, Bo; Amtorp, Cecilie

    2015-01-01

    season. Changing nutrient availability may be reflected in plant N and chlorophyll content and lead to increased photosynthetic capacity, plant growth, and ultimately carbon (C) assimilation by plants. In this study, we increased snow depth and thereby cold-season soil temperatures in high Arctic...... Svalbard in two vegetation types spanning three moisture regimes. We measured growing-season availability of ammonium (NH4 (+)), nitrate (NO3 (-)), total dissolved organic carbon (DOC) and nitrogen (TON) in soil; C, N, delta N-15 and chlorophyll content in Salix polaris leaves; and leaf sizes of Salix...

  3. Patterns for election of active computing nodes in high availability distributed data acquisition systems

    International Nuclear Information System (INIS)

    Nair, Preetha; Padmini, S.; Diwakar, M.P.; Gohel, Nilesh

    2013-01-01

    Computer based systems for power plant and research reactors are expected to have high availability. Redundancy is a common approach to improve the availability of a system. In redundant configuration the challenge is to select one node as active, and in case of failure of current active node provide automatic fast switchover by electing another node to function as active and restore normal operation. Additional constraints include: exactly one node should be elected as active in an n-way redundant architecture. This paper discusses various high availability configurations developed by Electronics Division and deployed in power and research reactors and patterns followed to elect active nodes of distributed data acquisition systems. The systems are categorized into two: Active/Passive where changeover takes effect only on the failure of Active node, and Active/Active, where changeover is effective in alternate cycles. A novel concept of priority driven state based Active (Master) node election pattern is described for Active/Passive systems which allows multiple redundancy and dynamic election of single master. The paper also discusses the Active/Active pattern, which uncovers failure early by activating all the nodes alternatively in a redundant system. This pattern can be extended to multiple redundant nodes. (author)
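
    The priority-driven, state-based election pattern described in this record can be illustrated with a minimal sketch (all names are hypothetical; the deployed reactor systems are not public): among the nodes currently reporting a heartbeat, the highest-priority one is elected active, so exactly one master exists even in an n-way redundant configuration.

```python
import time

class Node:
    """A redundant node with a static priority and a heartbeat timestamp."""
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority
        self.last_heartbeat = time.monotonic()

    def alive(self, timeout=2.0):
        # A node is considered live if its heartbeat is recent enough.
        return time.monotonic() - self.last_heartbeat < timeout

def elect_active(nodes, timeout=2.0):
    """Elect exactly one active (master) node: the highest-priority live node."""
    live = [n for n in nodes if n.alive(timeout)]
    if not live:
        return None
    return max(live, key=lambda n: n.priority)

nodes = [Node("A", 3), Node("B", 2), Node("C", 1)]
assert elect_active(nodes).name == "A"
nodes[0].last_heartbeat -= 10   # node A stops sending heartbeats
assert elect_active(nodes).name == "B"  # automatic switchover to the next priority
```

    Because the election is a pure function of (priority, liveness), re-running it after any failure deterministically yields a single new master, which is the property the pattern requires.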

  4. Consuming Web Services on Mobile Platforms

    Directory of Open Access Journals (Sweden)

    Alin COBARZAN

    2010-01-01

    Full Text Available Web services are an emerging technology that provides interoperability between applications running on different platforms. Web services are the best approach to realizing the Service-Oriented Architecture vision of component collaboration for better fulfilment of business requirements in large enterprise systems. The paper delimits the challenges in implementing Web service consumer clients for low-resource mobile devices connected through unreliable wireless connections. It also presents a communication architecture that moves the heavy load of the XML-based messaging system from the mobile clients to an external middleware component. The middleware component acts as a gateway that communicates with the device in a client-server manner over a fast binary protocol and at the same time takes on the responsibility of resolving the request against the Web service.
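
    The gateway idea described above can be sketched as follows. The compact binary wire format (2-byte method id, 2-byte length, UTF-8 payload) and the method table are invented for illustration, not taken from the paper; the point is that the mobile client sends a few bytes and the gateway does the verbose SOAP/XML work.

```python
import struct
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
METHODS = {1: "GetQuote"}  # hypothetical method table shared with the client

def decode_request(blob):
    """Gateway side: parse the lightweight binary request from the device."""
    method_id, length = struct.unpack("!HH", blob[:4])
    return METHODS[method_id], blob[4:4 + length].decode("utf-8")

def to_soap(method, arg, ns="http://example.org/svc"):
    """Expand the binary request into a full SOAP 1.1 envelope."""
    env = ET.Element("{%s}Envelope" % SOAP_NS)
    body = ET.SubElement(env, "{%s}Body" % SOAP_NS)
    call = ET.SubElement(body, "{%s}%s" % (ns, method))
    ET.SubElement(call, "{%s}symbol" % ns).text = arg
    return ET.tostring(env, encoding="unicode")

# A client request of 8 bytes becomes a multi-hundred-byte XML message
# only on the gateway, never on the constrained device.
blob = struct.pack("!HH", 1, 4) + b"ACME"
method, arg = decode_request(blob)
envelope = to_soap(method, arg)
assert "GetQuote" in envelope and "ACME" in envelope
```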

  5. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    Science.gov (United States)

    Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.

    2014-06-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

  6. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    International Nuclear Information System (INIS)

    Toor, S; Eerola, P; Kraemer, O; Lindén, T; Osmani, L; Tarkoma, S; White, J

    2014-01-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

  7. Intraspecific competition and high food availability are associated with insular gigantism in a lizard.

    Science.gov (United States)

    Pafilis, Panayiotis; Meiri, Shai; Foufopoulos, Johannes; Valakos, Efstratios

    2009-09-01

    Resource availability, competition, and predation commonly drive body size evolution. We assess the impact of high food availability and the consequent increased intraspecific competition, as expressed by tail injuries and cannibalism, on body size in Skyros wall lizards (Podarcis gaigeae). Lizard populations on islets surrounding Skyros (Aegean Sea) all have fewer predators and competitors than on Skyros but differ in the numbers of nesting seabirds. We predicted the following: (1) the presence of breeding seabirds (providing nutrients) will increase lizard population densities; (2) dense lizard populations will experience stronger intraspecific competition; and (3) such aggression will be associated with larger average body size. We found a positive correlation between seabird and lizard densities. Cannibalism and tail injuries were considerably higher in dense populations. Increases in cannibalism and tail loss were associated with large body sizes. Adult cannibalism on juveniles may select for rapid growth, fuelled by high food abundance, thus setting the stage for the evolution of gigantism.

  8. Reliability and availability of high power proton accelerators

    International Nuclear Information System (INIS)

    Cho, Y.

    1999-01-01

    It has become increasingly important to address the issues of operational reliability and availability of an accelerator complex early in its design and construction phases. In this context, reliability addresses the mean time between failures and the failure rate, and availability takes into account the failure rate as well as the length of time required to repair the failure. Methods to reduce failure rates include reduction of the number of components and over-design of certain key components. Reduction of the on-line repair time can be achieved by judiciously designed hardware, quick-service spare systems and redundancy. In addition, provisions for easy inspection and maintainability are important for both reduction of the failure rate as well as reduction of the time to repair. The radiation safety exposure principle of ALARA (as low as reasonably achievable) is easier to comply with when easy inspection capability and easy maintainability are incorporated into the design. Discussions of past experience in improving accelerator availability, some recent developments, and potential R and D items are presented. (author)
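
    The distinction this record draws between reliability (failure rate, MTBF) and availability (which also accounts for repair time, MTTR) is captured by the standard steady-state relation A = MTBF / (MTBF + MTTR). A small sketch with illustrative numbers (not taken from the record) shows why reducing on-line repair time helps even when the failure rate is unchanged:

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability from mean time between failures (MTBF)
    and mean time to repair (MTTR): A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative numbers: a subsystem failing every 1000 h and taking
# 2 h to repair is available 1000/1002 of the time (~99.8%).
a = availability(1000.0, 2.0)
assert abs(a - 1000.0 / 1002.0) < 1e-12

# Halving the repair time (quick-service spares, easy maintainability)
# improves availability without touching the failure rate at all.
assert availability(1000.0, 1.0) > a
```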

  9. Multi-application inter-tile synchronization on ultra-high-resolution display walls

    KAUST Repository

    Nam, Sungwon

    2010-01-01

    Ultra-high-resolution tiled-display walls are typically driven by a cluster of computers. Each computer may drive one or more displays. Synchronization between the computers is necessary to ensure that animated imagery displayed on the wall appears seamless. Most tiled-display middleware systems are designed around the assumption that only a single application instance is running in the tiled display at a time. Therefore synchronization can be achieved with a simple solution such as a networked barrier. When a tiled display has to support multiple applications at the same time, however, the simple networked barrier approach does not scale. In this paper we propose and experimentally validate two synchronization algorithms to achieve low-latency, inter-tile synchronization for multiple applications with independently varying frame rates. The two-phase algorithm is more generally applicable to various high-resolution tiled display systems. The one-phase algorithm provides superior results but requires support for the Network Time Protocol and is more CPU-intensive. Copyright 2010 ACM.
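
    The single-application baseline the paper starts from, a networked barrier, can be sketched locally with a threading barrier standing in for the network round-trip (the paper's multi-application one- and two-phase algorithms are more involved): no tile swaps to frame N+1 until every tile has finished frame N.

```python
import threading

NUM_TILES = 4
frame_log = []  # records one entry per synchronized buffer swap

def on_frame_ready():
    """Barrier action: runs exactly once per frame, after all tiles arrive."""
    frame_log.append("swap")

# One barrier per display wall; every tile waits here before swapping
# buffers, so no tile can run ahead of its neighbours.
barrier = threading.Barrier(NUM_TILES, action=on_frame_ready)

def tile_renderer(tile_id, frames):
    for f in range(frames):
        # ... render frame f for this tile ...
        barrier.wait()  # block until all tiles have finished frame f

threads = [threading.Thread(target=tile_renderer, args=(i, 3))
           for i in range(NUM_TILES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert frame_log == ["swap", "swap", "swap"]  # one synchronized swap per frame
```

    With several applications sharing the wall at different frame rates, a single such barrier would force everyone to the slowest frame rate, which is exactly the scaling problem the paper's algorithms address.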

  10. Automation of Space Inventory Management

    Science.gov (United States)

    Fink, Patrick W.; Ngo, Phong; Wagner, Raymond; Barton, Richard; Gifford, Kevin

    2009-01-01

    This viewgraph presentation describes the utilization of automated space-based inventory management through handheld RFID readers and BioNet Middleware. The contents include: 1) Space-Based Inventory Management; 2) Real-Time RFID Location and Tracking; 3) Surface Acoustic Wave (SAW) RFID; and 4) BioNet Middleware.

  11. Why are common quality and development policies needed?

    CERN Document Server

    Alandes, M; Guerrero, P

    2012-01-01

    The EMI project is based on the collaboration of four major middleware projects in Europe, all already developing middleware products and having their pre-existing strategies for developing, releasing and controlling their software artefacts. In total, the EMI project is made up of about thirty individual development teams, called “Product Teams” in EMI. A Product Team is responsible for the entire lifecycle of specific products or small groups of tightly coupled products, including the development of test suites to be peer reviewed within the overall certification process. The Quality Assurance in EMI (European Middleware Initiative), as requested by the grid infrastructures and the EU funding agency, must support the teams in providing uniform releases and interoperable middleware distributions, with a common degree of verification and validation of the software and with metrics and objective criteria to compare product quality and evolution over time. In order to achieve these goals, the QA team in EMI...

  12. AF-TRUST, Air Force Team for Research in Ubiquitous Secure Technology

    Science.gov (United States)

    2010-07-26

    high churn without affecting the ability of correct members to disseminate messages. • Firepatch is a Fireflies-based intrusion-tolerant... Telecommunications, and Internet industries. In particular, the work on CIAO and TAO middleware for DRE systems has transitioned to the Joint

  13. Application development for DTT with GINGA-J

    Directory of Open Access Journals (Sweden)

    D. Alulema

    2012-11-01

    Full Text Available This article introduces the events that led Ecuador to adopt the ISDB-Tb standard for digital terrestrial television, analyzes the Ginga middleware, and especially Ginga-J, for developing interactive applications, and finally presents an application developed using NetBeans.

  14. Towards Quantifying Programmable Logic Controller Resilience Against Intentional Exploits

    Science.gov (United States)

    2012-03-22

    may improve the SCADA system’s resilience against DoS and man-in-the-middle (MITM) attacks. DoS attacks may be mitigated by using the redundant...paths available on the network links. MITM attacks may be mitigated by the data integrity checks associated with the middleware. Figure 4 illustrates

  15. PostgreSQL 9 high availability cookbook

    CERN Document Server

    Thomas, Shaun M

    2014-01-01

    A comprehensive series of dependable recipes to design, build, and implement a PostgreSQL server architecture free of common pitfalls that can operate for years to come. Each chapter is packed with instructions and examples to simplify even highly complex database operations. If you are a PostgreSQL DBA working on Linux systems who wants a database that never gives up, this book is for you. If you've ever experienced a database outage, restored from a backup, spent hours trying to repair a malfunctioning cluster, or simply want to guarantee system stability, this book is definitely for you.

  16. A transfer technique for high mobility graphene devices on commercially available hexagonal boron nitride

    NARCIS (Netherlands)

    Zomer, P. J.; Dash, S. P.; Tombros, N.; van Wees, B. J.

    2011-01-01

    We present electronic transport measurements of single and bilayer graphene on commercially available hexagonal boron nitride. We extract mobilities as high as 125 000 cm² V⁻¹ s⁻¹ at room temperature and 275 000 cm² V⁻¹ s⁻¹ at 4.2 K. The excellent quality is supported by the early

  17. [Web and Automotive] shift into high gear on the Web - Intecs and ISTI-CNR position paper

    OpenAIRE

    Mambrini, Raffaella; Cordiviola, Elena; Magrini, Massimo; Martinelli, Massimo; Moroni, Davide; Pieri, Gabriele; Salvetti, Ovidio

    2012-01-01

    Intecs is a large, privately held Italian enterprise that operates in the following domains: aerospace, defense, transportation, and telecommunications. One of the most relevant activities for Intecs is the design and development of smart systems, based on the M2M paradigm, from the sensors to the application software passing through the communication middleware, applied to the automotive and mobility markets. Intecs is active both in industrial partnerships and in national and international R&D pr...

  18. Running a reliable messaging infrastructure for CERN's control system

    International Nuclear Information System (INIS)

    Ehm, F.

    2012-01-01

    The current middleware for CERN's Control System is based on two implementations: the CORBA-based Controls Middle-Ware (CMW) and the Java Message Service (JMS). The JMS service is realized using the open-source messaging product ActiveMQ and has become an increasingly vital part of beam operations, as data need to be transported reliably for various areas such as the beam protection system, post-mortem analysis, beam commissioning and the alarm system. The current JMS service is made up of 18 brokers running either in clusters or as single nodes. The main service is deployed as a two-node cluster providing fail-over and load-balancing capabilities for high availability. Non-critical applications running on virtual machines or desktop machines read data via a third broker to decouple their load from the operational main cluster. This scenario was introduced last year and the statistics showed an up-time of 99.998% and an average data-serving rate of 1.6 GByte per minute, represented by around 150 messages per second. Deploying, running, maintaining and protecting such a messaging infrastructure is not trivial and includes setting up careful monitoring and failure pre-recognition. Naturally, lessons have been learnt and their outcome is very important for the current and future operation of such a service. (author)
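
    The client-visible effect of a two-node fail-over cluster can be approximated by trying a list of broker endpoints in order and connecting to the first one that answers. The sketch below is pure logic, not real ActiveMQ code (the endpoint URLs and the injected `connect` callable are hypothetical stand-ins for a messaging client's failover transport):

```python
class BrokerUnavailable(Exception):
    pass

def connect_with_failover(endpoints, connect):
    """Try each broker endpoint in turn; return the first live connection.

    `connect` is injected so the failover logic is testable without a
    real broker; a production client would open a TCP/JMS connection here.
    """
    errors = []
    for url in endpoints:
        try:
            return connect(url)
        except BrokerUnavailable as exc:
            errors.append((url, exc))
    raise BrokerUnavailable("all brokers down: %r" % errors)

def fake_connect(url):
    # Simulate the primary node being down and the secondary taking over.
    if "broker1" in url:
        raise BrokerUnavailable("primary down")
    return "connected:" + url

conn = connect_with_failover(
    ["tcp://broker1:61616", "tcp://broker2:61616"], fake_connect)
assert conn == "connected:tcp://broker2:61616"
```

    The cluster's load balancing is the mirror image of this: clients start from different positions in the endpoint list, so both nodes serve traffic while either one alone can carry the full load.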

  19. ASPIRE: Added-value Sensing

    DEFF Research Database (Denmark)

    Anggorojati, Bayu; Cetin, Kamil; Mihovska, Albena D.

    2010-01-01

    and privacy friendly RFID middleware. Advances in active RFID integration with WSNs allow for more RFID-based applications to be developed. In order to fill the gap between the active RFID system and the existing middleware, a HAL for active reader and ALE server extension to support sensing data from active...

  20. A Globally Distributed System for Job, Data, and Information Handling for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Garzoglio, Gabriele [DePaul Univ., Chicago, IL (United States)

    2006-01-13

    The computing infrastructures of the modern high energy physics experiments need to address an unprecedented set of requirements. The collaborations consist of hundreds of members from dozens of institutions around the world and the computing power necessary to analyze the data produced surpasses already the capabilities of any single computing center. A software infrastructure capable of seamlessly integrating dozens of computing centers around the world, enabling computing for a large and dynamical group of users, is of fundamental importance for the production of scientific results. Such a computing infrastructure is called a computational grid. The SAM-Grid offers a solution to these problems for CDF and DZero, two of the largest high energy physics experiments in the world, running at Fermilab. The SAM-Grid integrates standard grid middleware, such as Condor-G and the Globus Toolkit, with software developed at Fermilab, organizing the system in three major components: data handling, job handling, and information management. This dissertation presents the challenges and the solutions provided in such a computing infrastructure.

  1. Recent ARC developments: Through modularity to interoperability

    International Nuclear Information System (INIS)

    Smirnova, O; Cameron, D; Ellert, M; Groenager, M; Johansson, D; Kleist, J; Dobe, P; Joenemo, J; Konya, B; Fraagaat, T; Konstantinov, A; Nilsen, J K; Saada, F Ould; Qiang, W; Read, A; Kocan, M; Marton, I; Nagy, Zs; Moeller, S; Mohn, B

    2010-01-01

    The Advanced Resource Connector (ARC) middleware introduced by NorduGrid is one of the basic Grid solutions used by scientists worldwide. While being well-proven in daily use by a wide variety of scientific applications at large-scale infrastructures like the Nordic DataGrid Facility (NDGF) and smaller scale projects, production ARC of today is still largely based on conventional Grid technologies and custom interfaces introduced a decade ago. In order to guarantee sustainability, true cross-system portability and standards-compliance based interoperability, the ARC community undertakes a massive effort of implementing modular Web Service (WS) approach into the middleware. With support from the EU KnowARC project, new components were introduced and the existing key ARC services got extended with WS technology based standard-compliant interfaces following a service-oriented architecture. Such components include the hosting environment framework, the resource-coupled execution service, the re-engineered client library, the self-healing storage solution and the peer-to-peer information system, to name a few. Gradual introduction of these new services and client tools into the production middleware releases is carried out together with NDGF and thus ensures a smooth transition to the next generation Grid middleware. Standard interfaces and modularity of the new component design are essential for ARC contributions to the planned Universal Middleware Distribution of the European Grid Initiative.

  2. Recent ARC developments: Through modularity to interoperability

    Energy Technology Data Exchange (ETDEWEB)

    Smirnova, O; Cameron, D; Ellert, M; Groenager, M; Johansson, D; Kleist, J [NDGF, Kastruplundsgade 22, DK-2770 Kastrup (Denmark); Dobe, P; Joenemo, J; Konya, B [Lund University, Experimental High Energy Physics, Institute of Physics, Box 118, SE-22100 Lund (Sweden); Fraagaat, T; Konstantinov, A; Nilsen, J K; Saada, F Ould; Qiang, W; Read, A [University of Oslo, Department of Physics, P. O. Box 1048, Blindern, N-0316 Oslo (Norway); Kocan, M [Pavol Jozef Safarik University, Faculty of Science, Jesenna 5, SK-04000 Kosice (Slovakia); Marton, I; Nagy, Zs [NIIF/HUNGARNET, Victor Hugo 18-22, H-1132 Budapest (Hungary); Moeller, S [University of Luebeck, Inst. Of Neuro- and Bioinformatics, Ratzeburger Allee 160, D-23538 Luebeck (Germany); Mohn, B, E-mail: oxana.smirnova@hep.lu.s [Uppsala University, Department of Physics and Astronomy, Div. of Nuclear and Particle Physics, Box 535, SE-75121 Uppsala (Sweden)

    2010-04-01

    The Advanced Resource Connector (ARC) middleware introduced by NorduGrid is one of the basic Grid solutions used by scientists worldwide. While being well-proven in daily use by a wide variety of scientific applications at large-scale infrastructures like the Nordic DataGrid Facility (NDGF) and smaller scale projects, production ARC of today is still largely based on conventional Grid technologies and custom interfaces introduced a decade ago. In order to guarantee sustainability, true cross-system portability and standards-compliance based interoperability, the ARC community undertakes a massive effort of implementing modular Web Service (WS) approach into the middleware. With support from the EU KnowARC project, new components were introduced and the existing key ARC services got extended with WS technology based standard-compliant interfaces following a service-oriented architecture. Such components include the hosting environment framework, the resource-coupled execution service, the re-engineered client library, the self-healing storage solution and the peer-to-peer information system, to name a few. Gradual introduction of these new services and client tools into the production middleware releases is carried out together with NDGF and thus ensures a smooth transition to the next generation Grid middleware. Standard interfaces and modularity of the new component design are essential for ARC contributions to the planned Universal Middleware Distribution of the European Grid Initiative.

  3. GMB: An Efficient Query Processor for Biological Data

    Directory of Open Access Journals (Sweden)

    Taha Kamal

    2011-06-01

    Full Text Available Bioinformatics applications manage complex biological data stored in distributed and often heterogeneous databases and require large computing power. These databases are too big and complicated to be rapidly queried every time a user submits a query, due to the overhead involved in decomposing the queries, sending the decomposed queries to remote databases, and composing the results. There are also considerable communication costs involved. This study addresses these problems in a Grid-based environment for bioinformatics. We propose a Grid middleware called GMB that alleviates them by caching the results of Frequently Used Queries (FUQ. Queries are classified based on their types and frequencies. FUQ are answered from the middleware, which improves their response time. GMB acts as a gateway to the TeraGrid: it resides between users’ applications and the TeraGrid. We evaluate GMB experimentally.
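
    The FUQ caching idea can be sketched as a thin middleware layer that counts how often each query arrives and, once a query crosses a frequency threshold, serves it from a local cache instead of the remote backend. The class and threshold below are illustrative assumptions, not GMB's actual design:

```python
from collections import Counter

class FUQCache:
    """Caches results of Frequently Used Queries (FUQ) in the middleware.

    A query becomes cacheable once it has been seen `threshold` times;
    colder queries always go to the (expensive, remote) backend.
    """
    def __init__(self, backend, threshold=3):
        self.backend = backend          # callable: query -> result
        self.threshold = threshold
        self.counts = Counter()
        self.cache = {}

    def query(self, q):
        self.counts[q] += 1
        if q in self.cache:
            return self.cache[q]        # answered from the middleware
        result = self.backend(q)        # decompose/forward/compose step
        if self.counts[q] >= self.threshold:
            self.cache[q] = result      # promote to FUQ
        return result

calls = []
def backend(q):
    calls.append(q)                     # track expensive backend hits
    return "result-for-" + q

gmb = FUQCache(backend, threshold=2)
for _ in range(5):
    gmb.query("SELECT gene FROM genome")
assert len(calls) == 2  # backend hit twice, then served from the cache
```

    A real system would also need cache invalidation when the underlying databases change; the sketch deliberately omits that.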

  4. Certification of production-quality gLite Job Management components

    International Nuclear Information System (INIS)

    Andreetto, P; Bertocco, S; Dorigo, A; Frizziero, E; Gianelle, A; Sgaravatto, M; Zangrando, L; Capannini, F; Cecchi, M; Giacomini, F; Mezzadri, M; Molinari, E; Prelz, F; Rebatto, D; Monforte, S

    2011-01-01

    With the advent of the recent European Union (EU) funded projects aimed at achieving an open, coordinated and proactive collaboration among the European communities that provide distributed computing services, stricter requirements and quality standards will be demanded of middleware providers. Such a highly competitive and dynamic environment, organized to comply with a business-oriented model, has already started pursuing quality criteria, requiring rigorous procedures, interfaces and roles to be formally defined for each step of the software life-cycle. This will ensure quality-certified releases and updates of the Grid middleware. In the European Middleware Initiative (EMI), the release management for one or more components will be organized into Product Team (PT) units, fully responsible for delivering production-ready, quality-certified software and for coordinating with each other to contribute to the EMI release as a whole. This paper presents the certification process, with respect to integration, installation, configuration and testing, adopted at INFN by the Product Team responsible for the gLite Web-Service based Computing Element (CREAM CE) and for the Workload Management System (WMS). The resources used, the testbed layout, the integration and deployment methods, and the certification steps taken to provide feedback to developers and to guarantee quality results are described.

  5. Certification of production-quality gLite Job Management components

    Science.gov (United States)

    Andreetto, P.; Bertocco, S.; Capannini, F.; Cecchi, M.; Dorigo, A.; Frizziero, E.; Giacomini, F.; Gianelle, A.; Mezzadri, M.; Molinari, E.; Monforte, S.; Prelz, F.; Rebatto, D.; Sgaravatto, M.; Zangrando, L.

    2011-12-01

    With the advent of the recent European Union (EU) funded projects aimed at achieving an open, coordinated and proactive collaboration among the European communities that provide distributed computing services, stricter requirements and quality standards will be demanded of middleware providers. Such a highly competitive and dynamic environment, organized to comply with a business-oriented model, has already started pursuing quality criteria, requiring rigorous procedures, interfaces and roles to be formally defined for each step of the software life-cycle. This will ensure quality-certified releases and updates of the Grid middleware. In the European Middleware Initiative (EMI), the release management for one or more components will be organized into Product Team (PT) units, fully responsible for delivering production-ready, quality-certified software and for coordinating with each other to contribute to the EMI release as a whole. This paper presents the certification process, with respect to integration, installation, configuration and testing, adopted at INFN by the Product Team responsible for the gLite Web-Service based Computing Element (CREAM CE) and for the Workload Management System (WMS). The resources used, the testbed layout, the integration and deployment methods, and the certification steps taken to provide feedback to developers and to guarantee quality results are described.

  6. Development of high-availability ATCA/PCIe data acquisition instrumentation

    Energy Technology Data Exchange (ETDEWEB)

    Correia, Miguel; Sousa, Jorge; Batista, Antonio J.N.; Combo, Alvaro; Santos, Bruno; Rodrigues, Antonio P.; Carvalho, Paulo F.; Goncalves, Bruno [Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Universidade de Lisboa, 1049- 001 Lisboa (Portugal); Correia, Carlos M.B.A. [Centro de Instrumentacao, Dept. de Fisica, Universidade de Coimbra, 3004-516 Coimbra (Portugal)

    2015-07-01

    Latest Fusion energy experiments envision a quasi-continuous operation regime. In consequence, the largest experimental devices, currently in development, specify high-availability (HA) requirements for the whole plant infrastructure. HA features enable the whole facility to perform seamlessly in the case of failure of any of its components, coping with the increasing duration of plasma discharges (steady-state) and assuring safety of equipment, people, environment and investment. IPFN developed a control and data acquisition system, aiming for fast control of advanced Fusion devices, which is thus required to provide such HA features. The system is based on in-house developed Advanced Telecommunication Computing Architecture (ATCA) instrumentation modules - IO blades and data switch blades, establishing a PCIe network on the ATCA shelf's back-plane. The data switch communicates to an external host computer through a PCIe data network. At the hardware management level, the system architecture takes advantage of ATCA native redundancy and hot swap specifications to implement fail-over substitution of IO or data switch blades. A redundant host scheme is also supported by the ATCA/PCIe platform. At the software level, PCIe provides implementation of hot plug services, which translate the hardware changes to the corresponding software/operating system devices. The paper presents how the ATCA and PCIe based system can be setup to perform with the desired degree of HA, thus being suitable for advanced Fusion control and data acquisition systems. (authors)

  7. Development of high-availability ATCA/PCIe data acquisition instrumentation

    International Nuclear Information System (INIS)

    Correia, Miguel; Sousa, Jorge; Batista, Antonio J.N.; Combo, Alvaro; Santos, Bruno; Rodrigues, Antonio P.; Carvalho, Paulo F.; Goncalves, Bruno; Correia, Carlos M.B.A.

    2015-01-01

    Latest Fusion energy experiments envision a quasi-continuous operation regime. In consequence, the largest experimental devices, currently in development, specify high-availability (HA) requirements for the whole plant infrastructure. HA features enable the whole facility to perform seamlessly in the case of failure of any of its components, coping with the increasing duration of plasma discharges (steady-state) and assuring safety of equipment, people, environment and investment. IPFN developed a control and data acquisition system, aiming for fast control of advanced Fusion devices, which is thus required to provide such HA features. The system is based on in-house developed Advanced Telecommunication Computing Architecture (ATCA) instrumentation modules - IO blades and data switch blades, establishing a PCIe network on the ATCA shelf's back-plane. The data switch communicates to an external host computer through a PCIe data network. At the hardware management level, the system architecture takes advantage of ATCA native redundancy and hot swap specifications to implement fail-over substitution of IO or data switch blades. A redundant host scheme is also supported by the ATCA/PCIe platform. At the software level, PCIe provides implementation of hot plug services, which translate the hardware changes to the corresponding software/operating system devices. The paper presents how the ATCA and PCIe based system can be setup to perform with the desired degree of HA, thus being suitable for advanced Fusion control and data acquisition systems. (authors)

  8. Ten years of European Grids: What have we learnt?

    International Nuclear Information System (INIS)

    Burke, Stephen

    2011-01-01

    The European DataGrid project started in 2001, and was followed by the three phases of EGEE and the recent transition to EGI. This paper discusses the history of both middleware development and Grid operations in these projects, and in particular the impact on the development of the LHC Computing Grid. It considers to what extent the initial ambitions have been realised, which aspects have been successful and what lessons can be derived from the things which were less so, both in technical and sociological terms. In particular it considers the middleware technologies used for data management, workload management, information systems and security, and the difficulties of operating a highly distributed worldwide production infrastructure, drawing on practical experience with many aspects of the various Grid projects over the last decade.

  9. EMI Security Architecture

    CERN Document Server

    White, J.; Schuller, B.; Qiang, W.; Groep, D.; Koeroo, O.; Salle, M.; Sustr, Z.; Kouril, D.; Millar, P.; Benedyczak, K.; Ceccanti, A.; Leinen, S.; Tschopp, V.; Fuhrmann, P.; Heyman, E.; Konstantinov, A.

    2013-01-01

    This document describes the various architectures of the three middlewares that comprise the EMI software stack. It also outlines the common efforts in the security area that allow interoperability between these middlewares. The assessment of the EMI Security presented in this document was performed internally by members of the Security Area of the EMI project.

  10. Advantages of pre-production WLCG/EGEE services for VOs and users

    CERN Document Server

    Thackray, N; Fernandez Sanchez, C; Freire Garcia, E; Simon Garcia, A; Lopez Cacheiro, J

    2008-01-01

    The benefits that the Pre-Production Service (PPS) can offer to VOs and new users will be shown. Current PPS processes will be described so that they gain a better understanding of the PPS environment. The PPS offers a suitable place where VOs can test their applications and check their compatibility with upcoming middleware releases. The PPS could help VOs improve their applications and, at the same time, VOs would be helping the PPS to produce better-tested middleware releases that go into Production (PROD). Currently a well-defined set of procedures is followed in the PPS, beginning with the pre-deployment activity and finishing with the approval of a new middleware release that can go into PROD. The PPS Coordination Team supervises all these steps, mainly trying to spot possible bugs in the middleware before it goes into PROD. Unfortunately, VO contribution to this part of the deployment is currently very limited and the PPS team does not get full feedback from these important groups. One of the problems i...

  11. CMS Results of Grid-related activities using the early deployed LCG Implementations

    CERN Document Server

    Coviello, Tommaso; De Filippis, Nicola; Donvito, Giacinto; Maggi, Giorgio; Pierro, A; Bonacorsi, Daniele; Capiluppi, Paolo; Fanfani, Alessandra; Grandi, Claudio; Maroney, Owen; Nebrensky, H; Donno, Flavia; Jank, Werner; Sciabà, Andrea; Sinanis, Nick; Colling, David; Tallini, Hugh; MacEvoy, Barry C; Wang, Shaowen; Kaiser, Joseph; Osman, Asif; Charlot, Claude; Semenjouk, I; Biasotto, Massimo; Fantinel, Sergio; Corvo, Marco; Fanzago, Federica; Mazzucato, Mirco; Verlato, Marco; Go, Apollo; Khan Chia Ming; Andreozzi, S; Cavalli, A; Ciaschini, V; Ghiselli, A; Italiano, A; Spataro, F; Vistoli, C; Tortone, G

    2004-01-01

    The CMS Experiment is defining its Computing Model and is experimenting with and testing the new distributed features offered by many Grid projects. This report describes the use by CMS of the early-deployed LCG systems (LCG-0 and LCG-1). Most of the features discussed here came from the EU-implemented middleware, even if some of the tested capabilities were in common with the US-developed middleware. This report describes the simulation of about 2 million CMS detector events, which were generated as part of the official CMS Data Challenge 04 (Pre-Challenge Production). The simulations were done on a CMS-dedicated testbed (CMS-LCG-0), where an ad-hoc modified version of the LCG-0 middleware was deployed and where the CMS Experiment had complete control, and on the official early LCG delivered system (with the LCG-1 version). Modifications to the CMS simulation tools for event production were studied and achieved, together with the necessary adaptations of the middleware services. Bilateral feedback (betwee...

  12. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available 10+ PB of major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated, reusable software and workflows from earth system and ecosystem modelling, weather research, and satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of available data and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  13. A Sleep-Awake Scheme Based on CoAP for Energy-Efficiency in Internet of Things

    Directory of Open Access Journals (Sweden)

    Wenquan Jin

    2017-11-01

    Full Text Available The Internet Engineering Task Force (IETF) has developed the Constrained Application Protocol (CoAP) to enable communication between sensor or actuator nodes in constrained environments, such as those with small amounts of memory and low power. IETF CoAP and HTTP are used to monitor or control environments in the Internet of Things (IoT) and Machine-to-Machine (M2M) applications. In this paper, we present a sleep-awake scheme based on CoAP for energy efficiency in the Internet of Things. This scheme increases the energy efficiency of IoT nodes using the CoAP protocol. We have slightly modified the IoT middleware, improving the CoAP protocol to conserve energy in the IoT nodes. The IoT middleware also includes some functionality of the CoRE Resource Directory (RD) and the Message Queue (MQ) broker to synchronize the sleepy status with the IoT nodes.
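The core idea of such a sleep-awake scheme can be sketched in a few lines: a broker buffers requests for a sleeping node, and the node drains its queue on wake-up so its radio can stay off in between. This is a hypothetical illustration, not the CoAP or CoRE RD API; all class and method names are assumptions.

```python
from collections import deque

# Illustrative sketch of the buffering pattern behind a sleep-awake scheme:
# the MQ broker holds requests addressed to a sleeping node; on wake-up the
# node drains its queue, answers, and returns to sleep.

class MQBroker:
    def __init__(self):
        self.queues = {}

    def publish(self, node_id, msg):
        """Buffer a request for a (possibly sleeping) node."""
        self.queues.setdefault(node_id, deque()).append(msg)

    def drain(self, node_id):
        """Hand all buffered requests to the node once it wakes."""
        q = self.queues.get(node_id, deque())
        msgs = list(q)
        q.clear()
        return msgs


class SleepyNode:
    def __init__(self, node_id, broker):
        self.node_id = node_id
        self.broker = broker

    def wake_and_serve(self):
        """Wake up, answer every buffered request, then go back to sleep."""
        return [f"ack:{m}" for m in self.broker.drain(self.node_id)]
```

The energy saving comes from the node only powering its transceiver during the short drain phase rather than listening continuously.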

  14. Middleware Technologies for Cloud of Things - a survey

    OpenAIRE

    Farahzadia, Amirhossein; Shams, Pooyan; Rezazadeh, Javad; Farahbakhsh, Reza

    2017-01-01

    The next wave of communication and applications relies on the new services provided by the Internet of Things, which is becoming an important aspect of the future of humans and machines. IoT services are a key solution for providing smart environments in homes, buildings and cities. In the era of a massive number of connected things and objects with a high growth rate, several challenges have been raised, such as the management, aggregation and storage of the big data produced. In order to tackle some of these is...

  15. Availability and Quality of Family Planning Services in the Democratic Republic of the Congo: High Potential for Improvement.

    Science.gov (United States)

    Mpunga, Dieudonné; Lumbayi, J P; Dikamba, Nelly; Mwembo, Albert; Ali Mapatano, Mala; Wembodinga, Gilbert

    2017-06-27

    To determine the availability and quality of family planning services within health facilities throughout the Democratic Republic of the Congo (DRC). Data were collected for the cross-sectional study from April 2014 to June 2014 by the Ministry of Public Health. A total of 1,568 health facilities that reported data to the National Health Information System were selected by multistage random sampling in the 11 provinces of the DRC existing at that time. Data were collected through interviews, document review, and direct observation. Two dependent variables were measured: availability of family planning services (consisting of a room for services, staff assigned to family planning, and evidence of client use of family planning) and quality of family planning services (assessed as "high" if the facility had at least 1 trained staff member, family planning service delivery guidelines, at least 3 types of methods, and a sphygmomanometer, or "low" if the facility did not meet any of these 4 criteria). Pearson's chi-square test and odds ratios (ORs) were used to test for significant associations, using the alpha significance level of .05. We successfully surveyed 1,555 facilities (99.2% of those included in the sample). One in every 3 facilities (33%) offered family planning services as assessed by the index of availability, of which 20% met all 4 criteria for providing high-quality services. Availability was greatest at the highest level of the health system (hospitals) and decreased incrementally with each health system level, with disparities between provinces and between urban and rural areas. Facilities in urban areas were more likely than those in rural areas to meet the standard for high-quality services. Public facilities were less likely than private facilities to have high-quality services ( P =.02). Among all 1,555 facilities surveyed, 14% had at least 3 types of methods available at the time of the survey; the most widely available methods were male condoms, combined oral

  16. Cassandra high availability

    CERN Document Server

    Strickland, Robbie

    2014-01-01

    If you are a developer or DevOps engineer who understands the basics of Cassandra and are ready to take your knowledge to the next level, then this book is for you. An understanding of the essentials of Cassandra is needed.

  17. Important aspects for consideration in minimizing plant outage times. Swiss experience in achieving high availability

    International Nuclear Information System (INIS)

    Malcotsis, G.

    1984-01-01

    Operation of Swiss nuclear power plants has not been entirely free of trouble. They have experienced defective fuel elements, steam generator tube damage, excessive vibration of the core components, leakages in the recirculation pump seals and excessive corrosion and erosion in the steam-feedwater plant. Despite these technical problems in the early life of the plants, on overall balance the plants can be considered to have performed exceedingly well. The safety records from more than 40 reactor-years of operation are excellent and, individually and collectively, the capacity factors obtained are among the highest in the world. The problems mentioned have been solved and the plants continue operation with high availabilities. This success can be attributed to the good practices of the utilities with regard to the choice of special design criteria, plant organization, plant operation and plant maintenance, and also to the pragmatic approach of the licensing authorities and their consultants to quality assurance and quality control. The early technical problems encountered, the corresponding solutions adopted and the factors that contributed towards achieving high availabilities in Swiss nuclear power plants are briefly described. (author)

  18. Energy Management in Industrial Plants

    Directory of Open Access Journals (Sweden)

    Dario Bruneo

    2012-09-01

    Full Text Available The Smart Grid vision imposes a new approach towards energy supply that is more affordable, reliable and sustainable. The core of this new vision is the use of advanced technology to monitor power system dynamics in real time and identify system instability. In order to implement a strategic vision for energy management, it is possible to identify three main areas of investigation: smart generation, smart grid and smart customer. Focusing on the latter topic, in this paper we present an application specifically designed to monitor an industrial site with particular attention to power consumption. This solution is a real-time analysis tool, able to produce useful results for a strategic approach to the energy market and to provide statistical analyses useful for the future choices of the industrial company. The application is based on a three-layer architecture. The technological layer uses a Wireless Sensor Network (WSN) to acquire data from the electrical substations. The middleware layer faces the integration problems by processing the raw data. The application layer manages the data acquired from the sensors. This WSN-based architecture represents an interesting example of a low-cost and non-invasive monitoring application to keep the energy consumption of an industrial site under control. Some of the added-value features of the proposed solution are the routing network protocol, selected in order to ensure high availability of the WSN, and the use of the WhereX middleware, which easily implements integration among the different architectural parts.
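The middleware layer's role in a three-layer architecture like the one above is essentially to turn raw per-substation samples into aggregates the application layer can use. A minimal sketch, assuming tuple-shaped readings and statistic names chosen for illustration:

```python
from statistics import mean

# Sketch of a middleware aggregation step: raw (substation, watts) samples
# from the sensor layer become per-substation statistics for the
# application layer. Field names are assumptions for this example.

def aggregate(samples):
    """samples: iterable of (substation_id, watts) pairs from the sensor layer."""
    by_station = {}
    for station, watts in samples:
        by_station.setdefault(station, []).append(watts)
    return {s: {"mean_w": mean(v), "peak_w": max(v)}
            for s, v in by_station.items()}
```

A real deployment would add timestamping, unit normalisation and fault filtering, but the shape of the processing is the same.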

  19. Central noradrenaline transporter availability in highly obese, non-depressed individuals

    Energy Technology Data Exchange (ETDEWEB)

    Hesse, Swen; Sabri, Osama [University of Leipzig, Department of Nuclear Medicine, Leipzig (Germany); Leipzig University Medical Centre, Integrated Treatment and Research Centre (IFB) Adiposity Diseases, Leipzig (Germany); Becker, Georg-Alexander; Bresch, Anke; Luthardt, Julia; Patt, Marianne; Meyer, Philipp M. [University of Leipzig, Department of Nuclear Medicine, Leipzig (Germany); Rullmann, Michael [University of Leipzig, Department of Nuclear Medicine, Leipzig (Germany); Leipzig University Medical Centre, Integrated Treatment and Research Centre (IFB) Adiposity Diseases, Leipzig (Germany); Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig (Germany); Hankir, Mohammed K.; Zientek, Franziska; Reissig, Georg; Fenske, Wiebke K. [Leipzig University Medical Centre, Integrated Treatment and Research Centre (IFB) Adiposity Diseases, Leipzig (Germany); Arelin, Katrin [Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig (Germany); University of Leipzig, Day Clinic for Cognitive Neurology, Leipzig (Germany); Lobsien, Donald [University of Leipzig, Department of Neuroradiology, Leipzig (Germany); Mueller, Ulrich [University of Cambridge, Department of Psychiatry and Behavioural and Clinical Neuroscience Institute, Cambridge (United Kingdom); Baldofski, S.; Hilbert, Anja [Leipzig University Medical Centre, Integrated Treatment and Research Centre (IFB) Adiposity Diseases, Leipzig (Germany); University of Leipzig, Department of Medical Psychology and Medical Sociology, Leipzig (Germany); Blueher, Matthias [University of Leipzig, Department of Internal Medicine, Leipzig (Germany); Fasshauer, Mathias; Stumvoll, Michael [Leipzig University Medical Centre, Integrated Treatment and Research Centre (IFB) Adiposity Diseases, Leipzig (Germany); University of Leipzig, Department of Internal Medicine, Leipzig (Germany); Ding, Yu-Shin [New York University School of Medicine, Departments of Radiology and Psychiatry, New York, NY (United States)

    2017-06-15

    The brain noradrenaline (NA) system plays an important role in the central nervous control of energy balance and is thus implicated in the pathogenesis of obesity. The specific processes modulated by this neurotransmitter which lead to obesity and overeating are still a matter of debate. We tested the hypothesis that in vivo NA transporter (NAT) availability is changed in obesity by using positron emission tomography (PET) and S,S-[{sup 11}C]O-methylreboxetine (MRB) in twenty subjects comprising ten highly obese (body mass index BMI > 35 kg/m{sup 2}), metabolically healthy, non-depressed individuals and ten non-obese (BMI < 30 kg/m{sup 2}) healthy controls. Overall, we found no significant differences in binding potential (BP{sub ND}) values between obese and non-obese individuals in the investigated brain regions, including the NAT-rich thalamus (0.40 ± 0.14 vs. 0.41 ± 0.18; p = 0.84), though additional discriminant analysis correctly identified individual group affiliation based on regional BP{sub ND} in all but one (control) case. Furthermore, inter-regional correlation analyses indicated different BP{sub ND} patterns between the two groups, but this did not survive testing for multiple comparisons. Our data do not indicate an overall involvement of NAT changes in human obesity. However, preliminary secondary findings of distinct regional and associative patterns warrant further investigation. (orig.)

  20. Central noradrenaline transporter availability in highly obese, non-depressed individuals

    International Nuclear Information System (INIS)

    Hesse, Swen; Sabri, Osama; Becker, Georg-Alexander; Bresch, Anke; Luthardt, Julia; Patt, Marianne; Meyer, Philipp M.; Rullmann, Michael; Hankir, Mohammed K.; Zientek, Franziska; Reissig, Georg; Fenske, Wiebke K.; Arelin, Katrin; Lobsien, Donald; Mueller, Ulrich; Baldofski, S.; Hilbert, Anja; Blueher, Matthias; Fasshauer, Mathias; Stumvoll, Michael; Ding, Yu-Shin

    2017-01-01

    The brain noradrenaline (NA) system plays an important role in the central nervous control of energy balance and is thus implicated in the pathogenesis of obesity. The specific processes modulated by this neurotransmitter which lead to obesity and overeating are still a matter of debate. We tested the hypothesis that in vivo NA transporter (NAT) availability is changed in obesity by using positron emission tomography (PET) and S,S-[¹¹C]O-methylreboxetine (MRB) in twenty subjects comprising ten highly obese (body mass index BMI > 35 kg/m²), metabolically healthy, non-depressed individuals and ten non-obese (BMI < 30 kg/m²) healthy controls. Overall, we found no significant differences in binding potential (BP_ND) values between obese and non-obese individuals in the investigated brain regions, including the NAT-rich thalamus (0.40 ± 0.14 vs. 0.41 ± 0.18; p = 0.84), though additional discriminant analysis correctly identified individual group affiliation based on regional BP_ND in all but one (control) case. Furthermore, inter-regional correlation analyses indicated different BP_ND patterns between the two groups, but this did not survive testing for multiple comparisons. Our data do not indicate an overall involvement of NAT changes in human obesity. However, preliminary secondary findings of distinct regional and associative patterns warrant further investigation. (orig.)

  1. Replica consistency in a Data Grid

    International Nuclear Information System (INIS)

    Domenici, Andrea; Donno, Flavia; Pucciani, Gianni; Stockinger, Heinz; Stockinger, Kurt

    2004-01-01

    A Data Grid is a wide area computing infrastructure that employs Grid technologies to provide storage capacity and processing power to applications that handle very large quantities of data. Data Grids rely on data replication to achieve better performance and reliability by storing copies of data sets on different Grid nodes. When a data set can be modified by applications, the problem of maintaining consistency among existing copies arises. The consistency problem also concerns metadata, i.e., additional information about application data sets such as indices, directories, or catalogues. This kind of metadata is used both by the applications and by the Grid middleware to manage the data. For instance, the Replica Management Service (the Grid middleware component that controls data replication) uses catalogues to find the replicas of each data set. Such catalogues can also be replicated and their consistency is crucial to the correct operation of the Grid. Therefore, metadata consistency generally poses stricter requirements than data consistency. In this paper we report on the development of a Replica Consistency Service based on the middleware mainly developed by the European Data Grid Project. The paper summarises the main issues in the replica consistency problem, and lays out a high-level architectural design for a Replica Consistency Service. Finally, results from simulations of different consistency models are presented
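One of the consistency models such a service can simulate is a master-copy scheme: writes go through a master replica that carries a version number, while secondary replicas are updated lazily. The sketch below is purely illustrative and is not the EDG Replica Consistency Service interface; all names are assumptions.

```python
# Illustrative sketch of a master-copy consistency model: writes bump the
# master's version; secondaries lag until synchronised and can be queried
# for staleness.

class Replica:
    def __init__(self, data, version=0):
        self.data = data
        self.version = version


class ReplicaSet:
    def __init__(self, data, n_secondaries):
        self.master = Replica(data)
        self.secondaries = [Replica(data) for _ in range(n_secondaries)]

    def write(self, data):
        """All updates go through the master copy."""
        self.master.data = data
        self.master.version += 1

    def stale(self):
        """Secondaries whose version lags the master's."""
        return [r for r in self.secondaries if r.version < self.master.version]

    def synchronise(self):
        """Lazy propagation: bring every stale secondary up to date."""
        for r in self.stale():
            r.data, r.version = self.master.data, self.master.version
```

The stricter requirements for metadata catalogues mentioned above amount to keeping the window between `write` and `synchronise` as short as possible for those replicas.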

  2. Data Management as a Cluster Middleware Centerpiece

    Science.gov (United States)

    Zero, Jose; McNab, David; Sawyer, William; Cheung, Samson; Duffy, Daniel; Rood, Richard; Webster, Phil; Palm, Nancy; Salmon, Ellen; Schardt, Tom

    2004-01-01

    Through earth and space modeling and the ongoing launches of satellites to gather data, NASA has become one of the largest producers of data in the world. These large data sets necessitated the creation of a Data Management System (DMS) to assist both the users and the administrators of the data. Halcyon Systems Inc. was contracted by the NASA Center for Computational Sciences (NCCS) to produce a Data Management System. The prototype of the DMS was produced by Halcyon Systems Inc. (Halcyon) for the Global Modeling and Assimilation Office (GMAO). The system, which was implemented and deployed within a relatively short period of time, has proven to be highly reliable and deployable. Following the prototype deployment, Halcyon was contracted by the NCCS to produce a production DMS version for their user community. The system is composed of several existing open-source or government-sponsored components, such as the San Diego Supercomputer Center's (SDSC) Storage Resource Broker (SRB), the Distributed Oceanographic Data System (DODS), and other components. Since data management is one of the foremost problems in cluster computing, the final package not only extends its capabilities as a Data Management System but also serves as a cluster management system. This Cluster/Data Management System (CDMS) can be envisioned as the integration of existing packages.

  3. IBM Demonstrates a General-Purpose, High-Performance, High-Availability Cloud-Hosted Data Distribution System With Live GOES-16 Weather Satellite Data

    Science.gov (United States)

    Snyder, P. L.; Brown, V. W.

    2017-12-01

    IBM has created a general purpose, data-agnostic solution that provides high performance, low data latency, high availability, scalability, and persistent access to the captured data, regardless of source or type. This capability is hosted on commercially available cloud environments and uses much faster, more efficient, reliable, and secure data transfer protocols than the more typically used FTP. The design incorporates completely redundant data paths at every level, including at the cloud data center level, in order to provide the highest assurance of data availability to the data consumers. IBM has been successful in building and testing a Proof of Concept instance on our IBM Cloud platform to receive and disseminate actual GOES-16 data as it is being downlinked. This solution leverages the inherent benefits of a cloud infrastructure configured and tuned for continuous, stable, high-speed data dissemination to data consumers worldwide at the downlink rate. It also is designed to ingest data from multiple simultaneous sources and disseminate data to multiple consumers. Nearly linear scalability is achieved by adding servers and storage. The IBM Proof of Concept system has been tested with our partners to achieve in excess of 5 Gigabits/second over public internet infrastructure. In tests with live GOES-16 data, the system routinely achieved 2.5 Gigabits/second pass-through to The Weather Company from the University of Wisconsin-Madison SSEC. Simulated data was also transferred from the Cooperative Institute for Climate and Satellites — North Carolina to The Weather Company, as well. The storage node allocated to our Proof of Concept system as tested was sized at 480 Terabytes of RAID protected disk as a worst case sizing to accommodate the data from four GOES-16 class satellites for 30 days in a circular buffer. This shows that an abundance of performance and capacity headroom exists in the IBM design that can be applied to additional missions.
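The circular-buffer sizing quoted above can be sanity-checked with back-of-envelope arithmetic: capacity divided by retention time bounds the sustained per-satellite ingest rate the buffer can absorb. The helper name and the decimal-terabyte convention below are assumptions for illustration.

```python
# Back-of-envelope check of a circular-buffer sizing: 480 TB retaining
# 30 days of data from four satellites implies the sustained per-satellite
# ingest rate the buffer supports (decimal TB assumed).

def buffer_rate_gbps(capacity_tb, days, satellites):
    """Sustained per-satellite ingest rate (Gbit/s) the buffer supports."""
    seconds = days * 86400
    bits = capacity_tb * 1e12 * 8
    return bits / seconds / satellites / 1e9

rate = buffer_rate_gbps(480, 30, 4)  # roughly 0.37 Gbit/s per satellite
```

At roughly 0.37 Gbit/s per satellite, the buffer comfortably exceeds typical GOES-class downlink rates, which is consistent with the headroom claim in the abstract.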

  4. Machine Protection: Availability for Particle Accelerators

    CERN Document Server

    Apollonio, Andrea; Schmidt, Ruediger

    2015-03-16

    Machine availability is a key indicator for the performance of the next generation of particle accelerators. Availability requirements need to be carefully considered during the design phase to achieve challenging objectives in different fields, as e.g. particle physics and material science. For existing and future High-Power facilities, such as ESS (European Spallation Source) and HL-LHC (High-Luminosity LHC), operation with unprecedented beam power requires highly dependable Machine Protection Systems (MPS) to avoid any damage-induced downtime. Due to the high complexity of accelerator systems, finding the optimal balance between equipment safety and accelerator availability is challenging. The MPS architecture, as well as the choice of electronic components, have a large influence on the achievable level of availability. In this thesis novel methods to address the availability of accelerators and their protection systems are presented. Examples of studies related to dependable MPS architectures are given i...
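The basic dependability arithmetic underlying such availability studies can be sketched briefly: steady-state availability follows from MTBF and MTTR, and component availabilities combine multiplicatively in series (all must work) or through joint unavailability in parallel (redundant spares). This is a minimal sketch of the standard formulas, not the thesis's actual models.

```python
# Steady-state availability arithmetic used in dependability analysis.

def availability(mtbf_h, mttr_h):
    """Steady-state availability of a single repairable component."""
    return mtbf_h / (mtbf_h + mttr_h)

def series(*avail):
    """System works only if every component works."""
    out = 1.0
    for a in avail:
        out *= a
    return out

def parallel(*avail):
    """System works if at least one redundant component works."""
    down = 1.0
    for a in avail:
        down *= (1.0 - a)
    return 1.0 - down
```

For example, duplicating a 99%-available channel raises the pair's availability to 99.99%, which illustrates why MPS architecture choices (redundancy versus added complexity) dominate the achievable availability.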

  5. Mobile Service Platform: A Middleware for Nomadic Mobile Service Provisioning

    NARCIS (Netherlands)

    van Halteren, Aart; Pawar, P.

    Ongoing miniaturization and power efficiency of mobile devices have led to widespread availability of devices that have an increasing amount of processing power and storage, and that support multiple wireless network interfaces connecting to various auxiliary devices and to the Internet. It is now

  6. The Java Management Extensions (JMX) Is Your Cluster Ready for Evolution?

    CERN Document Server

    Jaén-Martínez, J

    2000-01-01

    The arrival of commodity hardware configurations with performance rivaling that offered by RISC workstations is resulting in important advances in the state of the art of building and running very large scalable clusters at "mass market" pricing levels. However, cluster middleware layers are still considered as static infrastructures which are not ready for evolution. In this paper, we claim that middleware layers based on both agent and Java technologies offer new opportunities to support clusters where services can be dynamically added, removed and reconfigured. To support this claim, we present the Java Management Extensions (JMX), a new Java agent based technology, and its application to implement two disjoint cluster management middleware services (a remote reboot service and a distributed infrastructure for collecting Log events) which share a unique agent-based infrastructure.

  7. A business model for the establishment of the European grid infrastructure

    International Nuclear Information System (INIS)

    Candiello, A; Cresti, D; Ferrari, T; Mazzucato, M; Perini, L

    2010-01-01

    An international grid has been built in Europe during the past years in the framework of various EC-funded projects to support the growth of e-Science. After several years of work spent to increase the scale of the infrastructure, to expand the user community and improve the availability of the services delivered, effort is now concentrating on the creation of a new organizational model, capable of fulfilling the vision of a sustainable European grid infrastructure. The European Grid Initiative (EGI) is the proposed framework to seamlessly link at a global level the European national grid e-Infrastructures operated by the National Grid Initiatives and European International Research Organizations, and based on a European Unified Middleware Distribution, which will be the result of a joint effort of various European grid Middleware Consortia. This paper describes the requirements that EGI addresses, the actors contributing to its foundation, the offering and the organizational structure that constitute the EGI business model.

  8. Architecture for massive data storage in the Grid infrastructure using the gLite middleware

    Directory of Open Access Journals (Sweden)

    Iván Fernando Gómez Pedraza

    2012-12-01

    Full Text Available Nowadays, the increase in research projects at universities requires high-performance computing and mass-storage equipment, creating the need for a supercomputing infrastructure. This work attempts to solve the storage problems that arise in the different research groups at universities. A massive-storage infrastructure, using a file system compatible with the EGEE middleware, is implemented, taking advantage of the storage space left unused by worker nodes, which should offer a very low-cost solution. Additionally, a cluster with the fundamental Grid services is created and integrated into the intercontinental infrastructure of EELA-2, establishing a platform for distributed computing and storage intended for the scientific community at the Universidad Industrial de Santander.

  9. HAUTO: Automated composition of convergent services based in HTN planning

    Directory of Open Access Journals (Sweden)

    Armando Ordoñez

    2014-01-01

    Full Text Available This paper presents HAUTO, a framework able to compose convergent services automatically. HAUTO is based on HTN (hierarchical task network) automated planning and is composed of three modules: a request-processing module that transforms natural language and context information into a planning instance, the automated composition module based on HTN planning, and the execution environment for convergent (Web and telecom) services. The integration of a planning component provides two basic functionalities: the possibility of customizing the composition of services using the user's context information, and a middleware level that integrates the execution of services in high-performance telecom environments. Finally, a prototype for environmental early-warning management is presented as a test case.

  10. Vanderbilt University Institute of Imaging Science Center for Computational Imaging XNAT: A multimodal data archive and processing environment.

    Science.gov (United States)

    Harrigan, Robert L; Yvernault, Benjamin C; Boyd, Brian D; Damon, Stephen M; Gibney, Kyla David; Conrad, Benjamin N; Phillips, Nicholas S; Rogers, Baxter P; Gao, Yurui; Landman, Bennett A

    2016-01-01

    The Vanderbilt University Institute of Imaging Science (VUIIS) Center for Computational Imaging (CCI) has developed a database, built on XNAT, housing over a quarter of a million scans. The database provides a framework for (1) rapid prototyping, (2) large-scale batch processing of images and (3) scalable project management. The system uses the web-based interfaces of XNAT and REDCap to allow for graphical interaction. A Python middleware layer, the Distributed Automation for XNAT (DAX) package, distributes computation across the Vanderbilt Advanced Computing Center for Research and Education high performance computing center. All software is made available as open source for use in combining portable batch scripting (PBS) grids and XNAT servers. Copyright © 2015 Elsevier Inc. All rights reserved.
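The pattern such a middleware layer implements, expanding a list of scans into batch jobs for a PBS scheduler, can be sketched as follows. This is not the real DAX API; the template fields, function name and example command are assumptions for illustration.

```python
# Illustrative sketch of scan-to-batch-job expansion: each scan ID becomes
# a PBS job script ready for submission to an HPC scheduler.

PBS_TEMPLATE = """#!/bin/bash
#PBS -N {name}
#PBS -l walltime={walltime}
{command}
"""

def make_jobs(scan_ids, command_fmt, walltime="02:00:00"):
    """Return one PBS script per scan, keyed by scan ID."""
    return {
        scan: PBS_TEMPLATE.format(
            name=f"proc_{scan}",
            walltime=walltime,
            command=command_fmt.format(scan=scan),
        )
        for scan in scan_ids
    }
```

The generated scripts would then be handed to the scheduler (e.g. via `qsub`), with the middleware tracking each job's status back into the image database.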

  11. The GRID seminar

    CERN Multimedia

    CERN. Geneva HR-RFA

    2006-01-01

    The Grid infrastructure is a key part of the computing environment for the simulation, processing and analysis of the data of the LHC experiments. These experiments depend on the availability of a worldwide Grid infrastructure in several aspects of their computing model. The Grid middleware will hide much of the complexity of this environment from the user, organizing all the resources into a coherent virtual computer center. A general description of the elements of the Grid, their interconnections and their use by the experiments will be given in this talk. The computational and storage capability of the Grid is attracting other research communities beyond high energy physics. Examples of these applications will also be presented.

  12. EgoSENSE: A Framework for Context-Aware Mobile Applications Development

    Directory of Open Access Journals (Sweden)

    E. M. Milic

    2017-08-01

    Full Text Available This paper presents a context-aware mobile framework (or middleware) intended to support the implementation of context-aware mobile services. An overview of the basic concepts, architecture and components of the context-aware mobile framework is given. The mobile framework provides acquisition and management of context, where raw data sensed from physical (hardware) sensors and virtual (software) sensors are combined, processed and analyzed to provide the high-level context and situation of the user to mobile context-aware applications in near real-time. Using a demo mobile health application, its most important components and functions, such as those intended to detect urgent or alarming health conditions of a mobile user and to initiate appropriate actions, are demonstrated.

  13. Cigarette availability and price in low and high socioeconomic areas.

    Science.gov (United States)

    Dalglish, Emma; McLaughlin, Deirdre; Dobson, Annette; Gartner, Coral

    2013-08-01

    To determine whether tobacco retailer density and cigarette prices differ between low and high socioeconomic status suburbs in South-East Queensland. A survey of retail outlets selling cigarettes was conducted in selected suburbs over a two-day period. The suburbs were identified by geographical cluster sampling based on their Index of Relative Socio-economic Advantage and Disadvantage score and size of retail complex within the suburb. All retail outlets within the suburb were visited and the retail prices for the highest ranking Australian brands were recorded at each outlet. A significant relationship was found between Index of Relative Socioeconomic Advantage and Disadvantage score (in deciles) and the number of tobacco retail outlets (r=0.93, p=0.003), with the most disadvantaged suburbs having a greater number of tobacco retailers. Results also demonstrate that cigarettes were sold in a broader range of outlets in suburbs of low SES. The average price of the packs studied was significantly lower in the most disadvantaged suburbs compared to the most advantaged. While cigarettes were still generally cheaper in the most disadvantaged suburbs, the difference was no longer statistically significant when the average price of cigarette packs was compared according to outlet type (supermarket, newsagent, etc). In South-East Queensland, cigarettes are more widely available in the most disadvantaged suburbs and at lower prices than in the most advantaged suburbs. © 2013 The Authors. ANZJPH © 2013 Public Health Association of Australia.

  14. Low cost highly available digital control computer

    International Nuclear Information System (INIS)

    Silvers, M.W.

    1986-01-01

    When designing digital controllers for critical plant control it is important to provide several features. Among these are reliability, availability, maintainability, environmental protection, and low cost. An examination of several applications has led to a design that can be produced for approximately $20,000 (1000 control points). This design is compatible with modern concepts in distributed and hierarchical control. The canonical controller element is a dual-redundant self-checking computer that communicates with a cross-strapped, electrically isolated input/output system. The input/output subsystem comprises multiple intelligent input/output cards. These cards accept commands from the primary processor which are validated, executed, and acknowledged. Each card may be hot replaced to facilitate sparing. The implementation of the dual-redundant computer architecture is discussed. Called the FS-86, this computer can be used for a variety of applications. It has most recently found application in the upgrade of San Francisco's Bay Area Rapid Transit (BART) train control currently in progress and has been proposed for feedwater control in a boiling water reactor.

  15. Excessive Iron Availability Caused by Disorders of Interleukin-10 and Interleukin-22 Contributes to High Altitude Polycythemia

    Directory of Open Access Journals (Sweden)

    Yun-Sheng Liu

    2018-05-01

    Full Text Available Background: Because the pathogenesis of high altitude polycythemia (HAPC is unclear, the aim of the present study was to explore whether abnormal iron metabolism is involved in the pathogenesis of HAPC and the possible cause.Methods: We examined the serum levels of iron, total iron binding capacity, soluble transferrin receptor (sTfR, ferritin, and hepcidin as well as erythropoietin (EPO and inflammation-related cytokines in 20 healthy volunteers at sea level, 36 healthy high-altitude migrants, and 33 patients with HAPC. Mice that were exposed to a simulated hypoxic environment at an altitude of 5,000 m for 4 weeks received exogenous iron or intervention on cytokines, and the iron-related and hematological indices of peripheral blood and bone marrow were detected. The in vitro effects of some cytokines on hematopoietic cells were also observed.Results: Iron mobilization and utilization were enhanced in people who had lived at high altitudes for a long time. Notably, both the iron storage in ferritin and the available iron in the blood were elevated in patients with HAPC compared with the healthy high-altitude migrants. The correlation analysis indicated that the decreased hepcidin may have contributed to enhanced iron availability in HAPC, and decreased interleukin (IL-10 and IL-22 were significantly associated with decreased hepcidin. The results of the animal experiments confirmed that a certain degree of iron redundancy may promote bone marrow erythropoiesis and peripheral red blood cell production in hypoxic mice and that decreased IL-10 and IL-22 stimulated iron mobilization during hypoxia by affecting hepcidin expression.Conclusion: These data demonstrated, for the first time, that an excess of obtainable iron caused by disordered IL-10 and IL-22 was involved in the pathogenesis of some HAPC patients. The potential benefits of iron removal and immunoregulation for the prevention and treatment of HAPC deserve further research.

  16. miRQuest: integration of tools on a Web server for microRNA research.

    Science.gov (United States)

    Aguiar, R R; Ambrosio, L A; Sepúlveda-Hermosilla, G; Maracaja-Coutinho, V; Paschoal, A R

    2016-03-28

    This report describes miRQuest - a novel middleware, available on a Web server, that allows the end user to do miRNA research in a user-friendly way. It is known that there are many prediction tools for microRNA (miRNA) identification that use different programming languages and methods to perform this task. It is difficult to understand each tool and apply it to the diverse datasets and organisms available for miRNA analysis. miRQuest can easily be used by biologists and researchers with limited experience in bioinformatics. We built it using a middleware architecture on a Web platform for miRNA research that performs two main functions: i) integration of different miRNA prediction tools for miRNA identification in a user-friendly environment; and ii) comparison of these prediction tools. In both cases, the user provides sequences (in FASTA format) as an input set for the analysis and comparisons. All the tools were selected on the basis of a survey of the literature on the available tools for miRNA prediction. As results, three different use cases of the tools are described, one of which is miRNA identification analysis in 30 different species. Finally, miRQuest seems to be a novel and useful tool; and it is freely available for both benchmarking and miRNA identification at http://mirquest.integrativebioinformatics.me/.
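
    The tool-comparison function such a middleware offers can be sketched as follows (the interface is invented for illustration; the real miRQuest wraps external predictors):

```python
# Illustrative sketch of middleware-style tool comparison: run several
# miRNA predictors on the same input sequences and tabulate which tools
# report each candidate. The predictor interface is an assumption.

def compare_tools(sequences, tools):
    """tools: mapping name -> predictor(seq) returning a set of candidates.
    Returns candidate -> set of tool names that reported it."""
    table = {}
    for name, predict in tools.items():
        for seq in sequences:
            for candidate in predict(seq):
                table.setdefault(candidate, set()).add(name)
    return table

if __name__ == "__main__":
    # Toy "predictors": each returns substrings as stand-in candidates.
    tools = {
        "toolA": lambda s: {s[:4]},
        "toolB": lambda s: {s[:4], s[-4:]},
    }
    hits = compare_tools(["UGAGGUAGUAGGUU"], tools)
    print(sorted(hits["UGAG"]))  # ['toolA', 'toolB']
```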

  17. Experience and Lessons learnt from running High Availability Databases on Network Attached Storage

    CERN Document Server

    Guijarro, Manuel

    2008-01-01

    The Database and Engineering Services Group of CERN's Information Technology Department supplies the Oracle Central Database services used in many activities at CERN. In order to provide High Availability and ease management for those services, a NAS (Network Attached Storage) based infrastructure has been setup. It runs several instances of the Oracle RAC (Real Application Cluster) using NFS (Network File System) as shared disk space for RAC purposes and Data hosting. It is composed of two private LANs (Local Area Network), one to provide access to the NAS filers and a second to implement the Oracle RAC private interconnect, both using Network Bonding. NAS filers are configured in partnership to prevent having single points of failure and to provide automatic NAS filer fail-over.

  18. Experience and lessons learnt from running high availability databases on network attached storage

    International Nuclear Information System (INIS)

    Guijarro, M; Gaspar, R

    2008-01-01

    The Database and Engineering Services Group of CERN's Information Technology Department supplies the Oracle Central Database services used in many activities at CERN. In order to provide High Availability and ease management for those services, a NAS (Network Attached Storage) based infrastructure has been setup. It runs several instances of the Oracle RAC (Real Application Cluster) using NFS (Network File System) as shared disk space for RAC purposes and Data hosting. It is composed of two private LANs (Local Area Network), one to provide access to the NAS filers and a second to implement the Oracle RAC private interconnect, both using Network Bonding. NAS filers are configured in partnership to prevent having single points of failure and to provide automatic NAS filer fail-over.

  19. Ezilla Cloud Service with Cassandra Database for Sensor Observation System

    OpenAIRE

    Kuo-Yang Cheng; Yi-Lun Pan; Chang-Hsing Wu; His-En Yu; Hui-Shan Chen; Weicheng Huang

    2012-01-01

    The main mission of Ezilla is to provide a friendly interface to access virtual machines and quickly deploy a high performance computing environment. Ezilla has been developed by the Pervasive Computing Team at the National Center for High-performance Computing (NCHC). Ezilla integrates Cloud middleware, virtualization technology, and a Web-based Operating System (WebOS) to form a virtual computer in a distributed computing environment. In order to upgrade the dataset and sp...

  20. A primer on high-throughput computing for genomic selection.

    Science.gov (United States)

    Wu, Xiao-Lin; Beissinger, Timothy M; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J M; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel

    2011-01-01

    High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long, and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massive parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin-Madison, which can be leveraged for genomic selection, in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized
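
    The pipelining idea can be sketched in a few lines (toy stages standing in for the actual genomic models): data is split into chunks, and each chunk flows through successive stages, so on a cluster the stages of different chunks can overlap as separate batch jobs.

```python
# Toy sketch of a pipelined genomic-selection workflow, not the paper's
# actual models: chunks pass through preprocess -> train/predict stages.

def preprocess(chunk):
    # stand-in for imputing/encoding marker genotypes; here just scale
    return [x * 2 for x in chunk]

def train_and_predict(chunk):
    # stand-in for fitting a model and scoring selection candidates
    return sum(chunk)

def pipeline(chunks):
    results = []
    for chunk in chunks:       # on a cluster, each chunk becomes a batch job
        results.append(train_and_predict(preprocess(chunk)))
    return results

if __name__ == "__main__":
    print(pipeline([[1, 2], [3, 4]]))  # [6, 14]
```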

  1. A novel strategy to access high resolution DICOM medical images based on JPEG2000 interactive protocol

    Science.gov (United States)

    Tian, Yuan; Cai, Weihua; Sun, Jianyong; Zhang, Jianguo

    2008-03-01

    The demand for sharing medical information has kept rising. However, the transmission and displaying of high resolution medical images are limited if the network has a low transmission speed or the terminal devices have limited resources. In this paper, we present an approach based on JPEG2000 Interactive Protocol (JPIP) to browse high resolution medical images in an efficient way. We designed and implemented an interactive image communication system with client/server architecture and integrated it with Picture Archiving and Communication System (PACS). In our interactive image communication system, the JPIP server works as the middleware between clients and PACS servers. Both desktop clients and wireless mobile clients can browse high resolution images stored in PACS servers via accessing the JPIP server. The client can make simple requests which identify the resolution, quality and region of interest and download selected portions of the JPEG2000 code-stream instead of downloading and decoding the entire code-stream. After receiving a request from a client, the JPIP server downloads the requested image from the PACS server and then responds to the client by sending the appropriate code-stream. We also tested the performance of the JPIP server. The JPIP server runs stably and reliably under heavy load.
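
    A heavily simplified sketch of that interaction pattern (the byte layout here is invented; real JPEG2000 code-streams are tiled and layered, and JPIP requests carry resolution/region parameters):

```python
# Simplified sketch of JPIP-style middleware behaviour: the client asks
# for a resolution level and a byte range, and the server returns only
# the matching slice of the code-stream rather than the whole image.

class JpipLikeServer:
    def __init__(self, codestream_by_level):
        # level -> bytes for that resolution (fetched once from PACS)
        self.levels = codestream_by_level

    def handle(self, level, offset, length):
        """Return just the requested slice instead of the full stream."""
        data = self.levels[level]
        return data[offset:offset + length]

if __name__ == "__main__":
    srv = JpipLikeServer({0: b"thumbnail", 1: b"full-resolution-stream"})
    print(srv.handle(0, 0, 5))  # b'thumb'
```

    The point of the design is that a low-bandwidth mobile client can start with level 0 and request higher levels only for the region of interest.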

  2. High nitrogen availability reduces polyphenol content in Sphagnum peat.

    Science.gov (United States)

    Bragazza, Luca; Freeman, Chris

    2007-05-15

    Peat mosses of the genus Sphagnum constitute the bulk of living and dead biomass in bogs. These plants contain peculiar polyphenols which hamper litter peat decomposition through their inhibitory activity on microbial breakdown. In the light of the increasing availability of biologically active nitrogen in natural ecosystems, litter derived from Sphagnum mosses is an ideal substrate to test the potential effects of increased atmospheric nitrogen deposition on polyphenol content in litter peat. To this aim, we measured total nitrogen and soluble polyphenol concentration in Sphagnum litter peat collected in 11 European bogs under a chronic gradient of atmospheric nitrogen deposition. Our results demonstrate that increasing nitrogen concentration in Sphagnum litter, as a consequence of increased exogenous nitrogen availability, is accompanied by a decreasing concentration of polyphenols. This inverse relationship is consistent with reports that in Sphagnum mosses, polyphenol and protein biosynthesis compete for the same precursor. Our observation of modified Sphagnum litter chemistry under chronic nitrogen eutrophication has implications in the context of the global carbon balance, because a lower content of decay-inhibiting polyphenols would accelerate litter peat decomposition.

  3. HAVmS: Highly Available Virtual Machine Computer System Fault Tolerant with Automatic Failback and Close to Zero Downtime

    Directory of Open Access Journals (Sweden)

    Memmo Federici

    2014-12-01

    Full Text Available In scientific computing, systems often manage computations that require continuous acquisition of satellite data and the management of large databases, as well as the execution of analysis software and simulation models (e.g. Monte Carlo or molecular dynamics cell simulations) which may require several weeks of continuous run. These systems, consequently, should ensure continuity of operation even in case of serious faults. HAVmS (High Availability Virtual machine System) is a highly available, "fault tolerant" system with zero downtime in case of fault. It is based on the use of Virtual Machines and implemented by two servers with similar characteristics. HAVmS, thanks to the developed software solutions, is unique of its kind since it automatically fails back once faults have been fixed. The system has been designed to be used both with professional and inexpensive hardware and supports the simultaneous execution of multiple services such as: web, mail, computing and administrative services, uninterrupted computing, data base management. Finally, the system is cost effective, adopting exclusively open source solutions, is easily manageable, and is suited for general use.
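
    The automatic failback behaviour can be sketched as a small state machine (names invented; the real HAVmS logic involves virtual machine migration between the two servers):

```python
# Hedged sketch of failover with automatic failback: the standby takes
# over when the primary misses its heartbeat, and the system fails back
# to the primary once it reports healthy again.

class Failover:
    def __init__(self):
        self.active = "primary"

    def heartbeat(self, primary_ok):
        """Process one heartbeat check and return the active node."""
        if self.active == "primary" and not primary_ok:
            self.active = "secondary"      # fail over
        elif self.active == "secondary" and primary_ok:
            self.active = "primary"        # automatic failback
        return self.active

if __name__ == "__main__":
    f = Failover()
    print([f.heartbeat(ok) for ok in (True, False, False, True)])
    # ['primary', 'secondary', 'secondary', 'primary']
```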

  4. An Exploration of Support Factors Available to Higher Education Students with High Functioning Autism or Asperger Syndrome

    Science.gov (United States)

    Rutherford, Emily N.

    2013-01-01

    This qualitative phenomenological research study used narrative inquiry to explore the support factors available to students with High Functioning Autism or Asperger Syndrome in higher education that contribute to their success as perceived by the students. Creswell's (2009) six step method for analyzing phenomenological studies was used to…

  5. Effects of High Availability Fuels on Combustor Properties

    Science.gov (United States)

    1978-01-01

    and followed anticipated trends. In a few cases, the measurements were anomalous but these were attributed to changes in the flame length as flow...high viscosity and end point will cause the flame length to increase (slow heat release) by increasing the burning time. Although this will increase the...considered to be semi-quantitative due to the limited viewing angle of the radiometer and the variation of flame length with inlet conditions. It was

  6. Difficulties with True Interoperability in Modeling & Simulation

    Science.gov (United States)

    2011-12-01

    Standards in M&S cover multiple layers of technical abstraction. There are middleware specifications, such as the High Level Architecture (HLA) (IEEE Xplore Digital Library. 2010. 1516-2010 IEEE Standard for Modeling and Simulation (M&S) High Level Architecture (HLA) – Framework and Rules)...using different communication protocols being able to allow da-

  7. Towards Application Portability on Blockchains

    OpenAIRE

    Shudo, Kazuyuki; Saito, Kenji

    2018-01-01

    We pose a fundamental problem of public blockchains, "incentive mismatch." It is an open problem, but application portability is a provisional solution to it. Portability is also a desirable property for an application on a private blockchain. It is not even clear that a common API can be defined for various blockchain middlewares, but it is possible to improve portability by reducing dependency on a blockchain. We present an example of such middleware designs that provide applicatio...

  8. Mobile Agent based Market Basket Analysis on Cloud

    OpenAIRE

    Waghmare, Vijayata; Mukhopadhyay, Debajyoti

    2014-01-01

    This paper describes the design and development of a location-based mobile shopping application for bakery product shops. The whole application is deployed on the cloud. The three-tier architecture consists of a front-end, middle-ware and back-end. The front-end level is a location-based mobile shopping application for Android mobile devices, for purchasing bakery products of nearby places. The front-end level also displays associations among the purchased products. The middle-ware level provides a web ser...

  9. Querying Large Physics Data Sets Over an Information Grid

    CERN Document Server

    Baker, N; Kovács, Z; Le Goff, J M; McClatchey, R

    2001-01-01

    Optimising use of the Web (WWW) for LHC data analysis is a complex problem and illustrates the challenges arising from the integration of and computation across massive amounts of information distributed worldwide. Finding the right piece of information can, at times, be extremely time-consuming, if not impossible. So-called Grids have been proposed to facilitate LHC computing and many groups have embarked on studies of data replication, data migration and networking philosophies. Other aspects such as the role of 'middleware' for Grids are emerging as requiring research. This paper positions the need for appropriate middleware that enables users to resolve physics queries across massive data sets. It identifies the role of meta-data for query resolution and the importance of Information Grids for high-energy physics analysis rather than just Computational or Data Grids. This paper identifies software that is being implemented at CERN to enable the querying of very large collaborating HEP data-sets, initially...

  10. Architectural Analysis of Systems Based on the Publisher-Subscriber Style

    Science.gov (United States)

    Ganesun, Dharmalingam; Lindvall, Mikael; Ruley, Lamont; Wiegand, Robert; Ly, Vuong; Tsui, Tina

    2010-01-01

    Architectural styles impose constraints on both the topology and the interaction behavior of involved parties. In this paper, we propose an approach for analyzing implemented systems based on the publisher-subscriber architectural style. From the style definition, we derive a set of reusable questions and show that some of them can be answered statically whereas others are best answered using dynamic analysis. The paper explains how the results of static analysis can be used to orchestrate dynamic analysis. The proposed method was successfully applied on NASA's Goddard Mission Services Evolution Center (GMSEC) software product line. The results show that the GMSEC has a) a novel reusable vendor-independent middleware abstraction layer that allows NASA's missions to configure the middleware of interest without changing the publishers' or subscribers' source code, and b) some high priority bugs due to behavioral discrepancies among different implementations of the same APIs for different vendors, which eluded testing and code reviews.
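
    The core constraint of the style (publishers and subscribers interact only through a middleware bus, never directly) can be sketched minimally; this is an illustration of the style, not the GMSEC API:

```python
# Minimal publish-subscribe sketch: components register callbacks on
# topics and exchange messages only through the bus, which is what a
# style-conformance analysis would check for.

class Bus:
    def __init__(self):
        self.subs = {}

    def subscribe(self, topic, callback):
        self.subs.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for cb in self.subs.get(topic, []):
            cb(message)

if __name__ == "__main__":
    bus, seen = Bus(), []
    bus.subscribe("telemetry", seen.append)
    bus.publish("telemetry", {"temp": 21})
    bus.publish("ignored", {"temp": 99})   # no subscriber registered
    print(seen)  # [{'temp': 21}]
```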

  11. Tactical Application of Gaming Technologies for Improved Battlespace Management

    Science.gov (United States)

    2007-01-01

    the Digital Scene Matching Area Correlation (DSMAC) and the Global Positioning Satellite (GPS) System are coupled to the guidance systems to...Game Engine technology is driven by a huge market of consumers and the technology continues to improve each year. Commercially available Game...has largely been due to the emergence of a new class of middleware called “physics engines”. Used in games such as Gran Turismo 4 (GT4), these

  12. Proposed prediction algorithms based on hybrid approach to deal with anomalies of RFID data in healthcare

    Directory of Open Access Journals (Sweden)

    A. Anny Leema

    2013-07-01

    Full Text Available RFID technology has penetrated the healthcare sector due to its increased functionality, low cost, high reliability, and easy-to-use capabilities. It is being deployed for various applications, and the data captured by RFID readers accumulate over time, resulting in an enormous volume of duplicate, false-positive, and false-negative readings. The dirty data stream generated by RFID readers is one of the main factors limiting the widespread adoption of RFID technology. In order to provide reliable data to an RFID application, it is necessary to clean the collected data in an effective manner before it is subjected to warehousing. The existing approaches to dealing with anomalies are the physical, middleware, and deferred approaches. The shortcomings of the existing approaches are analyzed, and it is found that a robust RFID system can be built by integrating the middleware and deferred approaches. Our proposed algorithms based on this hybrid approach are tested in the healthcare environment and predict false positives, false negatives, and redundant data. In this paper, a healthcare environment is simulated using RFID, and the data observed by the RFID reader contain the anomalies false positive, false negative, and duplication. Experimental evaluation shows that our cleansing methods remove errors in RFID data more accurately and efficiently. Thus, with the aid of the planned data cleaning technique, we can bring down healthcare costs, optimize business processes, streamline patient identification processes, and improve patient safety.
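
    For illustration, a common middleware-layer cleaning idea (not the authors' specific algorithms) is window-based smoothing: duplicate reads are collapsed and short gaps in a tag's read stream are filled to correct false negatives.

```python
# Illustrative sketch of window-based RFID cleaning: collapse exact
# duplicate reads and interpolate short gaps (false negatives) for each
# tag. This is a generic technique, not the paper's hybrid algorithm.

def clean_reads(reads, window=2):
    """reads: list of (timestamp, tag_id), timestamps in integer ticks.
    Returns a deduplicated stream with gaps <= window ticks filled."""
    last_seen, cleaned = {}, []
    for t, tag in reads:
        prev = last_seen.get(tag)
        if prev is not None and t - prev <= window:
            # fill missed (false-negative) ticks between the two reads
            for missed in range(prev + 1, t):
                cleaned.append((missed, tag))
        if prev != t:                      # skip exact duplicates
            cleaned.append((t, tag))
        last_seen[tag] = t
    return cleaned

if __name__ == "__main__":
    print(clean_reads([(1, "A"), (3, "A")]))
    # [(1, 'A'), (2, 'A'), (3, 'A')]  -- tick 2 restored
```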

  13. File-based replica management

    CERN Document Server

    Kunszt, Peter Z; Stockinger, Heinz; Stockinger, Kurt

    2005-01-01

    Data replication is one of the best known strategies to achieve high levels of availability and fault tolerance, as well as minimal access times for large, distributed user communities using a world-wide Data Grid. In certain scientific application domains, the data volume can reach the order of several petabytes; in these domains, data replication and access optimization play an important role in the manageability and usability of the Grid. In this paper, we present the design and implementation of a replica management Grid middleware that was developed within the EDG project [European Data Grid Project (EDG), http://www.eu-egee.org] and is designed to be extensible so that user communities can adjust its detailed behavior according to their QoS requirements.
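
    The access-optimization decision at the heart of such middleware can be sketched as replica selection under a cost model (the latency-only model below is an invented simplification):

```python
# Sketch of the replica-selection step in replica management middleware:
# pick the copy of a file with the lowest estimated access cost.

def best_replica(replicas):
    """replicas: mapping of site -> estimated access latency (ms)."""
    return min(replicas, key=replicas.get)

if __name__ == "__main__":
    sites = {"cern": 5.0, "fnal": 120.0, "ral": 40.0}
    print(best_replica(sites))  # cern
```

    Real systems combine several signals (network throughput, storage load, queue depth) rather than a single latency estimate.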

  14. Acropolis: A Fast Protoyping Robotic Application

    Directory of Open Access Journals (Sweden)

    Vincent Zalzal

    2009-03-01

    Full Text Available Acropolis is an open source middleware robotic framework for fast software prototyping and reuse of program code. It is made up of core software and a collection of extension modules called plugins. Each plugin encapsulates a specific functionality needed for robotic applications. To design a robot behavior, a circuit of the involved plugins is built with a graphical user interface. A high degree of decoupling between components and a graph-based representation allow the user to build complex robot behaviors with minimal need for code writing. In addition, the Acropolis core is hardware platform independent. Well-known design patterns and a layered software architecture are its key features. Through the description of three applications, we illustrate some of its usability.

  15. Machine protection: availability for particle accelerators

    International Nuclear Information System (INIS)

    Apollonio, A.

    2015-01-01

    Machine availability is a key indicator for the performance of the next generation of particle accelerators. Availability requirements need to be carefully considered during the design phase to achieve challenging objectives in different fields, such as particle physics and material science. For existing and future High-Power facilities, such as ESS (European Spallation Source) and HL-LHC (High-Luminosity LHC), operation with unprecedented beam power requires highly dependable Machine Protection Systems (MPS) to avoid any damage-induced downtime. Due to the high complexity of accelerator systems, finding the optimal balance between equipment safety and accelerator availability is challenging. The MPS architecture, as well as the choice of electronic components, have a large influence on the achievable level of availability. In this thesis novel methods to address the availability of accelerators and their protection systems are presented. Examples of studies related to dependable MPS architectures are given in the thesis, both for linear accelerators (Linac4, ESS) and circular particle colliders (LHC and HL-LHC). A study of suitable architectures for interlock systems of future availability-critical facilities is presented. Different methods have been applied to assess the anticipated levels of accelerator availability. The thesis presents the prediction of the performance (integrated luminosity for a particle collider) of the LHC and future LHC upgrades, based on a Monte Carlo model that allows reproducing a realistic timeline of LHC operation. This model does not only account for the contribution of MPS, but extends to all systems relevant for LHC operation. Results are extrapolated to LHC run 2, run 3 and HL-LHC to derive individual system requirements, based on the target integrated luminosity. (author)
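
    A toy version of the Monte Carlo timeline idea (rates and distributions invented; the thesis model is far richer): alternate exponentially distributed fault-free runs and repair periods, then report the fraction of time the machine was available.

```python
import random

# Toy Monte Carlo sketch of an operational-timeline availability model:
# alternate fault-free running and repair (downtime) periods.

def simulate_availability(mtbf, mttr, horizon, seed=0):
    """mtbf/mttr: mean time between failures / to repair (same units)."""
    rng = random.Random(seed)
    t = up = 0.0
    while t < horizon:
        run = rng.expovariate(1.0 / mtbf)      # time to next fault
        up += min(run, horizon - t)
        t += run
        if t < horizon:
            t += rng.expovariate(1.0 / mttr)   # repair time (downtime)
    return up / horizon

if __name__ == "__main__":
    a = simulate_availability(mtbf=100.0, mttr=5.0, horizon=10000.0)
    print(round(a, 2))  # close to the steady-state value 100/105 ≈ 0.95
```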

  16. Privacy-by-Design Framework for Assessing Internet of Things Applications and Platforms

    OpenAIRE

    Perera , Charith; Mccormick , Ciaran; Bandara , Arosha K.; Price , Blaine A.; Nuseibeh , Bashar

    2016-01-01

    International audience; The Internet of Things (IoT) systems are designed and developed either as standalone applications from the ground-up or with the help of IoT middleware platforms. They are designed to support different kinds of scenarios, such as smart homes and smart cities. Thus far, privacy concerns have not been explicitly considered by IoT applications and middleware platforms. This is partly due to the lack of systematic methods for designing privacy that can guide the software d...

  17. Development of a High-Fidelity Simulation Environment for Shadow-Mode Assessments of Air Traffic Concepts

    Science.gov (United States)

    Robinson, John E., III; Lee, Alan; Lai, Chok Fung

    2017-01-01

    This paper describes the Shadow-Mode Assessment Using Realistic Technologies for the National Airspace System (SMART-NAS) Test Bed. The SMART-NAS Test Bed is an air traffic simulation platform being developed by the National Aeronautics and Space Administration (NASA). The SMART-NAS Test Bed's core purpose is to conduct high-fidelity, real-time, human-in-the-loop and automation-in-the-loop simulations of current and proposed future air traffic concepts for the United States' Next Generation Air Transportation System, called NextGen. The setup, configuration, coordination, and execution of real-time, human-in-the-loop air traffic management simulations are complex, tedious, time intensive, and expensive. The SMART-NAS Test Bed framework is an alternative to the current approach and will provide services throughout the simulation workflow pipeline to help alleviate these shortcomings. The principal concepts to be simulated include advanced gate-to-gate, trajectory-based operations, widespread integration of novel aircraft such as unmanned vehicles, and real-time safety assurance technologies to enable autonomous operations. To make this possible, the SMART-NAS Test Bed will utilize Web-based technologies, cloud resources, and real-time, scalable communication middleware. This paper describes the SMART-NAS Test Bed's vision, purpose, concept of use, potential benefits, key capabilities, high-level requirements, architecture, software design, and usage.

  18. The Availability and Utilization of School Library Resources in Some Selected Secondary Schools (High School) in Rivers State

    Science.gov (United States)

    Owate, C. N.; Iroha, Okpa

    2013-01-01

    This study investigates the availability and utilization of school library resources by Secondary School (High School) Students. Eight Selected Secondary Schools in Rivers State, Nigeria were chosen based on their performance in external examinations and geographic locations. In carrying out the research, questionnaires were administered to both…

  19. Using ESB and BPEL for Evolving Healthcare Systems Towards Pervasive, Grid-Enabled SOA

    Science.gov (United States)

    Koufi, V.; Malamateniou, F.; Papakonstantinou, D.; Vassilacopoulos, G.

    Healthcare organizations often face the challenge of integrating diverse and geographically disparate information technology systems to respond to changing requirements and to exploit the capabilities of modern technologies. Hence, systems evolution, through modification and extension of the existing information technology infrastructure, becomes a necessity. Moreover, the availability of these systems at the point of care when needed is a vital issue for the quality of healthcare provided to patients. This chapter takes a process perspective of healthcare delivery within and across organizational boundaries and presents a disciplined approach for evolving healthcare systems towards a pervasive, grid-enabled service-oriented architecture, using enterprise service bus (ESB) middleware technology for resolving integration issues, the Business Process Execution Language (BPEL) for supporting collaboration requirements, and grid middleware technology both for addressing common SOA scalability requirements and for complementing existing system functionality. In such an environment, appropriate security mechanisms must ensure authorized access to integrated healthcare services and data. To this end, a security framework addressing security aspects such as authorization and access control is also presented.

  20. Dynamic Reconfiguration of Security Policies in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Mónica Pinto

    2015-03-01

    Full Text Available Providing security and privacy in wireless sensor networks (WSNs) is very challenging, due to the heterogeneity of sensor nodes and their limited capabilities in terms of energy, processing power and memory. The applications for these systems run on a myriad of sensors with different low-level programming abstractions, limited capabilities and different routing protocols. This means that applications for WSNs need mechanisms for self-adaptation and for self-protection based on the dynamic adaptation of the algorithms used to provide security. Dynamic software product lines (DSPLs) allow managing both variability and dynamic software adaptation, so they can be considered a key technology in successfully developing self-protected WSN applications. In this paper, we propose a self-protection solution for WSNs based on the combination of the INTER-TRUST security framework (a solution for the dynamic negotiation and deployment of security policies) and the FamiWare middleware (a DSPL approach to automatically configure and reconfigure instances of a middleware for WSNs). We evaluate our approach using a case study from the intelligent transportation system domain.

  1. ALICE: ARC integration

    CERN Document Server

    Anderlik, C; Kleist, J; Peters, A; Saiz, P

    2008-01-01

    AliEn, or Alice Environment, is the Grid middleware developed and used within the ALICE collaboration for storing and processing data in a distributed manner. ARC (Advanced Resource Connector) is the Grid middleware deployed across the Nordic countries and gluing together the resources within the Nordic Data Grid Facility (NDGF). In this paper we present our approach to integrating AliEn and ARC, so that ALICE data management and job processing can be carried out on the NDGF infrastructure using the client tools available in AliEn. The inter-operation has two aspects: data management and job management. The first aspect was solved by using dCache across NDGF to handle data, so we concentrate on the second. Solving it was somewhat cumbersome, mainly due to the different computing models employed by AliEn and ARC: AliEn uses an agent-based pull model, while ARC handles jobs through the more 'traditional' push model. The solution comes as a modu...
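The pull-versus-push distinction above can be illustrated with a toy sketch. All names here are illustrative, not AliEn's or ARC's actual interfaces: a central queue holds tasks, and a pilot agent running on a site pulls work only while it has free local slots, instead of having jobs pushed onto it blindly.

```python
from queue import Queue, Empty

class TaskQueue:
    """Central queue of jobs, as in an agent-based pull model (illustrative)."""
    def __init__(self):
        self._q = Queue()

    def submit(self, job):
        self._q.put(job)

    def pull(self):
        try:
            return self._q.get_nowait()
        except Empty:
            return None

class PilotAgent:
    """Runs on a worker node; pulls jobs only while local resources suffice."""
    def __init__(self, queue, free_slots):
        self.queue = queue
        self.free_slots = free_slots
        self.done = []

    def run(self):
        while self.free_slots > 0:
            job = self.queue.pull()
            if job is None:
                break                    # queue drained: agent exits instead of idling
            self.free_slots -= 1
            self.done.append(job())      # execute the payload

queue = TaskQueue()
for i in range(5):
    queue.submit(lambda i=i: i * i)      # five toy "jobs"

agent = PilotAgent(queue, free_slots=3)
agent.run()
print(agent.done)                        # agent pulled exactly its capacity: [0, 1, 4]
```

The key property of the pull model is visible at the end: the remaining jobs stay queued centrally until some agent with verified free resources asks for them.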

  2. AIMES Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Katz, Daniel S [Univ. of Illinois, Urbana-Champaign, IL (United States). National Center for Supercomputing Applications (NCSA); Jha, Shantenu [Rutgers Univ., New Brunswick, NJ (United States); Weissman, Jon [Univ. of Minnesota, Minneapolis, MN (United States); Turilli, Matteo [Rutgers Univ., New Brunswick, NJ (United States)

    2017-01-31

    This is the final technical report for the AIMES project. Many important advances in science and engineering are due to large-scale distributed computing. Notwithstanding this reliance, we are still learning how to design and deploy large-scale production Distributed Computing Infrastructures (DCI). This is evidenced by missing design principles for DCI and an absence of generally acceptable and usable distributed computing abstractions. The AIMES project was conceived against this backdrop, following on the heels of a comprehensive survey of scientific distributed applications. AIMES laid the foundations to address the tripartite challenge of dynamic resource management, integrating information, and portable and interoperable distributed applications. Four abstractions were defined and implemented: skeleton, resource bundle, pilot, and execution strategy. The four abstractions were implemented as software modules and then aggregated into the AIMES middleware. This middleware successfully integrates information across the application layer (skeletons) and the resource layer (bundles), derives a suitable execution strategy for the given skeleton, and enacts its execution by means of pilots on one or more resources, depending on the application requirements and on resource availabilities and capabilities.
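The relationship between the four abstractions can be pictured with plain data structures. The sketch below uses hypothetical names and a naive greedy first-fit placement purely to illustrate how a skeleton plus resource bundles might yield an execution strategy; the real AIMES modules are far richer.

```python
from dataclasses import dataclass, field

@dataclass
class Skeleton:
    """Application layer: what must run (per-task core requirements)."""
    tasks: list

@dataclass
class Bundle:
    """Resource layer: one available resource and its free capacity."""
    name: str
    free_cores: int

@dataclass
class ExecutionStrategy:
    """Derived plan: which resource hosts the pilot for each task."""
    placements: dict = field(default_factory=dict)

def derive_strategy(skeleton, bundles):
    """Greedy first-fit: place each task's pilot on the first bundle with room."""
    strategy = ExecutionStrategy()
    for i, cores in enumerate(skeleton.tasks):
        for b in bundles:
            if b.free_cores >= cores:
                b.free_cores -= cores
                strategy.placements[i] = b.name
                break
    return strategy

skel = Skeleton(tasks=[4, 8, 2])
bundles = [Bundle("cluster-a", 8), Bundle("cluster-b", 16)]
plan = derive_strategy(skel, bundles)
print(plan.placements)   # {0: 'cluster-a', 1: 'cluster-b', 2: 'cluster-a'}
```

Even this toy version shows the information flow the report describes: application-layer requirements and resource-layer availabilities meet only inside the strategy derivation.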

  3. Low-Cost Solutions Using the Infrastructure as a Service with High Availability and Virtualization Model

    Directory of Open Access Journals (Sweden)

    Cesar Armando Moreira Zambrano

    2017-02-01

    Full Text Available This paper presents the results obtained from the implementation of an infrastructure to improve the technological services of email, the virtual learning environment, the digital repository and the virtual library at the Polytechnic School of Agriculture of Manabí (ESPAM), through the use of high-availability and virtualization mechanisms to provide more reliable resources. Virtualization is an empowering and cutting-edge technology that is transforming the operation of technological services, but it involves a paradigm shift towards service-oriented information technologies and cloud computing. The V-cycle methodology was used as a strategy to execute each of the processes. Virtualization empowers companies and institutions by transforming how they operate, keeping their services at the forefront of innovation. The implementation of redundant technology at ESPAM has kept its technological services continuously operational, to the benefit of the university community: if the main system or services fail, the backups are enabled quickly, allowing the systems to come back into operation immediately.
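The redundancy behavior described above, backups taking over when the main system fails, boils down to health-checked failover dispatch. A minimal sketch with illustrative class and method names (not the paper's actual implementation):

```python
class Service:
    """A toy service that can be marked healthy or down."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def ping(self):
        return self.healthy

    def handle(self, request):
        if not self.healthy:
            raise RuntimeError(f"{self.name} is down")
        return f"{self.name}:{request}"

class FailoverDispatcher:
    """Route each request to the first healthy service in priority order."""
    def __init__(self, services):
        self.services = services

    def handle(self, request):
        for svc in self.services:
            if svc.ping():
                return svc.handle(request)
        raise RuntimeError("no healthy service available")

primary = Service("primary")
backup = Service("backup")
dispatcher = FailoverDispatcher([primary, backup])

print(dispatcher.handle("mail"))   # primary:mail
primary.healthy = False            # simulate a failure in the main system
print(dispatcher.handle("mail"))   # backup:mail
```

Real deployments replace the `ping` probe with network health checks and virtual-IP or DNS switching, but the priority-ordered fallback logic is the same.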

  4. Experience and Lessons learnt from running high availability databases on Network Attached Storage

    CERN Document Server

    Guijarro, Juan Manuel; Segura Chinchilla, Nilo

    2008-01-01

    The Database and Engineering Services Group of CERN's Information Technology Department provides the Oracle-based central database services used in many activities at CERN. In order to provide high availability and ease of management for those services, a NAS (Network Attached Storage) based infrastructure has been set up. It runs several instances of Oracle RAC (Real Application Clusters) using NFS as shared disk space for RAC purposes and data hosting. It is composed of two private LANs to provide access to the NAS file servers and the Oracle RAC interconnect, both using network bonding. NAS nodes are configured in partnership to prevent single points of failure and to provide automatic NAS failover. This presentation describes that infrastructure and gives some advice on how to automate its management and setup using a fabric management framework such as Quattor. It also covers aspects related to NAS performance and monitoring, as well as data backup and archiving of such a facility using already existing i...

  5. Styx Grid Services: Lightweight Middleware for Efficient Scientific Workflows

    Directory of Open Access Journals (Sweden)

    J.D. Blower

    2006-01-01

    Full Text Available The service-oriented approach to performing distributed scientific research is potentially very powerful but is not yet widely used in many scientific fields. This is partly due to the technical difficulties involved in creating services and workflows and the inefficiency of many workflow systems with regard to handling large datasets. We present the Styx Grid Service, a simple system that wraps command-line programs and allows them to be run over the Internet exactly as if they were local programs. Styx Grid Services are very easy to create and use and can be composed into powerful workflows with simple shell scripts or more sophisticated graphical tools. An important feature of the system is that data can be streamed directly from service to service, significantly increasing the efficiency of workflows that use large data volumes. The status and progress of Styx Grid Services can be monitored asynchronously using a mechanism that places very few demands on firewalls. We show how Styx Grid Services can interoperate with Web Services and WS-Resources using suitable adapters.
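The service-to-service streaming that makes such workflows efficient can be imitated with lazy iterators: each stage consumes and yields records one at a time, so no intermediate dataset is ever materialized. This is only a plain-Python analogy; the actual system streams data over the Styx protocol.

```python
def source():
    """Stand-in for a large remote dataset, produced lazily."""
    for i in range(1, 6):
        yield i

def square_service(stream):
    """A 'service' that transforms records as they flow past."""
    for x in stream:
        yield x * x

def sum_service(stream):
    """A terminal 'service' that folds the stream into one value."""
    total = 0
    for x in stream:
        total += x
    return total

# Compose services like a shell pipeline: source | square | sum.
# At no point does the full dataset sit in memory between stages.
result = sum_service(square_service(source()))
print(result)   # 1 + 4 + 9 + 16 + 25 = 55
```

The same chaining style is what shell-script composition of Styx Grid Services provides, with network streams in place of in-process generators.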

  6. Effects of High Dissolved Inorganic and Organic Carbon Availability on the Physiology of the Hard Coral Acropora millepora from the Great Barrier Reef.

    Directory of Open Access Journals (Sweden)

    Friedrich W Meyer

    Full Text Available Coral reefs are facing major global and local threats due to climate change-induced increases in dissolved inorganic carbon (DIC) and because of land-derived increases in organic and inorganic nutrients. Recent research revealed that high availability of labile dissolved organic carbon (DOC) negatively affects scleractinian corals. Studies on the interplay of these factors, however, are lacking, but urgently needed to understand coral reef functioning under present and near future conditions. This experimental study investigated the individual and combined effects of ambient and high DIC (pCO2 403 μatm/pHTotal 8.2 and 996 μatm/pHTotal 7.8) and DOC (added as glucose: 0 and 294 μmol L-1, against a background DOC concentration of 83 μmol L-1) availability on the physiology (net and gross photosynthesis, respiration, dark and light calcification, and growth) of the scleractinian coral Acropora millepora (Ehrenberg, 1834) from the Great Barrier Reef over a 16 day interval. High DIC availability did not affect photosynthesis, respiration and light calcification, but significantly reduced dark calcification and growth by 50 and 23%, respectively. High DOC availability reduced net and gross photosynthesis by 51% and 39%, respectively, but did not affect respiration. DOC addition did not influence calcification, but significantly increased growth by 42%. The combination of high DIC and high DOC availability did not affect photosynthesis, light calcification, respiration or growth, but significantly decreased dark calcification when compared to both controls and DIC treatments. On the ecosystem level, high DIC concentrations may lead to reduced accretion and growth of reefs dominated by Acropora that under elevated DOC concentrations will likely exhibit reduced primary production rates, ultimately leading to loss of hard substrate and reef erosion. It is therefore important to consider the potential impacts of elevated DOC and DIC simultaneously to assess real world

  7. The energy aware smart home

    OpenAIRE

    Jahn, M.; Jentsch, M.; Prause, C.R.; Pramudianto, F.; Al-Akkad, A.; Reiners, R.

    2010-01-01

    In this paper, we present a novel smart home system integrating energy efficiency features. The smart home application is built on top of Hydra, a middleware framework that facilitates the intelligent communication of heterogeneous embedded devices through an overlay P2P network. We interconnect common devices available in private households and integrate wireless power metering plugs to gain access to energy consumption data. These data are used for monitoring and analyzing consumed energy o...

  8. An E-government Interoperability Platform Supporting Personal Data Protection Regulations

    OpenAIRE

    González, Laura; Echevarría, Andrés; Morales, Dahiana; Ruggia, Raúl

    2016-01-01

    Public agencies are increasingly required to collaborate with each other in order to provide high-quality e-government services. This collaboration is usually based on the service-oriented approach and supported by interoperability platforms. Such platforms are specialized middleware-based infrastructures enabling the provision, discovery and invocation of interoperable software services. In turn, given that personal data handled by governments are often very sensitive, most governments have ...

  9. SmartApps: Middle-ware for Adaptive Applications on Reconfigurable Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Lawrence Rauchwerger

    2009-08-09

    performance and the available resources to determine if, and by how much, performance could be improved if the application was restructured. Then, if the potential performance benefit outweighs the projected overhead costs, the application will restructure itself and the underlying system accordingly. The SAS framework includes performance monitoring and modeling components and mechanisms for performing the actual restructuring at various levels, including: (i) algorithmic adaptation, (ii) run-time software optimization (e.g., input sensitivity analysis), (iii) tuning reconfigurable OS services (scheduling policy, page size, etc.), and (iv) system configuration (e.g., selecting which, and how many, computational resources to use). SmartApps is being developed in the STAPL infrastructure. STAPL (the Standard Template Adaptive Parallel Library) is a framework for developing highly optimizable, adaptable, and portable parallel and distributed applications. It consists of a relatively new and still evolving collection of generic parallel algorithms and distributed containers and a run-time system (RTS) through which the application and compiler interact with the OS and hardware.

  10. Design and management of public health outreach using interoperable mobile multimedia: an analysis of a national winter weather preparedness campaign

    Directory of Open Access Journals (Sweden)

    Cesar Bandera

    2016-05-01

    Full Text Available Abstract Background The Office of Public Health Preparedness and Response (OPHPR) in the Centers for Disease Control and Prevention conducts outreach for public preparedness for natural and manmade incidents. In 2011, OPHPR conducted a nationwide mobile public health (m-Health) campaign that pushed brief videos on preparing for severe winter weather onto cell phones, with the objective of evaluating the interoperability of multimedia m-Health outreach with diverse cell phones (including handsets without Internet capability), carriers, and user preferences. Methods Existing OPHPR outreach material on winter weather preparedness was converted into mobile-ready multimedia using mobile marketing best practices to improve audiovisual quality and relevance. Middleware complying with opt-in requirements was developed to push nine bi-weekly multimedia broadcasts onto subscribers’ cell phones, and OPHPR promoted the campaign on its web site and to subscribers on its govdelivery.com notification platform. Multimedia, text, and voice messaging activity to/from the middleware was logged and analyzed. Results Adapting existing media, including web pages, PDF documents, and public service announcements, into mobile video was straightforward using open-source and commercial software. The middleware successfully delivered all outreach videos to all participants (a total of 504 videos) regardless of the participant’s device. 54% of videos were viewed on cell phones, 32% on computers, and 14% were retrieved by search engine web crawlers. 21% of participating cell phones did not have Internet access, yet still received and displayed all videos. The time from media push to media viewing on cell phones was half that of push to viewing on computers. Conclusions Video delivered through multimedia messaging can be as interoperable as text messages, while providing much richer information. This may be the only multimedia mechanism available to outreach campaigns

  11. Condom availability in high risk places and condom use: a study at district level in Kenya, Tanzania and Zambia

    Directory of Open Access Journals (Sweden)

    Sandøy Ingvild

    2012-11-01

    Full Text Available Abstract Background A number of studies from countries with severe HIV epidemics have found gaps in condom availability, even in places where there is a substantial potential for HIV transmission. Although reported condom use has increased in many African countries, there are often big differences by socioeconomic background. The aim of this study was to assess equity aspects of condom availability and uptake in three African districts to evaluate whether condom programmes are given sufficient priority. Methods Data on condom availability and use was examined in one district in Kenya, one in Tanzania and one in Zambia. The study was based on a triangulation of data collection methods in the three study districts: surveys in venues where people meet new sexual partners, population-based surveys and focus group discussions. The data was collected within an overall study on priority setting in health systems. Results At the time of the survey, condoms were observed in less than half of the high risk venues in two of the three districts and in 60% in the third district. Rural respondents in the population-based surveys perceived condoms to be less available and tended to be less likely to report condom use than urban respondents. Although focus group participants reported that condoms were largely available in their district, they expressed concerns related to the accessibility of free condoms. Conclusion As late as thirty years into the HIV epidemic there are still important gaps in the availability of condoms in places where people meet new sexual partners in these three African districts. 
Considering that previous studies have found that improved condom availability and accessibility in high risk places have the potential to increase condom use among people with multiple partners, the present study findings indicate that substantial further efforts should be made to ensure that condoms are easily accessible in places where sexual relationships are

  12. TIDE: Lightweight Device Composition for Enhancing Tabletop Environments with Smartphone Applications

    DEFF Research Database (Denmark)

    Sicard, Leo; Tabard, Aurelien; Ramos, Juan David Hincapie

    2013-01-01

    platforms have to be re-developed. At the same time, smartphones are pervasive computers that users carry around, with a large pool of applications. This paper presents TIDE, a lightweight device composition middleware to bring existing smartphone applications onto the tabletop. Through TIDE, applications running on the smartphone are displayed on the tabletop computer, and users can interact with them through the tabletop’s interactive surface. TIDE contributes to the areas of device composition and tabletops by providing an OS-level middleware that is transparent to the smartphone applications...

  13. Development of a virtual research environment in ITBL project

    Energy Technology Data Exchange (ETDEWEB)

    Kenji, Higuchi; Takayuki, Otani; Yukihiro, Hasegawa; Yoshio, Suzuki; Nobuhiro, Yamagishi; Kazuyuki, Kimura; Tetsuo, Aoyagi; Norihiro, Nakajima; Masahiro, Fukuda [Japan Atomic Energy Research Institute (Japan); Toshiyuki, Imamura [University of Electro-Communications (Japan); Genki, Yagawa [Tokyo University (Japan)

    2003-07-01

    With the progress of computers and high-speed networks, it becomes possible to perform research work efficiently by combining computing, data and experimental resources which are widely distributed over multiple sites, or by sharing information among collaborators who belong to different organizations. An experimental application of Grid computing was executed in the ITBL (Information Technology Based Laboratory) project promoted by six member institutes of MEXT (Ministry of Education, Culture, Sports, Science and Technology). Key technologies that are indispensable for the construction of a virtual organization were implemented in the ITBL middleware and examined in the experiment from the viewpoint of availability. The successful implementation and examination of these technologies, such as security infrastructure, component programming and collaborative visualization, in practical computer/network systems represents significant progress for the Science Grid in Japan.

  14. Development of a virtual research environment in ITBL project

    International Nuclear Information System (INIS)

    Kenji, Higuchi; Takayuki, Otani; Yukihiro, Hasegawa; Yoshio, Suzuki; Nobuhiro, Yamagishi; Kazuyuki, Kimura; Tetsuo, Aoyagi; Norihiro, Nakajima; Masahiro, Fukuda; Toshiyuki, Imamura; Genki, Yagawa

    2003-01-01

    With the progress of computers and high-speed networks, it becomes possible to perform research work efficiently by combining computing, data and experimental resources which are widely distributed over multiple sites, or by sharing information among collaborators who belong to different organizations. An experimental application of Grid computing was executed in the ITBL (Information Technology Based Laboratory) project promoted by six member institutes of MEXT (Ministry of Education, Culture, Sports, Science and Technology). Key technologies that are indispensable for the construction of a virtual organization were implemented in the ITBL middleware and examined in the experiment from the viewpoint of availability. The successful implementation and examination of these technologies, such as security infrastructure, component programming and collaborative visualization, in practical computer/network systems represents significant progress for the Science Grid in Japan.

  15. A Proposal for Production Data Collection on a Hybrid Production Line in Cooperation with MES

    Directory of Open Access Journals (Sweden)

    Znamenák Jaroslav

    2016-12-01

    Full Text Available Due to the increasingly competitive environment in the manufacturing sector, many industries need a computer-integrated engineering management system. The Manufacturing Execution System (MES) is a computer system designed for product manufacturing with high quality, low cost and minimum lead time. MES is a type of middleware providing the information required for the optimization of production, from the launching of a product order to its completion. There are many studies dealing with the advantages of the use of MES, but little research has been conducted on how to implement MES effectively. One solution to this issue is the use of key performance indicators (KPIs). KPIs are important to many strategic philosophies and practices for improving the production process. This paper describes a proposal for analyzing manufacturing system parameters with the use of KPIs.
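As a concrete example of a production KPI, Overall Equipment Effectiveness (OEE) is commonly computed as the product of availability, performance and quality rates. This is a generic textbook formula used for illustration, not necessarily one of the KPIs the paper proposes; the input figures below are made up.

```python
def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    """OEE = availability x performance x quality (standard definition)."""
    availability = run_time / planned_time                    # uptime share
    performance = (ideal_cycle_time * total_count) / run_time  # speed vs ideal
    quality = good_count / total_count                         # good-part share
    return availability * performance * quality

# Hypothetical shift: 480 min planned, 432 min actual run time,
# ideal cycle of 1 min/part, 400 parts produced, 380 of them good.
value = oee(480, 432, 1.0, 400, 380)
print(round(value, 3))   # 0.792
```

An MES would compute such KPIs continuously from collected production data, turning raw machine events into the optimization signals the abstract mentions.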

  16. Restrictions in Availability of Drugs Used for Suicide

    DEFF Research Database (Denmark)

    Nordentoft, Merete

    2007-01-01

    Availability of drugs with high lethality has been hypothesized to increase the risk of self-poisoning suicides. A literature search concerning deliberate self-poisoning and the effect of restricting access to drugs was conducted, and the effect of restrictions in availability of barbiturates, tricyclic antidepressants, dextropropoxyphene, and weak analgesics was reviewed. The correlations between method-specific and overall suicide rates and sales figures for barbiturates, dextropropoxyphene, weak analgesics, and tricyclic antidepressants were reviewed. It is concluded that restriction in availability of drugs with high case fatality should be a part of suicide prevention strategies.

  17. CDF GlideinWMS usage in Grid computing of high energy physics

    International Nuclear Information System (INIS)

    Zvada, Marian; Sfiligoi, Igor; Benjamin, Doug

    2010-01-01

    Many members of large science collaborations already have specialized grids available to advance their research, but need more computing resources for data analysis. This has forced the Collider Detector at Fermilab (CDF) collaboration to move beyond the usage of dedicated resources and start exploiting Grid resources. Nowadays, the CDF experiment increasingly relies on glidein-based computing pools for data reconstruction, Monte Carlo production and user data analysis, serving over 400 users through the central analysis farm middleware (CAF) on top of the Condor batch system and the CDF Grid infrastructure. Condor is designed as a distributed architecture, and its glidein mechanism of pilot jobs is ideal for abstracting the Grid by creating a virtual private computing pool. We present the first production use of the generic pilot-based Workload Management System (glideinWMS), an implementation of the pilot mechanism based on the Condor distributed infrastructure. CDF Grid computing uses glideinWMS for its data reconstruction on the FNAL campus Grid, and for user analysis and Monte Carlo production across the Open Science Grid (OSG). We review this computing model and setup, including the CDF-specific configuration within the glideinWMS system, which provides powerful scalability and makes Grid computing work like a local batch environment, with the ability to handle more than 10,000 running jobs at a time.

  18. Condom availability in high risk places and condom use: a study at district level in Kenya, Tanzania and Zambia.

    Science.gov (United States)

    Sandøy, Ingvild Fossgard; Blystad, Astrid; Shayo, Elizabeth H; Makundi, Emmanuel; Michelo, Charles; Zulu, Joseph; Byskov, Jens

    2012-11-26

    A number of studies from countries with severe HIV epidemics have found gaps in condom availability, even in places where there is a substantial potential for HIV transmission. Although reported condom use has increased in many African countries, there are often big differences by socioeconomic background. The aim of this study was to assess equity aspects of condom availability and uptake in three African districts to evaluate whether condom programmes are given sufficient priority. Data on condom availability and use was examined in one district in Kenya, one in Tanzania and one in Zambia. The study was based on a triangulation of data collection methods in the three study districts: surveys in venues where people meet new sexual partners, population-based surveys and focus group discussions. The data was collected within an overall study on priority setting in health systems. At the time of the survey, condoms were observed in less than half of the high risk venues in two of the three districts and in 60% in the third district. Rural respondents in the population-based surveys perceived condoms to be less available and tended to be less likely to report condom use than urban respondents. Although focus group participants reported that condoms were largely available in their district, they expressed concerns related to the accessibility of free condoms. As late as thirty years into the HIV epidemic there are still important gaps in the availability of condoms in places where people meet new sexual partners in these three African districts. Considering that previous studies have found that improved condom availability and accessibility in high risk places have the potential to increase condom use among people with multiple partners, the present study findings indicate that substantial further efforts should be made to ensure that condoms are easily accessible in places where sexual relationships are initiated.
Although condom distribution in drinking places has been

  19. Water Availability as a Measure of Cellulose Hydrolysis Efficiency

    DEFF Research Database (Denmark)

    Hsieh, Chia-Wen

    of sugars, salts, and surfactants impact the water relaxation time. Systems with high concentrations of sugars and salts tend to have low water availability, as these form strong interactions with water to keep their solubility, leaving less water available for hydrolysis. Thus, cellulase performance decreases. However, the addition of surfactants such as polyethylene glycol (PEG) increases the water mobility, leading to higher water availability, and ultimately higher glucose production. More specifically, the higher water availability boosts the activity of processive cellulases. Thus, water availability is vital for efficient hydrolysis, especially at high dry matter content where water availability is low. At high dry matter content, cellulase activity changes water interactions with biomass, affecting the water mobility. While swelling and fiber loosening also take place during hydrolysis...

  20. Radon availability in New Mexico

    International Nuclear Information System (INIS)

    McLemore, V.T.

    1995-01-01

    The New Mexico Bureau of Mines and Mineral Resources (NMBMMR), in cooperation with the Radiation Licensing and Registration Section of the New Mexico Environment Department (NMED) and the US Environmental Protection Agency (EPA), has been evaluating geologic and soil conditions that may contribute to elevated levels of indoor radon throughout New Mexico. Various data have been integrated and interpreted in order to determine areas of high radon availability. The purpose of this paper is to summarize some of these data for New Mexico and to discuss geologic controls on the distribution of radon. Areas in New Mexico have been identified from these data as having high radon availability. It is not the intent of this report to alarm the public, but to provide data on the distribution of radon throughout New Mexico.

  1. Differences in home food availability of high- and low-fat foods after a behavioral weight control program are regional not racial

    Directory of Open Access Journals (Sweden)

    West Delia

    2010-09-01

    Full Text Available Abstract Background Few studies, if any, have examined the impact of a weight control program on the home food environment in a diverse sample of adults. Understanding and changing the availability of certain foods in the home and food storage practices may be important for creating healthier home food environments and supporting effective weight management. Methods Overweight adults (n = 90; 27% African American) enrolled in a 6-month behavioral weight loss program in Vermont and Arkansas. Participants were weighed and completed measures of household food availability and food storage practices at baseline and post-treatment. We examined baseline differences and changes in high-fat food availability, low-fat food availability and the storage of foods in easily visible locations, overall and by race (African American or white) and region (Arkansas or Vermont). Results At post-treatment, the sample as a whole reported storing significantly fewer foods in visible locations around the house (-0.5 ± 2.3 foods), with no significant group differences. Both Arkansas African Americans (-1.8 ± 2.4 foods) and Arkansas white participants (-1.8 ± 2.6 foods) reported significantly greater reductions in the mean number of high-fat food items available in their homes post-treatment compared to Vermont white participants (-0.5 ± 1.3 foods), likely reflecting fewer high-fat foods reported in Vermont households at baseline. Arkansas African Americans lost significantly less weight (-3.6 ± 4.1 kg) than Vermont white participants (-8.3 ± 6.8 kg), while Arkansas white participants did not differ significantly from either group in weight loss (-6.2 ± 6.0 kg). However, home food environment changes were not associated with weight changes in this study. Conclusions Understanding the home food environment and how best to measure it may be useful for both obesity treatment and understanding patterns of obesity prevalence and health disparity.

  2. 76 FR 37795 - Notice of Availability of Government-Owned Inventions; Available for Licensing

    Science.gov (United States)

    2011-06-28

    ... DEPARTMENT OF DEFENSE Department of the Navy Notice of Availability of Government-Owned Inventions... U.S. Patent No. 7,316,194: Rudders for High-Speed Ships//U.S. Patent No. 7,322,786: Mobile Loader for Transfer of Containers Between Delivery Vehicles and Marine Terminal Cranes//U.S. Patent No. 7,324,016...

  3. Reduced Availability of Sugar-Sweetened Beverages and Diet Soda Has a Limited Impact on Beverage Consumption Patterns in Maine High School Youth

    Science.gov (United States)

    Whatley Blum, Janet E.; Davee, Anne-Marie; Beaudoin, Christina M.; Jenkins, Paul L.; Kaley, Lori A.; Wigand, Debra A.

    2008-01-01

    Objective: To examine change in high school students' beverage consumption patterns pre- and post-intervention of reduced availability of sugar-sweetened beverages (SSB) and diet soda in school food venues. Design: A prospective, quasi-experimental, nonrandomized study design. Setting: Public high schools. Participants: A convenience sample from…

  4. Zorilla: A P2P Middleware for Real-World Distributed Systems

    NARCIS (Netherlands)

    Drost, N.; van Nieuwpoort, R.V.; Maassen, J.; Seinstra, F.J.; Bal, H.E.

    2011-01-01

    The inherent complex nature of current distributed computing architectures hinders the widespread adoption of these systems for mainstream use. In general, users have access to a highly heterogeneous set of compute resources, which may include clusters, grids, desktop grids, clouds, and other

  5. Gluten-Free Foods in Rural Maritime Provinces: Limited Availability, High Price, and Low Iron Content.

    Science.gov (United States)

    Jamieson, Jennifer A; Gougeon, Laura

    2017-12-01

    We investigated the price difference between gluten-free (GF) and gluten-containing (GC) foods available in rural Maritime stores. GF foods and comparable GC items were sampled through random visits to 21 grocery stores in nonurban areas of Nova Scotia, New Brunswick, and Prince Edward Island, Canada. Wilcoxon rank tests were conducted on price per 100 g of product, and on the price relative to iron content; 2226 GF foods (27.2% staple items, defined as breads, cereals, flours, and pastas) and 1625 GC foods were sampled, with an average ± SD of 66 ± 2.7 GF items per store in rural areas and 331 ± 12 in towns. The median price of GF items ($1.76/100 g) was higher than that of GC counterparts ($1.05/100 g), and iron density was approximately 50% lower. GF staple foods were priced 5% higher in rural stores than in town stores. Although the variety of GF products available to consumers has improved, higher cost and lower nutrient density remain issues in nonurban Maritime regions. Dietitians working in nonurban areas should consider the relatively high price, difficult access, and low iron density of key GF items, and work together with clients to find alternatives and enhance their food literacy.

  6. Generic Software Architecture for Launchers

    Science.gov (United States)

    Carre, Emilien; Gast, Philippe; Hiron, Emmanuel; Leblanc, Alain; Lesens, David; Mescam, Emmanuelle; Moro, Pierre

    2015-09-01

    The definition and reuse of a generic software architecture for launchers is not so usual, for several reasons: the number of European launcher families is very small (Ariane 5 and Vega for the last decades); the real-time constraints (reactivity and determinism needs) are very hard; and low levels of versatility are required (often implying an ad hoc development of the launcher mission). In comparison, satellites are often built on a generic platform made up of reusable hardware building blocks (processors, star-trackers, gyroscopes, etc.) and reusable software building blocks (middleware, TM/TC, On Board Control Procedure, etc.). While some of these reasons are still valid (e.g. the limited number of developments), the increase in available CPU power today makes achievable an approach based on a generic time-triggered middleware (ensuring the full determinism of the system) and a centralised mission and vehicle management (offering more flexibility in the design and facilitating long-term maintenance). This paper presents an example of generic software architecture which could be envisaged for future launchers, based on the previously described principles and supported by model-driven engineering and automatic code generation.

  7. A Survey on Intermediation Architectures for Underwater Robotics

    Directory of Open Access Journals (Sweden)

    Xin Li

    2016-02-01

    Full Text Available Currently, there is a plethora of solutions regarding interconnectivity and interoperability for networked robots so that they will fulfill their purposes in a coordinated manner. In addition to that, middleware architectures are becoming increasingly popular due to the advantages that they are capable of guaranteeing (hardware abstraction, information homogenization, easy access for the applications above, etc.). However, there are still scarce contributions regarding the global state of the art in intermediation architectures for underwater robotics. As far as the area of robotics is concerned, this is a major issue that must be tackled in order to get a holistic view of the existing proposals. This challenge is addressed in this paper by studying the most compelling pieces of work for this kind of software development in the current literature. The studied works have been assessed according to their most prominent features and capabilities. Furthermore, by studying the individual pieces of work and classifying them, several common weaknesses have been revealed and are highlighted. This provides a starting ground for the development of a middleware architecture for underwater robotics capable of dealing with these issues.

  8. arcControlTower: the System for Atlas Production and Analysis on ARC

    International Nuclear Information System (INIS)

    Filipcic, Andrej

    2011-01-01

    PanDA, the Atlas management and distribution system for production and analysis jobs on EGEE and OSG clusters, is based on pilot jobs to increase the throughput and stability of job execution on the grid. The ARC middleware uses a specific approach which tightly connects the job requirements with cluster capabilities like resource usage, software availability and caching of input files. The pilot concept renders the ARC features useless. The arcControlTower is the job submission system which merges the pilot benefits and ARC advantages. It takes the pilot payload from the PanDA server and submits the jobs to the NorduGrid ARC clusters as regular jobs, with all the job resources known in advance. All the pilot communication with the PanDA server is done by the arcControlTower, so it plays the role of a pilot factory and the pilot itself. There are several advantages to this approach: no grid middleware is needed on the worker nodes, the fair-share between production and user jobs is tuned with the arcControlTower load parameters, and the jobs can be controlled by ARC client tools. The system could be extended to other submission systems using central distribution.
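
    The pull-to-push bridge this record describes can be sketched schematically. This is a toy illustration, not arcControlTower code: the PanDA server and the ARC cluster are stand-ins (a plain in-memory queue and a stub class), and all names are hypothetical.

    ```python
    import queue

    class StubArcCluster:
        """Stand-in for an ARC compute element (hypothetical)."""
        def __init__(self):
            self.jobs = {}

        def submit(self, payload):
            job_id = f"arc-{len(self.jobs)}"
            self.jobs[job_id] = payload
            return job_id

    class ControlTower:
        """Toy bridge between a pull-model job server and a push-model cluster."""
        def __init__(self, panda_server, arc_cluster):
            self.panda = panda_server   # pull side: queue of job payloads
            self.arc = arc_cluster      # push side: regular batch submission

        def cycle(self, max_jobs):
            """One submission cycle: pull payloads, push them as regular jobs."""
            submitted = []
            for _ in range(max_jobs):
                try:
                    payload = self.panda.get_nowait()  # pilot-style pull
                except queue.Empty:
                    break
                # Resources are known in advance, so the payload is submitted
                # as an ordinary job; no middleware is needed on worker nodes.
                submitted.append(self.arc.submit(payload))
            return submitted
    ```

    In this reading, the control tower is the only component that speaks the pilot protocol, which is why the record can describe it as "a pilot factory and the pilot itself".
    
    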

  9. ALICE-ARC integration

    International Nuclear Information System (INIS)

    Anderlik, C; Gregersen, A R; Kleist, J; Peters, A; Saiz, P

    2008-01-01

    AliEn or Alice Environment is the Grid middleware developed and used within the ALICE collaboration for storing and processing data in a distributed manner. ARC (Advanced Resource Connector) is the Grid middleware deployed across the Nordic countries and gluing together the resources within the Nordic Data Grid Facility (NDGF). In this paper we will present our approach to integrating AliEn and ARC, in the sense that ALICE data management and job processing can be carried out on the NDGF infrastructure, using the client tools available in AliEn. The inter-operation has two aspects: one is the data management part and the second is the job management aspect. The first aspect was solved by using dCache across NDGF to handle data. Therefore, we will concentrate on the second part. Solving it was somewhat cumbersome, mainly due to the different computing models employed by AliEn and ARC. AliEn uses an agent-based pull model while ARC handles jobs through the more 'traditional' push model. The solution comes as a module implementing the functionalities necessary to achieve AliEn job submission and management on ARC-enabled sites.

  10. Method for the direct determination of available carbohydrates in low-carbohydrate products using high-performance anion exchange chromatography.

    Science.gov (United States)

    Ellingson, David; Potts, Brian; Anderson, Phillip; Burkhardt, Greg; Ellefson, Wayne; Sullivan, Darryl; Jacobs, Wesley; Ragan, Robert

    2010-01-01

    An improved method for direct determination of available carbohydrates in low-level products has been developed and validated for a low-carbohydrate soy infant formula. The method involves modification of an existing direct determination method to improve specificity, accuracy, detection levels, and run times through a more extensive enzymatic digestion to capture all available (or potentially available) carbohydrates. The digestion hydrolyzes all common sugars, starch, and starch derivatives down to their monosaccharide components, glucose, fructose, and galactose, which are then quantitated by high-performance anion-exchange chromatography with photodiode array detection. Method validation consisted of specificity testing and 10 days of analyzing various spike levels of mixed sugars, maltodextrin, and corn starch. The overall RSD was 4.0% across all sample types, which contained within-day and day-to-day components of 3.6 and 3.4%, respectively. Overall average recovery was 99.4% (n = 10). Average recovery for individual spiked samples ranged from 94.1 to 106% (n = 10). It is expected that the method could be applied to a variety of low-carbohydrate foods and beverages.

  11. Multicamera Real-Time 3D Modeling for Telepresence and Remote Collaboration

    Directory of Open Access Journals (Sweden)

    Benjamin Petit

    2010-01-01

    inside a distributed vision pipeline. It also adopts the hierarchical component approach of the FlowVR middleware to enforce software modularity and enable distributed executions. Results show high refresh rates and low latencies obtained by taking advantage of the I/O and computing resources of PC clusters. The applications we have developed demonstrate the quality of the visual and mechanical presence with a single platform and with a dual platform that allows telecollaboration.

  12. ROSAPL: towards a heterogeneous multi‐robot system and Human interaction framework

    OpenAIRE

    Boronat Roselló, Emili

    2014-01-01

    The appearance of numerous robotic frameworks and middleware has provided researchers with reliable hardware and software units, avoiding the need to develop ad-hoc platforms and letting them focus their work on improving the robots' high-level capabilities and behaviours. Despite this, none of these frameworks considers social capabilities as a factor in robot design. In a world that every day seems more and more connected, with the slow but steady advance of th...

  13. Does High Educational Attainment Limit the Availability of Romantic Partners?

    Science.gov (United States)

    Burt, Isaac; Lewis, Sally V.; Beverly, Monifa G.; Patel, Samir H.

    2010-01-01

    Research indicates that highly educated individuals endure hardships in finding suitable romantic partners. Romantic hardships affect social and emotional adjustment levels, leading to low self-efficacy in relationship decision making. To address the need for research pertaining to this topic, the authors explored the experiences of eight…

  14. A review of currently available high performance interactive graphics systems

    International Nuclear Information System (INIS)

    Clark, S.A.; Harvey, J.

    1981-12-01

    A survey of several interactive graphics systems is given, all but one of which being based on calligraphic technology, which are being considered for a new High Energy Physics graphics facility at RAL. A brief outline of the system architectures is given, the detailed features being summarised in an appendix, and their relative merits are discussed. (U.K.)

  15. Central serotonin transporter availability in highly obese individuals compared with non-obese controls: A [11C] DASB positron emission tomography study

    International Nuclear Information System (INIS)

    Hesse, Swen; Sabri, Osama; Rullmann, Michael; Luthardt, Julia; Becker, Georg-Alexander; Bresch, Anke; Patt, Marianne; Meyer, Philipp M.; Winter, Karsten; Hankir, Mohammed K.; Zientek, Franziska; Reissig, Georg; Drabe, Mandy; Regenthal, Ralf; Schinke, Christian; Arelin, Katrin; Lobsien, Donald; Fasshauer, Mathias; Fenske, Wiebke K.; Stumvoll, Michael; Blueher, Matthias

    2016-01-01

    The role of the central serotonin (5-hydroxytryptamine, 5-HT) system in feeding has been extensively studied in animals with the 5-HT family of transporters (5-HTT) being identified as key molecules in the regulation of satiety and body weight. Aberrant 5-HT transmission has been implicated in the pathogenesis of human obesity by in vivo positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging techniques. However, results obtained thus far from studies of central 5-HTT availability have been inconsistent, which is thought to be brought about mainly by the low number of individuals with a high body mass index (BMI) previously used. The aim of this study was therefore to assess 5-HTT availability in the brains of highly obese otherwise healthy individuals compared with non-obese healthy controls. We performed PET using the 5-HTT selective radiotracer [11C]DASB on 30 highly obese (BMI range between 35 and 55 kg/m²) and 15 age- and sex-matched non-obese volunteers (BMI range between 19 and 27 kg/m²) in a cross-sectional study design. The 5-HTT binding potential (BPND) was used as the outcome parameter. On a group level, there was no significant difference in 5-HTT BPND in various cortical and subcortical regions in individuals with the highest BMI compared with non-obese controls, while statistical models showed minor effects of age, sex, and the degree of depression on 5-HTT BPND. The overall finding of a lack of significantly altered 5-HTT availability together with its high variance in obese individuals justifies the investigation of individual behavioral responses to external and internal cues which may further define distinct phenotypes and subgroups in human obesity. (orig.)

  16. PREFACE: International Conference on Computing in High Energy and Nuclear Physics (CHEP'09)

    Science.gov (United States)

    Gruntorad, Jan; Lokajicek, Milos

    2010-11-01

    The 17th International Conference on Computing in High Energy and Nuclear Physics (CHEP) was held on 21-27 March 2009 in Prague, Czech Republic. CHEP is a major series of international conferences for physicists and computing professionals from the worldwide High Energy and Nuclear Physics community, Computer Science, and Information Technology. The CHEP conference provides an international forum to exchange information on computing experience and needs for the community, and to review recent, ongoing and future activities. Recent conferences were held in Victoria, Canada in 2007, Mumbai, India in 2006, Interlaken, Switzerland in 2004, San Diego, USA in 2003, Beijing, China in 2001, and Padua, Italy in 2000. The CHEP'09 conference had 600 attendees with a program that included plenary sessions of invited oral presentations, a number of parallel sessions comprising 200 oral and 300 poster presentations, and an industrial exhibition. We thank all the presenters for the excellent scientific content of their contributions to the conference. Conference tracks covered topics on Online Computing, Event Processing, Software Components, Tools and Databases, Hardware and Computing Fabrics, Grid Middleware and Networking Technologies, Distributed Processing and Analysis, and Collaborative Tools. The conference included excursions to Prague and other Czech cities and castles, and a banquet held at the Zofin palace in Prague. The next CHEP conference will be held in Taipei, Taiwan on 18-22 October 2010. We would like to thank the Ministry of Education, Youth and Sports of the Czech Republic and the EU ACEOLE project for supporting the conference, as well as the commercial sponsors, the International Advisory Committee, and the Local Organizing Committee members representing the five collaborating Czech institutions: Jan Gruntorad (co-chair), CESNET, z.s.p.o., Prague Andrej Kugler, Nuclear Physics Institute AS CR v.v.i., Rez Rupert Leitner, Charles University in Prague, Faculty of Mathematics and

  17. ANALYSIS OF AVAILABILITY AND RELIABILITY IN RHIC OPERATIONS

    International Nuclear Information System (INIS)

    PILAT, F.; INGRASSIA, P.; MICHNOFF, R.

    2006-01-01

    RHIC has been successfully operated for 5 years as a collider for different species, ranging from heavy ions including gold and copper, to polarized protons. We present a critical analysis of reliability data for RHIC that not only identifies the principal factors limiting availability but also evaluates critical choices made at design time and assesses their impact on present machine performance. RHIC availability data are typical when compared to similar high-energy colliders. The critical analysis of operations data is the basis for studies and plans to improve RHIC machine availability beyond the 50-60% typical of high-energy colliders.

  18. Availability Improvement of German Nuclear Power Plants

    International Nuclear Information System (INIS)

    Wilhelm, Oliver

    2008-01-01

    High availability is important for the safety and economical performance of Nuclear Power Plants (NPP). The strategy for availability improvement in a typical German PWR shall be discussed here. Key parameters for strategy development are plant design, availability of safety systems, component reliability, preventive maintenance and outage organization. Plant design, availability of safety systems and component reliability are to a greater extent given parameters that can hardly be influenced after the construction of the plant. But they set the frame for maintenance and outage organisation which have shown to have a large influence on the availability of the plant. (author)

  19. Review of available power sources

    International Nuclear Information System (INIS)

    Beard, Carl

    2006-01-01

    Klystrons and triodes have been the accepted choice for particle accelerators because they produce high power RF and offer high gain (60 dB) with efficiencies of ∼50%. Although fairly new to the market, inductive output tubes (IOTs) have become available at L-band frequencies and have maintained their high efficiency. The development of superconducting RF at the L-band frequency allows IOTs to become the choice for future accelerator programs. Due to the operational nature of SRF technology in energy recovery mode, there is no longer the requirement for large amounts of RF power from single sources. This report reviews some of the developments in RF power sources suitable for energy recovery linacs (ERLs)

  20. Why are common quality and development policies needed?

    International Nuclear Information System (INIS)

    Alandes, M; Abad, A; Dini, L; Guerrero, P

    2012-01-01

    The EMI project is based on the collaboration of four major middleware projects in Europe, all already developing middleware products and having their pre-existing strategies for developing, releasing and controlling their software artefacts. In total, the EMI project is made up of about thirty individual development teams, called “Product Teams” in EMI. A Product Team is responsible for the entire lifecycle of specific products or small groups of tightly coupled products, including the development of test-suites to be peer reviewed within the overall certification process. The Quality Assurance team in EMI (European Middleware Initiative), as requested by the grid infrastructures and the EU funding agency, must support the teams in providing uniform releases and interoperable middleware distributions, with a common degree of verification and validation of the software and with metrics and objective criteria to compare product quality and evolution over time. In order to achieve these goals, the QA team in EMI has defined, and now monitors, the development and release work with a set of comprehensive policies covering all aspects of a software project such as packaging, configuration, documentation, certification, release management and testing. This contribution will present, with practical and useful examples, the achievements, problems encountered and lessons learned in the definition, implementation and review of Quality Assurance and Development policies. It also describes how these policies have been implemented in the EMI project, including the benefits and difficulties encountered by the developers in the project. The main value of this contribution is that none of the policies explained depends on EMI or grid environments, so they can be used by any software project.

  1. XACML profile and implementation for authorization interoperability between OSG and EGEE

    International Nuclear Information System (INIS)

    Garzoglio, G; Altunay, M; Chadwick, K; Hesselroth, T D; Levshina, T; Sfiligoi, I; Alderman, I; Miller, Z; Ananthakrishnan, R; Bester, J; Ciaschini, V; Ferraro, A; Forti, A; Demchenko, Y; Groep, D; Koeroo, O; Hover, J; Packard, J; Joie, C La; Sagehaug, H

    2010-01-01

    The Open Science Grid (OSG) and the Enabling Grids for E-sciencE (EGEE) have a common security model, based on Public Key Infrastructure. Grid resources grant access to users because of their membership in a Virtual Organization (VO), rather than on personal identity. Users push VO membership information to resources in the form of identity attributes, thus declaring that resources will be consumed on behalf of a specific group inside the organizational structure of the VO. Resources contact an access policies repository, centralized at each site, to grant the appropriate privileges for that VO group. Before the work in this paper, despite the commonality of the model, OSG and EGEE used different protocols for the communication between resources and the policy repositories. Hence, middleware developed for one Grid could not naturally be deployed on the other Grid, since the authorization module of the middleware would have to be enhanced to support the other Grid's communication protocol. In addition, maintenance and support for different authorization call-out protocols represents a duplication of effort for our relatively small community. To address these issues, OSG and EGEE initiated a joint project on authorization interoperability. The project defined a common communication protocol and attribute identity profile for authorization call-out and provided implementation and integration with major Grid middleware. The activity had resonance with middleware development communities, such as the Globus Toolkit and Condor, who decided to join the collaboration and contribute requirements and software. In this paper, we discuss the main elements of the profile, its implementation, and deployment in EGEE and OSG. We focus in particular on the operations of the authorization infrastructures of both Grids.
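
    The authorization call-out described above can be illustrated with a minimal sketch. This is not the actual XACML profile: the attribute names, the `SitePolicyService` stub, and the policy table are hypothetical simplifications of the VO-group-to-privilege mapping the record describes.

    ```python
    def build_request(subject_dn, vo_group, resource, action):
        """Shape of an authorization call-out request. The field names are
        illustrative, not the identifiers defined by the common profile."""
        return {
            "subject": {"x509-dn": subject_dn, "vo-group": vo_group},
            "resource": resource,
            "action": action,
        }

    class SitePolicyService:
        """Stand-in for the site-central access-policy repository that both
        Grids consult before granting privileges."""
        def __init__(self, grants):
            # grants: {(vo_group, resource): set of allowed actions}
            self.grants = grants

        def decide(self, request):
            allowed = self.grants.get(
                (request["subject"]["vo-group"], request["resource"]), set())
            return "Permit" if request["action"] in allowed else "Deny"
    ```

    The interoperability point is that once both Grids agree on the request shape (the "common communication protocol and attribute identity profile"), the same middleware can call either policy service unchanged.
    
    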

  2. XACML profile and implementation for authorization interoperability between OSG and EGEE

    Energy Technology Data Exchange (ETDEWEB)

    Garzoglio, G.; Alderman, I.; Altunay, M.; Ananthakrishnan, R.; Bester, J.; Chadwick, K.; Ciaschini, V.; Demchenko, Y.; Ferraro, A.; Forti, A.; Groep, D.; /Fermilab /Wisconsin U., Madison /Argonne /INFN, CNAF /Amsterdam U. /NIKHEF, Amsterdam /Brookhaven /SWITCH, Zurich /Bergen Coll. Higher Educ.

    2009-05-01

    The Open Science Grid (OSG) and the Enabling Grids for E-sciencE (EGEE) have a common security model, based on Public Key Infrastructure. Grid resources grant access to users because of their membership in a Virtual Organization (VO), rather than on personal identity. Users push VO membership information to resources in the form of identity attributes, thus declaring that resources will be consumed on behalf of a specific group inside the organizational structure of the VO. Resources contact an access policies repository, centralized at each site, to grant the appropriate privileges for that VO group. Before the work in this paper, despite the commonality of the model, OSG and EGEE used different protocols for the communication between resources and the policy repositories. Hence, middleware developed for one Grid could not naturally be deployed on the other Grid, since the authorization module of the middleware would have to be enhanced to support the other Grid's communication protocol. In addition, maintenance and support for different authorization call-out protocols represents a duplication of effort for our relatively small community. To address these issues, OSG and EGEE initiated a joint project on authorization interoperability. The project defined a common communication protocol and attribute identity profile for authorization call-out and provided implementation and integration with major Grid middleware. The activity had resonance with middleware development communities, such as the Globus Toolkit and Condor, who decided to join the collaboration and contribute requirements and software. In this paper, we discuss the main elements of the profile, its implementation, and deployment in EGEE and OSG. We focus in particular on the operations of the authorization infrastructures of both Grids.

  3. Availability of Supportive Facilities for Effective Teaching

    Directory of Open Access Journals (Sweden)

    Eugene Okyere-Kwakye

    2013-10-01

    Full Text Available The work environment of teachers has been identified by many researchers as one of the key determinants of quality teaching. Unlike the private schools, there has been a continuing sentiment that most government Junior High Schools in Ghana do not perform satisfactorily in the Basic Education Certificate Examination (B.E.C.E.). As the majority of Ghanaian pupils school in this sector of education, this argument is worthy of investigation. The purpose of this study is therefore to identify the availability and adequacy of certain necessary school facilities within the environment of Junior High Schools in the New Juaben Municipality, Eastern Region of Ghana. A questionnaire was used to collect data from two hundred (200) teachers who were selected from twenty (20) Junior High Schools in the New Juaben Municipality. The results reveal that facilities like furniture for pupils, urinal and toilet facilities, and classroom blocks were available but not adequate. However, computer laboratories, library books, staff common rooms and teachers' accommodation were unavailable. Practical implications of these results are discussed.

  4. Video personalization for usage environment

    Science.gov (United States)

    Tseng, Belle L.; Lin, Ching-Yung; Smith, John R.

    2002-07-01

    A video personalization and summarization system is designed and implemented incorporating usage environment to dynamically generate a personalized video summary. The personalization system adopts the three-tier server-middleware-client architecture in order to select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. Our semantic metadata is provided through the use of the VideoAnnEx MPEG-7 Video Annotation Tool. When the user initiates a request for content, the client communicates the MPEG-21 usage environment description along with the user query to the middleware. The middleware is powered by the personalization engine and the content adaptation engine. Our personalization engine includes the VideoSue Summarization on Usage Environment engine that selects the optimal set of desired contents according to user preferences. Afterwards, the adaptation engine performs the required transformations and compositions of the selected contents for the specific usage environment using our VideoEd Editing and Composition Tool. Finally, two personalization and summarization systems are demonstrated for the IBM Websphere Portal Server and for the pervasive PDA devices.
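
    The selection step performed by a summarization engine such as VideoSue can be approximated by a simple greedy heuristic: maximize preference-weighted value within the user's time budget. This is a hedged sketch only; the segment fields, scoring rule, and parameter names are invented for illustration and are not the paper's algorithm.

    ```python
    def select_summary(segments, preferences, time_budget):
        """Greedy selection of video segments for a personalized summary.

        segments:     list of {"id", "topic", "duration"} dicts (illustrative)
        preferences:  {topic: weight} from the user's profile
        time_budget:  seconds available in the usage environment
        """
        # Rank segments by preference value per second of playback time.
        scored = sorted(
            segments,
            key=lambda s: preferences.get(s["topic"], 0.0) / s["duration"],
            reverse=True)
        chosen, used = [], 0
        for seg in scored:
            if used + seg["duration"] <= time_budget:
                chosen.append(seg["id"])
                used += seg["duration"]
        return chosen
    ```

    A middleware tier would run such a selection server-side, then hand the chosen segments to an adaptation engine for transcoding to the client device.
    
    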

  5. A seamless ubiquitous emergency medical service for crisis situations.

    Science.gov (United States)

    Lin, Bor-Shing

    2016-04-01

    In crisis situations, a seamless ubiquitous communication is necessary to provide emergency medical service to save people's lives. An excellent prehospital emergency medicine provides immediate medical care to increase the survival rate of patients. On their way to the hospital, ambulance personnel must transmit real-time and uninterrupted patient information to the hospital to apprise the physician of the situation and provide options to the ambulance personnel. In emergency and crisis situations, many communication channels can be unserviceable because of damage to equipment or loss of power. Thus, data transmission over wireless communication to achieve uninterrupted network services is a major obstacle. This study proposes a mobile middleware for cognitive radio (CR) for improving the wireless communication link. CRs can sense their operating environment and optimize the spectrum usage so that the mobile middleware can integrate the existing wireless communication systems with a seamless communication service in heterogeneous network environments. Eventually, the proposed seamless mobile communication middleware was ported into an embedded system, which is compatible with the actual network environment without the need for changing the original system architecture. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
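
    The link-selection behaviour of a cognitive-radio middleware (sense the environment, then use the best available channel) can be caricatured in a few lines. The tuple layout and the single quality metric here are illustrative assumptions, not the system's actual design.

    ```python
    def pick_link(links):
        """Pick the best usable link from a heterogeneous set.

        links: list of (name, available, quality) tuples, where `available`
        reflects spectrum sensing and `quality` is a 0..1 score (both
        hypothetical stand-ins for real channel measurements).
        Returns the chosen link name, or None if every link is down.
        """
        usable = [link for link in links if link[1]]
        if not usable:
            return None
        return max(usable, key=lambda link: link[2])[0]
    ```

    In a crisis scenario, re-running such a selection whenever sensing results change is what lets the middleware keep a patient-data stream up as individual channels fail.
    
    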

  6. A high-level dynamic analysis approach for studying global process plant availability and production time in the early stages of mining projects

    Directory of Open Access Journals (Sweden)

    Dennis Travagini Cremonese

    Full Text Available Abstract In the early stage of front-end studies of a Mining Project, the global availability (i.e., the number of hours the plant is available for production) and production time (the number of hours the plant is actually operated with material) of the process plant are normally assumed based on the experience of the study team. Understanding and defining the availability hours at the early stages of the project is important for the future stages, as drastic changes in work hours will impact the economics of the project. An innovative high-level dynamic modeling approach has been developed to assist in the rapid evaluation of assumptions made by the study team. This model incorporates systems and equipment that are commonly used in mining projects, from the mine to product stockyard discharge after the processing plant. It includes subsystems that simulate all the component handling and the major process plant systems required for a mining project. The output data provided by this high-level dynamic simulation approach will enhance the confidence level of engineering carried out during the early stage of the project. This study discusses the capabilities of the approach and a test case compared with standard techniques used in mining project front-end studies.
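
    The kind of high-level availability estimate this record discusses can also be obtained from a small Monte Carlo model: alternate exponentially distributed up-times and repair times, then measure the fraction of the horizon spent running. This is a generic sketch under assumed MTBF/MTTR parameters, not the study's model.

    ```python
    import random

    def simulate_availability(hours, mtbf, mttr, seed=0):
        """Crude Monte Carlo estimate of plant availability.

        hours: simulation horizon (e.g. 8760 for one year)
        mtbf:  mean time between failures, hours (illustrative value)
        mttr:  mean time to repair, hours (illustrative value)
        Returns the fraction of the horizon the plant was up; the long-run
        expectation is mtbf / (mtbf + mttr).
        """
        rng = random.Random(seed)          # fixed seed: reproducible runs
        t, up = 0.0, 0.0
        while t < hours:
            run = rng.expovariate(1.0 / mtbf)   # time until next failure
            up += min(run, hours - t)
            t += run
            if t >= hours:
                break
            t += rng.expovariate(1.0 / mttr)    # repair outage
        return up / hours
    ```

    Comparing such a simulated fraction against the study team's assumed availability hours is one cheap way to sanity-check those assumptions before detailed dynamic modeling.
    
    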

  7. Using MDE to Build a Schizofrenic Middleware for Home/Building Automation

    OpenAIRE

    Nain , Grégory; Daubert , Erwan; Barais , Olivier; Jézéquel , Jean-Marc

    2008-01-01

    International audience; In the personal or corporate spheres, the home/office of tomorrow is soon to be the home/office of today, with a plethora of networked devices embedded in appliances, such as mobile phones, televisions, thermostats, and lamps, making it possible to automate and remotely control many basic household functions with a high degree of accuracy. In this domain, technological standardization is still in its infancy, or remains fragmented. The different functionalities o...

  8. A Platform for Mobile Service Provisioning Based on SOA-Integration

    Science.gov (United States)

    Decker, Michael; Bulander, Rebecca

    A middleware platform designed for provisioning data services to mobile computers (e.g. smartphones or PDAs) over wireless data communication has to offer a variety of features. Some of these features have to be provided by external parties, e.g. billing or content syndication. The integration of all these features while considering mobile-specific challenges is a demanding task. In the article at hand we therefore describe a middleware platform for mobile services which follows the idea of a so-called Enterprise Service Bus (ESB). We explain the concept of the ESB and argue why an ESB is an appropriate foundation for a mobile service provisioning platform.
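    The ESB idea described above can be reduced to a few lines: services register handlers on a bus and messages are routed by topic, so external features such as billing can be plugged in without point-to-point coupling. The sketch below is a minimal, illustrative Python message bus; the class and topic names are invented and are not from the platform in the article.

```python
class ServiceBus:
    """Toy message bus: route messages to services subscribed by topic."""

    def __init__(self):
        self._handlers = {}  # topic -> list of handler callables

    def subscribe(self, topic, handler):
        self._handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        # Deliver the message to every subscribed service; collect replies.
        return [h(message) for h in self._handlers.get(topic, [])]


bus = ServiceBus()
# An external billing party plugs in simply by subscribing to a topic.
bus.subscribe("billing", lambda m: f"billed {m['user']} for {m['service']}")
receipts = bus.publish("billing", {"user": "alice", "service": "news-feed"})
```

A real ESB adds message transformation, reliable delivery, and protocol adapters on top of this routing core.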

  9. Availability of healthy snack foods and beverages in stores near high-income urban, low-income urban, and rural elementary and middle schools in Oregon.

    Science.gov (United States)

    Findholt, Nancy E; Izumi, Betty T; Nguyen, Thuan; Pickus, Hayley; Chen, Zunqiu

    2014-08-01

    Food stores near schools are an important source of snacks for children. However, few studies have assessed availability of healthy snacks in these settings. The aim of this study was to assess availability of healthy snack foods and beverages in stores near schools and examine how availability of healthy items varied by poverty level of the school and rural-urban location. Food stores were selected based on their proximity to elementary/middle schools in three categories: high-income urban, low-income urban, and rural. Audits were conducted within the stores to assess the presence or absence of 48 items in single-serving sizes, including healthy beverages, healthy snacks, fresh fruits, and fresh vegetables. Overall, availability of healthy snack foods and beverages was low in all stores. However, there was significant cross-site variability in availability of several snack and fruit items, with stores near high-income urban schools having higher availability, compared to stores near low-income urban and/or rural schools. Stores near rural schools generally had the lowest availability, although several fruits were found more often in rural stores than in urban stores. There were no significant differences in availability of healthy beverages and fresh vegetables across sites. Availability of healthy snack foods and beverages was limited in stores near schools, but these limitations were more severe in stores proximal to rural and low-income schools. Given that children frequent these stores to purchase snacks, efforts to increase the availability of healthy products, especially in stores near rural and low-income schools, should be a priority.

  10. An overview of the DII-HEP OpenStack based CMS data analysis

    Science.gov (United States)

    Osmani, L.; Tarkoma, S.; Eerola, P.; Komu, M.; Kortelainen, M. J.; Kraemer, O.; Lindén, T.; Toor, S.; White, J.

    2015-05-01

    An OpenStack-based private cloud with the Cluster File System has been built and used with both CMS analysis and Monte Carlo simulation jobs in the Datacenter Indirection Infrastructure for Secure High Energy Physics (DII-HEP) project. On the cloud we run the ARC middleware, which allows CMS applications to run without changes on the job submission side. Our test results indicate that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability. To manage the virtual machines (VMs) dynamically in an elastic fashion, we are testing the EMI authorization service (Argus) and the Execution Environment Service (Argus-EES). An OpenStack plugin has been developed for Argus-EES. The Host Identity Protocol (HIP) has been designed for mobile networks and provides a secure method for IP multihoming. HIP separates the end-point identifier and locator roles of the IP address, which increases network availability for applications. Our solution leverages HIP for traffic management. This presentation gives an update on the status of the work and our lessons learned in creating an OpenStack-based cloud for HEP.

  11. Innoflight Middleware System (IMS), Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Space missions can benefit greatly from the use of the latest COTS processing technology in order to allow spacecraft to perform more onboard computation using less...

  12. Systematic review of available evidence on 11 high-priced inpatient orphan drugs

    NARCIS (Netherlands)

    T.A. Kanters (Tim A.); C. de Sonneville (Caroline); W.K. Redekop (Ken); L. van Hakkaart-van Roijen (Leona)

    2013-01-01

    Abstract. Background: Attention for Evidence Based Medicine (EBM) is growing, but evidence for orphan drugs is argued to be limited and inferior. This study systematically reviews the available evidence on clinical effectiveness, cost-effectiveness and budget impact for

  13. A data Grid prototype for distributed data production in CMS

    International Nuclear Information System (INIS)

    Hafeez, Mehnaz; Samar, Asad; Stockinger, Heinz

    2001-01-01

    The CMS experiment at CERN is setting up a Grid infrastructure required to fulfill the needs imposed by Terabyte-scale productions for the next few years. The goal is to automate the production and at the same time allow the users to interact with the system, if required, to make decisions which would optimize performance. We present the architecture, design and functionality of our first working Objectivity file replication prototype. The middleware of choice is the Globus toolkit that provides promising functionality. Our results prove the ability of the Globus toolkit to be used as an underlying technology for a world-wide Data Grid. The required data management functionality includes high speed file transfers, secure access to remote files, selection and synchronization of replicas and managing the meta information. The whole system is expected to be flexible enough to incorporate site specific policies. The data management granularity is the file rather than the object level. The first prototype is currently in use for the High Level Trigger (HLT) production (autumn 2000). Owing to these efforts, CMS is one of the pioneers to use the Data Grid functionality in a running production system. The project can be viewed as an evaluator of different strategies, a test for the capabilities of middleware tools and a provider of basic Grid functionalities

  14. Cocoon: A lightweight opportunistic networking middleware for community-oriented smart mobile applications

    NARCIS (Netherlands)

    Türkes, Okan; Scholten, Johan; Havinga, Paul J.M.

    2016-01-01

    Modern society is surrounded by an ample spectrum of smart mobile devices. This ubiquity forms a high potential for community-oriented opportunistic ad hoc networking applications. Nevertheless, today’s smart mobile devices such as smartphones, tablets, and wristbands are still onerous to

  15. Achievement of high availability in long-term operation and upgrading plan of the LHD superconducting system

    International Nuclear Information System (INIS)

    Imagawa, S.; Yanagi, N.; Hamaguchi, S.

    2006-10-01

    The Large Helical Device (LHD), which has been demonstrating high performance of heliotron plasma, is the world's largest superconducting system. Availability higher than 98% has been achieved in long-term continuous operation, both in the cryogenic system and in the power supply system. This is owing not only to the robustness of the systems but also to maintenance and operation efforts. One major problem is a shortage of cryogenic stability in a pair of pool-cooled helical coils. Composite conductors had been developed to attain sufficient stability at high current density. However, it was revealed that a normal-zone could propagate below the cold-end recovery current because of additional heat generation due to the slow current diffusion into a thick pure aluminium stabilizer. In addition, a novel detection system with pick-up coils along the helical coils revealed that normal-zones were initiated near the bottom of the coil, where the field is not the highest. Therefore, the cooling condition around the innermost layers, the high-field area, is likely deteriorated at the bottom of the coil by bubbles gathered by buoyancy. In order to raise the operating currents, methods for improving the cryogenic stability have been examined, and stability tests have been carried out with a model coil and small coil samples. The coil temperature is planned to be lowered from 4.4 K to 3.5 K, and the operating current is expected to be increased from 11.0 kA to 12.0 kA, which corresponds to 3.0 T at the major radius of 3.6 m. (author)

  16. Software design for the VIS instrument onboard the Euclid mission: a multilayer approach

    Science.gov (United States)

    Galli, E.; Di Giorgio, A. M.; Pezzuto, S.; Liu, S. J.; Giusi, G.; Li Causi, G.; Farina, M.; Cropper, M.; Denniston, J.; Niemi, S.

    2014-07-01

    The Euclid mission scientific payload is composed of two instruments: a VISible Imaging Instrument (VIS) and a Near Infrared Spectrometer and Photometer instrument (NISP). Each instrument has its own control unit. The Instrument Command and Data Processing Unit (VI-CDPU) is the control unit of the VIS instrument. The VI-CDPU is connected directly to the spacecraft by means of a MIL-STD-1553B bus and to the satellite Mass Memory Unit via a SpaceWire link. All the internal interfaces are implemented via SpaceWire links and include 12 high speed lines for the data provided by the 36 focal plane CCDs readout electronics (ROEs) and one link to the Power and Mechanisms Control Unit (VI-PMCU). VI-CDPU is in charge of distributing commands to the instrument sub-systems, collecting their housekeeping parameters and monitoring their health status. Moreover, the unit has the task of acquiring, reordering, compressing and transferring the science data to the satellite Mass Memory. This last feature is probably the most challenging one for the VI-CDPU, since stringent constraints about the minimum lossless compression ratio, the maximum time for the compression execution and the maximum power consumption have to be satisfied. Therefore, an accurate performance analysis at the hardware layer is necessary, which could unduly delay the design and development of the software. In order to mitigate this risk, in the multilayered design of the software we decided to design a middleware layer that provides a set of APIs with the aim of hiding the implementation of the HW-connected layer from the application one. The middleware is built on top of the Operating System layer (which includes the Real-Time OS that will be adopted) and the onboard Computer Hardware. The middleware itself has a multi-layer architecture composed of 4 layers: the Abstract RTOS Adapter Layer (AOSAL), the Specific RTOS Adapter Layer (SOSAL), the Common Patterns Layer (CPL), the Service Layer composed of two subgroups which

  17. Evaluation of a new data staging framework for the ARC middleware

    International Nuclear Information System (INIS)

    Cameron, D; Karpenko, D; Konstantinov, A; Filipčič, A

    2012-01-01

    Staging data to and from remote storage services on the Grid for users’ jobs is a vital component of the ARC computing element. A new data staging framework for the computing element has recently been developed to address issues with the present framework, which has essentially remained unchanged since its original implementation 10 years ago. This new framework consists of an intelligent data transfer scheduler which handles priorities and fair-share, a rapid caching system, and the ability to delegate data transfer over multiple nodes to increase network throughput. This paper uses data from real user jobs running on production ARC sites to present an evaluation of the new framework. It is shown to make more efficient use of the available resources, reduce the overall time to run jobs, and avoid the problems seen with the previous simplistic scheduling system. In addition, its simple design coupled with intelligent logic provides greatly increased flexibility for site administrators, end users and future development.
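    As a rough illustration of how the two policies named above (per-transfer priority and per-user fair-share) can interact, here is a hedged Python sketch; it is not ARC code, and all names are invented. The least-served user is picked first, and priority orders that user's own queue.

```python
import heapq
from collections import defaultdict


class FairShareScheduler:
    """Toy transfer scheduler: fair-share across users, priority within a user."""

    def __init__(self):
        self._queues = defaultdict(list)  # user -> heap of (-priority, seq, file)
        self._served = defaultdict(int)   # user -> transfers already dispatched
        self._seq = 0                     # tie-breaker preserving submit order

    def submit(self, user, filename, priority=0):
        heapq.heappush(self._queues[user], (-priority, self._seq, filename))
        self._seq += 1

    def next_transfer(self):
        # Pick the user with pending work who has been served the least,
        # then that user's highest-priority transfer.
        pending = [u for u, q in self._queues.items() if q]
        if not pending:
            return None
        user = min(pending, key=lambda u: self._served[u])
        self._served[user] += 1
        return user, heapq.heappop(self._queues[user])[2]


sched = FairShareScheduler()
sched.submit("atlas", "input1.root", priority=1)
sched.submit("atlas", "input2.root")
sched.submit("cms", "geometry.db", priority=5)
```

Even though "cms" submitted the highest-priority transfer, fair-share lets each user alternate rather than one user monopolizing the transfer slots.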

  18. Web Platform for Sharing Modeling Software in the Field of Nonlinear Optics

    Directory of Open Access Journals (Sweden)

    Dubenskaya Julia

    2018-01-01

    Full Text Available We describe the prototype of a Web platform intended for sharing software programs for computer modeling in the rapidly developing field of nonlinear optics phenomena. The suggested platform is built on top of the HUBZero open-source middleware. In addition to the basic HUBZero installation, we added the capability to run Docker containers via an external application server and to send calculation programs to those containers for execution. The presented web platform provides a wide range of features and might be of benefit to nonlinear optics researchers.

  19. ooi: OpenStack OCCI interface

    Directory of Open Access Journals (Sweden)

    Álvaro López García

    2016-01-01

    Full Text Available In this document we present an implementation of the Open Grid Forum’s Open Cloud Computing Interface (OCCI for OpenStack, namely ooi (Openstack occi interface, 2015  [1]. OCCI is an open standard for management tasks over cloud resources, focused on interoperability, portability and integration. ooi aims to implement this open interface for the OpenStack cloud middleware, promoting interoperability with other OCCI-enabled cloud management frameworks and infrastructures. ooi focuses on being non-invasive with a vanilla OpenStack installation, not tied to a particular OpenStack release version.

  20. Improving interoperability through gateways and cots technologies

    CSIR Research Space (South Africa)

    Smith, C

    2013-01-01

    Full Text Available simultaneously, dramatically reducing the time to deploy the nodes. 4.2 Success History In [7] [8] it is shown how an out-of-the-box Android smartphone can be used as an information gateway hosting JTRS SCA-based public safety waveforms: AM, FM and APCO-P25... and the smartphone. Thus, the entire SCA stack, including the CRC's Core Framework [8] and the OIS's CORBA middleware [9], was installed on the Android smartphone. The three SCA-compliant waveforms were also installed on the phone. It is worth mentioning...

  1. LHCb: Performance evaluation and capacity planning for a scalable and highly available virtualization infrastructure for the LHCb experiment

    CERN Multimedia

    Sborzacchi, F; Neufeld, N

    2013-01-01

    Virtual computing is often adopted to satisfy different needs: reducing costs, reducing resources, simplifying maintenance and, last but not least, adding flexibility. The use of virtualization in a complex system such as a farm of PCs that controls the hardware of an experiment (PLCs, power supplies, gas, magnets...) puts us in a condition where not only High Performance requirements need to be carefully considered, but also a deep analysis of strategies to achieve a certain level of High Availability. We conducted a performance evaluation on different and comparable storage/network/virtualization platforms. The performance was measured using a series of independent benchmarks, testing the speed and the stability of multiple VMs running heavy-load operations on the I/O of the virtualized storage and the virtualized network. The results from the benchmark tests allowed us to study and evaluate how the different VM workloads interact with the hardware/software resource layers.

  2. Central serotonin transporter availability in highly obese individuals compared with non-obese controls: A [{sup 11}C] DASB positron emission tomography study

    Energy Technology Data Exchange (ETDEWEB)

    Hesse, Swen; Sabri, Osama [University of Leipzig, Department of Nuclear Medicine, Leipzig (Germany); Integrated Research and Treatment Centre Adiposity Diseases Leipzig, Leipzig (Germany); Rullmann, Michael [University of Leipzig, Department of Nuclear Medicine, Leipzig (Germany); Integrated Research and Treatment Centre Adiposity Diseases Leipzig, Leipzig (Germany); Max Planck Institute for Human Cognitive and Brain Sciences Leipzig, Leipzig (Germany); Luthardt, Julia; Becker, Georg-Alexander; Bresch, Anke; Patt, Marianne; Meyer, Philipp M. [University of Leipzig, Department of Nuclear Medicine, Leipzig (Germany); Winter, Karsten [University of Leipzig, Centre for Translational Regenerative Medicine, Leipzig (Germany); University of Leipzig, Institute for Medical Informatics, Statistics, and Epidemiology, Leipzig (Germany); Hankir, Mohammed K.; Zientek, Franziska; Reissig, Georg; Drabe, Mandy [Integrated Research and Treatment Centre Adiposity Diseases Leipzig, Leipzig (Germany); Regenthal, Ralf [University of Leipzig, Division of Clinical Pharmacology, Rudolf Boehm Institute of Pharmacology and Toxicology, Leipzig (Germany); Schinke, Christian [University of Leipzig, Department of Neurology, Leipzig (Germany); Arelin, Katrin [Max Planck Institute for Human Cognitive and Brain Sciences Leipzig, Leipzig (Germany); University of Leipzig, Day Clinic for Cognitive Neurology, Leipzig (Germany); Lobsien, Donald [University of Leipzig, Department of Neuroradiology, Leipzig (Germany); Fasshauer, Mathias; Fenske, Wiebke K.; Stumvoll, Michael [Integrated Research and Treatment Centre Adiposity Diseases Leipzig, Leipzig (Germany); University of Leipzig, Medical Department III, Leipzig (Germany); Blueher, Matthias [University of Leipzig, Medical Department III, Leipzig (Germany); University of Leipzig, Collaborative Research Centre 1052 Obesity Mechanisms, Leipzig (Germany)

    2016-06-15

    The role of the central serotonin (5-hydroxytryptamine, 5-HT) system in feeding has been extensively studied in animals with the 5-HT family of transporters (5-HTT) being identified as key molecules in the regulation of satiety and body weight. Aberrant 5-HT transmission has been implicated in the pathogenesis of human obesity by in vivo positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging techniques. However, results obtained thus far from studies of central 5-HTT availability have been inconsistent, which is thought to be brought about mainly by the low number of individuals with a high body mass index (BMI) previously used. The aim of this study was therefore to assess 5-HTT availability in the brains of highly obese otherwise healthy individuals compared with non-obese healthy controls. We performed PET using the 5-HTT selective radiotracer [{sup 11}C] DASB on 30 highly obese (BMI range between 35 and 55 kg/m{sup 2}) and 15 age- and sex-matched non-obese volunteers (BMI range between 19 and 27 kg/m{sup 2}) in a cross-sectional study design. The 5-HTT binding potential (BP{sub ND}) was used as the outcome parameter. On a group level, there was no significant difference in 5-HTT BP{sub ND} in various cortical and subcortical regions in individuals with the highest BMI compared with non-obese controls, while statistical models showed minor effects of age, sex, and the degree of depression on 5-HTT BP{sub ND}. The overall finding of a lack of significantly altered 5-HTT availability together with its high variance in obese individuals justifies the investigation of individual behavioral responses to external and internal cues which may further define distinct phenotypes and subgroups in human obesity. (orig.)

  3. PostGIS-Based Heterogeneous Sensor Database Framework for the Sensor Observation Service

    Directory of Open Access Journals (Sweden)

    Ikechukwu Maduako

    2012-10-01

    Full Text Available Environmental monitoring and management systems in most cases deal with models and spatial analytics that involve the integration of in-situ and remote sensor observations. In-situ sensor observations and those gathered by remote sensors are usually provided by different databases and services in real-time dynamic services such as Geo-Web Services. Thus, data have to be pulled from different databases and transferred over the network before they are fused and processed on the service middleware. This imposes massive, unnecessary communication and workload on the service: large rasters are downloaded from flat-file raster data sources each time a request is made, and heavy integration and geo-processing work falls on the service middleware, work that could be better leveraged at the database level. In this paper, we propose and present a heterogeneous sensor database framework for the integration, geo-processing and spatial analysis of remote and in-situ sensor observations at the database level. We also show how this can be integrated into the Sensor Observation Service (SOS) to reduce communication and workload on the Geospatial Web Services, as well as to make query requests from the user end more flexible.
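    As an illustration of pushing the fusion work down to the database, the hypothetical helper below builds a single PostGIS query that samples a remote-sensing raster at each in-situ sensor location, so only the fused rows cross the network instead of the whole raster. The table and column names are invented; `ST_Value` and `ST_Intersects` are standard PostGIS raster functions.

```python
def build_fusion_query(raster_table, sensor_table):
    """Return SQL that joins in-situ sensor points with raster cell values.

    ST_Value extracts the raster cell under each sensor point, so the
    fusion happens inside the database rather than on the middleware.
    """
    return (
        f"SELECT s.sensor_id, s.reading, "
        f"ST_Value(r.rast, s.geom) AS raster_value "
        f"FROM {sensor_table} s "
        f"JOIN {raster_table} r ON ST_Intersects(r.rast, s.geom);"
    )


# Hypothetical tables: a MODIS land-surface-temperature raster and
# a table of in-situ temperature sensors.
sql = build_fusion_query("modis_lst", "temperature_sensors")
```

The resulting statement would be sent to PostGIS (e.g. via psycopg2) by the SOS middleware; only the small fused result set is transferred back.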

  4. Infrastructure for Integration of Legacy Electrical Equipment into a Smart-Grid Using Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Paulo Régis C. de Araújo

    2018-04-01

    Full Text Available At present, the standardisation of electrical equipment communications is on the rise. In particular, manufacturers are releasing equipment for the smart grid endowed with communication protocols such as DNP3, IEC 61850, and MODBUS. However, there are legacy equipment operating in the electricity distribution network that cannot communicate using any of these protocols. Thus, we propose an infrastructure to allow the integration of legacy electrical equipment to smart grids by using wireless sensor networks (WSNs. In this infrastructure, each legacy electrical device is connected to a sensor node, and the sink node runs a middleware that enables the integration of this device into a smart grid based on suitable communication protocols. This middleware performs tasks such as the translation of messages between the power substation control centre (PSCC and electrical equipment in the smart grid. Moreover, the infrastructure satisfies certain requirements for communication between the electrical equipment and the PSCC, such as enhanced security, short response time, and automatic configuration. The paper’s contributions include a solution that enables electrical companies to integrate their legacy equipment into smart-grid networks relying on any of the above mentioned communication protocols. This integration will reduce the costs related to the modernisation of power substations.
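    A minimal, hypothetical sketch of the translation step performed by the sink-node middleware: a MODBUS-style register read from a legacy device is mapped into a generic report for the power substation control centre (PSCC). The register map, fixed-point scaling, and message format below are invented for illustration and are not from the paper.

```python
# Hypothetical register layout of one legacy device (addr -> quantity name).
REGISTER_MAP = {0: "voltage_v", 1: "current_a"}


def translate_modbus_frame(frame):
    """Translate a raw register read into a PSCC-style report.

    frame: dict with 'unit' (device id) and 'registers' {addr: raw16}
    as collected by the sensor node attached to the legacy device.
    """
    report = {"device": f"legacy-{frame['unit']}"}
    for addr, raw in frame["registers"].items():
        name = REGISTER_MAP.get(addr, f"reg_{addr}")
        report[name] = raw / 10.0  # assumed fixed-point scaling by 10
    return report


# A sensor node reads unit 7: 231.5 V and 4.2 A in tenths.
msg = translate_modbus_frame({"unit": 7, "registers": {0: 2315, 1: 42}})
```

In the infrastructure described above, this translation runs on the sink node, so the PSCC sees a uniform message format regardless of the legacy protocol behind each device.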

  5. Comparing the Effects of Commercially Available and Custom-Made Video Prompting for Teaching Cooking Skills to High School Students with Autism

    Science.gov (United States)

    Mechling, Linda C.; Ayres, Kevin M.; Foster, Ashley L.; Bryant, Kathryn J.

    2013-01-01

    The study compared the effects of using commercially available and custom-made video prompts on the completion of cooking recipes by four high school age males with a diagnosis of autism. An adapted alternating treatments design with continuous baseline, comparison, final treatment, and best treatment condition was used to compare the two…

  6. Assessing Task Migration Impact on Embedded Soft Real-Time Streaming Multimedia Applications

    Directory of Open Access Journals (Sweden)

    Alimonda Andrea

    2008-01-01

    Full Text Available Abstract Multiprocessor systems on chips (MPSoCs) are envisioned as the future of embedded platforms such as game engines, smartphones and palmtop computers. One of the main challenges preventing the widespread diffusion of these systems is the efficient mapping of multitask multimedia applications onto processing elements. Dynamic solutions based on task migration have recently been explored to perform run-time reallocation of tasks to maximize performance and optimize energy consumption. Even if task migration can provide high flexibility, its overhead must be carefully evaluated when applied to soft real-time applications. In fact, these applications impose deadlines that may be missed during the migration process. In this paper we first present a middleware infrastructure supporting dynamic task allocation for NUMA architectures. We then perform an extensive characterization of its impact on multimedia soft real-time applications using a software FM Radio benchmark.
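    The deadline concern raised above can be captured in a back-of-envelope check: a migration is only worth triggering if its measured overhead fits within the slack before the next frame deadline. The helper below is an illustrative sketch, not the paper's middleware; the margin value is an assumption.

```python
def safe_to_migrate(now_ms, next_deadline_ms, migration_cost_ms, margin_ms=2.0):
    """Return True if a task migration fits in the slack before the deadline.

    margin_ms is a safety cushion for jitter in the measured migration cost.
    """
    slack = next_deadline_ms - now_ms
    return slack - migration_cost_ms >= margin_ms
```

A run-time allocator would call such a predicate before moving a soft real-time task (e.g. an FM Radio pipeline stage) to another processing element.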

  7. Design and implementation of a non-linear symphonic soundtrack of a video game

    Science.gov (United States)

    Sporka, Adam J.; Valta, Jan

    2017-10-01

    The music in contemporary video games is often interactive. Music playback is based on transitions between pieces of the available musical material, and these transitions happen in response to the evolving gameplay. This paradigm is referred to as adaptive music. Our challenge was to design, create, and implement the soundtrack of the upcoming video game Kingdom Come: Deliverance. Our soundtrack is a collection of compositions with symphonic orchestration. Per our design decision, our intention was to implement the adaptive music in a way that respects the nature of an orchestral film score. We created our own adaptive music middleware, called Sequence Music Engine, implementing high-level music logic as well as the low-level playback infrastructure. Our system can handle hours of video game music, helps maintain the relevance of the music throughout the video game, and minimises the repetitiveness of the individual pieces.
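    The adaptive-music logic described above can be sketched as a small state machine: gameplay states map to music cues, and a bridging cue smooths each transition. This is an illustrative Python sketch, not the Sequence Music Engine API; all state and cue names are invented.

```python
# Hypothetical cue graph: (from_state, to_state) -> bridging cue.
TRANSITIONS = {
    ("explore", "combat"): "stinger_to_combat",
    ("combat", "explore"): "combat_outro",
}


class AdaptiveMusic:
    """Toy adaptive-music controller: queue a bridge cue, then the new loop."""

    def __init__(self, state="explore"):
        self.state = state
        self.playlist = [state]  # cues queued for playback, in order

    def on_gameplay_event(self, new_state):
        bridge = TRANSITIONS.get((self.state, new_state))
        if bridge:                       # musically valid transition cue
            self.playlist.append(bridge)
        self.state = new_state
        self.playlist.append(new_state)  # the new state's music loop


music = AdaptiveMusic()
music.on_gameplay_event("combat")
```

A production engine would additionally wait for a musically valid point (bar line, phrase end) before starting the bridge cue, which is what preserves the orchestral character.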

  8. Assessing Task Migration Impact on Embedded Soft Real-Time Streaming Multimedia Applications

    Directory of Open Access Journals (Sweden)

    Andrea Acquaviva

    2008-01-01

    Full Text Available Multiprocessor systems on chips (MPSoCs) are envisioned as the future of embedded platforms such as game engines, smartphones and palmtop computers. One of the main challenges preventing the widespread diffusion of these systems is the efficient mapping of multitask multimedia applications onto processing elements. Dynamic solutions based on task migration have recently been explored to perform run-time reallocation of tasks to maximize performance and optimize energy consumption. Even if task migration can provide high flexibility, its overhead must be carefully evaluated when applied to soft real-time applications. In fact, these applications impose deadlines that may be missed during the migration process. In this paper we first present a middleware infrastructure supporting dynamic task allocation for NUMA architectures. We then perform an extensive characterization of its impact on multimedia soft real-time applications using a software FM Radio benchmark.

  9. Interoperating AliEn and ARC for a Distributed Tier1 in the Nordic Countries

    CERN Document Server

    Gros, Philippe; Lindemann, Jonas; Saiz, Pablo; Zarochentsev, Andrey

    2011-01-01

    To meet its large computing needs, the ALICE experiment at CERN has developed its own middleware called AliEn, which is centralised and relies on pilot jobs. One of its strengths is the automatic installation of the required packages. The Nordic countries have offered a distributed Tier-1 centre for the CERN experiments, where job management should be done with the NorduGrid middleware ARC. We have developed an interoperation module that unifies several computing sites using ARC and makes them look like a single site from the point of view of AliEn. A prototype has been completed and tested outside of production. This talk will present implementation details of the system and its performance in tests.

  10. Effect of High Phytase Inclusion Rates on Performance of Broilers Fed Diets Not Severely Limited in Available Phosphorus

    Directory of Open Access Journals (Sweden)

    T. T. dos Santos

    2013-02-01

    Full Text Available Phytate is not only an unavailable source of phosphorus (P) for broilers but also acts as an anti-nutrient, reducing protein and mineral absorption, increasing endogenous losses and reducing broiler performance. The objective of this study was to investigate the anti-nutritional effects of phytate by including high levels of phytase in diets not severely limited in available P. A total of 768 male Arbor Acres broilers were distributed in six treatments of eight replicate pens of 16 birds each, consisting of a positive control diet (PC), the positive control with 500 FTU/kg phytase, a negative control (NC) diet with lower available P and calcium (Ca) levels, and the same NC diet with 500, 1,000 or 1,500 FTU/kg phytase. Body weight gain (BWG), feed intake (FI), feed conversion ratio (FCR) and mortality were determined at 21 and 35 d of age, while foot ash was determined in four birds per pen at 21 d of age. FI, FCR and foot ash were not affected by the lower mineral diets at 21 d of age, nor by the enzyme inclusion, but broilers fed lower Ca and available P diets had lower BWG. At 35 d of age no difference was observed between broilers fed the positive or NC diets, but broilers fed 500, 1,000 and 1,500 FTU/kg on top of the NC diet had better FCR than broilers fed the positive control diet. Compared to birds fed a diet adequate in P, birds fed the marginally deficient available P and Ca diets with 500, 1,000 and 1,500 FTU/kg of phytase showed improved performance. These results support the concept that hydrolysing phytate and reducing its anti-nutritional effects improves bird performance on marginally deficient diets that did not cover the P requirement of the birds.

  11. DIRAC: reliable data management for LHCb

    International Nuclear Information System (INIS)

    Smith, A C; Tsaregorodtsev, A

    2008-01-01

    DIRAC, LHCb's Grid Workload and Data Management System, utilizes WLCG resources and middleware components to perform distributed computing tasks satisfying LHCb's Computing Model. The Data Management System (DMS) handles data transfer and data access within LHCb. Its scope ranges from the output of the LHCb Online system to Grid-enabled storage for all data types. It supports metadata for these files in replica and bookkeeping catalogues, allowing dataset selection and localization. The DMS controls the movement of files in a redundant fashion whilst providing utilities for accessing all metadata. To perform these tasks effectively, the DMS requires complete integrity between its components and external physical storage. The DMS provides highly redundant management of all LHCb data to leverage available storage resources and to manage transient errors in underlying services. It provides data-driven and reliable distribution of files as well as reliable job output upload, utilizing VO Boxes at LHCb Tier1 sites to prevent data loss. This paper presents several examples of mechanisms implemented in the DMS to increase reliability, availability and integrity, highlighting successful design choices and limitations discovered
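    One reliability mechanism of the kind described above, retrying across available replicas so a transient storage-element failure does not fail the job, can be sketched as follows. This is an illustrative sketch, not DIRAC code; the replica URLs and the `download` callable are invented.

```python
def fetch_with_failover(replicas, download):
    """Try each replica in turn; return (url, data) of the first success.

    replicas: ordered list of replica URLs.
    download: callable that returns file data or raises IOError on failure.
    """
    errors = {}
    for url in replicas:
        try:
            return url, download(url)
        except IOError as exc:
            errors[url] = exc  # remember the failure and try the next replica
    raise IOError(f"all replicas failed: {errors}")


def fake_download(url):
    """Stand-in for a real transfer: the first storage element is down."""
    if "cern" in url:
        raise IOError("storage element in scheduled downtime")
    return b"event data"


url, data = fetch_with_failover(
    ["srm://cern/f.dst", "srm://gridka/f.dst"], fake_download)
```

Ordering the replica list by site proximity or load turns the same loop into a simple replica-selection policy.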

  12. Folate content and availability in Malaysian cooked foods.

    Science.gov (United States)

    Chew, S C; Khor, G L; Loh, S P

    2012-12-01

    Data on folate availability of Malaysian cooked foods would be useful for estimating dietary folate intake; however, such information is scarce. A total of 53 samples of frequently consumed foods in Malaysia were selected from the Nutrient Composition of Malaysian Foods. Folate content was determined using an HPLC method with a stainless steel C18 column and an ultraviolet detector (λ = 280 nm). The index of folate availability was defined as the proportion of folate identified as monoglutamyl derivatives in the total folate content. Total folate content of the different food samples ranged from 30 to 95 μg/100 g fresh weight. Among rice-based dishes, the highest and lowest total folate were in coconut milk rice (nasi lemak) and ghee rice (nasi minyak), respectively. Among noodle dishes, fried rice noodle (kuey teow goreng) and curry noodle (mee kari) had the highest folate contents. The highest index of folate availability was in a flat rice noodle dish (kuey teow bandung) (12.13%), while the lowest was in a festival cake (kuih bakul) (0.13%). Folate content was found to be negatively related to its availability. This study determined folate content and folate availability in commonly consumed cooked foods in Malaysia. The uptake of folate from foods with high folate content is not necessarily high, as folate absorption also depends on the capacity of intestinal deconjugation and the presence of high fibre in the foods.

  13. Using a data-centric event-driven architecture approach in the integration of real-time systems at DTP2

    International Nuclear Information System (INIS)

    Tuominen, Janne; Viinikainen, Mikko; Alho, Pekka; Mattila, Jouni

    2014-01-01

    Integration of heterogeneous and distributed systems is a challenging task, because they might be running on different platforms and written with different implementation languages by multiple organizations. Data-centricity and event-driven architecture (EDA) are concepts that help to implement versatile and well-scaling distributed systems. This paper focuses on the implementation of inter-subsystem communication in a prototype distributed remote handling control system developed at Divertor Test Platform 2 (DTP2). The control system consists of a variety of heterogeneous subsystems, including a client–server web application and hard real-time controllers. A standardized middleware solution (Data Distribution Services (DDS)) that supports a data-centric EDA approach is used to integrate the system. One of the greatest challenges in integrating a system with a data-centric EDA approach is in defining the global data space model. The selected middleware is currently only used for non-deterministic communication. For future application, we evaluated the performance of point-to-point communication with and without the presence of additional network load to ensure applicability to real-time systems. We found that, under certain limitations, the middleware can be used for soft real-time communication. Hard real-time use will require more validation with a more suitable environment
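
    The data-centric global data space that DDS provides can be illustrated with a toy publish/subscribe bus. This is a conceptual sketch, not the DDS API; the topic name is hypothetical. Caching the latest sample so a late-joining reader immediately sees current state loosely mimics DDS durability QoS.

    ```python
    from collections import defaultdict

    class DataSpace:
        """Minimal data-centric bus: publishers write samples keyed by topic,
        subscribers receive them via callbacks, and the latest sample per
        topic is cached for readers that join after it was published."""
        def __init__(self):
            self._subs = defaultdict(list)
            self._latest = {}

        def subscribe(self, topic, callback):
            self._subs[topic].append(callback)
            if topic in self._latest:          # late joiner: deliver cached state
                callback(self._latest[topic])

        def publish(self, topic, sample):
            self._latest[topic] = sample
            for cb in self._subs[topic]:
                cb(sample)
    ```

    The key property is that publishers and subscribers never reference each other, only the shared topic model, which is what makes heterogeneous subsystems loosely coupled.
    
    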

  14. Using a data-centric event-driven architecture approach in the integration of real-time systems at DTP2

    Energy Technology Data Exchange (ETDEWEB)

    Tuominen, Janne, E-mail: janne.m.tuominen@tut.fi; Viinikainen, Mikko; Alho, Pekka; Mattila, Jouni

    2014-10-15

    Integration of heterogeneous and distributed systems is a challenging task, because they might be running on different platforms and written with different implementation languages by multiple organizations. Data-centricity and event-driven architecture (EDA) are concepts that help to implement versatile and well-scaling distributed systems. This paper focuses on the implementation of inter-subsystem communication in a prototype distributed remote handling control system developed at Divertor Test Platform 2 (DTP2). The control system consists of a variety of heterogeneous subsystems, including a client–server web application and hard real-time controllers. A standardized middleware solution (Data Distribution Services (DDS)) that supports a data-centric EDA approach is used to integrate the system. One of the greatest challenges in integrating a system with a data-centric EDA approach is in defining the global data space model. The selected middleware is currently only used for non-deterministic communication. For future application, we evaluated the performance of point-to-point communication with and without the presence of additional network load to ensure applicability to real-time systems. We found that, under certain limitations, the middleware can be used for soft real-time communication. Hard real-time use will require more validation with a more suitable environment.

  15. Availability Analysis of the Ventilation Stack CAM Interlock System

    CERN Document Server

    Young, J

    2000-01-01

    Ventilation Stack Continuous Air Monitor (CAM) Interlock System failure modes, failure frequencies, and system availability have been evaluated for the RPP. The evaluation concludes that CAM availability is as high as assumed in the safety analysis and that the current routine system surveillance is adequate to maintain the availability credited in the safety analysis; more frequent surveillance is neither required nor predicted to significantly improve system availability.

  16. Design and management of public health outreach using interoperable mobile multimedia: an analysis of a national winter weather preparedness campaign.

    Science.gov (United States)

    Bandera, Cesar

    2016-05-25

    The Office of Public Health Preparedness and Response (OPHPR) in the Centers for Disease Control and Prevention conducts outreach for public preparedness for natural and manmade incidents. In 2011, OPHPR conducted a nationwide mobile public health (m-Health) campaign that pushed brief videos on preparing for severe winter weather onto cell phones, with the objective of evaluating the interoperability of multimedia m-Health outreach with diverse cell phones (including handsets without Internet capability), carriers, and user preferences. Existing OPHPR outreach material on winter weather preparedness was converted into mobile-ready multimedia using mobile marketing best practices to improve audiovisual quality and relevance. Middleware complying with opt-in requirements was developed to push nine bi-weekly multimedia broadcasts onto subscribers' cell phones, and OPHPR promoted the campaign on its web site and to subscribers on its govdelivery.com notification platform. Multimedia, text, and voice messaging activity to/from the middleware was logged and analyzed. Adapting existing media into mobile video was straightforward using open source and commercial software, including web pages, PDF documents, and public service announcements. The middleware successfully delivered all outreach videos to all participants (a total of 504 videos) regardless of the participant's device. 54% of videos were viewed on cell phones, 32% on computers, and 14% were retrieved by search engine web crawlers. 21% of participating cell phones did not have Internet access, yet still received and displayed all videos. The time from media push to media viewing on cell phones was half that of push to viewing on computers. Video delivered through multimedia messaging can be as interoperable as text messages, while providing much richer information. This may be the only multimedia mechanism available to outreach campaigns targeting vulnerable populations impacted by the digital divide.

  17. Núcleo de Control: Controladores modulares en entornos distribuidos

    Directory of Open Access Journals (Sweden)

    Raúl Simarro

    2016-04-01

    Full Text Available This article describes a distributed control strategy on embedded devices, built on a control middleware called the control kernel, in which high-performance digital controllers, designed in a modular way, are implemented on systems with limited computing capacity. A methodology is presented both for obtaining a metric that allows the different operating modes to be compared under data loss, and for choosing the parameters of the controllers to be implemented at each node of the distributed control system. Several operating modes are presented that allow the developed system to be adapted to different situations. The work is completed with a simulation of the distributed system and its test on a real process. Keywords: embedded systems, model predictive control, distributed control systems.
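
    The idea of per-node controllers with distinct operating modes for lost samples can be sketched as follows. This is an illustrative sketch, not the paper's control kernel: the PI structure, the gains, and the mode names ("hold", "freeze") are assumptions.

    ```python
    class PINode:
        """PI controller for one node of a distributed loop. When a sample
        is lost (measurement is None), 'hold' mode reuses the last received
        measurement, while 'freeze' mode repeats the last control output."""
        def __init__(self, kp, ki, dt, mode="hold"):
            self.kp, self.ki, self.dt, self.mode = kp, ki, dt, mode
            self.integral = 0.0
            self.last_y = 0.0   # last received measurement
            self.last_u = 0.0   # last computed control output

        def update(self, setpoint, y):
            if y is None:                 # sample lost on the network
                if self.mode == "freeze":
                    return self.last_u    # repeat previous output
                y = self.last_y           # 'hold': act on stale measurement
            self.last_y = y
            error = setpoint - y
            self.integral += error * self.dt
            self.last_u = self.kp * error + self.ki * self.integral
            return self.last_u
    ```

    Running the same loss pattern through both modes yields the kind of per-mode comparison metric the methodology calls for.
    
    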

  18. A Preliminary Investigation into CNO Availability Schedule Overruns

    Science.gov (United States)

    2012-06-01

    [Extraction fragment; only an acronym glossary and table residue are recoverable: PHNSY, Pearl Harbor Naval Shipyard and IMF; PIA, Planned Incremental Availability; PIRA, Pre-Inactivated Restricted Availability; PNSY, Portsmouth...; plus counts by availability type from Table 23, High Level Work Stoppage Data Characteristics.]

  19. How valid are commercially available medical simulators?

    Science.gov (United States)

    Stunt, JJ; Wulms, PH; Kerkhoffs, GM; Dankelman, J; van Dijk, CN; Tuijthof, GJM

    2014-01-01

    Background Since simulators offer important advantages, they are increasingly used in medical education and medical skills training that require physical actions. A wide variety of simulators have become commercially available. It is of high importance that evidence is provided that training on these simulators can actually improve clinical performance on live patients. Therefore, the aim of this review is to determine the availability of different types of simulators and the evidence for their validation, to offer insight into which simulators are suitable to use in the clinical setting as a training modality. Summary Four hundred and thirty-three commercially available simulators were found, of which 405 (94%) were physical models. One hundred and thirty validation studies evaluated 35 (8%) commercially available medical simulators for levels of validity ranging from face to predictive validity. Only simulators used for surgical skills training were validated at the highest validity level (predictive validity). Twenty-four (37%) simulators that give objective feedback had been validated. Studies that tested more powerful levels of validity (concurrent and predictive validity) were methodologically stronger than studies that tested more elementary levels of validity (face, content, and construct validity). Conclusion Ninety-three point five percent of the commercially available simulators are not known to have been tested for validity. Although the importance of (a high level of) validation depends on the difficulty level of the skills training and the possible consequences when skills are insufficient, it is advisable for medical professionals, trainees, medical educators, and companies who manufacture medical simulators to critically judge the available medical simulators for proper validation. In this way adequate, safe, and affordable medical psychomotor skills training can be achieved. PMID:25342926

  20. Assessment of commercially available ion exchange materials for cesium removal from highly alkaline wastes

    International Nuclear Information System (INIS)

    Brooks, K.P.; Kim, A.Y.; Kurath, D.E.

    1996-04-01

    Approximately 61 million gallons of nuclear waste generated in plutonium production, radionuclide removal campaigns, and research and development activities is stored on the Department of Energy's Hanford Site, near Richland, Washington. Although the pretreatment process and disposal requirements are still being defined, most pretreatment scenarios include removal of cesium from the aqueous streams. In many cases, after cesium is removed, the dissolved salt cakes and supernates can be disposed of as LLW. Ion exchange has been a leading candidate for this separation. Ion exchange systems have the advantage of simplicity of equipment and operation and provide many theoretical stages in a small space. The organic ion exchange material Duolite™ CS-100 has been selected as the baseline exchanger for conceptual design of the Initial Pretreatment Module (IPM). Use of CS-100 was chosen because it is considered a conservative, technologically feasible approach. During FY 96, final resin down-selection will occur for IPM Title 1 design. Alternate ion exchange materials for cesium exchange will be considered at that time. The purpose of this report is to conduct a search for commercially available ion exchange materials which could potentially replace CS-100. This report will provide, where possible, a comparison of these resins in their ability to remove low concentrations of cesium from highly alkaline solutions. Materials which show promise can be studied further, while less encouraging resins can be eliminated from consideration.

  1. Assessment of commercially available ion exchange materials for cesium removal from highly alkaline wastes

    Energy Technology Data Exchange (ETDEWEB)

    Brooks, K.P.; Kim, A.Y.; Kurath, D.E.

    1996-04-01

    Approximately 61 million gallons of nuclear waste generated in plutonium production, radionuclide removal campaigns, and research and development activities is stored on the Department of Energy's Hanford Site, near Richland, Washington. Although the pretreatment process and disposal requirements are still being defined, most pretreatment scenarios include removal of cesium from the aqueous streams. In many cases, after cesium is removed, the dissolved salt cakes and supernates can be disposed of as LLW. Ion exchange has been a leading candidate for this separation. Ion exchange systems have the advantage of simplicity of equipment and operation and provide many theoretical stages in a small space. The organic ion exchange material Duolite™ CS-100 has been selected as the baseline exchanger for conceptual design of the Initial Pretreatment Module (IPM). Use of CS-100 was chosen because it is considered a conservative, technologically feasible approach. During FY 96, final resin down-selection will occur for IPM Title 1 design. Alternate ion exchange materials for cesium exchange will be considered at that time. The purpose of this report is to conduct a search for commercially available ion exchange materials which could potentially replace CS-100. This report will provide, where possible, a comparison of these resins in their ability to remove low concentrations of cesium from highly alkaline solutions. Materials which show promise can be studied further, while less encouraging resins can be eliminated from consideration.

  2. Assessment of a government-subsidized supermarket in a high-need area on household food availability and children's dietary intakes.

    Science.gov (United States)

    Elbel, Brian; Moran, Alyssa; Dixon, L Beth; Kiszko, Kamila; Cantor, Jonathan; Abrams, Courtney; Mijanovich, Tod

    2015-10-01

    To assess the impact of a new government-subsidized supermarket in a high-need area on household food availability and dietary habits in children. A difference-in-difference study design was utilized. Two neighbourhoods in the Bronx, New York City. Outcomes were collected in Morrisania, the target community where the new supermarket was opened, and Highbridge, the comparison community. Parents/caregivers of a child aged 3-10 years residing in Morrisania or Highbridge. Participants were recruited via street intercept at baseline (pre-supermarket opening) and at two follow-up periods (five weeks and one year post-supermarket opening). Analysis is based on 2172 street-intercept surveys and 363 dietary recalls from a sample of predominantly low-income minorities. While there were small, inconsistent changes over the time periods, there were no appreciable differences in availability of healthful or unhealthful foods at home, or in children's dietary intake as a result of the supermarket. The introduction of a government-subsidized supermarket into an underserved neighbourhood in the Bronx did not result in significant changes in household food availability or children's dietary intake. Given the lack of healthful food options in underserved neighbourhoods and need for programmes that promote access, further research is needed to determine whether healthy food retail expansion, alone or with other strategies, can improve food choices of children and their families.

  3. MoKey: A versatile exergame creator for everyday usage.

    Science.gov (United States)

    Eckert, Martina; López, Marcos; Lázaro, Carlos; Meneses, Juan

    2017-11-27

    Currently, virtual applications for physical exercises are highly appreciated as rehabilitation instruments. This article presents a middleware called "MoKey" (Motion Keyboard), which converts standard off-the-shelf software into exergames (exercise games). A configurable set of gestures, captured by a motion capture camera, is translated into the key strokes required by the chosen software. The present study assesses the tool regarding usability and viability with a heterogeneous group of 11 participants, aged 5 to 51, with moderate to severe disabilities, mostly bound to a wheelchair. In comparison with FAAST (The Flexible Action and Articulated Skeleton Toolkit), MoKey achieved better results in terms of ease of use and computational load. The viability as an exergame creator tool was proven with the help of four applications (PowerPoint®, e-book reader, Skype®, and Tetris). Success rates of up to 91% were achieved, and subjective perception was rated with 4.5 points on a 0-5 scale. The middleware provides increased motivation due to the use of favorite software and the advantage of exploiting it for exercise. When used together with communication software or online games, it can also stimulate social inclusion. Therapists can employ the tool to monitor the correctness and progress of the exercises.
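
    The gesture-to-keystroke translation such middleware performs is essentially a configurable lookup from recognized gestures to key events. A minimal sketch, not MoKey's actual implementation; the gesture names and key bindings below are hypothetical.

    ```python
    # Hypothetical binding table: each recognized gesture maps to the key
    # stroke the chosen off-the-shelf application expects.
    GESTURE_KEYMAP = {
        "raise_left_arm": "LEFT",
        "raise_right_arm": "RIGHT",
        "lean_forward": "DOWN",
        "both_arms_up": "SPACE",
    }

    def gestures_to_keys(gesture_stream, keymap=GESTURE_KEYMAP):
        """Translate a stream of recognized gestures into key events,
        silently ignoring gestures that have no binding."""
        return [keymap[g] for g in gesture_stream if g in keymap]
    ```

    Because only the table changes per application, the same exercise set can drive a presentation, an e-book reader, or a game.
    
    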

  4. Machine availability at the Large Hadron Collider

    CERN Document Server

    Pojer, M; Wagner, S

    2012-01-01

    One of the most important parameters for a particle accelerator is its uptime, the period of time when it is functioning and available for use. In its second year of operation, the Large Hadron Collider (LHC) has experienced high machine availability, which is one of the ingredients of its brilliant performance. Some of the reasons for the observed MTBF are presented. The approach of periodic maintenance stops is also discussed. Some considerations on the ideal length of a physics fill are drawn.
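
    Availability in the sense used above is commonly derived from the mean time between failures (MTBF) and the mean time to repair (MTTR). A minimal sketch of that standard relation; the numbers in the comment are illustrative, not LHC figures.

    ```python
    def availability(mtbf_hours, mttr_hours):
        """Steady-state availability: the fraction of time the machine is
        usable, A = MTBF / (MTBF + MTTR)."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    # e.g. a machine that runs 98 h between faults and needs 2 h to recover
    # is available 98% of the time.
    ```
    
    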

  5. Machine availability at the Large Hadron Collider

    OpenAIRE

    Pojer, M; Schmidt, R; Wagner, S

    2012-01-01

    One of the most important parameters for a particle accelerator is its uptime, the period of time when it is functioning and available for use. In its second year of operation, the Large Hadron Collider (LHC) has experienced high machine availability, which is one of the ingredients of its brilliant performance. Some of the reasons for the observed MTBF are presented. The approach of periodic maintenance stops is also discussed. Some considerations on the ideal length of a physics fill are dr...

  6. Land availability for biofuel production.

    Science.gov (United States)

    Cai, Ximing; Zhang, Xiao; Wang, Dingbao

    2011-01-01

    Marginal agricultural land is estimated for biofuel production in Africa, China, Europe, India, South America, and the continental United States, which have major agricultural production capacities. These countries/regions can have 320-702 million hectares of land available if only abandoned and degraded cropland and mixed crop and vegetation land, which are usually of low quality, are counted. If grassland, savanna, and shrubland with marginal productivity are considered for planting low-input high-diversity (LIHD) mixtures of native perennials as energy crops, the total land availability can increase to 1107-1411 million hectares, depending on whether pasture land is discounted. Planting the second generation of biofuel feedstocks on abandoned and degraded cropland, and LIHD perennials on grassland with marginal productivity, may fulfill 26-55% of the current world liquid fuel consumption, without affecting the use of land with regular productivity for conventional crops and without affecting the current pasture land. Under the various land use scenarios, Africa may have more than one-third, and Africa and Brazil together may have more than half, of the total land available for biofuel production. These estimations are based on physical conditions such as soil productivity, land slope, and climate.

  7. High-Fidelity Aerothermal Engineering Analysis for Planetary Probes Using DOTNET Framework and OLAP Cubes Database

    Directory of Open Access Journals (Sweden)

    Prabhakar Subrahmanyam

    2009-01-01

    Full Text Available This publication presents the architecture, integration and implementation of the various modules in the Sparta framework. Sparta is a trajectory engine hooked to an Online Analytical Processing (OLAP) database for multi-dimensional analysis capability. The OLAP database holds a comprehensive list of atmospheric entry probes and their vehicle dimensions, trajectory data, aero-thermal data and material properties of carbon-, silicon- and carbon-phenolic-based ablators. An approach is presented for dynamic TPS design. OLAP can run several different trajectory conditions in one simulation; the output is stored back into the database and can be queried by trajectory type. An OLAP simulation can be set up by spawning individual threads to run three types of trajectory: nominal, undershoot and overshoot. The Sparta graphical user interface provides capabilities to choose from a list of flight vehicles or to enter trajectory and geometry information for a vehicle in design. The DOTNET framework acts as a middleware layer between the trajectory engine and the user interface, and also between the web user interface and the OLAP database. Trajectory output can be obtained in TecPlot format, Excel output or KML (Keyhole Markup Language) format. The framework employs an API (application programming interface) to convert trajectory data into a formatted KML file that is used by Google Earth for simulating Earth-entry fly-by visualizations.
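
    A trajectory-to-KML conversion of the kind described can be sketched as a generic KML 2.2 LineString writer. This is not Sparta's actual converter, and the placemark name and coordinates in the test are hypothetical.

    ```python
    def trajectory_to_kml(name, points):
        """Serialize a trajectory as a KML Placemark with a LineString.
        points: iterable of (longitude_deg, latitude_deg, altitude_m);
        KML coordinates are ordered lon,lat,alt."""
        coords = " ".join(f"{lon},{lat},{alt}" for lon, lat, alt in points)
        return (
            '<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            f'<Placemark><name>{name}</name>'
            '<LineString><altitudeMode>absolute</altitudeMode>'
            f'<coordinates>{coords}</coordinates>'
            '</LineString></Placemark></Document></kml>'
        )
    ```

    The resulting file can be opened directly in Google Earth, which renders the LineString as a fly-by path at absolute altitude.
    
    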

  8. LHCbDIRAC as Apache Mesos microservices

    CERN Multimedia

    Couturier, Ben

    2016-01-01

    The LHCb experiment relies on LHCbDIRAC, an extension of DIRAC, to drive its offline computing. This middleware provides a development framework and a complete set of components for building distributed computing systems. These components are currently installed and run on virtual machines (VM) or bare metal hardware. Due to the increased workload, high availability is becoming more and more important for the LHCbDIRAC services, and the current installation model is showing its limitations. Apache Mesos is a cluster manager which aims at abstracting heterogeneous physical resources on which various tasks can be distributed through so-called "frameworks". The Marathon framework is suitable for long-running tasks such as the DIRAC services, while the Chronos framework meets the needs of cron-like tasks like the DIRAC agents. A combination of the service discovery tool Consul together with HAProxy makes it possible to expose the running containers to the outside world while hiding their dynamic placements. Such an arc...
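
    The role Consul and HAProxy play here, exposing dynamically placed containers behind stable service names, can be illustrated with a toy registry. This is a conceptual sketch under assumed names, not the Consul API.

    ```python
    class Registry:
        """Toy service registry: tasks register their dynamic host:port
        endpoints under a stable service name; a front-end resolves the
        first healthy endpoint, hiding placement from clients."""
        def __init__(self):
            self._services = {}

        def register(self, name, endpoint, healthy=True):
            self._services.setdefault(name, {})[endpoint] = healthy

        def set_health(self, name, endpoint, healthy):
            self._services[name][endpoint] = healthy

        def resolve(self, name):
            for endpoint, ok in self._services.get(name, {}).items():
                if ok:                      # skip endpoints failing health checks
                    return endpoint
            raise LookupError(f"no healthy endpoint for {name}")
    ```

    Clients only ever ask for the service name; when the cluster manager reschedules a container elsewhere, re-registration updates the answer without any client change.
    
    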

  9. COLDEX New Data Acquisition Framework

    CERN Document Server

    Grech, Christian

    2015-01-01

    COLDEX (COLD bore EXperiment) is an experiment of the TE-VSC group installed in the Super Proton Synchrotron (SPS) which mimics an LHC-type cryogenic vacuum system. In the framework of the High Luminosity upgrade of the LHC (HL-LHC project), COLDEX was recommissioned in 2014 in order to validate carbon coating performance at cryogenic temperature with LHC-type beams. To achieve this mission, a data acquisition system is needed to retrieve and store information from the experiment’s different systems (vacuum, cryogenics, controls, safety) and perform specific calculations. This work aimed to completely redesign, implement, test and operate a brand new data acquisition framework based on communication with the experiment’s PLCs for the devices available over the network. The communication protocol to the PLCs is based on data retrieval both from CERN middleware infrastructures (CMW, JAPC) and on a novel open source Simatic S7 data exchange package over TCP/IP (libnodave).

  10. DIRAC reliable data management for LHCb

    CERN Document Server

    Smith, A C

    2008-01-01

    DIRAC, LHCb's Grid Workload and Data Management System, utilizes WLCG resources and middleware components to perform distributed computing tasks satisfying LHCb's Computing Model. The Data Management System (DMS) handles data transfer and data access within LHCb. Its scope ranges from the output of the LHCb Online system to Grid-enabled storage for all data types. It supports metadata for these files in replica and bookkeeping catalogues, allowing dataset selection and localization. The DMS controls the movement of files in a redundant fashion whilst providing utilities for accessing all metadata. To do these tasks effectively the DMS requires complete self integrity between its components and external physical storage. The DMS provides highly redundant management of all LHCb data to leverage available storage resources and to manage transient errors in underlying services. It provides data driven and reliable distribution of files as well as reliable job output upload, utilizing VO Boxes at LHCb Tier1 sites ...

  11. A Cross-Platform Tactile Capabilities Interface for Humanoid Robots

    Directory of Open Access Journals (Sweden)

    Jie Ma

    2016-04-01

    Full Text Available This article presents the core elements of a cross-platform tactile capabilities interface (TCI) for humanoid arms. The aim of the interface is to reduce the cost of developing humanoid robot capabilities by supporting reuse through cross-platform deployment. The article presents a comparative analysis of existing robot middleware frameworks, as well as the technical details of the TCI framework, which builds on the existing YARP platform. The TCI framework currently includes robot arm actuators with robot skin sensors. It presents such hardware in a platform-independent manner, making it possible to write robot control software that can be executed on different robots through the TCI framework. The TCI framework supports multiple humanoid platforms, and this article also presents a case study of a cross-platform implementation of a set of tactile protective withdrawal reflexes that have been realised on both the Nao and iCub humanoid robot platforms using the same high-level source code.
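
    A platform-independent capabilities interface of this kind is essentially an abstract interface plus per-robot adapters: reflex code is written once against the interface and runs on any robot with an adapter. A minimal sketch, assuming hypothetical method names (not the actual TCI/YARP API).

    ```python
    from abc import ABC, abstractmethod

    class TactileArm(ABC):
        """Platform-independent arm interface; Nao or iCub support would be
        provided by adapters implementing these two methods."""
        @abstractmethod
        def skin_pressure(self) -> float: ...
        @abstractmethod
        def retract(self) -> None: ...

    def withdrawal_reflex(arm: TactileArm, threshold: float = 0.5) -> bool:
        """Platform-independent reflex: retract when skin pressure exceeds
        the threshold. Returns True if the reflex fired."""
        if arm.skin_pressure() > threshold:
            arm.retract()
            return True
        return False

    class SimArm(TactileArm):
        """Simulated adapter standing in for a concrete robot platform."""
        def __init__(self, pressure):
            self.pressure, self.retracted = pressure, False
        def skin_pressure(self):
            return self.pressure
        def retract(self):
            self.retracted = True
    ```

    The reflex function never mentions a concrete robot, which is what allows the same high-level source code to run on multiple platforms.
    
    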

  12. An E-government Interoperability Platform Supporting Personal Data Protection Regulations

    Directory of Open Access Journals (Sweden)

    Laura González

    2016-08-01

    Full Text Available Public agencies are increasingly required to collaborate with each other in order to provide high-quality e-government services. This collaboration is usually based on the service-oriented approach and supported by interoperability platforms. Such platforms are specialized middleware-based infrastructures enabling the provision, discovery and invocation of interoperable software services. In turn, given that personal data handled by governments are often very sensitive, most governments have developed some sort of legislation focusing on data protection. This paper proposes solutions for monitoring and enforcing data protection laws within an E-government Interoperability Platform. In particular, the proposal addresses requirements posed by the Uruguayan Data Protection Law and the Uruguayan E-government Platform, although it can also be applied in similar scenarios. The solutions are based on well-known integration mechanisms (e.g. Enterprise Service Bus) as well as recognized security standards (e.g. eXtensible Access Control Markup Language) and were completely prototyped leveraging the SwitchYard ESB product.
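
    Enforcing a data-protection policy at the integration layer amounts to an ESB-style interceptor that consults a policy decision point before forwarding a message, in the spirit of XACML's PEP/PDP split. A minimal sketch with hypothetical rule and message shapes; real deployments would express the rules in XACML rather than Python tuples.

    ```python
    def make_pdp(rules):
        """Policy decision point: rules are (agency, resource, allow)
        triples; anything not explicitly matched is denied by default."""
        def decide(agency, resource):
            for a, r, allow in rules:
                if a == agency and r == resource:
                    return allow
            return False
        return decide

    def intercept(pdp, message):
        """Policy enforcement point: block personal-data requests the
        policy denies; pass everything else through unchanged."""
        if message.get("personal_data") and not pdp(message["from"], message["resource"]):
            raise PermissionError("data-protection policy denies access")
        return message
    ```

    Placing the check in the bus rather than in each service means every inter-agency exchange is monitored at a single point, which is the design choice the paper's ESB-based solution reflects.
    
    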

  13. Sphagnum-dominated bog systems are highly effective yet variable sources of bio-available iron to marine waters

    International Nuclear Information System (INIS)

    Krachler, Regina; Krachler, Rudolf F.; Wallner, Gabriele; Steier, Peter; El Abiead, Yasin; Wiesinger, Hubert; Jirsa, Franz; Keppler, Bernhard K.

    2016-01-01

    Iron is a micronutrient of particular interest as low levels of iron limit primary production of phytoplankton and carbon fluxes in extended regions of the world's oceans. Sphagnum-peatland runoff is extraordinarily rich in dissolved humic-bound iron. Given that several of the world's largest wetlands are Sphagnum-dominated peatlands, this ecosystem type may serve as one of the major sources of iron to the ocean. Here, we studied five near-coastal creeks in North Scotland using freshwater/seawater mixing experiments of natural creek water and synthetic seawater based on a ⁵⁹Fe radiotracer technique combined with isotopic characterization of dissolved organic carbon by Accelerator Mass Spectrometry. Three of the creeks meander through healthy Sphagnum-dominated peat bogs and the two others through modified peatlands which have been subject to artificial drainage for centuries. The results revealed that, at the time of sampling (August 16–24, 2014), the creeks that run through modified peatlands delivered 11–15 μg iron per liter creek water to seawater, whereas the creeks that run through intact peatlands delivered 350–470 μg iron per liter creek water to seawater. To find out whether this humic-bound iron is bio-available to marine algae, we performed algal growth tests using the unicellular flagellated marine prymnesiophyte Diacronema lutheri and the unicellular marine green alga Chlorella salina, respectively. In both cases, the riverine humic material provided a highly bio-available source of iron to the marine algae. These results add a new item to the list of ecosystem services of Sphagnum-peatlands. - Highlights: • We report that peat-bogs are sources of bio-available iron to marine algae. • This iron is effectively chelated with aquatic humic acids. • The radiocarbon age of the iron-carrying aquatic humic acids was up to 550 years. • Analysis was focused on mixing experiments of iron-rich creek water with seawater. • Drained peatlands with

  14. Sphagnum-dominated bog systems are highly effective yet variable sources of bio-available iron to marine waters

    Energy Technology Data Exchange (ETDEWEB)

    Krachler, Regina, E-mail: regina.krachler@univie.ac.at [Institute of Inorganic Chemistry, University of Vienna, Währingerstraße 42, 1090 Vienna (Austria); Krachler, Rudolf F.; Wallner, Gabriele [Institute of Inorganic Chemistry, University of Vienna, Währingerstraße 42, 1090 Vienna (Austria); Steier, Peter [Isotope Research and Nuclear Physics, University of Vienna, Währingerstraße 17, 1090 Vienna (Austria); El Abiead, Yasin; Wiesinger, Hubert [Institute of Inorganic Chemistry, University of Vienna, Währingerstraße 42, 1090 Vienna (Austria); Jirsa, Franz [Institute of Inorganic Chemistry, University of Vienna, Währingerstraße 42, 1090 Vienna (Austria); University of Johannesburg, Department of Zoology, P. O. Box 524, Auckland Park 2006 (South Africa); Keppler, Bernhard K. [Institute of Inorganic Chemistry, University of Vienna, Währingerstraße 42, 1090 Vienna (Austria)

    2016-06-15

    Iron is a micronutrient of particular interest as low levels of iron limit primary production of phytoplankton and carbon fluxes in extended regions of the world's oceans. Sphagnum-peatland runoff is extraordinarily rich in dissolved humic-bound iron. Given that several of the world's largest wetlands are Sphagnum-dominated peatlands, this ecosystem type may serve as one of the major sources of iron to the ocean. Here, we studied five near-coastal creeks in North Scotland using freshwater/seawater mixing experiments of natural creek water and synthetic seawater based on a ⁵⁹Fe radiotracer technique combined with isotopic characterization of dissolved organic carbon by Accelerator Mass Spectrometry. Three of the creeks meander through healthy Sphagnum-dominated peat bogs and the two others through modified peatlands which have been subject to artificial drainage for centuries. The results revealed that, at the time of sampling (August 16–24, 2014), the creeks that run through modified peatlands delivered 11–15 μg iron per liter creek water to seawater, whereas the creeks that run through intact peatlands delivered 350–470 μg iron per liter creek water to seawater. To find out whether this humic-bound iron is bio-available to marine algae, we performed algal growth tests using the unicellular flagellated marine prymnesiophyte Diacronema lutheri and the unicellular marine green alga Chlorella salina, respectively. In both cases, the riverine humic material provided a highly bio-available source of iron to the marine algae. These results add a new item to the list of ecosystem services of Sphagnum-peatlands. - Highlights: • We report that peat-bogs are sources of bio-available iron to marine algae. • This iron is effectively chelated with aquatic humic acids. • The radiocarbon age of the iron-carrying aquatic humic acids was up to 550 years. • Analysis was focused on mixing experiments of iron-rich creek water with seawater. • Drained

  15. How valid are commercially available medical simulators?

    Directory of Open Access Journals (Sweden)

    Stunt JJ

    2014-10-01

    Full Text Available JJ Stunt,1 PH Wulms,2 GM Kerkhoffs,1 J Dankelman,2 CN van Dijk,1 GJM Tuijthof1,2 1Orthopedic Research Center Amsterdam, Department of Orthopedic Surgery, Academic Medical Centre, Amsterdam, the Netherlands; 2Department of Biomechanical Engineering, Faculty of Mechanical, Materials and Maritime Engineering, Delft University of Technology, Delft, the Netherlands Background: Since simulators offer important advantages, they are increasingly used in medical education and medical skills training that require physical actions. A wide variety of simulators have become commercially available. It is of high importance that evidence is provided that training on these simulators can actually improve clinical performance on live patients. Therefore, the aim of this review is to determine the availability of different types of simulators and the evidence of their validation, to offer insight regarding which simulators are suitable to use in the clinical setting as a training modality. Summary: Four hundred and thirty-three commercially available simulators were found, of which 405 (94%) were physical models. One hundred and thirty validation studies evaluated 35 (8%) commercially available medical simulators for levels of validity ranging from face to predictive validity. Only simulators that are used for surgical skills training were validated for the highest validity level (predictive validity). Twenty-four (37%) simulators that give objective feedback had been validated. Studies that tested more powerful levels of validity (concurrent and predictive validity) were methodologically stronger than studies that tested more elementary levels of validity (face, content, and construct validity). Conclusion: Ninety-three point five percent of the commercially available simulators are not known to be tested for validity.
Although the importance of (a high level of) validation depends on the difficulty level of skills training and possible consequences when skills are

  16. Supporting Safe Content-Inspection of Web Traffic

    National Research Council Canada - National Science Library

    Pal, Partha; Atighetchi, Michael

    2008-01-01

    ... for various reasons, including security. Software wrappers, firewalls, Web proxies, and a number of middleware constructs all depend on interception to achieve their respective security, fault tolerance, interoperability, or load balancing objectives...

  17. How valid are commercially available medical simulators?

    NARCIS (Netherlands)

    Stunt, J.J.; Wulms, P.H.; Kerkhoffs, G.M.; Dankelman, J.; Van Dijk, C.N.; Tuijthof, G.J.M.

    2014-01-01

    Background: Since simulators offer important advantages, they are increasingly used in medical education and medical skills training that require physical actions. A wide variety of simulators have become commercially available. It is of high importance that evidence is provided that training on

  18. Assessment Of The Availability And Utilization Of Icts For Teaching And Learning In Secondary Schools - Case Of A High School In Kwekwe Zimbabwe.

    Directory of Open Access Journals (Sweden)

    Sibanda Mavellas

    2015-08-01

    Full Text Available This paper looked at the availability of common educational Information Communications Technologies (ICTs) in secondary schools, using a high school in Kwekwe, Zimbabwe as a case study. Such technologies include computers, radios, televisions, networks, wireless technologies, interactive boards, internet, email, eLearning applications, video conferencing and projectors, to mention but a few. It further assessed whether the available ICTs are being utilized by teachers and students, looking at such usage activities as preparation for lessons, lesson delivery, issuing of assignments, research and communications. The research further identified the factors that are hindering ICT utilization in these schools, among them lack of power supply, insufficient resources, fear of technology, lack of interest, ICT skills deficiency, higher ICT cost and poor physical infrastructure. The findings were tabulated and analyzed. Recommendations were put forward on how to improve ICT availability and utilization at the school, and at schools in general, for the betterment of teaching and learning. Conclusions were drawn from the findings.

  19. Places available

    CERN Multimedia

    2004-01-01

    If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. Places available The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses : Introduction à Outlook : 19.8.2004 (1 journée) Outlook (short course I) : E-mail : 31.8.2004 (2 hours, morning) Outlook (short course II) : Calendar, Tasks and Notes : 31.8.2004 (2 hours, afternoon) Instructor-led WBTechT Study or Follow-up for Microsoft Applications : 7.9.2004 (morning) Outlook (short course III) : Meetings and Delegation : 7.9.2004 (2 hours, afternoon) Introduction ...

  20. Aplicaciones y seguridad en la implementación de competencias prácticas en entornos de gestión del aprendizaje

    Directory of Open Access Journals (Sweden)

    Gil, R.

    2011-12-01

    Full Text Available In this article, some improvements and contributions are introduced into two main features of Learning Management Systems (LMSs). The first feature is security and authentication functionality, where we present a model that combines traditional authentication based on username and password with authentication based on fingerprints. The second feature is access to remote and virtual laboratories, where we present a middleware architecture that combines the duplicated services provided by both the laboratories and the LMS, in order to facilitate their integration and to provide unified access from the LMS to the remote and virtual laboratories.

    In this article we present some improvements and contributions in two aspects of learning management systems (LMSs). The first aspect is security and authentication, where we present a way of combining traditional username-and-password authentication with biometric authentication based on fingerprint matching. The second aspect is a middleware architecture capable of providing access from the LMS to different remote and virtual (online) laboratories, so that duplication of the services provided by both is avoided and the services provided by the LMS are reused in practical sessions.

  1. Full On-Device Stay Points Detection in Smartphones for Location-Based Mobile Applications

    Directory of Open Access Journals (Sweden)

    Rafael Pérez-Torres

    2016-10-01

    Full Text Available The tracking of frequently visited places, also known as stay points, is a critical feature in location-aware mobile applications as a way to adapt the information and services provided to smartphone users according to their moving patterns. Location based applications usually employ the GPS receiver along with Wi-Fi hot-spots and cellular cell tower mechanisms for estimating user location. Typically, fine-grained GPS location data are collected by the smartphone and transferred to dedicated servers for trajectory analysis and stay points detection. Such a Mobile Cloud Computing approach has been successfully employed for extending the smartphone's battery lifetime by offloading computation costs, assuming that on-device stay points detection is prohibitive. In this article, we propose and validate the feasibility of an alternative event-driven mechanism for stay points detection that is executed fully on-device, and that provides higher energy savings by avoiding communication costs. Our solution is encapsulated in a sensing middleware for Android smartphones, where a stream of GPS location updates is collected in the background, supporting duty cycling schemes, and incrementally analyzed following an event-driven paradigm for stay points detection. To evaluate the performance of the proposed middleware, real-world experiments were conducted under different stress levels, validating its power efficiency when compared against a Mobile Cloud Computing oriented solution.
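    The record does not reproduce the paper's event-driven detector, but the classic threshold-based stay-point detection that such middleware typically builds on is easy to sketch. The function names and the distance/time thresholds below are illustrative assumptions, not taken from the paper:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(p, q):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (p[0], p[1], q[0], q[1]))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def stay_points(track, d_max=200.0, t_min=20 * 60):
    """track: time-ordered list of (lat, lon, unix_time) fixes.
    A stay point is emitted whenever the user remains within d_max meters
    of an anchor fix for at least t_min seconds; its location is the
    centroid of the fixes in that window."""
    points, i, n = [], 0, len(track)
    while i < n:
        j = i + 1
        while j < n and haversine_m(track[i][:2], track[j][:2]) <= d_max:
            j += 1
        if track[j - 1][2] - track[i][2] >= t_min:
            lats = [p[0] for p in track[i:j]]
            lons = [p[1] for p in track[i:j]]
            points.append((sum(lats) / len(lats), sum(lons) / len(lons)))
        i = j
    return points
```

    Processing fixes one at a time against the current anchor, as above, is what makes an incremental, on-device implementation natural: no full trajectory ever needs to leave the phone.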

  2. Development of Hardware-in-the-Loop Simulation Based on Gazebo and Pixhawk for Unmanned Aerial Vehicles

    Science.gov (United States)

    Nguyen, Khoa Dang; Ha, Cheolkeun

    2018-04-01

    Hardware-in-the-loop simulation (HILS) is well known as an effective approach in the design of unmanned aerial vehicle (UAV) systems, enabling engineers to test the control algorithm on a hardware board with a UAV model in the software. Performance of HILS is determined by the performance of the control algorithm, the developed model, and the signal transfer between the hardware and software. The result of HILS is degraded if any signal cannot be transferred to the correct destination. Therefore, this paper aims to develop middleware software to secure communications in a HILS system for testing the operation of a quad-rotor UAV. In our HILS, the Gazebo software is used to generate a nonlinear six-degrees-of-freedom (6DOF) model, sensor model, and 3D visualization for the quad-rotor UAV. Meanwhile, the flight control algorithm is designed and implemented on the Pixhawk hardware. New middleware software, referred to as the control application software (CAS), is proposed to ensure the connection and data transfer between Gazebo and Pixhawk using the multithread structure in Qt Creator. The CAS provides a graphical user interface (GUI), allowing the user to monitor the status of packet transfer, and to perform flight control commands and real-time tuning of parameters for the quad-rotor UAV. Numerical implementations have been performed to prove the effectiveness of the middleware software CAS suggested in this paper.
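    The CAS itself is implemented with Qt's multithread structure; purely as a hedged sketch of the relay idea it describes (class and endpoint names here are mine, and in-memory queues stand in for the actual Gazebo and Pixhawk links), one forwarding direction of such a bridge might look like:

```python
import threading
import queue

class Relay(threading.Thread):
    """One forwarding direction of a CAS-style bridge: drain packets from a
    source endpoint and push them to a destination, counting transfers so a
    monitoring GUI could display packet status."""
    def __init__(self, src, dst):
        super().__init__(daemon=True)
        self.src, self.dst = src, dst
        self.forwarded = 0

    def run(self):
        while True:
            pkt = self.src.get()
            if pkt is None:          # sentinel: shut this relay down
                break
            self.dst.put(pkt)
            self.forwarded += 1

# Full duplex needs one relay thread per direction; one is shown here.
gazebo_out, pixhawk_in = queue.Queue(), queue.Queue()
link = Relay(gazebo_out, pixhawk_in)
link.start()
```

    A real bridge would replace the queues with the serial/UDP endpoints of the simulator and the flight controller, but the threading shape is the same.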

  3. School wellness policies and foods and beverages available in schools.

    Science.gov (United States)

    Hood, Nancy E; Colabianchi, Natalie; Terry-McElrath, Yvonne M; O'Malley, Patrick M; Johnston, Lloyd D

    2013-08-01

    Since the 2006–2007 school year, education agencies (e.g., school districts) participating in U.S. federal meal programs have been required to have wellness policies. To date, this is the only federal policy that addresses foods and beverages sold outside of school meals (in competitive venues). The aim was to examine the extent to which federally required components of school wellness policies are associated with availability of foods and beverages in competitive venues. Questionnaire data were collected in the 2007–2008 through 2010–2011 school years from 892 middle and 1019 high schools in nationally representative samples. School administrators reported the extent to which schools had required wellness policy components (goals, nutrition guidelines, implementation plan/person responsible, stakeholder involvement) and healthier and less-healthy foods and beverages available in competitive venues. Analyses were conducted in 2012. About one third of students (31.8%) were in schools with all four wellness policy components. Predominantly white schools had higher wellness policy scores than other schools. After controlling for school characteristics, higher wellness policy scores were associated with higher availability of low-fat and whole-grain foods and lower availability of regular-fat/sugared foods in middle and high schools. In middle schools, higher scores also were associated with lower availability of 2%/whole milk. High schools with higher scores also had lower sugar-sweetened beverage availability and higher availability of 1%/nonfat milk, fruits/vegetables, and salad bars. Because they are associated with lower availability of less-healthy and higher availability of healthier foods and beverages in competitive venues, federally required components of school wellness policies should be encouraged in all schools. Copyright © 2013 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  4. Organisation of Dietary Control for Nutrition-Training Intervention Involving Periodized Carbohydrate (CHO) Availability and Ketogenic Low CHO High Fat (LCHF) Diet.

    Science.gov (United States)

    Mirtschin, Joanne G; Forbes, Sara F; Cato, Louise E; Heikura, Ida A; Strobel, Nicki; Hall, Rebecca; Burke, Louise M

    2018-02-12

    We describe the implementation of a 3-week dietary intervention in elite race walkers at the Australian Institute of Sport, with a focus on the resources and strategies needed to accomplish a complex study of this scale. Interventions involved: traditional guidelines of high carbohydrate (CHO) availability for all training sessions (HCHO); a periodized CHO diet which integrated sessions with low CHO and high CHO availability within the same total CHO intake (PCHO); and a ketogenic low-CHO high-fat diet (LCHF). Seven-day menus and recipes were constructed for a communal eating setting to meet nutritional goals as well as individualized food preferences and special needs. Menus also included nutrition support pre-, during and post-exercise. Daily monitoring, via observation and food checklists, showed that energy and macronutrient targets were achieved: diets were matched for energy (~14.8 MJ/d) and protein (~2.1 g/kg/d), and achieved the desired differences for fat and CHO: HCHO and PCHO: CHO = 8.5 g/kg/d, 60% of energy; fat = 20% of energy; LCHF: CHO = 0.5 g/kg/d, fat = 78% of energy. There were no differences in micronutrient intakes or density between the HCHO and PCHO diets; however, the micronutrient density of LCHF was significantly lower. Daily food costs per athlete were similar for each diet (~AUD$27 ± 10). Successful implementation and monitoring of dietary interventions in sports nutrition research of the scale of the present study require meticulous planning and the expertise of chefs and sports dietitians. Different approaches to sports nutrition support raise practical challenges around cost, micronutrient density, accommodation of special needs and sustainability.
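    The reported targets hang together arithmetically. As a back-of-envelope check (the carbohydrate energy density and the derived body mass below are my assumptions, not figures from the record):

```python
# Atwater-style energy density of carbohydrate, ~16.7 kJ/g (assumed)
KJ_PER_G_CHO = 16.7

energy_kj_per_day = 14.8e3   # diets matched at ~14.8 MJ/day
cho_energy_fraction = 0.60   # HCHO/PCHO: 60% of energy as CHO

# Grams of CHO per day implied by the energy target and the 60% share
cho_g_per_day = energy_kj_per_day * cho_energy_fraction / KJ_PER_G_CHO

# At the reported 8.5 g/kg/day, this implies an athlete body mass of
# roughly 60-65 kg, plausible for elite race walkers
implied_body_mass_kg = cho_g_per_day / 8.5
```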

  5. Incorporating wind availability into land use regression modelling of air quality in mountainous high-density urban environment.

    Science.gov (United States)

    Shi, Yuan; Lau, Kevin Ka-Lun; Ng, Edward

    2017-08-01

    Urban air quality serves as an important function of the quality of urban life. Land use regression (LUR) modelling of air quality is essential for conducting health impact assessments but is more challenging in a mountainous high-density urban scenario due to the complexities of the urban environment. In this study, a total of 21 LUR models are developed for seven kinds of air pollutants (gaseous air pollutants CO, NO₂, NOₓ, O₃, SO₂ and particulate air pollutants PM₂.₅, PM₁₀) with reference to three different time periods (summertime, wintertime and annual average of 5-year long-term hourly monitoring data from the local air quality monitoring network) in Hong Kong. Under the mountainous high-density urban scenario, we improved the traditional LUR modelling method by incorporating wind availability information into LUR modelling based on surface geomorphometrical analysis. As a result, 269 independent variables were examined to develop the LUR models by using the "ADDRESS" independent variable selection method and stepwise multiple linear regression (MLR). Cross validation has been performed for each resultant model. The results show that wind-related variables are included in most of the resultant models as statistically significant independent variables. Compared with the traditional method, a maximum increase of 20% was achieved in the prediction performance of the annual averaged NO₂ concentration level by incorporating wind-related variables into LUR model development. Copyright © 2017 Elsevier Inc. All rights reserved.
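    The "ADDRESS" selection method is specific to the study, but the stepwise multiple linear regression it feeds into can be illustrated with a generic forward-selection sketch (the function, variable names and the simple stopping rule are illustrative assumptions, not the authors' procedure):

```python
import numpy as np

def forward_stepwise(X, y, names, max_vars=5):
    """Greedy forward selection: at each step, add the candidate predictor
    whose inclusion most reduces the residual sum of squares of an
    ordinary-least-squares fit with an intercept."""
    chosen, remaining = [], list(range(X.shape[1]))
    while remaining and len(chosen) < max_vars:
        best_j, best_rss = None, None
        for j in remaining:
            A = np.column_stack([np.ones(len(y)), X[:, chosen + [j]]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = float(np.sum((y - A @ beta) ** 2))
            if best_rss is None or rss < best_rss:
                best_j, best_rss = j, rss
        chosen.append(best_j)
        remaining.remove(best_j)
    return [names[j] for j in chosen]
```

    A production version would stop on a p-value or cross-validated error criterion rather than a fixed variable count, which is where the study's cross validation step comes in.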

  6. Estimating photosynthesis with high resolution field spectroscopy in a Mediterranean grassland under different nutrient availability

    Science.gov (United States)

    Perez-Priego, O.; Guan, J.; Fava, F.; Rossini, M.; Wutzler, T.; Moreno, G.; Carrara, A.; Kolle, O.; Schrumpf, M.; Reichstein, M.; Migliavacca, M.

    2014-12-01

    Recent studies have shown how human-induced N:P imbalances are affecting essential processes (e.g. photosynthesis, plant growth rate) that lead to important changes in ecosystem structure and function. In this regard, the accuracy of approaches based on remotely-sensed data for monitoring and modeling gross primary production (GPP) relies on the ability of vegetation indices (VIs) to track the dynamics of vegetation physiological and biophysical properties/variables. Promising results have been recently obtained when chlorophyll-sensitive VIs and chlorophyll fluorescence are combined with structural indices in the framework of Monteith's light use efficiency (LUE) model. However, further ground-based experiments are required to validate LUE model performance, and its capability to be generalized under different nutrient availability conditions. In this study, the overall objective was to investigate the sensitivity of VIs to track short- and long-term GPP variations in a Mediterranean grassland under different N and P fertilization treatments. Spectral VIs were acquired manually using high resolution spectrometers (HR4000, OceanOptics, USA) along a phenological cycle. The VIs examined included the photochemical reflectance index (PRI), MERIS terrestrial-chlorophyll index (MTCI) and normalized difference vegetation index (NDVI). Solar-induced chlorophyll fluorescence calculated at the oxygen absorption band O₂-A (F760) using spectral fitting methods was also used. Simultaneously, measurements of GPP and environmental variables were conducted using a transient-state canopy chamber. Overall, GPP, F760 and VIs showed a clear seasonal time-trend in all treatments, which was driven by the phenological development of the grassland. Results showed significant differences (p<0.05) in midday GPP values between plots with and without N addition, in particular at the peak of the growing season during the flowering stage and at the end of the season during senescence.
While

  7. Middleware Cerberus usando RFID para rastreabilidade bovina

    OpenAIRE

    Silva, Márcio Roberto

    2009-01-01

    Management in precision livestock farming is increasingly necessary to guarantee profits and to win new markets, which demand quality and certification. This dissertation addresses a traceability system for cattle-raising control using RFID chips, since the secure identification of animals is the basis for bovine and bubaline traceability. In view of consumer-market demands, the Brazilian government instituted the Sistema Brasileiro de Identificação e Certifica...

  8. Intelligent Middle-Ware Architecture for Mobile Networks

    Science.gov (United States)

    Rayana, Rayene Ben; Bonnin, Jean-Marie

    Recent advances in the electronic and automotive industries, as well as in wireless telecommunication technologies, have drawn a new picture in which each vehicle becomes “fully networked”. Multiple stakeholders (network operators, drivers, car manufacturers, service providers, etc.) will participate in this emerging market, which could grow following various models. To free the market from technical constraints, it is important to return to the basics of the Internet, i.e., providing on-board devices with fully operational Internet connectivity (IPv6).

  9. Designing middleware for context awareness in agriculture

    DEFF Research Database (Denmark)

    Kjær, Kristian Ellebæk

    2008-01-01

    ..., and increasingly advanced GPS units. However, except for location based services, like knowing your location based on GPS, context awareness has not really materialised yet. In modern agriculture, computers are pervasive, but only in the sense that they are present everywhere. All types of equipment, ranging from...

  10. Designing middleware for context awareness in agriculture

    DEFF Research Database (Denmark)

    Kjær, Kristian Ellebæk

    2008-01-01

    More than a decade ago, pervasive computing and context awareness were envisioned as the future of computing [16], with initial work concentrating on location, typically indoor. Today, small, handheld computers of various forms and purposes are becoming pervasive in the form of PDAs, mobile phones, and increasingly advanced GPS units. However, except for location based services, like knowing your location based on GPS, context awareness has not really materialised yet. In modern agriculture, computers are pervasive, but only in the sense that they are present everywhere. All types of equipment, ranging from feeding- and ventilation systems to tractors, have built-in computers and, in most cases, can also be queried or controlled remotely. These systems provide an excellent base for gathering context, which may then be exploited to ease the work of the farmer. Furthermore, additional sensors may collect...

  11. An Adaptive Middleware for Improved Computational Performance

    DEFF Research Database (Denmark)

    Bonnichsen, Lars Frydendal

    The performance improvements in computer systems over the past 60 years have been fueled by an exponential increase in energy efficiency. In recent years, the phenomenon known as the end of Dennard’s scaling has slowed energy efficiency improvements — but improving computer energy efficiency is more important now than ever. Traditionally, most improvements in computer energy efficiency have come from improvements in lithography — the ability to produce smaller transistors — and computer architecture — the ability to apply those transistors efficiently. Since the end of scaling, we have seen... We are improving computational performance by exploiting modern hardware features, such as dynamic voltage-frequency scaling and transactional memory. Adapting software is an iterative process, requiring that we continually revisit it to meet new requirements or realities; a time-consuming process...

  12. Iron and zinc availability in maize lines

    Directory of Open Access Journals (Sweden)

    Valéria Aparecida Vieira Queiroz

    2011-09-01

    Full Text Available The aim of this study was to characterize Zn and Fe availability, by the phytic acid/Zn and phytic acid/Fe molar ratios, in 22 tropical maize inbred lines with different genetic backgrounds. The Zn and Fe levels were determined by atomic absorption spectrophotometry and P through a colorimetric method. Three screening methods for phytic acid (Phy) analysis were tested and one, based on the 2,2'-bipyridine reaction, was selected. There was significant variability in the contents of zinc (17.5 to 42 mg.kg-1), iron (12.2 to 36.7 mg.kg-1), phosphorus (230 to 400 mg.100 g-1), phytic acid (484 to 1056 mg.100 g-1), phytic acid P (140 to 293 mg.100 g-1) and available-P (43.5 to 199.5 mg.100 g-1), and in the available-P/total-P ratio (0.14 to 0.50), and the Phy/Zn (18.0 to 43.5) and Phy/Fe (16.3 to 45.5) molar ratios. Lines 560977, 560978 and 560982 had greater availability of Zn, and lines 560975, 560977, 561010 and 561011 showed better Fe availability. Lines 560975, 560977 and 560978 also showed a better available-P/total-P ratio. Thus, lines 560975, 560977 and 560978 were considered to have potential for the development of maize cultivars with high availability of Fe and/or Zn.
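    The phytic acid-to-mineral molar ratios above are straightforward to compute once the units are reconciled (phytic acid is reported per 100 g, the metals per kg). A small sketch; the molar masses and the example values are mine, chosen to fall inside the reported ranges rather than taken from any specific line:

```python
# Molar masses in g/mol (assumed reference values)
M_PHYTIC = 660.04   # phytic acid, C6H18O24P6
M_ZN = 65.38
M_FE = 55.85

def phy_metal_molar_ratio(phy_mg_per_100g, metal_mg_per_kg, metal_molar_mass):
    """Convert both analytes to mmol/kg of grain, then take the ratio."""
    phy_mmol_per_kg = phy_mg_per_100g * 10 / M_PHYTIC   # mg/100 g -> mg/kg -> mmol/kg
    metal_mmol_per_kg = metal_mg_per_kg / metal_molar_mass
    return phy_mmol_per_kg / metal_mmol_per_kg

# Illustrative mid-range values: Phy = 660 mg/100 g, Zn = 30 mg/kg
ratio = phy_metal_molar_ratio(660, 30, M_ZN)   # falls within the 18.0-43.5 range
```

    Lower ratios indicate better mineral availability, which is why the lines singled out above are those with the smallest Phy/Zn and Phy/Fe values.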

  13. Improved Gasifier Availability with Bed Material and Additives

    Energy Technology Data Exchange (ETDEWEB)

    Grootjes, A.J.; Van der Meijden, C.M.; Visser, H.J.M.; Van der Drift, A. [ECN Biomass and Energy Efficiency, Petten (Netherlands)

    2013-07-15

    In order to valorize several feedstocks, gasification is one of the technologies developed over the past decades. ECN developed the MILENA gasifier. In order for MILENA to become a commercial success, the gasifier needs to be feedstock flexible, robust and economically sound, operating with high availability. One of the characteristics of MILENA is high efficiency, but with a higher tar content compared to some other Dual Fluidized Bed (DFB) gasifiers. In order to reduce the issues that are associated with high tar levels in the product gas, the effect of a number of primary measures was studied. This paper presents results obtained in the last two years, focused on improving gasifier availability by conducting experiments in a 25 kWth lab-scale MILENA gasifier. Amongst others, the gas composition, tar content and calorific value of the product gas were compared. Scanning Electron Microscope analysis was used to investigate bed material changes. Results show that Austrian olivine can be activated by Fuel B as well as by Additives A and B. The water-gas shift reaction is enhanced and the tar content is reduced significantly, especially the heavy tars that dominate the tar dew point. Activated olivine has a calcium-rich layer. The results show that with MILENA we are able to lower and control the tar dew point, which will possibly increase the availability of a MILENA gasifier.

  14. Quattor: managing (complex) grid sites

    International Nuclear Information System (INIS)

    Jouvin, M

    2008-01-01

    Quattor is a tool developed to efficiently manage fabrics with hundreds or thousands of Linux machines, while still being able to manage smaller clusters easily. It was originally developed inside the European Data Grid (EDG) project and is now in use at more than 50 grid sites running gLite middleware, ranging from small LCG T3s to very large sites like CERN. Quattor's ability to factorize and to reuse common parts of service configurations permitted the development of the QWG templates: a complete set of standard templates to configure the OS and gLite middleware. Any site can just import and customize the configuration without editing the bulk of the templates. Collaboration around these templates results in a very efficient sharing of installation and configuration information between the sites using them.

  15. A Highly Available Grid Metadata Catalog

    DEFF Research Database (Denmark)

    Jensen, Henrik Thostrup; Kleist, Joshva

    2009-01-01

    ... for the system to function. The data model used in the catalog is RDF, which allows users to create their own name spaces and schemas. Querying is performed using SPARQL. Additionally, the catalog can be used as a synchronization mechanism, by utilizing a compare-and-swap operation. The catalog is accessed using...

  16. Increasing biomass resource availability through supply chain analysis

    International Nuclear Information System (INIS)

    Welfle, Andrew; Gilbert, Paul; Thornley, Patricia

    2014-01-01

    Increased inclusion of biomass in energy strategies all over the world means that greater mobilisation of biomass resources will be required to meet demand. The strategies of many EU countries assume the future use of non-EU sourced biomass. An increasing number of studies call for the UK to consider alternative options, principally to better utilise indigenous resources. This research identifies the indigenous biomass resources that demonstrate the greatest promise for the UK bioenergy sector and evaluates the extent to which different supply chain drivers influence resource availability. The analysis finds that the UK's resources with the greatest primary bioenergy potential are household wastes (>115 TWh by 2050), energy crops (>100 TWh by 2050) and agricultural residues (>80 TWh by 2050). The availability of biomass waste resources was found to demonstrate great promise for the bioenergy sector, although it is highly susceptible to influences, most notably the focus of adopted waste management strategies. Biomass residue resources were found to be the resource category least susceptible to influence, with relatively high near-term availability that is forecast to increase – therefore representing a potentially robust resource for the bioenergy sector. The near-term availability of UK energy crops was found to be much less significant compared to other resource categories. Energy crops represent long-term potential for the bioenergy sector, although achieving the higher limits of availability will be dependent on the successful management of key influencing drivers. The research highlights that the availability of indigenous resources is largely influenced by a few key drivers, contradicting areas of consensus in current UK bioenergy policy. - Highlights: • As global biomass demand increases, focus is placed on indigenous resources. • A Biomass Resource Model is applied to analyse UK biomass supply chain dynamics. • Biomass availability is best increased

  17. Manipulating Carbohydrate Availability Between Twice-Daily Sessions of High-Intensity Interval Training Over 2 Weeks Improves Time-Trial Performance.

    Science.gov (United States)

    Cochran, Andrew J; Myslik, Frank; MacInnis, Martin J; Percival, Michael E; Bishop, David; Tarnopolsky, Mark A; Gibala, Martin J

    2015-10-01

    Commencing some training sessions with reduced carbohydrate (CHO) availability has been shown to enhance skeletal muscle adaptations, but the effect on exercise performance is less clear. We examined whether restricting CHO intake between twice-daily sessions of high-intensity interval training (HIIT) augments improvements in exercise performance and mitochondrial content. Eighteen active but not highly trained subjects (peak oxygen uptake [VO2peak] = 44 ± 9 ml/kg/min), matched for age, sex, and fitness, were randomly allocated to two groups. On each of 6 days over 2 weeks, subjects completed two training sessions, each consisting of 5 × 4-min cycling intervals (60% of peak power), interspersed by 2 min of recovery. Subjects ingested either 195 g of CHO (HI-HI group: ~2.3 g/kg) or 17 g of CHO (HI-LO group: ~0.3 g/kg) during the 3-hr period between sessions. The training-induced improvement in 250-kJ time-trial performance was greater (p = .02) in the HI-LO group (211 ± 66 W to 244 ± 75 W) compared with the HI-HI group (203 ± 53 W to 219 ± 60 W); however, the increase in mitochondrial content was similar between groups, as reflected by similar increases in citrate synthase maximal activity, citrate synthase protein content and cytochrome c oxidase subunit IV protein content (p > .05 for interaction terms). This is the first study to show that a short-term "train low, compete high" intervention can improve whole-body exercise capacity. Further research is needed to determine whether this type of manipulation can also enhance performance in highly trained subjects.

  18. Trends in land and water available for outdoor recreation

    Science.gov (United States)

    Lloyd C. Irland; Thomas Rumpf

    1980-01-01

    A data base for assessing the availability of land for outdoor recreation does not exist. Information on related issues such as vandalism, easements, and land posting is scanty. Construction of a data base for assessing land availability should be a high priority for USFS and HCRS, and for SCORP's and the RPA and RCA assessments.

  19. RFID Supply Chain Management System for Naval Logistics

    National Research Council Canada - National Science Library

    McCredie, Alexander

    2005-01-01

    ...) embodied in the structure of a Dynamic Smart Box (DSB). A middleware called Inteliware interfaces with the RFID components and computers in the DSB and inputs the requisite data into the Dynamic Smart Manifest...

  20. Plant availability: the target for engineering training

    International Nuclear Information System (INIS)

    Buckley, W.J.; Rath, W.R.; Spitulnik, J.J.

    1986-01-01

    Nuclear plant managers and regulators have always assumed that in-depth operator training is essential to safe, reliable plant operations and, consequently, to high availability. In the aftermath of the reactor accident at Three Mile Island Unit 2, increased emphasis has been placed on systemizing operator training to assure these results are achieved. It can be argued that a systematic approach to engineering training will also have a positive impact on plant availability. In this paper, the authors present arguments to support that contention as well as a suggested approach to planning, documenting, and implementing an engineering training program

  1. Sphagnum-dominated bog systems are highly effective yet variable sources of bio-available iron to marine waters.

    Science.gov (United States)

    Krachler, Regina; Krachler, Rudolf F; Wallner, Gabriele; Steier, Peter; El Abiead, Yasin; Wiesinger, Hubert; Jirsa, Franz; Keppler, Bernhard K

    2016-06-15

    Iron is a micronutrient of particular interest as low levels of iron limit primary production of phytoplankton and carbon fluxes in extended regions of the world's oceans. Sphagnum-peatland runoff is extraordinarily rich in dissolved humic-bound iron. Given that several of the world's largest wetlands are Sphagnum-dominated peatlands, this ecosystem type may serve as one of the major sources of iron to the ocean. Here, we studied five near-coastal creeks in North Scotland using freshwater/seawater mixing experiments of natural creek water and synthetic seawater based on a (59)Fe radiotracer technique combined with isotopic characterization of dissolved organic carbon by Accelerator Mass Spectrometry. Three of the creeks meander through healthy Sphagnum-dominated peat bogs and the two others through modified peatlands which have been subject to artificial drainage for centuries. The results revealed that, at the time of sampling (August 16-24, 2014), the creeks that run through modified peatlands delivered 11-15 μg iron per liter creek water to seawater, whereas the creeks that run through intact peatlands delivered 350-470 μg iron per liter creek water to seawater. To find out whether this humic-bound iron is bio-available to marine algae, we performed algal growth tests using the unicellular flagellated marine prymnesiophyte Diacronema lutheri and the unicellular marine green alga Chlorella salina, respectively. In both cases, the riverine humic material provided a highly bio-available source of iron to the marine algae. These results add a new item to the list of ecosystem services of Sphagnum-peatlands. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  2. Mechanical ventilators availability survey in Thai ICUs (ICU-RESOURCE I Study).

    Science.gov (United States)

    Chittawatanarat, Kaweesak; Bunburaphong, Thananchai; Champunot, Ratapum

    2014-01-01

    Mechanical ventilators (MV) have been progressing rapidly. New ventilator modes and supportive equipment have been developed. However, the MV status in Thai ICUs was not available. The objective of this report was to describe the MV supply and availability in Thai ICUs and to review some important characteristics regarding the availability of MV. MATERIAL AND METHOD: The ICU RESOURCE I study (mechanical ventilator part) database was used in the present study. Hospital types, MV brands, and models were recorded. Statistically significant differences between and among groups were defined as p-value ventilators were also a high proportion of the MVs in Thai ICUs. Bennett and Hamilton were the most highly available MV in this survey. Advanced MV models were more available in academic ICUs (Thai Clinical Trial Registry: TCTR-201200005).

  3. Places available**

    CERN Multimedia

    2003-01-01

    If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch ** The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: Project Planning with MS-Project : 15 & 22.1.2004 (2 days) Joint PVSS JCOP Framework Course : 2 sessions : 2 - 6.2.2004 and 16 - 20-2-2004 (5 days) Hands-on Introduction to Python Programming : 16 - 18.2.2004 (3 days - free of charge) C++ for Particle Physicists : 8 - 12.3.2004 ( 6 X 4-hour sessions)

  4. Places available**

    CERN Document Server

    2004-01-01

    If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch ** The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: The JAVA Programming Language Level 1 :9 & 10.1.2004 (2 days) The JAVA Programming Language Level 2 : 11 to 13.1.2004 (3 days) Hands-on Introduction to Python Programming : 16 - 18.2.2004 (3 days - free of charge) CLEAN-2002 : Working in a Cleanroom : 10.3.2004 (afternoon - free of charge) C++ for Particle Physicists : 8 - 12.3.2004...

  5. Places available**

    CERN Document Server

    2003-01-01

    If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval Tel. 74924 technical.training@cern.ch ** The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: JAVA 2 Enterprise Edition - Part 1 : WEB Applications : 20 & 21.11.03(2 days) FrontPage 2000 - niveau 1 : 20 & 21.11.03 (2 jours) Oracle 8i : SQL : 3 - 5.12.03 (3 days) Oracle 8i : Programming with PL/SQL : 8 - 10.12.03 (3 days) The JAVA Programming Language - leve...

  6. Places available**

    CERN Multimedia

    2003-01-01

    If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch ** The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: MATLAB Fundamentals and Programming Techniques (ML01) :2 & 3.12.03 (2 days) Oracle 8i : SQL : 3 - 5.12.03 (3 days) The EDMS MTF in practice : 5.12.03 (afternoon, free of charge) Modeling Dynamic Systems with Simulink (SL01) : 8 & 9.12.03 (2 days) Signal Processing with MATLAB (SG01) : 11 & ...

  7. Places available**

    CERN Multimedia

    2003-01-01

    If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch ** The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: ACCESS 2000 - niveau 1 : 13 & 14.11.03 (2 jours) C++ for Particle Physicists : 17 – 21.11.03 (6 X 3-hour lectures) Programmation automate Schneider TSX Premium – niveau 2 : 18 – 21.11.03 (4 jours) JAVA 2 Enterprise Edition – Part 1 : WEB Applications : 20 & ...

  8. Favorable cardio-metabolic outcomes following high carbohydrate intake in accordance with the Daniel Fast: A review of available findings

    Directory of Open Access Journals (Sweden)

    Richard Bloomer

    2017-03-01

    Full Text Available The Daniel Fast is a biblically inspired dietary program rich in carbohydrate, most closely resembling a vegan diet but with additional restrictions, including the elimination of processed foods, white flour products, preservatives, additives, sweeteners, caffeine, and alcohol. While no specific requirements are placed on the ingestion of specific percentages of macronutrients, the mean daily carbohydrate intake is by default approximately 60%, while protein and fat intake are 15% and 25%, respectively. Despite a relatively high carbohydrate intake, multiple favorable cardio-metabolic effects are noted when following the plan, in as few as three weeks. This includes improvements in HOMA-IR, which may be at least in part due to the lower glycemic load and high dietary fiber content of the foods consumed. Other notable changes include reductions in systemic inflammation, total and LDL-cholesterol, oxidative stress, blood pressure, and body weight/body fat. Short- and moderate-term compliance with the program is excellent, better than most dietary programs, perhaps due to the ad libitum nature of this plan. This paper presents an overview of the Daniel Fast, a carbohydrate-rich dietary program, including relevant findings from both human and animal investigations using this dietary model.

  9. Available phosphorus levels in diets supplemented with phytase for male broilers aged 22 to 42 days kept in a high-temperature environment

    Directory of Open Access Journals (Sweden)

    Tarciso Tizziani

    2016-02-01

    Full Text Available ABSTRACT This study was conducted to evaluate the effect of reduction of the available phosphorus (avP) in diets supplemented with 500 FTU/kg phytase on performance, carcass characteristics, and bone mineralization of broilers aged 22 to 42 days kept in a high-temperature environment. A total of 336 Cobb broilers with an average initial weight of 0.883±0.005 kg were distributed in a completely randomized design with six treatments - a positive control (0.354 and 0.309% avP, without addition of bacterial phytase, for the phases of 22 to 33 and 34 to 42 days, respectively) and another five diets with inclusion of phytase (500 FTU) and reduced levels of avP (0.354, 0.294, 0.233, 0.173, and 0.112%; and 0.309, 0.258, 0.207, 0.156, and 0.106% for the phases of 22 to 33 and 34 to 42 days, respectively) - with eight replicates and seven birds per cage. The experimental diets were formulated to meet all nutritional requirements, except for avP and calcium. Birds were kept in climatic chambers at a temperature of 32.2±0.4 °C and air humidity of 65.3±5.9%. Phytase acted by making the phytate P available in diets with reduced levels of avP, keeping feed intake, weight gain, feed:gain, and carcass characteristics unchanged. Treatments affected ash and calcium deposition and the Ca:P ratio in the bone; the group fed the diets with 0.112 and 0.106% avP, from 22 to 33 and 34 to 42 days of age, respectively, obtained the lowest values, although phosphorus deposition in the bone was not affected. Diets supplemented with 500 FTU of phytase, with available phosphorus reduced to 0.173 and 0.156%, and a fixed Ca:avP ratio of 2.1:1, meet the requirements of broilers aged 22 to 33 and 34 to 42 days, respectively, reared in a high-temperature environment.

  10. Electrical and I and C systems in German nuclear power plants. Safe and highly available until the end of operating life time

    International Nuclear Information System (INIS)

    Bresler, Markus

    2012-01-01

    Electrical and I and C components of German nuclear power plants have often been in operation for more than 30 years with high availability. This must also be achieved for the remaining operating time of the plants under the 13th amendment of the Atomic Energy Act. The resulting challenges are extensive: plant availability is more important than ever in the face of the end of nuclear energy production in 2022, while vendor support has consequently declined drastically. Plant operators take on this challenge from a solid foundation: accumulated operating experience that is seldom matched in other industries. The experts communicate in a professional network, relevant data are available, and quality is continuously checked by authorities and consultants. Based on this, current measures are taken: analysis of degradation mechanisms, allocation to components and documentation in a central database, appraisal of functional capability for the whole range of input and environmental conditions, definition of upgrades and rebuilds, analysis of stored components and of components in decommissioned plants, and targeted modernisation measures. (orig.)

  11. Availability increase of conventional power plants; Verfuegbarkeitserhoehung von konventionellen Kraftwerken

    Energy Technology Data Exchange (ETDEWEB)

    Benesch, W.A.

    2004-07-01

    Increasing availability need not incur higher cost, provided that appropriate planning and quality assurance are observed. Appropriate technology can reduce downtime. For high availability, the process chain should have only as many links as absolutely necessary, and the process should be user-friendly and error-tolerant in order to exclude non-availability resulting from unforeseeable operating conditions. (orig.)

  12. Water and land availability for energy farming. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Schooley, F.A.; Mara, S.J.; Mendel, D.A.; Meagher, P.C.; So, E.C.

    1979-10-01

    The physical and economic availability of land and water resources for energy farming were determined. Ten water subbasins possessing favorable land and water availabilities were ranked according to their overall potential for biomass production. The study results clearly identify the Southeast as a favorable area for biomass farming. The Northwest and North-Central United States should also be considered on the basis of their highly favorable environmental characteristics. Both high and low estimates of water availability for 1985 and 2000 in each of 99 subbasins were prepared. Subbasins in which surface water consumption was more than 50% of surface water supply were eliminated from the land availability analysis, leaving 71 subbasins to be examined. The amount of acreage potentially available for biomass production in these subbasins was determined through a comparison of estimated average annual net returns developed for conventional agriculture and forestry with net returns for several biomass production options. In addition to a computerized method of ranking subbasins according to their overall potential for biomass production, a methodology for evaluating future energy farm locations was developed. This methodology included a general area selection procedure as well as specific site analysis recommendations. Thirty-five general factors and a five-step site-specific analysis procedure are described.
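
    The computerized ranking of subbasins described above can be sketched as a weighted multi-factor score. Everything in the sketch below (factor names, weights, scores, and subbasin labels) is invented for illustration; the study's actual methodology used thirty-five general factors and a site-specific analysis procedure.

```python
# Invented illustration of ranking subbasins by a weighted sum of
# normalized factor scores. Names, weights, and values are hypothetical.

WEIGHTS = {"water_availability": 0.4, "land_availability": 0.4, "net_return": 0.2}

SUBBASINS = {
    "Southeast-1": {"water_availability": 0.9, "land_availability": 0.7, "net_return": 0.6},
    "Northwest-1": {"water_availability": 0.5, "land_availability": 0.8, "net_return": 0.9},
}

def score(factors):
    """Weighted sum of factor scores, each normalized to 0..1."""
    return sum(WEIGHTS[k] * factors[k] for k in WEIGHTS)

# Rank subbasins from highest to lowest overall biomass potential.
ranked = sorted(SUBBASINS, key=lambda name: score(SUBBASINS[name]), reverse=True)
print(ranked)
```

    A real implementation would first screen out subbasins failing hard constraints (e.g. surface water consumption above 50% of supply, as the study did) before scoring the remainder.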

  13. MASCOT 6: a modern computer-assisted haptic tele-manipulator

    International Nuclear Information System (INIS)

    Skilton, R.; Owen, T.

    2015-01-01

    MASCOT is a two-armed master-slave tele-manipulator device with 7 degrees of freedom per arm. The master and slave movements are linked by force-reflecting servomechanisms, giving the operator a tactile sensation of doing the work. The slave is typically attached to a boom which transports it to the work area. The master is normally located in the control room from where the operator-controlled input actions provide the motion that the Mascot slave will replicate. MASCOT version 4.5 is currently in use at the Joint European Torus (JET) experimental nuclear fusion facility. Its role is to maintain the inside of the reactor vessel without the need for manned entry. The MASCOT-6 project, funded by EFDA, was initiated to address reliability and availability issues arising as a result of obsolete technologies. In particular, the Mascot actuators based around obsolete 2-phase AC induction motors are to be replaced with actuators based on commercial off-the-shelf (COTS) Permanent Magnet Synchronous Motors (PMSMs). As a consequence of its highly integrated, monolithic design, the entire Mascot 4.5 control system, including servo-amplifiers, controllers, control software and HMI (Human Machine Interface), needs to be redesigned. The MASCOT-6 control system is designed to maximise reliability, availability, maintainability, and inspectability (RAMI) of the system, as well as providing significant future-proofing. EtherCAT has been selected as a scalable, modular and extremely fast fieldbus to provide communication between the control system and easily replaceable COTS servo drives. MASCOT-6 includes a high-level control system designed with modern software engineering practices in mind, and provides a modular, generic framework which can be extended to cater for any tele-operation or robotic system. Advanced computer-aided tele-operation features such as dynamic force compensation and load cancellation are presented.
Communication between the MASCOT-6 control system and its
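
    The force-reflecting master-slave linkage described above is classically implemented as a position-position bilateral coupling: the slave is driven toward the master's pose, and the operator feels the reaction force. The one-axis sketch below is generic, with made-up gains; it is not MASCOT's actual control law.

```python
# One-axis sketch of position-position bilateral (force-reflecting) control.
# Gains are illustrative placeholders, not MASCOT parameters.

K = 200.0  # coupling stiffness [N/m]
B = 5.0    # coupling damping [N*s/m]

def coupling_forces(x_master, v_master, x_slave, v_slave):
    """Return (force on slave, force reflected to master).

    The slave is driven toward the master's position; the operator feels
    the equal-and-opposite reaction, giving a tactile sense of the task.
    """
    f = K * (x_master - x_slave) + B * (v_master - v_slave)
    return f, -f

# Master leads the slave by 0.25 m: the slave is pushed forward and the
# operator feels an equal resistance.
f_slave, f_master = coupling_forces(0.5, 0.0, 0.25, 0.0)
print(f_slave, f_master)
```

    In a running system these forces would be recomputed at the servo rate over the field-bus; features like load cancellation then subtract modeled gravity and payload terms from the reflected force.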

  14. Places available**

    CERN Multimedia

    2003-01-01

    If you wish to participate in one of these courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. ** The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses : EXCEL 2000 - niveau 1 : 20 & 22.10.03 (2 jours) CLEAN-2002 : Working in a Cleanroom (free of charge) : 23.10.03 (half day) The EDMS-MTF in practice (free of charge) :  28 -  30.10.03 (6 half-day sessions) AutoCAD 2002 - Level 1 : 3, 4, 12, 13.11.03 (4 days) LabVIEW TestStand ver. 3 : 4 & 5.11.03 (2 days) Introduction to Pspice : 4.11.03 p.m. (half-day) Hands-on Introduction to Python Programm...

  15. Places available**

    CERN Multimedia

    2004-01-01

    If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch ** The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: The JAVA Programming Language Level 1 : 9 & 10.1.2004 (2 days) The JAVA Programming Language Level 2 : 11 to 13.1.2004 (3 days) LabVIEW base 1 : 25 - 27.2.2004 (3 jours) CLEAN-2002 : Working in a Cleanroom : 10.3.2004 (afternoon - free of charge) C++ for Particle Physicists : 8 - 12.3.2004 ( 6 X 4-hour sessions) LabVIEW Basics 1 : 22 - 24.3.20...

  16. Places available**

    CERN Multimedia

    2003-01-01

    If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval Tel. 74924 technical.training@cern.ch ** The number of places available may vary. Please check our Web site to find out the current availability. Places are available in the following courses: MATLAB Fundamentals and Programming Techniques (ML01) : 2 & 3.12.03 (2 days) Oracle 8i : SQL : 3 - 5.12.03 (3 days) The EDMS MTF in practice : 5.12.03 (afternoon, free of charge) Modeling Dynamic Systems with Simulink (SL01) : 8 & 9.12.03 (2 days) Signal Processing with MATLAB (SG01) : 11 & 12.12.03 (2 days) The JAVA Programming Language - l...

  17. Amigo - Ambient Intelligence for the networked home environment

    NARCIS (Netherlands)

    Janse, M.D.

    2008-01-01

    The Amigo project develops open, standardized, interoperable middleware and attractive user services for the networked home environment. Fifteen of Europe's leading companies and research organizations in mobile and home networking, software development, consumer electronics and domestic appliances

  18. Availability Analysis of the Ventilation Stack CAM Interlock System

    International Nuclear Information System (INIS)

    YOUNG, J.

    2000-01-01

    Ventilation Stack Continuous Air Monitor (CAM) Interlock System failure modes, failure frequencies and system availability have been evaluated for the RPP. The evaluation concludes that CAM availability is as high as assumed in the safety analysis and that the current routine system surveillance is adequate to maintain this availability. Further, requiring an alarm to actuate upon CAM failure is not necessary to maintain the availability credited in the safety analysis, nor is such an arrangement predicted to significantly improve system availability. However, if CAM failures were only detected by the 92-day functional tests required in the Authorization Basis (AB), CAM availability would be much less than that credited in the safety analysis. Therefore it is recommended that the current surveillance practice of daily simple system checks, 30-day source checks and 92-day functional tests be continued in order to maintain CAM availability
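
    The sensitivity of availability to the surveillance interval noted above follows from the standard approximation for periodically tested components, whose mean unavailability is q ~= lambda * tau / 2. The sketch below uses an assumed failure rate purely for illustration; it does not reproduce the report's numbers.

```python
# Mean unavailability of a component whose failures are revealed only by
# periodic tests: q ~= lambda * tau / 2 (standard approximation).
# The failure rate below is assumed for illustration, not taken from the report.

def mean_unavailability(failure_rate_per_hour, test_interval_hours):
    """Average fraction of time the component is failed but undetected."""
    return failure_rate_per_hour * test_interval_hours / 2

FAILURE_RATE = 1e-4  # hypothetical failures per hour

daily = mean_unavailability(FAILURE_RATE, 24)           # daily system checks
quarterly = mean_unavailability(FAILURE_RATE, 92 * 24)  # 92-day functional tests
print(f"daily checks: q ~= {daily:.1e}")
print(f"92-day tests: q ~= {quarterly:.1e}")
```

    Whatever the true failure rate, detecting failures only at 92-day functional tests multiplies the mean unavailability by a factor of 92 relative to daily checks, which qualitatively mirrors the report's conclusion that the daily simple checks should be retained.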

  19. Synergy Between Pathogen Release and Resource Availability in Plant Invasion

    Science.gov (United States)

    Why do some exotic plant species become invasive? Two common hypotheses, increased resource availability and enemy release, may more effectively explain invasion if they favor the same species, and therefore act in concert. This would be expected if plant species adapted to high levels of available ...

  20. Experience with the gLite workload management system in ATLAS Monte Carlo production on LCG

    International Nuclear Information System (INIS)

    Campana, S; Sciaba, A; Rebatto, D

    2008-01-01

    The ATLAS experiment has been running continuous simulated event production for more than two years. A considerable fraction of the jobs is daily submitted and handled via the gLite Workload Management System, which overcomes several limitations of the previous LCG Resource Broker. The gLite WMS has been tested very intensively for the LHC experiments' use cases for more than six months, both in terms of performance and reliability. The tests were carried out by the LCG Experiment Integration Support team (in close contact with the experiments) together with the EGEE integration and certification team and the gLite middleware developers. A pragmatic iterative and interactive approach allowed a very quick rollout of fixes and their rapid deployment, together with new functionalities, for the ATLAS production activities. The same approach is being adopted for other middleware components like the gLite and CREAM Computing Elements. In this contribution we will summarize the lessons learned from the gLite WMS testing activity, pointing out the most important achievements and the open issues. In addition, we will present the current situation of the ATLAS simulated event production activity on the EGEE infrastructure based on the gLite WMS, showing the main improvements and benefits from the new middleware. Finally, the gLite WMS is being used by many other VOs, including the LHC experiments. In particular, some statistics will be shown on the CMS experience running user analysis via the WMS