WorldWideScience

Sample records for infrastructure distribution platform

  1. A Cross-Platform Infrastructure for Scalable Runtime Application Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

Jack Dongarra; Shirley Moore; Bart Miller; Jeffrey Hollingsworth; Tracy Rafferty

    2005-03-15

The purpose of this project was to build an extensible cross-platform infrastructure to facilitate the development of accurate and portable performance analysis tools for current and future high performance computing (HPC) architectures. Major accomplishments include tools and techniques for multidimensional performance analysis, as well as improved support for dynamic performance monitoring of multithreaded and multiprocess applications. Previous performance tool development has been limited by the burden of having to re-write a platform-dependent low-level substrate for each architecture/operating system pair in order to obtain the necessary performance data from the system. Manual interpretation of performance data is not scalable for large-scale long-running applications. The infrastructure developed by this project provides a foundation for building portable and scalable performance analysis tools, with the end goal being to provide application developers with the information they need to analyze, understand, and tune the performance of terascale applications on HPC architectures. The backend portion of the infrastructure provides runtime instrumentation capability and access to hardware performance counters, with thread-safety for shared memory environments and a communication substrate to support instrumentation of multiprocess and distributed programs. Front-end interfaces provide tool developers with a well-defined, platform-independent set of calls for requesting performance data. End-user tools have been developed that demonstrate runtime data collection, on-line and off-line analysis of performance data, and multidimensional performance analysis. The infrastructure is based on two underlying performance instrumentation technologies: the PAPI cross-platform library interface to hardware performance counters and the cross-platform Dyninst library interface for runtime modification of executable images. The Paradyn and KOJAK

  2. 2009 Infrastructure Platform Review Report

    Energy Technology Data Exchange (ETDEWEB)

Ferrell, John [Office of Energy Efficiency and Renewable Energy (EERE), Washington, DC (United States)]

    2009-12-01

This document summarizes the recommendations and evaluations provided by an independent external panel of experts at the U.S. Department of Energy Biomass Program's Infrastructure Platform Review meeting, held on February 19, 2009, at the Marriott Residence Inn, National Harbor, Maryland.

  3. 2011 Biomass Program Platform Peer Review. Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

Lindauer, Alicia [Office of Energy Efficiency and Renewable Energy (EERE), Washington, DC (United States)]

    2012-02-01

    This document summarizes the recommendations and evaluations provided by an independent external panel of experts at the 2011 U.S. Department of Energy Biomass Program’s Infrastructure Platform Review meeting.

  4. Infrastructure-Less Communication Platform for Off-The-Shelf Android Smartphones.

    Science.gov (United States)

    Oide, Takuma; Abe, Toru; Suganuma, Takuo

    2018-03-04

As smartphones and other small portable devices become more sophisticated and popular, opportunities for communication and information sharing among such device users have increased. In particular, since infrastructure-less device-to-device (D2D) communication platforms consisting only of such devices are known to be excellent in terms of, for example, bandwidth efficiency, efforts are being made to merge their information sharing capabilities with conventional infrastructure. However, efficient multi-hop communication is difficult with the D2D communication protocol, and many conventional D2D communication platforms require modifications to the protocol and terminal operating systems (OSs). In response to these issues, this paper reports on a proposed tree-structured D2D communication platform for Android devices that combines Wi-Fi Direct and Wi-Fi functions. The proposed platform, which is expected to be usable on terminals running general Android 4.0 (or higher) OS, makes it possible to construct an ad hoc network instantaneously without sharing prior knowledge among participating devices. We show the feasibility of the proposed platform through its design and demonstrate a prototype implementation using real devices. In addition, we report on an investigation into communication delays and stability as a function of hop count and terminal performance, based on confirmation experiments.
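The multi-hop behaviour of a tree-structured overlay can be sketched in a few lines. This is an illustrative model, not the paper's protocol: each joining device attaches to one parent (as a Wi-Fi Direct group owner might accept a client), and any two devices communicate along the unique tree path between them. Device names are made up:

```python
# Hypothetical tree overlay: child -> parent links; the hop count between
# two devices is the length of the unique path through their lowest
# common ancestor.
parent = {}

def join(device, via):
    parent[device] = via

def path_to_root(device):
    path = [device]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path

def hops(a, b):
    """Hop count along the unique tree path between a and b."""
    pa, pb = path_to_root(a), path_to_root(b)
    common = set(pa) & set(pb)
    da = next(i for i, n in enumerate(pa) if n in common)  # a -> ancestor
    db = next(i for i, n in enumerate(pb) if n in common)  # b -> ancestor
    return da + db

join("phone1", "root"); join("phone2", "root")
join("phone3", "phone1"); join("phone4", "phone3")
print(hops("phone4", "phone2"))  # 4
```

The hop count is what drives the delay and stability measurements the abstract refers to: each extra hop adds one store-and-forward step.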

  5. Oceans 2.0: a Data Management Infrastructure as a Platform

    Science.gov (United States)

    Pirenne, B.; Guillemot, E.

    2012-04-01

Oceans 2.0: a Data Management Infrastructure as a Platform. Benoît Pirenne, Associate Director, IT, NEPTUNE Canada; Eric Guillemot, Manager, Software Development, NEPTUNE Canada. The Data Management and Archiving System (DMAS) serving the needs of a number of undersea observing networks such as VENUS and NEPTUNE Canada was conceived from the beginning as a service-oriented infrastructure. Its core functional elements (data acquisition, transport, archiving, retrieval and processing) can interact with the outside world using Web Services, which can in turn be exploited by a variety of higher-level applications. Over the years, DMAS has developed Oceans 2.0, an environment where these techniques are implemented. The environment thereby becomes a platform in that it allows for easy addition of new and advanced features that build upon the tools at the core of the system. The applications that have been developed include: data search and retrieval, with options such as data product generation and data decimation or averaging; dynamic infrastructure description (search of all observatory metadata) and visualization; and data visualization, including dynamic scalar data plots and integrated fast video segment search and viewing. Building upon these basic applications are new concepts from the Web 2.0 world that DMAS has added; they allow people equipped only with a web browser to collaborate and contribute their findings or work results to the wider community. Examples include: the addition of metadata tags (annotations) to any part of the infrastructure or to any data item; the ability to edit, execute, share and distribute Matlab code on-line from a simple web browser, with specific calls within the code to access data; the ability to interactively and graphically build pipeline processing jobs that can be executed on the cloud; and web-based, interactive instrument control tools that allow users to truly share the use of the instruments and communicate with each other and last

  6. TEODOOR, a blueprint for distributed terrestrial observation data infrastructures

    Science.gov (United States)

    Kunkel, Ralf; Sorg, Jürgen; Abbrent, Martin; Borg, Erik; Gasche, Rainer; Kolditz, Olaf; Neidl, Frank; Priesack, Eckart; Stender, Vivien

    2017-04-01

TERENO (TERrestrial ENvironmental Observatories) is an initiative funded by the large research infrastructure program of the Helmholtz Association of Germany. Four observation platforms to facilitate the investigation of the consequences of global change for terrestrial ecosystems and their socioeconomic implications were implemented and equipped from 2007 until 2013. Data collection, however, is planned to continue for at least 30 years. TERENO provides series of system variables (e.g. precipitation, runoff, groundwater level, soil moisture, water vapor and trace gas fluxes) for the analysis and prognosis of global change consequences using integrated model systems, which will be used to derive efficient prevention, mitigation and adaptation strategies. Each platform is operated by a different Helmholtz institution, which maintains its local data infrastructure. Within the individual observatories, areas with intensive measurement programs have been implemented. Different sensors provide information on various physical parameters such as soil moisture, temperature, groundwater levels or gas fluxes. Sensor data from more than 900 stations are collected automatically at intervals ranging from 20 s up to 2 h, summing to about 2,500,000 data values per day. In addition, three weather radar devices create raster data at rates of 12 to 60 datasets per hour. The data are automatically imported into local relational database systems using a common data quality assessment framework that handles the processing and assessment of heterogeneous environmental observation data. Starting with the way data are imported into the data infrastructure, custom workflows are developed. Data levels are defined that reflect the underlying data processing, the stage of quality assessment and data accessibility. In order to facilitate the acquisition, provision, integration, management and exchange of heterogeneous geospatial resources within a scientific and non-scientific environment
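The stated daily volume is easy to sanity-check. The per-station sensor count and mean sampling interval below are illustrative assumptions chosen within the quoted 20 s to 2 h range, not figures from the TERENO abstract:

```python
# Back-of-envelope check of the stated ~2,500,000 data values per day.
# sensors_per_station and mean_interval_s are assumed, not from the source.
stations = 900
sensors_per_station = 6        # assumption
mean_interval_s = 180          # assumption: ~3 min average sampling interval
values_per_day = stations * sensors_per_station * (24 * 3600 // mean_interval_s)
print(values_per_day)  # 2592000
```

Any similar combination of sensor count and interval reproduces the quoted order of magnitude.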

  7. Distributed Processing of Sentinel-2 Products using the BIGEARTH Platform

    Science.gov (United States)

    Bacu, Victor; Stefanut, Teodor; Nandra, Constantin; Mihon, Danut; Gorgan, Dorian

    2017-04-01

The constellation of observational satellites orbiting the Earth is constantly increasing, providing more data that need to be processed in order to extract meaningful information and knowledge. Sentinel-2 satellites, part of the Copernicus Earth Observation program, are intended for use in agriculture, forestry and many other land management applications. ESA's SNAP toolbox can be used to process data gathered by Sentinel-2 satellites, but it is limited to the resources of a stand-alone computer. In this paper we present a cloud-based software platform that uses this toolbox, together with other remote sensing software applications, to process Sentinel-2 products. The BIGEARTH software platform [1] offers an integrated solution for processing Earth Observation data coming from different sources (such as satellites or on-site sensors). The flow of processing is defined as a chain of tasks based on the WorDeL description language [2]. Each task can rely on a different software technology (such as GRASS GIS and ESA's SNAP) to process the input data. One important feature of the BIGEARTH platform is this possibility of interconnecting and integrating, within the same processing flow, various well-known software technologies, with the integration remaining transparent from the user's perspective. The proposed platform extends the SNAP capabilities by enabling specialists to easily scale the processing over distributed architectures, according to their specific needs and resources. The software platform [3] can be used in multiple configurations. In the basic one, the platform runs as a standalone application inside a virtual machine; the computational resources are limited in this case, but it gives an overview of the platform's functionality and allows a processing flow to be defined and later executed on a more complex infrastructure. The most complex and robust
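The chain-of-tasks idea can be sketched compactly. This is an illustrative runner, not WorDeL or the BIGEARTH implementation; the task names and the toy pixel operations are made up:

```python
# Minimal sketch of a task-chain runner: each named task wraps some
# processing tool, and the output of one stage becomes the input of the
# next, as in a WorDeL-style flow description.
def run_chain(tasks, data):
    for name, fn in tasks:
        data = fn(data)   # in BIGEARTH each stage could call a different backend
    return data

chain = [
    ("calibrate", lambda px: [v * 0.5 for v in px]),   # toy radiometric scaling
    ("threshold", lambda px: [v for v in px if v > 10]),
    ("count", len),
]
print(run_chain(chain, [10, 30, 50]))  # 2
```

Distributing such a chain then amounts to scheduling each stage (or each data partition) on a different machine while preserving the dependency order.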

  8. GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data

    Science.gov (United States)

    Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.

    2016-12-01

Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. Data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies for data storage, computing and analysis. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark - a geospatial distributed computing platform for processing large-scale vector, raster and stream data. GISpark is constructed on the latest virtualized computing infrastructures and distributed computing architecture. OpenStack and Docker are used to build a multi-user hosting cloud computing infrastructure for GISpark. Virtual storage systems such as HDFS, Ceph and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark, in conjunction with the scientific computing environment, exploratory spatial data analysis tools, and temporal data management and analysis systems, make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides a spatiotemporal computational model and advanced geospatial visualization tools that apply to other domains with spatial properties.
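The core pattern a Spark-based spatial framework applies to point data can be shown without Spark itself. This is an illustrative sketch, not GISpark code: each point is assigned to a grid cell, which serves as the partition key for a distributed aggregation:

```python
# Partition-then-aggregate sketch for point data: the grid cell is the
# key a distributed engine would shuffle on; here we aggregate locally.
from collections import Counter

def cell(x, y, size=1.0):
    """Grid cell index for a point (the hypothetical partition key)."""
    return (int(x // size), int(y // size))

points = [(0.2, 0.3), (0.8, 0.1), (1.5, 0.5), (1.9, 1.9)]
counts = Counter(cell(x, y) for x, y in points)
print(counts[(0, 0)])  # 2
```

In a real Spark job the `Counter` step would be a `reduceByKey` over partitions spread across the cluster; the cell function stays the same.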

  9. Realtime Gas Emission Monitoring at Hazardous Sites Using a Distributed Point-Source Sensing Infrastructure

    Directory of Open Access Journals (Sweden)

    Gianfranco Manes

    2016-01-01

Full Text Available This paper describes a distributed point-source monitoring platform for gas level and leakage detection in hazardous environments. The platform, based on a wireless sensor network (WSN) architecture, is organised into sub-networks to be positioned in the plant’s critical areas; each sub-net includes a gateway unit wirelessly connected to the WSN nodes, hence providing an easily deployable, stand-alone infrastructure featuring a high degree of scalability and reconfigurability. Furthermore, the system provides automated calibration routines which can be accomplished by non-specialized maintenance operators without reducing system reliability. Internet connectivity is provided via TCP/IP over GPRS (Internet standard protocols over mobile networks) gateways at a one-minute sampling rate. Environmental and process data are forwarded to a remote server and made available to authenticated users through a user interface that provides data rendering in various formats and multi-sensor data fusion. The platform provides real-time plant management with an effective, accurate tool for immediate warning in case of critical events.
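The fusion-and-warning step can be illustrated with a toy sketch. This is not the paper's algorithm; the averaging rule, threshold value and handling of missing samples are assumptions chosen only to show the idea of fusing a sub-net's readings before raising an alarm:

```python
# Illustrative multi-sensor fusion: average the valid readings from one
# sub-net's nodes and warn when the fused gas level crosses a threshold.
def fused_level(readings):
    valid = [r for r in readings if r is not None]  # tolerate missing samples
    return sum(valid) / len(valid) if valid else None

def warn(readings, threshold=50.0):
    level = fused_level(readings)
    return level is not None and level > threshold

print(warn([40.0, 70.0, None, 60.0]))  # True
```

A deployed system would weight nodes by calibration state and proximity to the source rather than averaging uniformly.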

  10. Realtime Gas Emission Monitoring at Hazardous Sites Using a Distributed Point-Source Sensing Infrastructure

    Science.gov (United States)

    Manes, Gianfranco; Collodi, Giovanni; Gelpi, Leonardo; Fusco, Rosanna; Ricci, Giuseppe; Manes, Antonio; Passafiume, Marco

    2016-01-01

This paper describes a distributed point-source monitoring platform for gas level and leakage detection in hazardous environments. The platform, based on a wireless sensor network (WSN) architecture, is organised into sub-networks to be positioned in the plant’s critical areas; each sub-net includes a gateway unit wirelessly connected to the WSN nodes, hence providing an easily deployable, stand-alone infrastructure featuring a high degree of scalability and reconfigurability. Furthermore, the system provides automated calibration routines which can be accomplished by non-specialized maintenance operators without reducing system reliability. Internet connectivity is provided via TCP/IP over GPRS (Internet standard protocols over mobile networks) gateways at a one-minute sampling rate. Environmental and process data are forwarded to a remote server and made available to authenticated users through a user interface that provides data rendering in various formats and multi-sensor data fusion. The platform provides real-time plant management with an effective, accurate tool for immediate warning in case of critical events. PMID:26805832

  11. Realtime Gas Emission Monitoring at Hazardous Sites Using a Distributed Point-Source Sensing Infrastructure.

    Science.gov (United States)

    Manes, Gianfranco; Collodi, Giovanni; Gelpi, Leonardo; Fusco, Rosanna; Ricci, Giuseppe; Manes, Antonio; Passafiume, Marco

    2016-01-20

This paper describes a distributed point-source monitoring platform for gas level and leakage detection in hazardous environments. The platform, based on a wireless sensor network (WSN) architecture, is organised into sub-networks to be positioned in the plant's critical areas; each sub-net includes a gateway unit wirelessly connected to the WSN nodes, hence providing an easily deployable, stand-alone infrastructure featuring a high degree of scalability and reconfigurability. Furthermore, the system provides automated calibration routines which can be accomplished by non-specialized maintenance operators without reducing system reliability. Internet connectivity is provided via TCP/IP over GPRS (Internet standard protocols over mobile networks) gateways at a one-minute sampling rate. Environmental and process data are forwarded to a remote server and made available to authenticated users through a user interface that provides data rendering in various formats and multi-sensor data fusion. The platform provides real-time plant management with an effective, accurate tool for immediate warning in case of critical events.

  12. Contribution to global computation infrastructure: inter-platform delegation, integration of standard services and application to high-energy physics

    International Nuclear Information System (INIS)

    Lodygensky, Oleg

    2006-01-01

The spread of modern information resources, particularly large storage capacities and networks, makes new ways of working and of entertainment possible. Centralized, monolithic computing installations have gradually been replaced by distributed client-server architectures, which are in turn challenged by the new distributed systems known as peer-to-peer systems. This migration is no longer the preserve of specialists: users of more modest skills have become accustomed to these techniques for e-mail, commercial information and the exchange of files of various kinds on a peer-to-peer basis. Trade, industry and research alike profit greatly from the 'grid', the new technique for handling information on a global scale. The present work concerns the use of grids for computation. A collaboration was created at Paris-Sud University, Orsay, between the Computer Science Research Laboratory (LRI) and the Linear Accelerator Laboratory (LAL), to foster work on grid infrastructure of high research interest to LRI while offering new working methods to LAL. The results of this interdisciplinary collaboration are based on XtremWeb, the research and production platform for global computation developed at LRI. The thesis first presents the current status of large-scale distributed systems, their basic principles and their user-oriented architecture. XtremWeb is then described, focusing on the modifications made to both its architecture and its implementation so that it optimally fulfills the requirements imposed on such a platform. Studies with the platform are then presented, allowing a generalization of inter-grid resources and the development of a user-oriented grid adapted to special services. Finally, the operating modes, the problems to solve and the advantages of this new platform are presented for the high-energy physics community, the most demanding
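Desktop-grid middleware of the XtremWeb kind typically uses a pull model: idle workers fetch tasks from a coordinator and post results back. The sketch below is an illustrative single-process reduction of that idea, not XtremWeb's actual protocol:

```python
# Pull-model sketch: a coordinator queue of tasks drained by workers.
# In a real desktop grid, many machines run the worker loop concurrently
# over the network; here one local call stands in for them.
from queue import Queue, Empty

tasks, results = Queue(), {}
for n in range(5):
    tasks.put(n)

def worker():
    while True:
        try:
            n = tasks.get_nowait()
        except Empty:
            return
        results[n] = n * n   # stand-in for the real computation

worker()
print(results[4])  # 16
```

The pull model is what lets volunteer machines behind firewalls participate: they initiate every connection, and the coordinator never needs to reach them.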

  13. Assessment of the biodiesel distribution infrastructure in Canada

    International Nuclear Information System (INIS)

    Lagace, C.

    2007-08-01

    Canada's biodiesel industry is in its infancy, and must work to achieve the demand needed to ensure its development. This assessment of Canada's biodiesel distribution infrastructure was conducted to recommend the most efficient infrastructure pathway for effective biodiesel distribution. The study focused on the establishment of a link between biodiesel supplies and end-users. The current Canadian biodiesel industry was discussed, and future market potentials were outlined. The Canadian distillate product distribution infrastructure was discussed. Technical considerations and compliance issues were reviewed. The following 2 scenarios were used to estimate adaptations and costs for the Canadian market: (1) the use of primary terminals to ensure quality control of biodiesel, and (2) storage in secondary terminals where biodiesel blends are prepared before being transported to retail outlets. The study showed that relevant laboratory training programs are needed as well as proficiency testing programs in order to ensure adequate quality control of biodiesel. Standards for biodiesel distribution are needed, as well as specifications for the heating oil market. It was concluded that this document may prove useful in developing government policy objectives and identifying further research needs. 21 refs., 12 tabs., 13 figs

  14. A distributed parallel genetic algorithm of placement strategy for virtual machines deployment on cloud platform.

    Science.gov (United States)

    Dong, Yu-Shuang; Xu, Gao-Chao; Fu, Xiao-Dong

    2014-01-01

The cloud platform provides various services to users, and more and more cloud centers provide infrastructure as their main way of operating. To improve the utilization rate of the cloud center and to decrease the operating cost, the cloud center provides services according to the requirements of users by sharding the resources with virtualization. Considering both QoS for users and cost savings for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) as a placement strategy for virtual machine deployment on a cloud platform. In the first stage, it executes the genetic algorithm in parallel and in a distributed fashion on several selected physical hosts. It then executes a second-stage genetic algorithm using the solutions obtained from the first stage as the initial population. The solution calculated by the second-stage genetic algorithm is taken as the final placement of the proposed approach. The experimental results show that the proposed placement strategy for VM deployment can ensure QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform.
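The two-stage scheme lends itself to a compact sketch. The toy problem, parameter values and genetic operators below are illustrative assumptions, not the paper's implementation: several independent first-stage GAs (standing in for the selected physical hosts) each evolve a placement, and their best individuals seed the second-stage population:

```python
import random

random.seed(0)

# Toy placement problem: 8 VMs with given demands on 3 hosts,
# minimising the peak host load (a balance objective, assumed here).
DEMANDS = [4, 3, 3, 2, 2, 2, 1, 1]
HOSTS = 3

def peak_load(placement):
    loads = [0] * HOSTS
    for vm, host in enumerate(placement):
        loads[host] += DEMANDS[vm]
    return max(loads)

def mutate(p):
    q = list(p)
    q[random.randrange(len(q))] = random.randrange(HOSTS)
    return q

def ga(pop, generations=30):
    for _ in range(generations):
        pop = sorted(pop, key=peak_load)[: len(pop) // 2]  # selection
        pop += [mutate(random.choice(pop)) for _ in pop]   # variation
    return min(pop, key=peak_load)

def random_pop(n):
    return [[random.randrange(HOSTS) for _ in DEMANDS] for _ in range(n)]

stage1 = [ga(random_pop(20)) for _ in range(4)]   # one GA per "host"
best = ga(stage1 + random_pop(16))                # stage two, seeded
print(peak_load(best))
```

With a total demand of 18 over 3 hosts, the optimal peak load is 6; seeding stage two with stage-one winners is what lets the second GA start near such solutions instead of from scratch.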

  15. A Distributed Parallel Genetic Algorithm of Placement Strategy for Virtual Machines Deployment on Cloud Platform

    Directory of Open Access Journals (Sweden)

    Yu-Shuang Dong

    2014-01-01

Full Text Available The cloud platform provides various services to users, and more and more cloud centers provide infrastructure as their main way of operating. To improve the utilization rate of the cloud center and to decrease the operating cost, the cloud center provides services according to the requirements of users by sharding the resources with virtualization. Considering both QoS for users and cost savings for cloud computing providers, we try to maximize performance and minimize energy cost as well. In this paper, we propose a distributed parallel genetic algorithm (DPGA) as a placement strategy for virtual machine deployment on a cloud platform. In the first stage, it executes the genetic algorithm in parallel and in a distributed fashion on several selected physical hosts. It then executes a second-stage genetic algorithm using the solutions obtained from the first stage as the initial population. The solution calculated by the second-stage genetic algorithm is taken as the final placement of the proposed approach. The experimental results show that the proposed placement strategy for VM deployment can ensure QoS for users and is more effective and more energy efficient than other placement strategies on the cloud platform.

  16. Distributed hash table theory, platforms and applications

    CERN Document Server

    Zhang, Hao; Xie, Haiyong; Yu, Nenghai

    2013-01-01

    This SpringerBrief summarizes the development of Distributed Hash Table in both academic and industrial fields. It covers the main theory, platforms and applications of this key part in distributed systems and applications, especially in large-scale distributed environments. The authors teach the principles of several popular DHT platforms that can solve practical problems such as load balance, multiple replicas, consistency and latency. They also propose DHT-based applications including multicast, anycast, distributed file systems, search, storage, content delivery network, file sharing and c
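One of the core ideas behind DHT platforms such as Chord can be shown in a few lines: nodes and keys hash onto the same ring, and a key is stored on the first node clockwise from its hash. This is a minimal consistent-hashing sketch with made-up node names, not any particular platform's implementation:

```python
import bisect
import hashlib

def h(s):
    """Hash a string onto a 32-bit ring position."""
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % 2**32

class Ring:
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)

    def lookup(self, key):
        """First node at or clockwise after the key's ring position."""
        i = bisect.bisect(self.ring, (h(key), ""))
        return self.ring[i % len(self.ring)][1]  # wrap past the last node

ring = Ring(["node-a", "node-b", "node-c"])
owner = ring.lookup("some-key")
print(owner in {"node-a", "node-b", "node-c"})  # True
```

The load-balance and replication problems the book covers build directly on this mapping: adding or removing a node moves only the keys in its arc of the ring, and replicas go to the next nodes clockwise.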

  17. Evaluative Infrastructures

    DEFF Research Database (Denmark)

    Kornberger, Martin; Pflueger, Dane; Mouritsen, Jan

    2017-01-01

Platform organizations such as Uber, eBay and Airbnb represent a growing disruptive phenomenon in contemporary capitalism, transforming economic organization, the nature of work, and the distribution of wealth. This paper investigates the accounting practices that underpin this new form of organizing, and in doing so confronts a significant challenge within the accounting literature: the need to escape what Hopwood (1996) describes as its “hierarchical consciousness”. In order to do so, this paper develops the concept of evaluative infrastructure, which describes accounting practices...

  18. Features of formation of a distributive infrastructure of e-commerce in Russia

    OpenAIRE

    Kaluzhsky, Mikhail

    2015-01-01

This article concerns the objective laws governing the formation of a distribution infrastructure for e-commerce. In the author's view, the distribution infrastructure of e-commerce plays an important role in the formation of the network economy. The author shows the strategic value of institutional regulation of distribution logistics for solving the problems of modernizing the Russian economy.

  19. An Embedded Software Platform for Distributed Automotive Environment Management

    Directory of Open Access Journals (Sweden)

    Seepold Ralf

    2009-01-01

Full Text Available This paper discusses an innovative extension of current vehicle platforms that integrates intelligent environments in order to carry out e-safety tasks and improve driving security. These platforms are dedicated to automotive environments characterized by sensor networks deployed along the vehicles. Since this kind of platform infrastructure is hardly extensible and forms a non-scalable process unit, an embedded OSGi-based UPnP platform extension is proposed in this article. Such an extension deploys a compatible and scalable uniform environment that makes it possible to manage the heterogeneity of vehicle components and to provide plug-and-play support, being compatible with all kinds of devices and sensors located in a car network. Furthermore, the extension can auto-register any kind of external device, wherever it is located, providing the in-vehicle system with the additional services and data it supplies. The extension also supports service provisioning and connections to external and remote network services using SIP technology.
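The auto-registration idea behind such an OSGi/UPnP layer can be sketched as a service registry: devices announce themselves, and applications look services up by type without caring where the device sits. This is an illustrative sketch with made-up device and service names, not the OSGi or UPnP API:

```python
# Plug-and-play registry sketch: devices announce a service type on
# joining the network; lookups are by type, not by device location.
class ServiceRegistry:
    def __init__(self):
        self.services = {}   # service type -> list of device ids

    def announce(self, device_id, service_type):
        self.services.setdefault(service_type, []).append(device_id)

    def lookup(self, service_type):
        return self.services.get(service_type, [])

registry = ServiceRegistry()
registry.announce("cabin-temp-1", "temperature")
registry.announce("nav-unit", "location")
registry.announce("phone-gps", "location")   # external device, same interface
print(registry.lookup("location"))  # ['nav-unit', 'phone-gps']
```

In real UPnP the announce step is a multicast discovery message and the lookup returns service descriptions, but the decoupling of consumers from device placement is the same.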

  20. Measurement of baseline and orientation between distributed aerospace platforms.

    Science.gov (United States)

    Wang, Wen-Qin

    2013-01-01

Distributed platforms play an important role in aerospace remote sensing, radar navigation, and wireless communication. However, besides requiring highly accurate time and frequency synchronization for coherent signal processing, the baseline between the transmitting and receiving platforms and their orientation towards each other must be measured in real time during data recording. In this paper, we propose an improved pulsed duplex microwave ranging approach that determines the spatial baseline and orientation between distributed aerospace platforms using the proposed high-precision time-interval estimation method. The approach is novel in that it cancels the effect of frequency synchronization errors arising from the separate oscillators used on the platforms. Several performance specifications are also discussed. The effectiveness of the approach is verified by simulation results.
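Why a duplex (two-way) scheme tolerates clock errors between platforms can be shown with simple arithmetic: the round trip is timed entirely on platform A's clock, so a constant offset on platform B's clock never enters the estimate. The numbers below are illustrative, and this is the generic two-way ranging identity, not the paper's full estimation method:

```python
C = 299_792_458.0  # speed of light, m/s

def two_way_range(t_tx_a, t_rx_a, turnaround):
    """Baseline from a round trip timed entirely on platform A's clock."""
    tof = (t_rx_a - t_tx_a - turnaround) / 2.0  # one-way time of flight
    return C * tof

# Illustrative scenario: 3 km baseline, B's clock 1 ms off A's.
distance = 3000.0
tof = distance / C
offset = 1e-3          # B's clock offset relative to A
turnaround = 50e-6     # B's known reply delay (a duration on B's own clock)

t_tx_a = 0.0                            # A transmits (A's clock)
t_rx_b = t_tx_a + tof + offset          # B receives (B's clock): offset appears...
t_tx_b = t_rx_b + turnaround            # ...but only the duration t_tx_b - t_rx_b matters
t_rx_a = t_tx_a + 2 * tof + turnaround  # A receives the reply (A's clock)

print(round(two_way_range(t_tx_a, t_rx_a, turnaround)))  # 3000
```

A constant clock offset cancels exactly; what remains, and what the paper's time-interval estimation addresses, are frequency (rate) errors between the separate oscillators, which the simple identity above does not remove.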

  21. Developing an Open Source, Reusable Platform for Distributed Collaborative Information Management in the Early Detection Research Network

    Science.gov (United States)

Hart, Andrew F.; Verma, Rishi; Mattmann, Chris A.; Crichton, Daniel J.; Kelly, Sean; Kincaid, Heather; Hughes, Steven; Ramirez, Paul; Goodale, Cameron; Anton, Kristen

    2012-01-01

For the past decade, the NASA Jet Propulsion Laboratory, in collaboration with Dartmouth University, has served as the center for informatics for the Early Detection Research Network (EDRN). The EDRN is a multi-institution research effort funded by the U.S. National Cancer Institute (NCI) and tasked with identifying and validating biomarkers for the early detection of cancer. As the distributed network has grown, increasingly formal processes have been developed for the acquisition, curation, storage, and dissemination of heterogeneous research information assets, and an informatics infrastructure has emerged. In this paper we discuss the evolution of EDRN informatics, its success as a mechanism for distributed information integration, and the potential sustainability and reuse benefits of emerging efforts to make the platform components themselves open source. We describe our experience transitioning a large closed-source software system to a community-driven, open source project at the Apache Software Foundation, and point to lessons learned that will guide our present efforts to promote the reuse of the EDRN informatics infrastructure by a broader community.

  22. {cross-disciplinary} Data CyberInfrastructure: A Different Approach to Developing Collaborative Earth and Environmental Science Research Platforms

    Science.gov (United States)

    Lenhardt, W. C.; Krishnamurthy, A.; Blanton, B.; Conway, M.; Coposky, J.; Castillo, C.; Idaszak, R.

    2017-12-01

An integrated science cyberinfrastructure platform is fast becoming a norm in science, particularly where access to distributed resources, compute, data management tools, and collaboration tools is available to the end-user scientist without the need to spin up these services on their own. These platforms carry various labels, ranging from data commons to science-as-a-service, and they tend to share the common features outlined above. What tends to distinguish them, however, is their affinity for particular domains: NanoHub for nanomaterials, iPlant for plant biology, Hydroshare for hydrology, and so on. The challenge remains how to enable these platforms to be more easily adopted by other domains. This paper provides an overview of RENCI's approach to creating a science platform that can be more easily adopted by new communities while also endeavoring to accelerate their research. At RENCI, we started with Hydroshare, but have now worked to generalize the methodology for application to other domains. This new effort is called xDCi, or {cross-disciplinary} Data CyberInfrastructure. Our approach to the challenge of domain adoption includes two key elements in addition to the technology component. The first is how development is operationalized: RENCI implements a DevOps model of continuous development and deployment, which greatly increases the speed with which a new platform can come online and be refined to meet domain needs; DevOps also allows for migration over time, i.e. sustainability. The second element is a concierge model: beyond the technical elements and the more responsive development process, RENCI supports domain adoption of the platform by providing a concierge service, that is, dedicated expertise in the following areas: Information Technology, Sustainable Software, Data Science, and Sustainability. The success of the RENCI methodology is illustrated by the adoption of the

  3. Geriatric infrastructure, BRAC, and ecosystem service markets? End-of-life decisions for dams, roads, and offshore platforms (Invited)

    Science.gov (United States)

    Doyle, M. W.

    2010-12-01

    US infrastructure expanded dramatically in the mid-20th century, and now includes more than 79,000 dams, 15,000 miles of levees, 3.7 million miles of roads, 600,000 miles of sewer pipe, 500,000 onshore oil wells, and over 4,000 offshore oil platforms. Many structures have been in place for 50 years or more, and an increasing portion of national infrastructure is approaching or exceeding its originally intended design life. Bringing national infrastructure to acceptable levels would cost nearly 10% of the US annual GDP. Decommissioning infrastructure can decrease public spending and increase public safety while facilitating economic expansion and ecological restoration. While most infrastructure remains critical to the national economy, a substantial amount is obsolete or declining in importance. Over 11,000 dams are abandoned, and of nearly 400,000 miles of road on its lands, the U.S. Forest Service considers one-fourth non-essential and often non-functional. Removing obsolete infrastructure allows greater focus and funding on maintaining or improving infrastructure most critical to society. Moreover, a concerted program of infrastructure decommissioning promises significant long-term cost savings, and is a necessary step before more substantial, systematic changes are possible, like those needed to address the new energy sources and shifting climate. One key challenge for infrastructure reform is how to prioritize and implement such a widespread and politically-charged series of decisions. Two approaches are proposed for different scales. For small, private infrastructure, emerging state and federal ecosystem service markets can provide an economic impetus to push infrastructure removal. Ecosystem market mechanisms may also be most effective at identifying those projects with the greatest ecological bang for the buck. 
Examples where this approach has proved successful include dam removal for stream mitigation under the Clean Water Act, and levee decommissioning on

  4. Data Distribution Service-Based Interoperability Framework for Smart Grid Testbed Infrastructure

    Directory of Open Access Journals (Sweden)

    Tarek A. Youssef

    2016-03-01

    Full Text Available This paper presents the design and implementation of a communication and control infrastructure for smart grid operation. The proposed infrastructure enhances the reliability of the measurement and control network. The advantages of utilizing the data-centric over the message-centric communication approach are discussed in the context of smart grid applications. The data distribution service (DDS) is used to implement a data-centric common data bus for the smart grid. This common data bus improves communication reliability, enabling distributed control and smart load management. These enhancements are achieved by avoiding a single point of failure while enabling peer-to-peer communication and an automatic discovery feature for dynamically participating nodes. The infrastructure and ideas presented in this paper were implemented and tested on the smart grid testbed. A toolbox and application programming interface for the testbed infrastructure were developed in order to facilitate interoperability and remote access to the testbed. This interface allows controlling, monitoring, and performing experiments remotely. Furthermore, it could be used to integrate multidisciplinary testbeds to study complex cyber-physical systems (CPS).
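The data-centric pattern described above can be illustrated with a minimal sketch: publishers and subscribers share named topics rather than point-to-point links, and a late-joining node receives the latest known sample automatically. This is a toy in-process model, not the OMG DDS API (which adds QoS policies, wire-level discovery, and reliability protocols); all topic names are illustrative.

```python
from collections import defaultdict

class DataBus:
    """Toy data-centric bus: peers share named topics, not direct links.

    Illustrative only; a real DDS implementation adds QoS policies,
    network-level discovery, and reliability protocols.
    """
    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> callbacks
        self._last_sample = {}                  # topic -> latest value

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)
        # "Late-joiner" support: deliver the latest known sample on arrival.
        if topic in self._last_sample:
            callback(self._last_sample[topic])

    def publish(self, topic, sample):
        self._last_sample[topic] = sample
        for cb in self._subscribers[topic]:
            cb(sample)

# Usage: a meter publishes; two independent consumers receive the data
# without any direct connection to the meter, so there is no single
# point of failure in the logical topology.
bus = DataBus()
readings = []
bus.subscribe("feeder1/voltage", readings.append)
bus.publish("feeder1/voltage", 230.1)
bus.subscribe("feeder1/voltage", readings.append)  # late joiner gets last sample
```

Note how the second subscriber still obtains the current value: discovery of dynamically joining nodes falls out of the topic model rather than requiring explicit connection setup.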

  5. A Distributed Computational Infrastructure for Science and Education

    Directory of Open Access Journals (Sweden)

    Rustam K. Bazarov

    2014-06-01

    Full Text Available Researchers have lately been paying increasing attention to parallel and distributed algorithms for solving high-dimensionality problems. In this regard, the issue of acquiring or renting computational resources becomes a topical one for employees of scientific and educational institutions. This article examines technology and methods for organizing a distributed computational infrastructure. The author addresses the experience of creating a high-performance system powered by existing clusterization and grid computing technology. The approach examined in the article helps minimize financial costs, aggregate territorially distributed computational resources, and ensure a more rational use of available computer equipment, eliminating its downtimes.

  6. IBEX: an open infrastructure software platform to facilitate collaborative work in radiomics.

    Science.gov (United States)

    Zhang, Lifei; Fried, David V; Fave, Xenia J; Hunter, Luke A; Yang, Jinzhong; Court, Laurence E

    2015-03-01

    Radiomics, which is the high-throughput extraction and analysis of quantitative image features, has been shown to have considerable potential to quantify the tumor phenotype. However, at present, a lack of software infrastructure has impeded the development of radiomics and its applications. Therefore, the authors developed the imaging biomarker explorer (IBEX), an open infrastructure software platform that flexibly supports common radiomics workflow tasks such as multimodality image data import and review, development of feature extraction algorithms, model validation, and consistent data sharing among multiple institutions. The IBEX software package was developed using the MATLAB and C/C++ programming languages. The software architecture deploys the modern model-view-controller, unit testing, and function handle programming concepts to isolate each quantitative imaging analysis task, to validate whether the relevant data and algorithms are fit for use, and to plug in new modules. On one hand, IBEX is self-contained and ready to use: it has implemented common data importers, common image filters, and common feature extraction algorithms. On the other hand, IBEX provides an integrated development environment on top of MATLAB and C/C++, so users are not limited to its built-in functions. In the IBEX developer studio, users can plug in, debug, and test new algorithms, extending IBEX's functionality. IBEX also supports quality assurance for data and feature algorithms: image data, regions of interest, and feature algorithm-related data can be reviewed, validated, and/or modified. More importantly, two key elements in collaborative workflows, the consistency of data sharing and the reproducibility of calculation results, are embedded in the IBEX workflow: image data, feature algorithms, and model validation including newly developed ones from different users can be easily and consistently shared so that results can be more easily reproduced between institutions.
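As a rough illustration of the kind of feature extraction a radiomics workflow performs (IBEX itself is implemented in MATLAB and C/C++; the standalone Python sketch below is not part of IBEX), a few first-order features can be computed directly from the voxel intensities of a region of interest:

```python
import math
from collections import Counter

def first_order_features(roi):
    """Simple first-order radiomics features from a flat list of voxel
    intensities inside a region of interest (ROI)."""
    n = len(roi)
    mean = sum(roi) / n
    variance = sum((v - mean) ** 2 for v in roi) / n
    counts = Counter(roi)
    # Shannon entropy of the intensity histogram (base 2).
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"mean": mean, "variance": variance, "entropy": entropy,
            "min": min(roi), "max": max(roi)}

# Toy 8-voxel ROI with intensities 0..3.
feats = first_order_features([0, 0, 1, 1, 2, 2, 2, 3])
```

Real pipelines add texture features (GLCM, GLRLM), shape features, and image filters, but the pattern is the same: deterministic functions of the ROI intensities, which is why reproducible sharing of the algorithms matters.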

  7. Autonomous platform for distributed sensing and actuation over bluetooth

    OpenAIRE

    Carvalhal, Paulo; Coelho, Ezequiel T.; Ferreira, Manuel João Oliveira; Afonso, José A.; Santos, Cristina

    2006-01-01

    This paper presents a short-range wireless network platform based on Bluetooth technology and on a round-robin scheduling algorithm. The main goal is to provide an application-independent platform in order to support a distributed data acquisition and control system used to control a model of a greenhouse. This platform offers the advantages of wireless communications while assuring low weight, low energy consumption, and reliable communications.
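The round-robin scheduling mentioned above can be sketched as a master that grants each sensor node a transmission slot in turn, mirroring how a Bluetooth piconet master polls its slaves. The class and node names below are illustrative, not taken from the paper:

```python
from itertools import cycle

class RoundRobinMaster:
    """Toy master that polls sensor nodes in fixed round-robin order."""
    def __init__(self, nodes):
        self._order = cycle(nodes)

    def poll(self, n_slots):
        # Each slot: ask the next node in the rotation for its latest sample.
        return [(node.name, node.read())
                for node in (next(self._order) for _ in range(n_slots))]

class FakeSensor:
    """Stand-in for a Bluetooth sensor node returning canned readings."""
    def __init__(self, name, values):
        self.name, self._values = name, iter(values)
    def read(self):
        return next(self._values)

master = RoundRobinMaster([FakeSensor("temp", [21, 22]),
                           FakeSensor("humidity", [55, 54])])
samples = master.poll(4)
# -> [('temp', 21), ('humidity', 55), ('temp', 22), ('humidity', 54)]
```

The fixed rotation gives each node a bounded, predictable polling interval, which is what makes the platform usable for control loops such as the greenhouse model.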

  8. Distributed optical fiber sensors for integrated monitoring of railway infrastructures

    Science.gov (United States)

    Minardo, Aldo; Coscetta, Agnese; Porcaro, Giuseppe; Giannetta, Daniele; Bernini, Romeo; Zeni, Luigi

    2014-05-01

    We propose the application of a distributed optical fiber sensor based on stimulated Brillouin scattering as an integrated system for safety monitoring of railway infrastructures. The strain distribution was measured dynamically along a 60-meter length of rail track, as well as along a 3-m stone arch bridge. The results indicate that distributed sensing technology is able to provide useful information for railway traffic and safety monitoring.
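The conversion from a measured Brillouin frequency shift (BFS) to strain is linear in the shift. The coefficient below, roughly 0.05 MHz per microstrain for standard single-mode fibre at 1550 nm, is a typical literature value assumed for illustration, not the calibration used in this paper:

```python
def strain_from_bfs(nu_b_mhz, nu_b0_mhz, c_eps_mhz_per_ue=0.05):
    """Convert a measured Brillouin frequency shift (BFS) to strain.

    nu_b0_mhz: BFS of the unstrained fibre (calibration value).
    c_eps_mhz_per_ue: strain coefficient; ~0.05 MHz per microstrain is a
    typical value for standard single-mode fibre at 1550 nm (an assumed
    illustrative value, not taken from the paper).
    Returns strain in microstrain.
    """
    return (nu_b_mhz - nu_b0_mhz) / c_eps_mhz_per_ue

# A 25 MHz upshift corresponds to about 500 microstrain.
eps = strain_from_bfs(10_875.0, 10_850.0)
```

In practice the BFS is also temperature-sensitive, so a loose reference fibre section or a second measurement is typically used to separate strain from temperature effects.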

  9. BUILDING A COMPLETE FREE AND OPEN SOURCE GIS INFRASTRUCTURE FOR HYDROLOGICAL COMPUTING AND DATA PUBLICATION USING GIS.LAB AND GISQUICK PLATFORMS

    Directory of Open Access Journals (Sweden)

    M. Landa

    2017-07-01

    Full Text Available Building a complete free and open source GIS computing and data publication platform can be a relatively easy task. This paper describes an automated deployment of such a platform using two open source software projects: GIS.lab and Gisquick. GIS.lab (http://web.gislab.io) is a project for rapid deployment of a complete, centrally managed and horizontally scalable GIS infrastructure in a local area network, data center, or cloud. It provides a comprehensive set of free geospatial software seamlessly integrated into one easy-to-use system. A platform for GIS computing (in our case demonstrated on hydrological data processing) requires core components such as a geoprocessing server, a map server, and a computation engine, e.g., GRASS GIS, SAGA, or other similar GIS software. All these components can be rapidly and automatically deployed by the GIS.lab platform. In our demonstrated solution, PyWPS is used for serving WPS processes built on top of the GRASS GIS computation platform. GIS.lab can be easily extended by other components running in Docker containers. This approach is demonstrated by the seamless integration of Gisquick. Gisquick (http://gisquick.org) is an open source platform for publishing geospatial data in the sense of rapid sharing of QGIS projects on the web. The platform consists of a QGIS plugin, a Django-based server application, QGIS server, and web/mobile clients. This paper shows how to easily deploy a complete open source GIS infrastructure supporting all required operations: data preparation on the desktop, data sharing, and geospatial computation as a service. It also covers data publication in the sense of OGC Web Services and, importantly, as interactive web mapping applications.

  10. A survey of informatics platforms that enable distributed comparative effectiveness research using multi-institutional heterogeneous clinical data

    Science.gov (United States)

    Sittig, Dean F.; Hazlehurst, Brian L.; Brown, Jeffrey; Murphy, Shawn; Rosenman, Marc; Tarczy-Hornoch, Peter; Wilcox, Adam B.

    2012-01-01

    Comparative Effectiveness Research (CER) has the potential to transform the current healthcare delivery system by identifying the most effective medical and surgical treatments, diagnostic tests, disease prevention methods and ways to deliver care for specific clinical conditions. To be successful, such research requires the identification, capture, aggregation, integration, and analysis of disparate data sources held by different institutions with diverse representations of the relevant clinical events. In an effort to address these diverse demands, there have been multiple new designs and implementations of informatics platforms that provide access to electronic clinical data and the governance infrastructure required for inter-institutional CER. The goal of this manuscript is to help investigators understand why these informatics platforms are required and to compare and contrast six, large-scale, recently funded, CER-focused informatics platform development efforts. We utilized an 8-dimension, socio-technical model of health information technology use to help guide our work. We identified six generic steps that are necessary in any distributed, multi-institutional CER project: data identification, extraction, modeling, aggregation, analysis, and dissemination. We expect that over the next several years these projects will provide answers to many important, and heretofore unanswerable, clinical research questions. PMID:22692259

  11. A survey of informatics platforms that enable distributed comparative effectiveness research using multi-institutional heterogeneous clinical data.

    Science.gov (United States)

    Sittig, Dean F; Hazlehurst, Brian L; Brown, Jeffrey; Murphy, Shawn; Rosenman, Marc; Tarczy-Hornoch, Peter; Wilcox, Adam B

    2012-07-01

    Comparative effectiveness research (CER) has the potential to transform the current health care delivery system by identifying the most effective medical and surgical treatments, diagnostic tests, disease prevention methods, and ways to deliver care for specific clinical conditions. To be successful, such research requires the identification, capture, aggregation, integration, and analysis of disparate data sources held by different institutions with diverse representations of the relevant clinical events. In an effort to address these diverse demands, there have been multiple new designs and implementations of informatics platforms that provide access to electronic clinical data and the governance infrastructure required for interinstitutional CER. The goal of this manuscript is to help investigators understand why these informatics platforms are required and to compare and contrast 6 large-scale, recently funded, CER-focused informatics platform development efforts. We utilized an 8-dimension, sociotechnical model of health information technology to help guide our work. We identified 6 generic steps that are necessary in any distributed, multi-institutional CER project: data identification, extraction, modeling, aggregation, analysis, and dissemination. We expect that over the next several years these projects will provide answers to many important, and heretofore unanswerable, clinical research questions.

  12. Multimedia distribution using network coding on the iphone platform

    DEFF Research Database (Denmark)

    Vingelmann, Peter; Pedersen, Morten Videbæk; Fitzek, Frank

    2010-01-01

    This paper looks into the implementation details of random linear network coding on the Apple iPhone and iPod Touch mobile platforms for multimedia distribution. Previous implementations of network coding on this platform failed to achieve a throughput which is sufficient to saturate the WLAN...
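Random linear network coding can be sketched compactly if the field is simplified to GF(2), where combining packets is just XOR; production implementations, including those targeted at mobile platforms like the one in this work, typically use GF(2^8) for a higher probability of independent combinations. The helper names below are illustrative:

```python
import random

def rlnc_encode(packets, rng):
    """One coded packet: a random GF(2) combination (XOR) of the sources,
    together with its coefficient vector."""
    k = len(packets)
    coeffs = [rng.randint(0, 1) for _ in range(k)]
    if not any(coeffs):
        coeffs[rng.randrange(k)] = 1        # all-zero vectors carry no info
    payload = 0
    for c, p in zip(coeffs, packets):
        if c:
            payload ^= p
    return coeffs, payload

def rlnc_decode(coded, k):
    """Incremental Gaussian elimination over GF(2).  Returns the k source
    packets once k linearly independent combinations have been seen."""
    pivots = {}                              # pivot column -> (coeffs, payload)
    for coeffs, payload in coded:
        coeffs = coeffs[:]
        for col, (pc, pp) in pivots.items():         # forward elimination
            if coeffs[col]:
                coeffs = [a ^ b for a, b in zip(coeffs, pc)]
                payload ^= pp
        if not any(coeffs):
            continue                                 # linearly dependent: discard
        lead = coeffs.index(1)
        for col, (pc, pp) in list(pivots.items()):   # keep table fully reduced
            if pc[lead]:
                pivots[col] = ([a ^ b for a, b in zip(pc, coeffs)], pp ^ payload)
        pivots[lead] = (coeffs, payload)
    if len(pivots) < k:
        return None                                  # not yet decodable
    return [pivots[c][1] for c in range(k)]          # reduced rows are unit vectors

# Deterministic example: three sources, three independent combinations,
# plus one redundant randomly coded packet (harmlessly eliminated).
p = [5, 9, 12]
coded = [([1, 0, 0], p[0]),
         ([1, 1, 0], p[0] ^ p[1]),
         ([0, 1, 1], p[1] ^ p[2])]
coded.append(rlnc_encode(p, random.Random(1)))
recovered = rlnc_decode(coded, 3)            # -> [5, 9, 12]
```

The decoder's per-packet Gaussian elimination is exactly the step whose cost dominates on a handset CPU, which is why throughput on the WLAN is the critical metric in this line of work.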

  13. Two-Dimensional Key Table-Based Group Key Distribution in Advanced Metering Infrastructure

    Directory of Open Access Journals (Sweden)

    Woong Go

    2014-01-01

    Full Text Available A smart grid provides two-way communication using information and communication technology. To establish this two-way communication, the advanced metering infrastructure (AMI) is used in the smart grid as the core infrastructure. This infrastructure consists of smart meters, data collection units, maintenance data management systems, and so on. However, because the transmitted information is electricity consumption data used for billing, the AMI's reliance on public networks raises potential security problems. Thus, in order to establish a secure connection to transmit electricity consumption data, encryption is necessary, for which key distribution is required. Further, a group key is more efficient than a pairwise key in the hierarchical structure of the AMI. Therefore, informed by an analysis of sensor network group key distribution schemes, we propose a group key distribution scheme using a two-dimensional key table. The proposed scheme has three phases: group key predistribution, selection of the group key generation element, and generation of the group key.
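As a loose illustration of the key-table idea (the sketch below is a simplified stand-in invented for this note, not the scheme proposed in the paper): if every meter in a group is predistributed one row of a secret table, the head-end can rotate the group key just by broadcasting a column index, and each member derives the new key locally without any secret being retransmitted.

```python
import hashlib
import os

class KeyTableAuthority:
    """Toy head-end holding an m x n table of random secrets.  Each group
    of meters is predistributed one ROW at enrolment; announcing a column
    index afterwards rotates the group key."""
    def __init__(self, m_groups, n_columns):
        self.table = [[os.urandom(16) for _ in range(n_columns)]
                      for _ in range(m_groups)]

    def row_for_group(self, g):
        return list(self.table[g])      # delivered securely at enrolment

    def announce(self, g, column):
        return column                   # broadcast in the clear: no secret leaks

def derive_group_key(row_secrets, column):
    """Every holder of the row derives the same key from the chosen cell."""
    return hashlib.sha256(row_secrets[column]).hexdigest()

authority = KeyTableAuthority(m_groups=4, n_columns=8)
row = authority.row_for_group(2)                 # meter side, done once
col = authority.announce(2, column=5)            # key rotation broadcast
meter_key = derive_group_key(row, col)
headend_key = hashlib.sha256(authority.table[2][5]).hexdigest()
```

The paper's scheme additionally covers how the generation element is selected and how the two table dimensions interact; the sketch only shows why a predistributed table makes rekeying a one-index broadcast.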

  14. Securing a Home Energy Managing Platform

    DEFF Research Database (Denmark)

    Mikkelsen, Søren Aagaard; Jacobsen, Rune Hylsberg

    2016-01-01

    Energy management in households is getting increasing attention in the struggle to integrate more sustainable energy sources. Especially in the electrical system, the smart grid works towards a better utilisation of the energy production and distribution infrastructure. The Home Energy Management System...... (HEMS) is a critical infrastructure component in this endeavour. Its main goal is to enable energy services utilising smart devices in the households, based on the interests of the residential consumers and external actors. With the role of being both an essential link in the communication infrastructure...... for balancing the electrical grid and a surveillance unit in private homes, security and privacy become essential to address. In this chapter, we identify and address potential threats that Home Energy Management Platform (HEMP) developers should consider in the process of designing architecture, selecting hardware...

  15. The vacuum platform

    Science.gov (United States)

    McNab, A.

    2017-10-01

    This paper describes GridPP’s Vacuum Platform for managing virtual machines (VMs), which has been used to run production workloads for WLCG and other HEP experiments. The platform provides a uniform interface between VMs and the sites they run at, whether the site is organised as an Infrastructure-as-a-Service cloud system such as OpenStack, or an Infrastructure-as-a-Client system such as Vac. The paper describes our experience in using this platform, in developing and operating VM lifecycle managers Vac and Vcycle, and in interacting with VMs provided by LHCb, ATLAS, ALICE, CMS, and the GridPP DIRAC service to run production workloads.

  16. Scaling to diversity: The DERECHOS distributed infrastructure for analyzing and sharing data

    Science.gov (United States)

    Rilee, M. L.; Kuo, K. S.; Clune, T.; Oloso, A.; Brown, P. G.

    2016-12-01

    Integrating Earth Science data from diverse sources such as satellite imagery and simulation output can be expensive and time-consuming, limiting scientific inquiry and the quality of our analyses. Reducing these costs will improve innovation and quality in science. The current Earth Science data infrastructure focuses on downloading data based on requests formed from the search and analysis of associated metadata. And while the data products provided by archives may use the best available data sharing technologies, scientist end-users generally do not have such resources (including staff) available to them. Furthermore, only once an end-user has received the data from multiple diverse sources and has integrated them can the actual analysis and synthesis begin. The cost of getting from idea to the point where synthesis can start dramatically slows progress. In this presentation we discuss a distributed computational and data storage framework that eliminates much of the aforementioned cost. The SciDB distributed array database is central, as it is optimized for scientific computing involving very large arrays, performing better than less specialized frameworks like Spark. Adding spatiotemporal functions to SciDB creates a powerful platform for analyzing and integrating massive, distributed datasets. SciDB allows Big Earth Data analysis to be performed "in place" without the need for expensive downloads and end-user resources. Spatiotemporal indexing technologies such as the hierarchical triangular mesh enable the compute and storage affinity needed to efficiently perform co-located and conditional analyses, minimizing data transfers. These technologies automate the integration of diverse data sources using the framework, a critical step beyond current metadata search and analysis. Instead of downloading data into their idiosyncratic local environments, end-users can generate and share data products integrated from diverse sources using a common shared environment.
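The compute and storage affinity idea can be illustrated with a toy chunking scheme: if two datasets are partitioned by the same coarse spatiotemporal key, the records that must be compared end up co-located and the join requires no data movement. DERECHOS uses a hierarchical triangular mesh for this purpose; the flat latitude/longitude/time key below is a deliberately simplified stand-in:

```python
from collections import defaultdict

def chunk_key(lat, lon, t, deg=1.0, dt=3600):
    """Coarse spatiotemporal key: records sharing a key would land on the
    same storage node, so cross-dataset joins need no data movement."""
    return (int(lat // deg), int(lon // deg), int(t // dt))

def colocate(*datasets):
    """Group records (lat, lon, t, value) from several datasets by shared
    chunk key, emulating an affinity-aware join."""
    chunks = defaultdict(lambda: [[] for _ in datasets])
    for i, ds in enumerate(datasets):
        for lat, lon, t, value in ds:
            chunks[chunk_key(lat, lon, t)][i].append(value)
    return chunks

# Two "datasets" whose records fall in the same 1-degree / 1-hour cell.
satellite = [(35.2, -97.4, 100.0, "radiance")]
model     = [(35.9, -97.1, 200.0, "forecast")]
joined = colocate(satellite, model)
```

A real HTM index replaces the rectangular cell with nested spherical triangles, avoiding pole and date-line distortions, but the affinity property being exploited is the same.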

  17. IBEX: An open infrastructure software platform to facilitate collaborative work in radiomics

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lifei; Yang, Jinzhong [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Fried, David V.; Fave, Xenia J.; Hunter, Luke A.; Court, Laurence E., E-mail: LECourt@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 and The University of Texas Graduate School of Biomedical Sciences at Houston, Houston, Texas 77030 (United States)

    2015-03-15

    Purpose: Radiomics, which is the high-throughput extraction and analysis of quantitative image features, has been shown to have considerable potential to quantify the tumor phenotype. However, at present, a lack of software infrastructure has impeded the development of radiomics and its applications. Therefore, the authors developed the imaging biomarker explorer (IBEX), an open infrastructure software platform that flexibly supports common radiomics workflow tasks such as multimodality image data import and review, development of feature extraction algorithms, model validation, and consistent data sharing among multiple institutions. Methods: The IBEX software package was developed using the MATLAB and C/C++ programming languages. The software architecture deploys the modern model-view-controller, unit testing, and function handle programming concepts to isolate each quantitative imaging analysis task, to validate if their relevant data and algorithms are fit for use, and to plug in new modules. On one hand, IBEX is self-contained and ready to use: it has implemented common data importers, common image filters, and common feature extraction algorithms. On the other hand, IBEX provides an integrated development environment on top of MATLAB and C/C++, so users are not limited to its built-in functions. In the IBEX developer studio, users can plug in, debug, and test new algorithms, extending IBEX’s functionality. IBEX also supports quality assurance for data and feature algorithms: image data, regions of interest, and feature algorithm-related data can be reviewed, validated, and/or modified. More importantly, two key elements in collaborative workflows, the consistency of data sharing and the reproducibility of calculation result, are embedded in the IBEX workflow: image data, feature algorithms, and model validation including newly developed ones from different users can be easily and consistently shared so that results can be more easily reproduced between

  18. IBEX: An open infrastructure software platform to facilitate collaborative work in radiomics

    International Nuclear Information System (INIS)

    Zhang, Lifei; Yang, Jinzhong; Fried, David V.; Fave, Xenia J.; Hunter, Luke A.; Court, Laurence E.

    2015-01-01

    Purpose: Radiomics, which is the high-throughput extraction and analysis of quantitative image features, has been shown to have considerable potential to quantify the tumor phenotype. However, at present, a lack of software infrastructure has impeded the development of radiomics and its applications. Therefore, the authors developed the imaging biomarker explorer (IBEX), an open infrastructure software platform that flexibly supports common radiomics workflow tasks such as multimodality image data import and review, development of feature extraction algorithms, model validation, and consistent data sharing among multiple institutions. Methods: The IBEX software package was developed using the MATLAB and C/C++ programming languages. The software architecture deploys the modern model-view-controller, unit testing, and function handle programming concepts to isolate each quantitative imaging analysis task, to validate if their relevant data and algorithms are fit for use, and to plug in new modules. On one hand, IBEX is self-contained and ready to use: it has implemented common data importers, common image filters, and common feature extraction algorithms. On the other hand, IBEX provides an integrated development environment on top of MATLAB and C/C++, so users are not limited to its built-in functions. In the IBEX developer studio, users can plug in, debug, and test new algorithms, extending IBEX’s functionality. IBEX also supports quality assurance for data and feature algorithms: image data, regions of interest, and feature algorithm-related data can be reviewed, validated, and/or modified. More importantly, two key elements in collaborative workflows, the consistency of data sharing and the reproducibility of calculation result, are embedded in the IBEX workflow: image data, feature algorithms, and model validation including newly developed ones from different users can be easily and consistently shared so that results can be more easily reproduced between

  19. Establishing a distributed national research infrastructure providing bioinformatics support to life science researchers in Australia.

    Science.gov (United States)

    Schneider, Maria Victoria; Griffin, Philippa C; Tyagi, Sonika; Flannery, Madison; Dayalan, Saravanan; Gladman, Simon; Watson-Haigh, Nathan; Bayer, Philipp E; Charleston, Michael; Cooke, Ira; Cook, Rob; Edwards, Richard J; Edwards, David; Gorse, Dominique; McConville, Malcolm; Powell, David; Wilkins, Marc R; Lonie, Andrew

    2017-06-30

    EMBL Australia Bioinformatics Resource (EMBL-ABR) is a developing national research infrastructure, providing bioinformatics resources and support to life science and biomedical researchers in Australia. EMBL-ABR comprises 10 geographically distributed national nodes with one coordinating hub, with current funding provided through Bioplatforms Australia and the University of Melbourne for its initial 2-year development phase. The EMBL-ABR mission is to: (1) increase Australia's capacity in bioinformatics and data sciences; (2) contribute to the development of training in bioinformatics skills; (3) showcase Australian data sets at an international level and (4) enable engagement in international programs. The activities of EMBL-ABR are focussed in six key areas, aligning with comparable international initiatives such as ELIXIR, CyVerse and NIH Commons. These key areas-Tools, Data, Standards, Platforms, Compute and Training-are described in this article. © The Author 2017. Published by Oxford University Press.

  20. Envri Cluster - a Community-Driven Platform of European Environmental Researcher Infrastructures for Providing Common E-Solutions for Earth Science

    Science.gov (United States)

    Asmi, A.; Sorvari, S.; Kutsch, W. L.; Laj, P.

    2017-12-01

    European long-term environmental research infrastructures (often referred to as ESFRI RIs) are the core facilities for providing services to scientists in their quest to understand and predict the complex Earth system and its functioning, which requires long-term efforts to identify environmental changes (trends, thresholds and resilience, interactions and feedbacks). Many of the research infrastructures were originally developed to respond to the needs of their specific research communities; however, it is clear that strong collaboration among research infrastructures is needed to serve trans-boundary research, which requires exploring scientific questions at the intersection of different scientific fields, conducting joint research projects, and developing concepts, devices, and methods that can be used to integrate knowledge. European environmental research infrastructures have already worked together successfully for many years and have established a cluster, the ENVRI cluster, for their collaborative work. The ENVRI cluster acts as a collaborative platform where the RIs can jointly agree on common solutions for their operations, draft strategies and policies, and share best practices and knowledge. The supporting project for the ENVRI cluster, the ENVRIplus project, brings together 21 European research infrastructures and infrastructure networks to work on joint technical solutions, data interoperability, access management, training, strategies, and dissemination efforts. The ENVRI cluster acts as a one-stop shop for multidisciplinary RI users, other collaborative initiatives, projects, and programmes, and coordinates and implements jointly agreed RI strategies.

  1. Web-GIS platform for green infrastructure in Bucharest, Romania

    Science.gov (United States)

    Sercaianu, Mihai; Petrescu, Florian; Aldea, Mihaela; Oana, Luca; Rotaru, George

    2015-06-01

    In the last decade, reducing urban pollution and improving the quality of public spaces have become increasingly important issues for public administration authorities in Romania. The paper describes the development of a web-GIS solution dedicated to monitoring the green infrastructure in Bucharest, Romania. The system allows urban residents (citizens) to collect and directly report relevant information regarding the current status of the city's green infrastructure. Consequently, the citizens become an active component of the decision-support process within the public administration. Besides the usual technical characteristics of such geo-information processing systems, the complex legal and organizational problems that arise in collecting information directly from citizens required additional analysis concerning, for example, local government involvement, environmental protection agency regulations, and public entity requirements. Designing and implementing the whole information exchange process, based on active interaction between citizens and public administration bodies, required the use of the "citizen-sensor" concept deployed with GIS tools. The information collected and reported from the field relates to many factors, which are not always limited to the city level, providing the possibility to consider the green infrastructure as a whole. The "citizen-request" web-GIS solution for green infrastructure monitoring is characterized by very diverse urban information, because the green infrastructure itself is conditioned by many urban elements, such as urban infrastructures, urban infrastructure works, and construction density.

  2. A Mobile-based Platform for Big Load Profiles Data Analytics in Non-Advanced Metering Infrastructures

    Directory of Open Access Journals (Sweden)

    Moussa Sherin

    2016-01-01

    Full Text Available With the rapid increase of electricity demand around the world due to industrialization and urbanization, precise knowledge about the consumption patterns of consumers has become a valuable asset for electricity providers, given the current competitive electricity market. It allows them to provide satisfactory services in times of load peaks and to control fraud and abuse. Despite this crucial necessity, it is currently very hard to achieve in many developing countries, since smart meters and advanced metering infrastructures (AMIs) are not yet deployed there to monitor and report energy usage. Communication and information technologies, however, have emerged widely in such nations, allowing the enormous spread of smart devices among the population. In this paper, we present mobile-based BLPDA, a novel platform for big data analytics of consumers' load profiles (LPs) in the absence of AMIs. The proposed platform utilizes mobile computing to collect the consumption of consumers, build their LPs, and analyze the aggregated usage data, allowing electricity providers to have a better basis for an enhanced decision-making process. The experimental results emphasize the effectiveness of our platform as an adequate alternative to AMIs in developing countries, at minimal cost.
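The load-profile construction step can be sketched as aggregating mobile-collected readings into an average daily profile per consumer and picking out the peak hour. The data layout below, (hour of day, kWh) pairs, is an assumption made for illustration, not the platform's actual schema:

```python
from collections import defaultdict

def build_load_profile(readings):
    """Average daily load profile from timestamped meter readings
    reported via the mobile app: readings are (hour_of_day, kWh) pairs."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for hour, kwh in readings:
        sums[hour] += kwh
        counts[hour] += 1
    return {h: sums[h] / counts[h] for h in sums}

def peak_hour(profile):
    """Hour with the highest average consumption, useful for peak planning."""
    return max(profile, key=profile.get)

# Two days of morning and evening readings for one consumer.
profile = build_load_profile([(8, 1.2), (8, 1.0), (19, 2.4), (19, 2.6)])
evening_peak = peak_hour(profile)
```

Aggregating such per-consumer profiles across the customer base is what gives the provider the demand-side visibility that smart meters would otherwise supply.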

  3. A resilient and secure software platform and architecture for distributed spacecraft

    Science.gov (United States)

    Otte, William R.; Dubey, Abhishek; Karsai, Gabor

    2014-06-01

    A distributed spacecraft is a cluster of independent satellite modules flying in formation that communicate via ad-hoc wireless networks. This system in space is a cloud platform that facilitates sharing sensors and other computing and communication resources across multiple applications, potentially developed and maintained by different organizations. Effectively, such an architecture can realize the functions of monolithic satellites at a reduced cost and with improved adaptivity and robustness. The openness of these architectures poses special challenges because the distributed software platform has to support applications from different security domains and organizations, and information flows have to be carefully managed and compartmentalized. If the platform is used as a robust shared resource, its management, configuration, and resilience become a challenge in themselves. We have designed and prototyped a distributed software platform for such architectures. The core element of the platform is a new operating system whose services were designed to restrict access to the network and the file system, and to enforce resource management constraints for all non-privileged processes. Mixed-criticality applications operating at different security labels are deployed and controlled by a privileged management process that also pre-configures all information flows. This paper describes the design and objective of this layer.
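The compartmentalization of information flows can be illustrated with a toy label check in the style of Bell-LaPadula ("no write down"). The actual platform enforces flows at the operating-system level; the sketch below shows only the policy decision the privileged management process would make when pre-configuring a flow, with invented label names:

```python
class Label:
    """Totally ordered security label (illustrative levels)."""
    LEVELS = ("unclassified", "restricted", "secret")
    def __init__(self, name):
        self.level = self.LEVELS.index(name)

def flow_allowed(src, dst):
    """Bell-LaPadula style rule: data may only flow from a lower
    (or equal) label to a higher one, never downward."""
    return src.level <= dst.level

class FlowBroker:
    """Toy management process that pre-configures which flows exist;
    applications never open connections themselves."""
    def __init__(self):
        self.routes = []
    def connect(self, src_label, dst_label):
        if not flow_allowed(src_label, dst_label):
            raise PermissionError("flow would leak information downward")
        self.routes.append((src_label, dst_label))

broker = FlowBroker()
broker.connect(Label("unclassified"), Label("secret"))   # allowed: upward flow
```

Centralizing the decision in a privileged broker, rather than trusting each application, is the design choice the abstract's "pre-configuring all information flows" refers to.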

  4. Graph processing platforms at scale: practices and experiences

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Seung-Hwan [ORNL]; Lee, Sangkeun (Matt) [ORNL]; Brown, Tyler C [ORNL]; Sukumar, Sreenivas R [ORNL]; Ganesh, Gautam [ORNL]

    2015-01-01

    Graph analysis unveils hidden associations in data in many phenomena and artifacts, such as road networks, social networks, genomic information, and scientific collaboration. Unfortunately, the wide diversity in the characteristics of graphs and graph operations makes it challenging to find the right combination of tools and implementation of algorithms to discover the desired knowledge from a target data set. This study presents an extensive empirical study of three representative graph processing platforms: Pegasus, GraphX, and Urika. Each system represents a combination of options in data model, processing paradigm, and infrastructure. We benchmarked each platform using three popular graph operations, degree distribution, connected components, and PageRank, over a variety of real-world graphs. Our experiments show that each graph processing platform has different strengths, depending on the type of graph operation. While Urika performs best in non-iterative operations like degree distribution, GraphX outperforms the others in iterative operations like connected components and PageRank. In addition, we discuss challenges in optimizing the performance of each platform over large-scale real-world graphs.
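As a reference point for the three benchmark operations, each can be written in a few lines of plain Python over an adjacency-set graph. This is a toy sketch of the operations themselves, not the Pegasus, GraphX, or Urika implementations:

```python
from collections import Counter, deque

def degree_distribution(adj):
    """Histogram: degree -> number of nodes with that degree."""
    return Counter(len(nbrs) for nbrs in adj.values())

def connected_components(adj):
    """BFS over an undirected adjacency-set graph."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            u = queue.popleft()
            if u in comp:
                continue
            comp.add(u)
            queue.extend(v for v in adj[u] if v not in comp)
        seen |= comp
        comps.append(comp)
    return comps

def pagerank(adj, d=0.85, iters=50):
    """Plain power iteration; dangling nodes spread rank evenly."""
    n = len(adj)
    pr = {u: 1.0 / n for u in adj}
    for _ in range(iters):
        new = {u: (1 - d) / n for u in adj}
        for u, nbrs in adj.items():
            if nbrs:
                share = d * pr[u] / len(nbrs)
                for v in nbrs:
                    new[v] += share
            else:
                for v in adj:
                    new[v] += d * pr[u] / n
        pr = new
    return pr

# Tiny undirected graph: a path 1-2-3 plus an isolated node 4.
adj = {1: {2}, 2: {1, 3}, 3: {2}, 4: set()}
```

Degree distribution is a single pass (non-iterative), whereas connected components and PageRank require repeated traversal or iteration until convergence, which is exactly why different platforms excel at different operations.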

  5. Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid

    CERN Document Server

    Andrade, Pedro; Bhatt, Kislay; Chand, Phool; Collados, David; Duggal, Vibhuti; Fuente, Paloma; Hayashi, Soichi; Imamagic, Emir; Joshi, Pradyumna; Kalmady, Rajesh; Karnani, Urvashi; Kumar, Vaibhav; Lapka, Wojciech; Quick, Robert; Tarragon, Jacobo; Teige, Scott; Triantafyllidis, Christos

    2012-01-01

    The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO managers, service managers, management), from different middleware providers (ARC, dCache, gLite, UNICORE and VDT), consortiums (WLCG, EMI, EGI, OSG), and operational teams (GOC, OMB, OTAG, CSIRT). The seamless harmonization of these distributed actors is in daily use for monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to the visualization on the MyWLCG portal where it is exposed to other clients. This monitoring workflow profits from the i...

  6. Vision and Control for UAVs: A Survey of General Methods and of Inexpensive Platforms for Infrastructure Inspection

    Directory of Open Access Journals (Sweden)

    Koppány Máthé

    2015-06-01

    Full Text Available Unmanned aerial vehicles (UAVs) have gained significant attention in recent years. Low-cost platforms using inexpensive sensor payloads have been shown to provide satisfactory flight and navigation capabilities. In this report, we survey vision and control methods that can be applied to low-cost UAVs, and we list some popular inexpensive platforms and application fields where they are useful. We also highlight the sensor suites used where this information is available. We overview, among others, feature detection and tracking, optical flow and visual servoing, low-level stabilization and high-level planning methods. We then list popular low-cost UAVs, selecting mainly quadrotors. We discuss applications, restricting our focus to the field of infrastructure inspection. Finally, as an example, we formulate two use cases for railway inspection, a less explored application field, and illustrate the usage of the vision and control techniques reviewed by selecting appropriate ones to tackle these use cases. To select vision methods, we run a thorough set of experimental evaluations.
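As an illustration of the low-level stabilization methods surveyed, here is a toy PID rate controller closed around a made-up first-order plant model. All gains and plant coefficients are invented for the example and are not taken from any surveyed platform:

```python
def stabilize(setpoint, steps=600, kp=2.0, ki=0.5, kd=0.1, dt=0.02):
    """Discrete PID loop around a first-order rate model
    rate' = -a*rate + b*u (a, b, and the gains are assumptions)."""
    a, b = 1.0, 4.0
    rate, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - rate
        integral += err * dt                 # integral term
        deriv = (err - prev_err) / dt        # derivative term
        u = kp * err + ki * integral + kd * deriv
        rate += (-a * rate + b * u) * dt     # Euler step of the plant
        prev_err = err
    return rate  # settles near the setpoint
```

On a real airframe this loop runs at a fixed rate on gyro measurements; the structure, not the numbers, is the point.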

  7. Climate Science's Globally Distributed Infrastructure

    Science.gov (United States)

    Williams, D. N.

    2016-12-01

    The Earth System Grid Federation (ESGF) is primarily funded by the Department of Energy's (DOE's) Office of Science (the Office of Biological and Environmental Research [BER] Climate Data Informatics Program and the Office of Advanced Scientific Computing Research Next Generation Network for Science Program), the National Oceanic and Atmospheric Administration (NOAA), the National Aeronautics and Space Administration (NASA), and the National Science Foundation (NSF), the European Infrastructure for the European Network for Earth System Modeling (IS-ENES), and the Australian National University (ANU). Support also comes from other U.S. federal and international agencies. The federation works across multiple worldwide data centers and spans seven international network organizations to provide users with the ability to access, analyze, and visualize data using a globally federated collection of networks, computers, and software. Its architecture employs a series of geographically distributed peer nodes that are independently administered and united by common federation protocols and application programming interfaces (APIs). The full ESGF infrastructure has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the Coupled Model Intercomparison Project (CMIP; output used by the Intergovernmental Panel on Climate Change assessment reports), multiple model intercomparison projects (MIPs; endorsed by the World Climate Research Programme [WCRP]), and the Accelerated Climate Modeling for Energy (ACME; ESGF is included in the overarching ACME workflow process to store model output). ESGF is a successful example of integration of disparate open-source technologies into a cohesive functional system that serves the needs of the global climate science community. Data served by ESGF includes not only model output but also observational data from satellites and instruments, reanalysis, and generated images.

  8. Contribution to global computation infrastructure: inter-platform delegation, integration of standard services and application to high-energy physics; Contribution aux infrastructures de calcul global: delegation inter plates-formes, integration de services standards et application a la physique des hautes energies

    Energy Technology Data Exchange (ETDEWEB)

    Lodygensky, Oleg [Laboratoire de Recherche en Informatique, Laboratoire de l' Accelerateur Lineaire, Bat. 200, 91898 Orsay Cedex (France)

    2006-07-01

    The generalization and implementation of current information resources, particularly large storage capacities and networks, allow new methods of work and ways of entertainment to be conceived. Centralized, stand-alone, monolithic computing stations have been gradually replaced by distributed client-server architectures, which in turn are challenged by the new distributed systems called peer-to-peer systems. This migration is no longer the realm of specialists alone: users of more modest skills have become accustomed to these new techniques for e-mailing, accessing commercial information and exchanging various sorts of files on a peer-to-peer basis. Trade, industry and research alike profit largely from the new technique called the 'grid', a technique for handling information at a global scale. The present work concerns the use of the grid for computation. A synergy was created at Paris-Sud University in Orsay between the Information Research Laboratory (LRI) and the Linear Accelerator Laboratory (LAL), in order to foster work on grid infrastructure of high research interest for LRI while offering new working methods to LAL. The results of the work developed within this interdisciplinary collaboration are based on XtremWeb, the research and production platform for global computation elaborated at LRI. We first present the current status of large-scale distributed systems, their basic principles and user-oriented architecture. XtremWeb is then described, focusing on the modifications made to both architecture and implementation in order to fulfill optimally the requirements imposed on such a platform. We then present studies with the platform allowing a generalization of inter-grid resources and the development of a user-oriented grid adapted to special services as well. Finally, we present the operation modes, the problems to solve and the advantages of this new platform for high-energy research.

  9. Computational Infrastructure for Nuclear Astrophysics

    International Nuclear Information System (INIS)

    Smith, Michael S.; Hix, W. Raphael; Bardayan, Daniel W.; Blackmon, Jeffery C.; Lingerfelt, Eric J.; Scott, Jason P.; Nesaraja, Caroline D.; Chae, Kyungyuk; Guidry, Michael W.; Koura, Hiroyuki; Meyer, Richard A.

    2006-01-01

    A Computational Infrastructure for Nuclear Astrophysics has been developed to streamline the inclusion of the latest nuclear physics data in astrophysics simulations. The infrastructure consists of a platform-independent suite of computer codes that is freely available online at nucastrodata.org. Features of, and future plans for, this software suite are given

  10. Romanian contribution to research infrastructure database for EPOS

    Science.gov (United States)

    Ionescu, Constantin; Craiu, Andreea; Tataru, Dragos; Balan, Stefan; Muntean, Alexandra; Nastase, Eduard; Oaie, Gheorghe; Asimopolos, Laurentiu; Panaiotu, Cristian

    2014-05-01

    European Plate Observation System - EPOS is a long-term plan to facilitate the integrated use of data, models and facilities from mainly existing, but also new, distributed research infrastructures for solid Earth science. In the EPOS Preparatory Phase, the national research infrastructures were integrated at the pan-European level in order to create the EPOS distributed research infrastructure, a structure in which Romania currently participates by means of the Earth science research infrastructures of national interest declared on the National Roadmap. The mission of EPOS is to build an efficient and comprehensive multidisciplinary research platform for solid Earth sciences in Europe and to allow the scientific community to study the same phenomena from different points of view, in different time periods and at different spatial scales (laboratory and field experiments). At the national scale, research and monitoring infrastructures have gathered a vast amount of geological and geophysical data, which have been used by research networks to underpin our understanding of the Earth. EPOS promotes the creation of comprehensive national and regional consortia, as well as the organization of collective actions. To serve the EPOS goals, a group of Romanian national research institutes, together with their infrastructures, gathered in an EPOS National Consortium, as follows: 1. National Institute for Earth Physics - seismic, strong motion, GPS and geomagnetic networks and Experimental Laboratory; 2. National Institute of Marine Geology and Geoecology - marine research infrastructure and the Euxinus integrated regional Black Sea observation and early-warning system; 3. Geological Institute of Romania - Surlari National Geomagnetic Observatory and the National lithoteque (the latter as part of the National Museum of Geology); 4. University of Bucharest - Paleomagnetic Laboratory. After national dissemination of the EPOS initiative, other research institutes and companies from the potential

  11. Platform for Distributed 3D Gaming

    Directory of Open Access Journals (Sweden)

    A. Jurgelionis

    2009-01-01

    Full Text Available Video games are typically executed on Windows platforms with the DirectX API and require high-performance CPUs and graphics hardware. For pervasive gaming in various environments like homes, hotels, or internet cafes, it is beneficial to run games also on mobile devices and modest-performance CE devices, avoiding the necessity of placing a noisy workstation in the living room or costly computers/consoles in each room of a hotel. This paper presents a new cross-platform approach for distributed 3D gaming in wired/wireless local networks. We introduce the novel system architecture and protocols used to transfer the game graphics data across the network to end devices. Simultaneous execution of video games on a central server and a novel streaming approach for the 3D graphics output to multiple end devices enable access to games on low-cost set-top boxes and handheld devices that natively lack the power to execute a game with high-quality graphical output.

  12. Distributed control in the electricity infrastructure

    International Nuclear Information System (INIS)

    Kok, J.K.; Warmer, C.; Kamphuis, I.G.; Mellstrand, P.; Gustavsson, R.

    2006-01-01

    Different driving forces push the electricity production towards decentralization. As a result, the current electricity infrastructure is expected to evolve into a network of networks, in which all system parts communicate with each other and influence each other. Multiagent systems and electronic markets form an appropriate technology needed for control and coordination tasks in the future electricity network. We present the PowerMatcher, a market-based control concept for supply demand matching (SDM) in electricity networks. In a simulation study we show the ability of this approach to raise the simultaneousness of electricity production and consumption within (local) control clusters. This control concept can be applied in different business cases like reduction of imbalance costs in commercial portfolios or virtual power plant operation of distributed generators. Two PowerMatcher-based field test configurations are described, one currently in operation, one currently under construction
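The market-based matching idea can be illustrated with a minimal clearing sketch: each agent submits a bid curve (power as a function of price) and a coordinator searches for the price at which aggregate supply and demand balance. The agents and numbers below are hypothetical, and this is not the PowerMatcher implementation:

```python
def clear_market(bids, lo=0.0, hi=100.0, iters=60):
    """Bisection for the equilibrium price. Each bid maps price ->
    power (demand positive, supply negative) and is non-increasing,
    so the aggregate crosses zero exactly once in [lo, hi]."""
    def net(price):
        return sum(bid(price) for bid in bids)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if net(mid) > 0:      # excess demand: raise the price
            lo = mid
        else:                 # excess supply: lower the price
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical agents: a heater that backs off as the price rises
# and a generator that offers more supply at higher prices.
heater = lambda p: max(0.0, 2.0 - 0.05 * p)   # demand in kW
generator = lambda p: -0.04 * p               # supply (negative kW)
price = clear_market([heater, generator])     # ~22.2
```

At the cleared price, each device runs at the power its own bid curve specifies, which is how supply and demand are matched without central knowledge of individual constraints.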

  13. Infrastructure and distributed learning methodology for privacy-preserving multi-centric rapid learning health care: euroCAT

    Directory of Open Access Journals (Sweden)

    Timo M. Deist

    2017-06-01

    The euroCAT infrastructure has been successfully implemented in five radiation clinics across three countries. SVM models can be learned on data distributed over all five clinics. Furthermore, the infrastructure provides a general framework to execute learning algorithms on distributed data. The ongoing expansion of the euroCAT network will facilitate machine learning in radiation oncology. The resulting access to larger datasets with sufficient variation will pave the way for generalizable prediction models and personalized medicine.
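The privacy-preserving pattern, learning one model across clinics while only aggregated statistics leave each site, can be sketched with a linear SVM (hinge loss) trained by exchanging gradients. This is a simplified illustration of the general idea under invented data, not the euroCAT algorithm:

```python
def site_gradient(w, b, data, C=1.0):
    """Hinge-loss sub-gradient on one clinic's local data; only this
    gradient (not the patient records) leaves the site."""
    gw, gb = [0.0] * len(w), 0.0
    for x, y in data:
        margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
        if margin < 1:
            for i, xi in enumerate(x):
                gw[i] -= C * y * xi
            gb -= C * y
    return gw, gb

def federated_svm(sites, dim, epochs=200, lr=0.01, lam=0.01):
    """Gradient descent where each step sums per-site gradients."""
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        gw_total, gb_total = [lam * wi for wi in w], 0.0
        for data in sites:   # each site computes on its own data
            gw, gb = site_gradient(w, b, data)
            gw_total = [a + g for a, g in zip(gw_total, gw)]
            gb_total += gb
        w = [wi - lr * g for wi, g in zip(w, gw_total)]
        b -= lr * gb_total
    return w, b

# Invented toy cohorts at two "clinics".
site_a = [((2.0, 0.0), 1), ((3.0, 1.0), 1)]
site_b = [((-2.0, 0.0), -1), ((-3.0, -1.0), -1)]
w, b = federated_svm([site_a, site_b], dim=2)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

The same loop structure generalizes to other learning algorithms whose updates decompose into per-site sums, which is the property the infrastructure exploits.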

  14. APhoRISM FP7 project: the Multi-platform volcanic Ash Cloud Estimation (MACE) infrastructure

    Science.gov (United States)

    Merucci, Luca; Corradini, Stefano; Bignami, Christian; Stramondo, Salvatore

    2014-05-01

    APhoRISM is an FP7 project that aims to develop innovative products to support the management and mitigation of volcanic and seismic crises. Satellite and ground measurements will be managed in a novel manner to provide new and improved products in terms of accuracy and quality of information. The Multi-platform volcanic Ash Cloud Estimation (MACE) infrastructure will exploit the complementarity between geostationary and polar satellite sensors and ground measurements to improve ash detection and retrieval and to fully characterize the volcanic ash clouds from the source to the atmosphere. The basic idea behind the proposed method is to manage, in a novel manner, the volcanic ash retrievals at the space-time scale of typical geostationary observations, using both the polar satellite estimations and in-situ measurements. The typical ash thermal infrared (TIR) retrieval will be integrated with a wider spectral range from visible (VIS) to microwave (MW), and ash detection will be extended also to cases of cloudy atmosphere or steam plumes. All the MACE ash products will be tested on three recent eruptions representative of different eruption styles in different clear or cloudy atmospheric conditions: Eyjafjallajokull (Iceland) 2010, Grimsvotn (Iceland) 2011 and Etna (Italy) 2011-2012. The MACE infrastructure will be suitable for implementation in the next generation of ESA Sentinel satellite missions.

  15. Aluminium in Infrastructures

    NARCIS (Netherlands)

    Maljaars, J.

    2016-01-01

    Aluminium alloys are used in infrastructures such as pedestrian bridges, or in parts of them such as handrails. This paper demonstrates that aluminium alloys are in principle also suited for heavily loaded structures, such as decks of traffic bridges and helicopter landing platforms. Recent developments in

  16. The COMET Sleep Research Platform.

    Science.gov (United States)

    Nichols, Deborah A; DeSalvo, Steven; Miller, Richard A; Jónsson, Darrell; Griffin, Kara S; Hyde, Pamela R; Walsh, James K; Kushida, Clete A

    2014-01-01

    The Comparative Outcomes Management with Electronic Data Technology (COMET) platform is extensible and designed for facilitating multicenter electronic clinical research. Our research goals were the following: (1) to conduct a comparative effectiveness trial (CET) for two obstructive sleep apnea treatments-positive airway pressure versus oral appliance therapy; and (2) to establish a new electronic network infrastructure that would support this study and other clinical research studies. The COMET platform was created to satisfy the needs of CET with a focus on creating a platform that provides comprehensive toolsets, multisite collaboration, and end-to-end data management. The platform also provides medical researchers the ability to visualize and interpret data using business intelligence (BI) tools. COMET is a research platform that is scalable and extensible, and which, in a future version, can accommodate big data sets and enable efficient and effective research across multiple studies and medical specialties. The COMET platform components were designed for an eventual move to a cloud computing infrastructure that enhances sustainability, overall cost effectiveness, and return on investment.

  17. A cyber infrastructure for the SKA Telescope Manager

    Science.gov (United States)

    Barbosa, Domingos; Barraca, João P.; Carvalho, Bruno; Maia, Dalmiro; Gupta, Yashwant; Natarajan, Swaminathan; Le Roux, Gerhard; Swart, Paul

    2016-07-01

    The Square Kilometre Array Telescope Manager (SKA TM) will be responsible for assisting the SKA Operations and Observation Management, carrying out system diagnosis and collecting Monitoring and Control data from the SKA subsystems and components. To provide adequate compute resources, scalability, operation continuity and high availability, as well as strict Quality of Service, the TM cyber-infrastructure (embodied in the Local Infrastructure - LINFRA) consists of COTS hardware and infrastructural software (for example: server monitoring software, host operating system, virtualization software, device firmware), providing a specially tailored Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) solution. The TM infrastructure provides services in the form of computational power, software-defined networking, power, storage abstractions, and high-level, state-of-the-art IaaS and PaaS management interfaces. This cyber platform will be tailored to each of the two SKA Phase 1 telescope instances (SKA_MID in South Africa and SKA_LOW in Australia), each presenting different computational and storage infrastructures and conditioned by location. This cyber platform will provide a compute model enabling TM to manage the deployment and execution of its multiple components (observation scheduler, proposal submission tools, M&C components, forensic tools, several databases, etc.). In this sense, the TM LINFRA is primarily focused on the provision of isolated instances, mostly resorting to virtualization technologies, while defaulting to bare hardware if specifically required due to performance, security, availability, or other requirements.

  18. Optimization of traffic distribution control in software-configurable infrastructure of virtual data center based on a simulation model

    Directory of Open Access Journals (Sweden)

    I. P. Bolodurina

    2017-01-01

    Full Text Available Currently, the share of cloud computing technology in companies' business processes is growing steadily. Although it reduces the cost of owning and operating IT infrastructure, a number of problems related to the control of data centers remain. One such problem is the efficient use of the compute and network resources available to companies. One direction of optimization is the control of the traffic of cloud applications and services in data centers. Given the multi-tier architecture of modern data centers, this problem is not quite trivial. The advantage of modern virtual infrastructure is the ability to use software-configurable networks and software-configurable data storage. However, existing algorithmic optimization solutions do not take into account a number of features of the network traffic formed by multiple classes of applications. Within this study we solved the problem of optimizing the distribution of the traffic of cloud applications and services for the software-controlled infrastructure of a virtual data center. A simulation model is described for the traffic in the data center and in the software-configurable network segments involved in processing user requests for applications and services located in a network environment that includes a heterogeneous cloud platform and software-configurable data storage. The developed model allowed us to implement a cloud-application traffic-management algorithm and optimize access to the storage system through effective use of the data transmission channel. Experimental studies found that applying the developed algorithm reduces the response time of cloud applications and services and, as a result, improves the performance of processing user requests and reduces the number of failures.
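The traffic-distribution problem can be illustrated with a greedy heuristic that assigns each transfer to the channel where it would finish earliest. This toy sketch stands in for the paper's simulation-based algorithm, and all numbers are hypothetical:

```python
def distribute_traffic(requests, bandwidths):
    """Greedy traffic distribution across storage channels: each
    transfer (size in MB) goes to the channel where it would complete
    earliest, given per-channel bandwidth (MB/s) and queued work."""
    loads = [0.0] * len(bandwidths)  # MB already queued per channel
    for size in sorted(requests, reverse=True):  # largest first
        i = min(range(len(bandwidths)),
                key=lambda c: (loads[c] + size) / bandwidths[c])
        loads[i] += size
    # Makespan: when the busiest channel finishes, a proxy for the
    # worst-case response time of a user request.
    return max(l / b for l, b in zip(loads, bandwidths))

# Two channels (10 MB/s and 5 MB/s), three transfers.
makespan = distribute_traffic([100, 100, 50], [10, 5])  # → 20.0
```

A simulation-model approach, as in the paper, would additionally model queueing and per-class traffic characteristics rather than a single static assignment.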

  19. IMPLEMENTATION OF CLOUD COMPUTING AS A COMPONENT OF THE UNIVERSITY IT INFRASTRUCTURE

    Directory of Open Access Journals (Sweden)

    Vasyl P. Oleksyuk

    2014-05-01

    Full Text Available The article investigates the concept of the IT infrastructure of a higher educational institution and describes models for deploying cloud technologies in IT infrastructure. The hybrid model is the most relevant for a higher educational institution. Unified authentication is an important component of IT infrastructure. The author suggests public (Google Apps, Office 365) and private (CloudStack, Eucalyptus, OpenStack) cloud platforms for deployment in the IT infrastructure of a higher educational institution. Open source platforms for organizing enterprise clouds are analyzed by the author. The article describes the experience of deploying an enterprise cloud in the IT infrastructure of the Department of Physics and Mathematics of Ternopil V. Hnatyuk National Pedagogical University.

  20. NiftyNet: a deep-learning platform for medical imaging.

    Science.gov (United States)

    Gibson, Eli; Li, Wenqi; Sudre, Carole; Fidon, Lucas; Shakir, Dzhoshkun I; Wang, Guotai; Eaton-Rosen, Zach; Gray, Robert; Doel, Tom; Hu, Yipeng; Whyntie, Tom; Nachev, Parashkev; Modat, Marc; Barratt, Dean C; Ourselin, Sébastien; Cardoso, M Jorge; Vercauteren, Tom

    2018-05-01

    Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this domain of application requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. The NiftyNet infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning applications. Components of the NiftyNet pipeline including data loading, data augmentation, network architectures, loss functions and evaluation metrics are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on the TensorFlow framework and supports features such as TensorBoard visualization of 2D and 3D images and computational graphs by default. We present three illustrative medical image analysis applications built using the NiftyNet infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. The NiftyNet infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new

  1. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions.

    Science.gov (United States)

    Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas

    2016-06-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on a variety of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web 2.0 standards (http://Distributome.org). We demonstrate two types of applications of the Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications: simulation, data analysis and inference, model fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of inter-distributional relations. Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the
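One class of inter-distribution relations such an infrastructure catalogs can be checked empirically in a few lines, for example that the minimum of n i.i.d. Exp(λ) variables is itself Exp(nλ). The snippet below is an independent illustration, not Distributome code:

```python
import random
import statistics

random.seed(42)

def min_of_exponentials(n, lam, trials=20000):
    """Sample the minimum of n i.i.d. Exp(lam) draws, `trials` times."""
    return [min(random.expovariate(lam) for _ in range(n))
            for _ in range(trials)]

# Relation under test: min of n Exp(lam) draws is Exp(n*lam), so the
# sample mean should be close to 1 / (n * lam) = 1 / 1.5.
mins = min_of_exponentials(n=3, lam=0.5)
print(round(statistics.mean(mins), 2))
```

The same simulate-and-compare pattern applies to other relations (sums of exponentials vs. gamma, binomial limits, etc.) that the Distributome navigator exposes graphically.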

  2. National platform for electromobility. Interim report of working group 3, Charging infrastructure and grid integration; Nationale Plattform Elektromobilitaet. Zwischenbericht der Arbeitsgruppe 3 Lade-Infrastruktur und Netzintegration

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Stefan [E.ON AG, Duesseldorf (Germany). Political Affairs and Communications Energy Mix, Environment, Efficiency; Ledwon, Martin [Siemens AG, Berlin (Germany). Government Affairs

    2010-07-01

    The contribution under consideration reports on the first intermediate results of working group 3, ''Charging infrastructure and grid integration'', of the national platform for electromobility. In addition to presenting the general objective of this working group, the following aspects are considered: (a) electromobility in the field of tension between the power supply system and renewable generation; (b) possible network loads due to the integration of electrically powered vehicles; (c) requirements concerning the charging infrastructure; (d) technology development of the charging point; (e) potentials of integrating electric vehicles into the smart grid; (f) a research and development roadmap. The contribution finishes with a presentation of a concrete implementation plan for the infrastructure demand.

  3. A prototype Infrastructure for Cloud-based distributed services in High Availability over WAN

    International Nuclear Information System (INIS)

    Bulfon, C.; De Salvo, A.; Graziosi, C.; Carlino, G.; Doria, A.; Pardi, S.; Sanchez, A.; Carboni, M.; Bolletta, P.; Puccio, L.; Capone, V.; Merola, L.

    2015-01-01

    In this work we present the architectural and performance studies concerning a prototype of a distributed Tier2 infrastructure for HEP, instantiated between the two Italian sites of INFN-Roma1 and INFN-Napoli. The network infrastructure is based on a Layer-2 geographical link, provided by the Italian NREN (GARR), directly connecting the two remote LANs of the named sites. By exploiting the possibilities offered by the new distributed file systems, a shared storage area with synchronous copy has been set up. The computing infrastructure, based on an OpenStack facility, is using a set of distributed Hypervisors installed in both sites. The main parameter to be taken into account when managing two remote sites with a single framework is the effect of the latency, due to the distance and the end-to-end service overhead. In order to understand the capabilities and limits of our setup, the impact of latency has been investigated by means of a set of stress tests, including data I/O throughput, metadata access performance evaluation and network occupancy, during the life cycle of a Virtual Machine. A set of resilience tests has also been performed, in order to verify the stability of the system in the event of hardware or software faults. The results of this work show that the reliability and robustness of the chosen architecture are effective enough to build a production system and to provide common services. This prototype can also be extended to multiple sites with small changes of the network topology, thus creating a National Network of Cloud-based distributed services, in HA over WAN. (paper)

  4. A prototype Infrastructure for Cloud-based distributed services in High Availability over WAN

    Science.gov (United States)

    Bulfon, C.; Carlino, G.; De Salvo, A.; Doria, A.; Graziosi, C.; Pardi, S.; Sanchez, A.; Carboni, M.; Bolletta, P.; Puccio, L.; Capone, V.; Merola, L.

    2015-12-01

    In this work we present the architectural and performance studies concerning a prototype of a distributed Tier2 infrastructure for HEP, instantiated between the two Italian sites of INFN-Roma1 and INFN-Napoli. The network infrastructure is based on a Layer-2 geographical link, provided by the Italian NREN (GARR), directly connecting the two remote LANs of the named sites. By exploiting the possibilities offered by the new distributed file systems, a shared storage area with synchronous copy has been set up. The computing infrastructure, based on an OpenStack facility, is using a set of distributed Hypervisors installed in both sites. The main parameter to be taken into account when managing two remote sites with a single framework is the effect of the latency, due to the distance and the end-to-end service overhead. In order to understand the capabilities and limits of our setup, the impact of latency has been investigated by means of a set of stress tests, including data I/O throughput, metadata access performance evaluation and network occupancy, during the life cycle of a Virtual Machine. A set of resilience tests has also been performed, in order to verify the stability of the system in the event of hardware or software faults. The results of this work show that the reliability and robustness of the chosen architecture are effective enough to build a production system and to provide common services. This prototype can also be extended to multiple sites with small changes of the network topology, thus creating a National Network of Cloud-based distributed services, in HA over WAN.

  5. Open Polar Server (OPS)—An Open Source Infrastructure for the Cryosphere Community

    Directory of Open Access Journals (Sweden)

    Weibo Liu

    2016-03-01

Full Text Available The Center for Remote Sensing of Ice Sheets (CReSIS) at the University of Kansas has collected approximately 1000 terabytes (TB) of radar depth-sounding data over the Arctic and Antarctic ice sheets since 1993, in an effort to map the thickness of the ice sheets and ultimately understand the impacts of climate change and sea level rise. In addition to data collection, the storage, management, and public distribution of the dataset are also primary roles of CReSIS. The Open Polar Server (OPS) project developed a free and open source infrastructure to store, manage, analyze, and distribute the data collected by CReSIS, in an effort to replace its current data storage and distribution approach. The OPS infrastructure includes a spatial database management system (DBMS), a map and web server, a JavaScript geoportal, and a MATLAB application programming interface (API) for the inclusion of data created by the cryosphere community. Open source software including GeoServer, PostgreSQL, PostGIS, OpenLayers, ExtJS, GeoExt and others is used to build a system that modernizes CReSIS data distribution for the entire cryosphere community and creates a flexible platform for future development. Usability analysis demonstrates that the OPS infrastructure provides an improved end-user experience. In addition, interpolating glacier topography is provided as an application example of the system.

  6. JACoW A new distributed control system for the consolidation of the CERN tertiary infrastructures

    CERN Document Server

    Scibile, Luigi; Villeton Pachot, Patrick

    2018-01-01

The operation of the CERN tertiary infrastructures is carried out via a series of control systems distributed over the CERN sites (Meyrin and Prevessin). The scope comprises ~260 buildings, 2 large heating plants (~50 MW overall capacity) with a 27 km heating network and 200 radiator circuits, ~500 air handling units, ~52 chillers, ~300 split systems, ~3000 electric boards and ~100k light points. In the last five years, with the launch of major tertiary infrastructure consolidations, CERN has been carrying out a migration and extension of the old control systems, dating back to the 1970s, 80s and 90s, to a new simplified yet innovative distributed control system aimed at minimizing the programming and implementation effort, standardizing equipment and methods, and reducing lifecycle costs. This new methodology allows rapid development and simplified integration of the newly controlled building/infrastructure processes. The basic principle is based on open standards...

  7. Monitoring performance of a highly distributed and complex computing infrastructure in LHCb

    Science.gov (United States)

    Mathe, Z.; Haen, C.; Stagni, F.

    2017-10-01

In order to ensure optimal performance of LHCb Distributed Computing, based on LHCbDIRAC, it is necessary to be able to inspect the behavior over time of many components: firstly the agents and services on which the infrastructure is built, but also all the computing tasks and data transfers that are managed by this infrastructure. This consists of recording and then analyzing time series of a large number of observables, for which the use of SQL relational databases is far from optimal. Within DIRAC we have therefore been studying novel possibilities based on NoSQL databases (ElasticSearch, OpenTSDB and InfluxDB); as a result of this study we developed a new monitoring system based on ElasticSearch. It has been deployed on the LHCb Distributed Computing infrastructure, where it collects data from all the components (agents, services, jobs) and allows creating reports through Kibana and a web user interface based on the DIRAC web framework. In this paper we describe this new implementation of the DIRAC monitoring system. We give details of the ElasticSearch implementation within the general DIRAC framework, as well as an overview of the advantages of the pipeline aggregation used for creating a dynamic bucketing of the time series. We present the advantages of using the ElasticSearch DSL high-level library for creating and running queries. Finally, we present the performance of the system.
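The "dynamic bucketing" that the record attributes to ElasticSearch pipeline aggregations can be illustrated in plain Python: choose a bucket width from the queried time span so that the number of returned buckets stays bounded, then average each observable per bucket. A hypothetical sketch of the idea, not the DIRAC code:

```python
from math import ceil

def dynamic_buckets(timestamps, values, span_s, max_buckets=100):
    """Average `values` into fixed-width time buckets whose width is
    derived from the queried span, so at most `max_buckets` buckets
    come back regardless of how long a period is requested."""
    width = max(1, ceil(span_s / max_buckets))
    sums, counts = {}, {}
    for t, v in zip(timestamps, values):
        b = t // width                      # bucket index for this sample
        sums[b] = sums.get(b, 0.0) + v
        counts[b] = counts.get(b, 0) + 1
    return width, {b * width: sums[b] / counts[b] for b in sorted(sums)}
```

For a 1000 s query with at most 100 buckets, the width becomes 10 s; a week-long query over the same data would simply return coarser buckets instead of more of them.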

  8. Using Cloud Services for Library IT Infrastructure

    OpenAIRE

    Erik Mitchell

    2010-01-01

    Cloud computing comes in several different forms and this article documents how service, platform, and infrastructure forms of cloud computing have been used to serve library needs. Following an overview of these uses the article discusses the experience of one library in migrating IT infrastructure to a cloud environment and concludes with a model for assessing cloud computing.

  9. Modelling Reliability of Supply and Infrastructural Dependency in Energy Distribution Systems

    OpenAIRE

    Helseth, Arild

    2008-01-01

This thesis presents methods and models for assessing reliability of supply and infrastructural dependency in energy distribution systems with multiple energy carriers. The three energy carriers of electric power, natural gas and district heating are considered. Models and methods for assessing reliability of supply in electric power systems are well documented, frequently applied in the industry and continuously subject to research and improvement. In contrast, there are compar...

  10. Getting to Gender Equality in Energy Infrastructure : Lessons from Electricity Generation, Transmission, and Distribution Projects

    OpenAIRE

    Orlando, Maria Beatriz; Janik, Vanessa Lopes; Vaidya, Pranav; Angelou, Nicolina; Zumbyte, Ieva; Adams, Norma

    2018-01-01

Getting to Gender Equality in Energy Infrastructure: Lessons from Electricity Generation, Transmission, and Distribution Projects examines the social and gender footprint of large-scale electricity generation, transmission, and distribution projects to establish a foundation on which further research and replication of good practices can be built. The main impact pathways analyzed are...

  11. The Global Sensor Web: A Platform for Citizen Science (Invited)

    Science.gov (United States)

    Simons, A. L.

    2013-12-01

The Global Sensor Web (GSW) is an effort to provide an infrastructure for collecting, sharing and visualizing sensor data from around the world. Over the past three years the GSW has been developed and tested as a standardized platform for citizen science. The most developed of the citizen science projects built on the GSW has been the Distributed Electronic Cosmic-ray Observatory (DECO), an Android application designed to harness a global network of mobile devices to detect the origin and behavior of cosmic radiation. Other projects that can readily be built on top of the GSW platform are also discussed. [Figure: a cosmic-ray track candidate captured on a cell phone camera.]

  12. Next-generation navigational infrastructure and the ATLAS event store

    International Nuclear Information System (INIS)

    Gemmeren, P van; Malon, D; Nowak, M

    2014-01-01

The ATLAS event store employs a persistence framework with extensive navigational capabilities. These include real-time back navigation to upstream processing stages, externalizable data object references, navigation from any data object to any other both within a single file and across files, and more. The 2013-2014 shutdown of the Large Hadron Collider provides an opportunity to enhance this infrastructure in several ways that both extend these capabilities and allow the collaboration to better exploit emerging computing platforms. Enhancements include a redesign with efficient file merging in mind, content-based indices in optimized reference types, and support for forward references. The latter provide the potential to construct valid references to data before those data are written, a capability that is useful in a variety of multithreading, multiprocessing, distributed processing, and deferred processing scenarios. This paper describes the architecture and design of the next generation of ATLAS navigational infrastructure.
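The notion of a forward reference (a reference that is valid before the referenced data is written) can be sketched with a toy token registry; the class and method names below are illustrative, not the ATLAS persistence API:

```python
class RefStore:
    """Toy registry illustrating forward references: a token can be
    handed out (and embedded in other objects) before the object it
    names has been written, and resolved once the write happens."""

    def __init__(self):
        self._objects = {}
        self._next_token = 0

    def reserve(self):
        """Mint a reference to data that does not exist yet."""
        token = self._next_token
        self._next_token += 1
        return token

    def write(self, token, obj):
        """Write the data a previously minted reference points at."""
        self._objects[token] = obj

    def resolve(self, token):
        if token not in self._objects:
            raise KeyError("forward reference not yet written")
        return self._objects[token]
```

In a deferred-processing scenario, one worker can reserve tokens and hand them to consumers while another worker writes the objects later; resolution only has to wait until the corresponding write has happened.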

  13. Logistic platforms and transfer poles: between conformation and utility

    Directory of Open Access Journals (Sweden)

    Catrinel Elena Cotae

    2013-09-01

Full Text Available Considering the current trend in sustainable regional development, transfer nodes and logistic platforms represent an essential component of transport systems, playing an important part in the global distribution of merchandise and having a binding role in the current urban and regional system. At the same time, they are relevant segments of a functional regional economy. Moreover, optimizing transport systems for freight distribution will allow further regional and territorial development. Any changes implemented in the existing transport systems will most likely affect the companies that take advantage of that infrastructure, generating either growth or shrinkage in the local GDP.

  14. Design and Implementation of Cloud Platform for Intelligent Logistics in the Trend of Intellectualization

    Institute of Scientific and Technical Information of China (English)

    Mengke Yang; Movahedipour Mahmood; Xiaoguang Zhou; Salam Shafaq; Latif Zahid

    2017-01-01

Intellectualization has become a new trend for the telecom industry, driven by intelligent technology including cloud computing, big data, and the Internet of Things. In order to satisfy the service demand of intelligent logistics, this paper designs an intelligent logistics platform containing the main applications such as e-commerce, self-service transceivers, big data analysis, path location and distribution optimization. The intelligent logistics service platform has been built on cloud computing to collect, store and handle multi-source heterogeneous mass data from sensors, RFID electronic tags, vehicle terminals and apps, so that open-access cloud services including distribution, positioning, navigation, scheduling and other data services can be provided for logistics distribution applications. The architecture of the intelligent logistics cloud platform, containing a software layer (SaaS), platform layer (PaaS) and infrastructure layer (IaaS), has then been constructed in accordance with the core technologies of high-concurrency processing, heterogeneous terminal data access, encapsulation and data mining. The intelligent logistics cloud platform can thus be operated in service mode to accelerate the construction of a symbiotic, win-win logistics ecosystem and the benign development of the ICT industry in the trend of intellectualization in China.

  15. Implementation of a Real-Time Microgrid Simulation Platform Based on Centralized and Distributed Management

    Directory of Open Access Journals (Sweden)

    Omid Abrishambaf

    2017-06-01

Full Text Available Demand response and distributed generation are key components of power systems. Several challenges are raised at both the technical and business model levels for integration of those resources in smart grids and microgrids. The implementation of a distribution network as a test bed can be difficult and not cost-effective; using computational modeling alone is not sufficient for producing realistic results. Real-time simulation allows us to validate the business model's impact at the technical level. This paper presents a platform supporting the real-time simulation of a microgrid connected to a larger distribution network. The implemented platform allows us to use both centralized and distributed energy resource management. Using an optimization model for the energy resource operation, a virtual power player manages all the available resources. The simulation platform then allows us to technically validate the actual implementation of the requested demand reduction in the scope of demand response programs. The case study has 33 buses, 220 consumers, and 68 distributed generators. It demonstrates the impact of demand response events, also performing resource management in the presence of an energy shortage.
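A virtual power player validating a requested demand reduction could, in the simplest case, curtail loads greedily by priority until the target is met. A hypothetical sketch of that idea (the paper's optimization model is more elaborate):

```python
def plan_curtailment(loads, requested_kw):
    """Greedy demand-response plan: shed the lowest-priority loads first
    until the requested reduction is met.  `loads` holds tuples of
    (name, kw, priority); a higher priority means 'keep longer'."""
    plan, remaining = [], requested_kw
    for name, kw, _priority in sorted(loads, key=lambda load: load[2]):
        if remaining <= 0:
            break
        cut = min(kw, remaining)           # shed all or part of this load
        plan.append((name, cut))
        remaining -= cut
    return plan, max(0, remaining)         # leftover > 0: target not met
```

A non-zero leftover signals that the requested reduction cannot be delivered from the available flexible loads, the kind of condition a real-time simulation of an energy-shortage event would expose.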

  16. WRF4G project: Adaptation of WRF Model to Distributed Computing Infrastructures

    Science.gov (United States)

    Cofino, Antonio S.; Fernández Quiruelas, Valvanuz; García Díez, Markel; Blanco Real, Jose C.; Fernández, Jesús

    2013-04-01

Nowadays Grid computing is a powerful computational tool ready to be used by the scientific community in different areas (such as biomedicine, astrophysics, climate, etc.). However, the use of these distributed computing infrastructures (DCIs) is not yet common practice in climate research, and only a few teams and applications in this area take advantage of them. Thus, the first objective of this project is to popularize the use of this technology in the atmospheric sciences. In order to achieve this objective, one of the most widely used applications has been taken (WRF, a limited-area model, successor of the MM5 model), which has a user community of more than 8000 researchers worldwide. This community develops its research activity in different areas and could benefit from the advantages of Grid resources (case-study simulations, regional hindcast/forecast, sensitivity studies, etc.). The WRF model is also used as input by the energy and natural-hazards communities, which will therefore benefit as well. However, Grid infrastructures have some drawbacks for the execution of applications that make intensive use of CPU and memory over a long period of time. This makes it necessary to develop a specific framework (middleware). This middleware encapsulates the application and provides appropriate services for the monitoring and management of the jobs and the data. Thus, the second objective of the project consists of developing a generic adaptation of WRF to the Grid (WRF4G), to be distributed as open source and integrated in the official WRF development cycle. The use of this WRF adaptation should be transparent and useful for any of the previously described studies, avoiding the problems of the Grid infrastructure. Moreover, it should simplify access to Grid infrastructures for research teams and free them from the technical and computational aspects of using the Grid.
Finally, in order to

  17. Towards the Smart World. Smart Platform: Infrastructure and Analytics

    CSIR Research Space (South Africa)

    Velthausz, D

    2012-10-01

    Full Text Available In this presentation the author outlines the 'smart world' concept and how technology (smart infrastructure, analytics) can foster smarter cities, smarter regions and a smarter world....

  18. Development Model for Research Infrastructures

    Science.gov (United States)

    Wächter, Joachim; Hammitzsch, Martin; Kerschke, Dorit; Lauterjung, Jörn

    2015-04-01

    Research infrastructures (RIs) are platforms integrating facilities, resources and services used by the research communities to conduct research and foster innovation. RIs include scientific equipment, e.g., sensor platforms, satellites or other instruments, but also scientific data, sample repositories or archives. E-infrastructures on the other hand provide the technological substratum and middleware to interlink distributed RI components with computing systems and communication networks. The resulting platforms provide the foundation for the design and implementation of RIs and play an increasing role in the advancement and exploitation of knowledge and technology. RIs are regarded as essential to achieve and maintain excellence in research and innovation crucial for the European Research Area (ERA). The implementation of RIs has to be considered as a long-term, complex development process often over a period of 10 or more years. The ongoing construction of Spatial Data Infrastructures (SDIs) provides a good example for the general complexity of infrastructure development processes especially in system-of-systems environments. A set of directives issued by the European Commission provided a framework of guidelines for the implementation processes addressing the relevant content and the encoding of data as well as the standards for service interfaces and the integration of these services into networks. Additionally, a time schedule for the overall construction process has been specified. As a result this process advances with a strong participation of member states and responsible organisations. Today, SDIs provide the operational basis for new digital business processes in both national and local authorities. Currently, the development of integrated RIs in Earth and Environmental Sciences is characterised by the following properties: • A high number of parallel activities on European and national levels with numerous institutes and organisations participating

  19. Biodiversity information platforms: From standards to interoperability

    Directory of Open Access Journals (Sweden)

    Walter Berendsohn

    2011-11-01

Full Text Available One of the most serious bottlenecks in the scientific workflows of the biodiversity sciences is the need to integrate data from different sources, software applications, and services for analysis, visualisation and publication. For more than a quarter of a century the TDWG Biodiversity Information Standards organisation has played a central role in defining and promoting data standards and protocols supporting interoperability between disparate and locally distributed systems. Although often not sufficiently recognized, TDWG standards are the foundation of many popular biodiversity informatics applications and infrastructures, ranging from small desktop software solutions to large-scale international data networks. However, individual scientists and groups of collaborating scientists have difficulties in fully exploiting the potential of standards that are often notoriously complex, lack non-technical documentation, and use different representations and underlying technologies. In the last few years, a series of initiatives such as Scratchpads, the EDIT Platform for Cybertaxonomy, and biowikifarm have started to implement and set up virtual work platforms for the biodiversity sciences which shield their users from the complexity of the underlying standards. Apart from being practical work-horses for numerous working processes related to the biodiversity sciences, they can be seen as information brokers mediating information between multiple data standards and protocols. The ViBRANT project will further strengthen the flexibility and power of virtual biodiversity working platforms by building software interfaces between them, thus facilitating the essential information flows needed for comprehensive data exchange, data indexing, web publication, and versioning. This work will make an important contribution to the shaping of an international, interoperable, and user-oriented biodiversity information infrastructure.

  20. Advanced e-Infrastructures for Civil Protection applications: the CYCLOPS Project

    Science.gov (United States)

    Mazzetti, P.; Nativi, S.; Verlato, M.; Ayral, P. A.; Fiorucci, P.; Pina, A.; Oliveira, J.; Sorani, R.

    2009-04-01

During the full cycle of emergency management, Civil Protection operating procedures involve many actors belonging to several institutions (civil protection agencies, public administrations, research centers, etc.) playing different roles (decision-makers, data and service providers, emergency squads, etc.). In this context the sharing of information is a vital requirement for making correct and effective decisions. A European-wide technological infrastructure providing distributed and coordinated access to different kinds of resources (data, information, services, expertise, etc.) could therefore enhance existing Civil Protection applications and even enable new ones. Such a European Civil Protection e-Infrastructure should be designed taking into account the specific requirements of Civil Protection applications and the state of the art in the scientific and technological disciplines that could make emergency management more effective. In recent years Grid technologies have reached a mature state, providing a platform for secure and coordinated resource sharing between participants collected in so-called Virtual Organizations. Moreover, Earth and Space Science Informatics provides the conceptual tools for modeling the geospatial information shared in Civil Protection applications during its entire lifecycle. A European Civil Protection e-infrastructure might therefore be based on a Grid platform enhanced with Earth Science services. In the context of the 6th Framework Programme, the EU co-funded project CYCLOPS (CYber-infrastructure for CiviL protection Operative ProcedureS), which ended in December 2008, addressed the problem of defining the requirements and identifying the research strategies and innovation guidelines towards an advanced e-Infrastructure for Civil Protection. Starting from the requirements analysis, CYCLOPS has proposed an architectural framework for a European Civil Protection e-Infrastructure.
This architectural framework has

  1. Towards a Unified Global ICT Infrastructure

    DEFF Research Database (Denmark)

    Madsen, Ole Brun

    2006-01-01

A successful evolution towards a unified global WAN platform allowing for the coexistence and interoperability of all kinds of services requires careful planning of the next-generation global cooperative wired and wireless information infrastructure. The absence of commonly agreed upon and adopted...... to be solved can be found in the interrelation between communication, connectivity and convergence. This paper will focus on steps to be taken in planning the physical infrastructure as a prerequisite for a successful evolution....

  2. Second report of the national platform electromobility. Appendix; Zweiter Bericht der Nationalen Plattform Elektromobilitaet. Anhang

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-05-15

In the National Platform Electromobility, representatives from industry, science, politics, unions and societies in Germany agreed on a systematic, market-oriented and technology-open approach to becoming the lead supplier and lead market for electromobility by the year 2020. The appendix of the second report of the National Platform Electromobility considers the following aspects: (1) Content and criteria for the identification and evaluation of the emphases of flagship projects; (2) Flagship projects of the national conference on education; (3) Review of standardization activities; (4) A consistent design of public charging infrastructure; (5) Securing the distribution and financing of the public charging infrastructure; (6) Vehicle costs and willingness to pay; (7) Criteria for tenders and awarding showcase projects; (8) Innovation promotion in Germany for renewable electromobility (I.D.E.E.), the standpoint of the WWF, the climate alliance of the local authorities and the federal association for renewable energies.

  3. Helix Nebula: Enabling federation of existing data infrastructures and data services to an overarching cross-domain e-infrastructure

    Science.gov (United States)

    Lengert, Wolfgang; Farres, Jordi; Lanari, Riccardo; Casu, Francesco; Manunta, Michele; Lassalle-Balier, Gerard

    2014-05-01

Helix Nebula has established a growing public-private partnership of more than 30 commercial cloud providers, SMEs, and publicly funded research organisations and e-infrastructures. The Helix Nebula strategy is to establish a federated cloud service across Europe. Three high-profile flagships, sponsored by CERN (high energy physics), EMBL (life sciences) and ESA/DLR/CNES/CNR (earth science), have been deployed and extensively tested within this federated environment. The commitments behind these initial flagships have created a critical mass that attracts suppliers and users to the initiative, to work together towards an "Information as a Service" market place. Significant progress has been achieved in implementing the following four programmatic goals (as outlined in the Strategic Plan, Ref. 1):
• Goal #1: Establish a cloud computing infrastructure for the European Research Area (ERA), serving as a platform for innovation and evolution of the overall infrastructure.
• Goal #2: Identify and adopt suitable policies for trust, security and privacy that can be provided on a European level by the European cloud computing framework and infrastructure.
• Goal #3: Create a light-weight governance structure for the future European cloud computing infrastructure that involves all the stakeholders and can evolve over time as the infrastructure, services and user base grow.
• Goal #4: Define a funding scheme involving the three stakeholder groups (service suppliers, users, EC and national funding agencies) in a public-private partnership model to implement a cloud computing infrastructure that delivers a sustainable business environment adhering to European-level policies.
Now, in 2014, a first version of this generic cross-domain e-infrastructure is ready to go into operations, building on a federation of European industry and contributors (data, tools, knowledge, ...). This presentation describes how Helix Nebula is being used in the domain of earth science, focusing on geohazards. The

  4. Abstracting application deployment on Cloud infrastructures

    Science.gov (United States)

    Aiftimiei, D. C.; Fattibene, E.; Gargana, R.; Panella, M.; Salomoni, D.

    2017-10-01

Deploying a complex application on a Cloud-based infrastructure can be a challenging task. In this contribution we present an approach for Cloud-based deployment of applications and its present or future implementation in the framework of several projects, such as "!CHAOS: a cloud of controls" [1], a project funded by MIUR (the Italian Ministry of Research and Education) to create a Cloud-based deployment of a control system and data acquisition framework; "INDIGO-DataCloud" [2], an EC H2020 project targeting, among other things, high-level deployment of applications on hybrid Clouds; and "Open City Platform" [3], an Italian project aiming to provide open Cloud solutions for Italian Public Administrations. We chose to use an orchestration service to hide the complex deployment of the application components, and to build an abstraction layer on top of the orchestration one. Through the Heat [4] orchestration service, we prototyped a dynamic, on-demand, scalable platform of software components based on OpenStack infrastructures. On top of the orchestration service we developed a prototype of a web interface exploiting the Heat APIs. The user can start an instance of the application without having knowledge of the underlying Cloud infrastructure and services. Moreover, the platform instance can be customized by choosing parameters related to the application, such as the size of a file system or the number of instances in a NoSQL DB cluster. As soon as the desired platform is running, the web interface offers the possibility to scale some infrastructure components. In this contribution we describe the solution design and implementation, based on the application requirements, the details of the development of both the Heat templates and the web interface, together with possible exploitation strategies of this work in Cloud data centers.
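Since JSON is valid YAML, a minimal Heat Orchestration Template of the kind described above can be sketched as a Python dict and serialised for Heat. `heat_template_version` and the `OS::Nova::Server` resource type are real HOT constructs; the parameter, flavor and image names below are placeholders for illustration, not values from the projects cited:

```python
import json

# Minimal HOT template as a Python dict.  JSON is a subset of YAML, so
# the serialised form is syntactically a valid template; the flavor and
# image values are placeholders.
template = {
    "heat_template_version": "2016-10-14",
    "parameters": {
        "db_cluster_size": {"type": "number", "default": 3},
    },
    "resources": {
        "app_server": {
            "type": "OS::Nova::Server",
            "properties": {
                "flavor": "m1.small",
                "image": "app-image-placeholder",
            },
        },
    },
}

hot_json = json.dumps(template, indent=2)
```

A web layer of the kind the contribution describes would expose only `parameters` (here, the hypothetical `db_cluster_size`) to the user and hand the filled-in template to the Heat API.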

  5. Space and Ground-Based Infrastructures

    Science.gov (United States)

    Weems, Jon; Zell, Martin

    This chapter deals first with the main characteristics of the space environment, outside and inside a spacecraft. Then the space and space-related (ground-based) infrastructures are described. The most important infrastructure is the International Space Station, which holds many European facilities (for instance the European Columbus Laboratory). Some of them, such as the Columbus External Payload Facility, are located outside the ISS to benefit from external space conditions. There is only one other example of orbital platforms, the Russian Foton/Bion Recoverable Orbital Capsule. In contrast, non-orbital weightless research platforms, although limited in experimental time, are more numerous: sounding rockets, parabolic flight aircraft, drop towers and high-altitude balloons. In addition to these facilities, there are a number of ground-based facilities and space simulators, for both life sciences (for instance: bed rest, clinostats) and physical sciences (for instance: magnetic compensation of gravity). Hypergravity can also be provided by human and non-human centrifuges.

  6. gLibrary/DRI: A grid-based platform to host multiple repositories for digital content

    International Nuclear Information System (INIS)

    Calanducci, A.; Gonzalez Martin, J. M.; Ramos Pollan, R.; Rubio del Solar, M.; Tcaci, S.

    2007-01-01

In this work we present the gLibrary/DRI (Digital Repositories Infrastructure) platform. gLibrary/DRI extends gLibrary, a system with an easy-to-use web front-end designed to save and organize multimedia assets on Grid-based storage resources. The main goal of the extended platform is to reduce the cost, in terms of time and effort, that a repository provider spends to get its repository deployed. This is achieved by providing a common infrastructure and a set of mechanisms (APIs and specifications) that repository providers use to define the data model, the access to the content (via navigation trees and filters) and the storage model. DRI offers a generic way to provide all this functionality; nevertheless, providers can add behaviours specific to their repositories on top of the default functions. The architecture is Grid-based (VO system, data federation and distribution, computing power, etc.). A working example based on a mammogram repository is also presented. (Author)

  7. Enhancing Trusted Cloud Computing Platform for Infrastructure as a Service

    Directory of Open Access Journals (Sweden)

    KIM, H.

    2017-02-01

Full Text Available The characteristics of cloud computing, including on-demand self-service, resource pooling, and rapid elasticity, have made it grow in popularity. However, security concerns still obstruct widespread adoption of cloud computing in industry. In particular, security risks related to virtual machines make cloud users worry about exposure of their private data in IaaS environments. In this paper, we propose an enhanced trusted cloud computing platform to provide confidentiality and integrity of the user's data and computation. The presented platform provides secure and efficient virtual machine management protocols, not only to protect against eavesdropping and tampering during transfer but also to guarantee that a virtual machine is hosted only on trusted cloud nodes, defending against inside attackers. The protocols combine symmetric-key operations and public-key operations with an efficient node authentication model, so both the computational cost of the cryptographic operations and the number of communication steps are significantly reduced. As a result, simulation shows that the performance of the proposed platform is approximately double that of previous platforms. The proposed platform eliminates the above worries of cloud users by providing confidentiality and integrity of their private data with better performance, and thus contributes to wider industry adoption of cloud computing.
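The symmetric half of such a protocol (detecting tampering of a virtual machine image during transfer) can be sketched with the Python standard library. This is an illustrative integrity check only, not the paper's protocol, which also encrypts the image and uses public-key operations for node authentication:

```python
import hashlib
import hmac
import secrets

TAG_LEN = 32  # bytes in an HMAC-SHA256 tag

def protect(image: bytes, key: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so tampering in transit is detectable."""
    return image + hmac.new(key, image, hashlib.sha256).digest()

def verify(blob: bytes, key: bytes) -> bytes:
    """Check the tag and return the image, or raise on tampering."""
    image, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(key, image, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("VM image failed integrity check")
    return image

key = secrets.token_bytes(32)  # shared between source and destination node
```

`hmac.compare_digest` is used instead of `==` so the tag comparison runs in constant time, closing a timing side channel.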

  8. PRINCIPLES OF MODERN UNIVERSITY "ACADEMIC CLOUD" FORMATION BASED ON OPEN SOFTWARE PLATFORM

    Directory of Open Access Journals (Sweden)

    Olena H. Hlazunova

    2014-09-01

    Full Text Available In the article, approaches to the use of cloud technologies in teaching higher education students are analyzed. The essence of the concept of an "academic cloud" and its structural elements are substantiated. A model of the academic cloud of a modern university, operating on the basis of open software platforms, is proposed. Examples of functional software and platforms that meet students' needs for e-learning resources are given. Deployment models for a cloud-oriented environment in higher education are analyzed: private cloud, infrastructure as a service and platform as a service. The costs of deploying an "academic cloud" on the institution's own infrastructure versus leasing a vendor's infrastructure are compared.

  9. German electric vehicle charging infrastructure: statistically based approach to derive the demand and geographical distribution of charging points

    OpenAIRE

    González Villafranca, Sara

    2013-01-01

    Electromobility is widely seen as one of the most promising options for reducing greenhouse gas emissions in passenger transport. According to the German Government's National Platform for Electromobility (NPE), a target of 1 million electric vehicles is expected for Germany by 2020. One challenge for the widespread development of the electric vehicle market is the lack of charging infrastructure. The great unknowns here are: how many charging stations will be needed in the future...

  10. New Features in the Computational Infrastructure for Nuclear Astrophysics

    International Nuclear Information System (INIS)

    Smith, Michael Scott; Lingerfelt, Eric; Scott, J. P.; Nesaraja, Caroline D; Chae, Kyung YuK.; Koura, Hiroyuki; Roberts, Luke F.; Hix, William Raphael; Bardayan, Daniel W.; Blackmon, Jeff C.

    2006-01-01

    A Computational Infrastructure for Nuclear Astrophysics has been developed to streamline the inclusion of the latest nuclear physics data in astrophysics simulations. The infrastructure consists of a platform-independent suite of computer codes that are freely available online at http://nucastrodata.org. The newest features of, and future plans for, this software suite are given.

  11. Internet of things platforms in support of smart cities infrastructures

    CSIR Research Space (South Africa)

    Dlodlo, N

    2013-09-01

    Full Text Available the real and virtual worlds. This has been made possible through the development of IoT platforms. A city is referred to as ‘smart’ if it integrates smart objects into its products and services. The challenge is to integrate IoT platforms into the smart...

  12. Data Publishing Services in a Scientific Project Platform

    Science.gov (United States)

    Schroeder, Matthias; Stender, Vivien; Wächter, Joachim

    2014-05-01

    Data-intensive science lives from data. More and more interdisciplinary projects are aligned to mutually gain access to each other's data, models and results. To achieve this, the umbrella project GLUES was established in the context of the "Sustainable Land Management" (LAMA) initiative funded by the German Federal Ministry of Education and Research (BMBF). The GLUES (Global Assessment of Land Use Dynamics, Greenhouse Gas Emissions and Ecosystem Services) project supports several regional projects of the LAMA initiative: within the framework of GLUES, a Spatial Data Infrastructure (SDI) is implemented to facilitate publishing, sharing and maintenance of distributed global and regional scientific data sets as well as model results. The GLUES SDI supports several OGC web services, such as the Catalogue Service for the Web (CSW), which enables it to harvest data from the various regional projects. One of these regional projects is SuMaRiO (Sustainable Management of River Oases along the Tarim River), which aims to support oasis management along the Tarim River (PR China) under conditions of climatic and societal change. SuMaRiO itself is an interdisciplinary and spatially distributed project: working groups from twelve German institutes and universities are collecting data and driving their research in disciplines such as hydrology, remote sensing and the agricultural sciences, among others. Each working group is dependent on the results of another working group. Due to the spatial distribution of the participating institutes, data distribution is handled using the eSciDoc infrastructure at the German Research Centre for Geosciences (GFZ). Further, the metadata-based data exchange platform PanMetaDocs will be used collaboratively by the participants. PanMetaDocs supports an OAI-PMH interface, which enables an open-source metadata portal such as GeoNetwork to harvest the information. The data added in PanMetaDocs can be labeled with a DOI (Digital Object Identifier) to publish the data and to
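    The OAI-PMH harvesting mentioned above is a plain HTTP/XML protocol. As a generic illustration (not the GLUES or PanMetaDocs code), this stdlib sketch builds a `ListRecords` request URL and extracts record identifiers from a response document; the endpoint URL and the sample record are hypothetical.

    ```python
    import urllib.parse
    import xml.etree.ElementTree as ET

    OAI_NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}

    def list_records_url(base_url: str, metadata_prefix: str = "oai_dc") -> str:
        """Build an OAI-PMH ListRecords request URL."""
        query = urllib.parse.urlencode({"verb": "ListRecords",
                                        "metadataPrefix": metadata_prefix})
        return f"{base_url}?{query}"

    def record_identifiers(response_xml: str) -> list[str]:
        """Extract record identifiers from a ListRecords response."""
        root = ET.fromstring(response_xml)
        return [h.findtext("oai:identifier", namespaces=OAI_NS)
                for h in root.iter("{http://www.openarchives.org/OAI/2.0/}header")]

    # Hypothetical endpoint and a minimal sample response:
    url = list_records_url("https://example.org/oai")
    sample = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
      <ListRecords>
        <record><header><identifier>oai:example.org:doi-10.1234/abc</identifier></header></record>
      </ListRecords>
    </OAI-PMH>"""
    print(url)                        # the ListRecords request a harvester would issue
    print(record_identifiers(sample)) # identifiers a portal like GeoNetwork would collect
    ```

    A real harvester would additionally follow `resumptionToken` elements to page through large repositories.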

  13. Pro Smartphone Cross-Platform Development IPhone, Blackberry, Windows Mobile, and Android Development and Distribution

    CERN Document Server

    Allen, Sarah; Lundrigan, Lee

    2010-01-01

    Learn the theory behind cross-platform development, and put the theory into practice with code using the invaluable information presented in this book. With in-depth coverage of development and distribution techniques for iPhone, BlackBerry, Windows Mobile, and Android, you'll learn the native approach to working with each of these platforms. With detailed coverage of emerging frameworks like PhoneGap and Rhomobile, you'll learn the art of creating applications that will run across all devices. You'll also be introduced to the code-signing process and the distribution of applications through t

  14. PACS infrastructure supporting e-learning

    Energy Technology Data Exchange (ETDEWEB)

    Mildenberger, Peter, E-mail: milden@radiologie.klinik.uni-mainz.de [University Medicine Mainz, Johannes Gutenberg-University Mainz, Langenbeckstr 1, Mainz (Germany); Brueggemann, Kerstin; Roesner, Freya; Koch, Katja; Ahlers, Christopher [University Medicine Mainz, Johannes Gutenberg-University Mainz, Langenbeckstr 1, Mainz (Germany)

    2011-05-15

    Digital imaging is becoming predominant in radiology. This has implications for teaching support, because conventional film-based concepts are now obsolete. The IHE Teaching File and Clinical Study Export (TCE) profile provides an excellent platform to enhance PACS infrastructure with educational functionality. This can be supplemented with dedicated e-learning tools.

  15. PACS infrastructure supporting e-learning

    International Nuclear Information System (INIS)

    Mildenberger, Peter; Brueggemann, Kerstin; Roesner, Freya; Koch, Katja; Ahlers, Christopher

    2011-01-01

    Digital imaging is becoming predominant in radiology. This has implications for teaching support, because conventional film-based concepts are now obsolete. The IHE Teaching File and Clinical Study Export (TCE) profile provides an excellent platform to enhance PACS infrastructure with educational functionality. This can be supplemented with dedicated e-learning tools.

  16. CMS distributed analysis infrastructure and operations: experience with the first LHC data

    International Nuclear Information System (INIS)

    Vaandering, E W

    2011-01-01

    The CMS distributed analysis infrastructure represents a heterogeneous pool of resources distributed across several continents. The resources are harnessed using gLite and glidein-based workload management systems (WMS). We provide the operational experience of the analysis workflows using CRAB-based servers interfaced with the underlying WMS. The automated interaction of the server with the WMS provides a successful analysis workflow. We present the operational experience as well as methods used in CMS to analyze the LHC data. The interaction with the CMS Run registry for run and luminosity block selections via CRAB is discussed. The variations of different workflows during the LHC data-taking period and the lessons drawn from this experience are also outlined.

  17. Sustainable support for WLCG through the EGI distributed infrastructure

    International Nuclear Information System (INIS)

    Antoni, Torsten; Bozic, Stefan; Reisser, Sabine

    2011-01-01

    Grid computing is now in a transition phase from development in research projects to routine usage in a sustainable infrastructure. This is mirrored in Europe by the transition from the series of EGEE projects to the European Grid Initiative (EGI). EGI aims at establishing a self-sustained grid infrastructure across Europe. The main building blocks of EGI are the national grid initiatives in the participating countries and a central coordinating institution (EGI.eu). The middleware used is provided by consortia outside of EGI. The user communities are likewise organized separately from EGI. The transition to a self-sustained grid infrastructure is aided by the EGI-InSPIRE project, which aims at reducing the project funding needed to run EGI over the course of its four-year duration. Providing user support in this framework poses new technical and organisational challenges, as it has to cross the boundaries of various projects and infrastructures. The EGI user support infrastructure is built around the Global Grid User Support system (GGUS), which was also the basis of user support in EGEE. Utmost care was taken during the transition from EGEE to EGI not to perturb support services already in production use. A year into the EGI-InSPIRE project, this paper presents the current status of the user support infrastructure provided by EGI for WLCG, new features that were needed to match the new infrastructure, issues and challenges that occurred during the transition, and an outlook on future plans and developments.

  18. VASA: Interactive Computational Steering of Large Asynchronous Simulation Pipelines for Societal Infrastructure.

    Science.gov (United States)

    Ko, Sungahn; Zhao, Jieqiong; Xia, Jing; Afzal, Shehzad; Wang, Xiaoyu; Abram, Greg; Elmqvist, Niklas; Kne, Len; Van Riper, David; Gaither, Kelly; Kennedy, Shaun; Tolone, William; Ribarsky, William; Ebert, David S

    2014-12-01

    We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.

  19. Proposed Use of the NASA Ames Nebula Cloud Computing Platform for Numerical Weather Prediction and the Distribution of High Resolution Satellite Imagery

    Science.gov (United States)

    Limaye, Ashutosh S.; Molthan, Andrew L.; Srikishen, Jayanthi

    2010-01-01

    The development of the Nebula Cloud Computing Platform at NASA Ames Research Center provides an open-source solution for the deployment of scalable computing and storage capabilities relevant to the execution of real-time weather forecasts and the distribution of high resolution satellite data to the operational weather community. Two projects at Marshall Space Flight Center may benefit from use of the Nebula system. The NASA Short-term Prediction Research and Transition (SPoRT) Center facilitates the use of unique NASA satellite data and research capabilities in the operational weather community by providing datasets relevant to numerical weather prediction, and satellite data sets useful in weather analysis. SERVIR provides satellite data products for decision support, emphasizing environmental threats such as wildfires, floods, landslides, and other hazards, with interests in numerical weather prediction in support of disaster response. The Weather Research and Forecast (WRF) model Environmental Modeling System (WRF-EMS) has been configured for Nebula cloud computing use via the creation of a disk image and deployment of repeated instances. Given the available infrastructure within Nebula and the "infrastructure as a service" concept, the system appears well-suited for the rapid deployment of additional forecast models over different domains, in response to real-time research applications or disaster response. Future investigations into Nebula capabilities will focus on the development of a web mapping server and load balancing configuration to support the distribution of high resolution satellite data sets to users within the National Weather Service and international partners of SERVIR.

  20. SEE-GRID eInfrastructure for Regional eScience

    Science.gov (United States)

    Prnjat, Ognjen; Balaz, Antun; Vudragovic, Dusan; Liabotis, Ioannis; Sener, Cevat; Marovic, Branko; Kozlovszky, Miklos; Neagu, Gabriel

    In the past 6 years, a number of targeted initiatives, funded by the European Commission via its information society and RTD programmes and by Greek infrastructure development actions, have articulated successful regional development actions in South East Europe that can be used as a role model for other international developments. The SEEREN (South-East European Research and Education Networking) initiative, through its two phases, established the SEE segment of the pan-European GÉANT network and successfully connected the research and scientific communities in the region. Currently, the SEE-LIGHT project is working towards establishing a dark-fiber backbone that will interconnect most national research and education networks in the region. On the distributed computing and storage provisioning (i.e. Grid) plane, the SEE-GRID (South-East European GRID e-Infrastructure Development) project, likewise through its two phases, has established a strong human network in the area of scientific computing, has set up a powerful regional Grid infrastructure, and has attracted a number of applications from different fields from countries throughout South-East Europe. The current SEE-GRID-SCI project, ending in April 2010, empowers the regional user communities from the fields of meteorology, seismology and environmental protection in the common use and sharing of the regional e-Infrastructure. Current technical initiatives in formulation focus on a set of coordinated actions in the area of HPC and on application fields making use of HPC initiatives. Finally, the current SEERA-EI project brings together policy makers and programme managers from 10 countries in the region. The project aims to establish a communication platform between programme managers, pave the way towards a common e-Infrastructure strategy and vision, and implement concrete actions for common funding of electronic infrastructures at the regional level. The regional vision on establishing an e-Infrastructure

  1. Cloud Environment Automation: from infrastructure deployment to application monitoring

    Science.gov (United States)

    Aiftimiei, C.; Costantini, A.; Bucchi, R.; Italiano, A.; Michelotto, D.; Panella, M.; Pergolesi, M.; Saletta, M.; Traldi, S.; Vistoli, C.; Zizzi, G.; Salomoni, D.

    2017-10-01

    The potential offered by the cloud paradigm is often limited by technical issues, rules and regulations. In particular, the activities related to the design and deployment of the Infrastructure as a Service (IaaS) cloud layer can be difficult to carry out and time-consuming for infrastructure maintainers. In this paper, the research activity carried out during the Open City Platform (OCP) research project [1], aimed at designing and developing an automatic tool for cloud-based IaaS deployment, is presented. Open City Platform is an industrial research project funded by the Italian Ministry of University and Research (MIUR), started in 2014. It intends to research, develop and test new open, interoperable, on-demand technological solutions in the field of Cloud Computing, along with new sustainable organizational models that can be deployed for and adopted by Public Administrations (PA). The presented work and the related outcomes are aimed at simplifying the deployment and maintenance of a complete IaaS cloud-based infrastructure.

  2. Towards Shibboleth-based security in the e-infrastructure for social sciences

    OpenAIRE

    Jie, Wei; Daw, Michael; Procter, Rob; Voss, Alex

    2007-01-01

    The e-Infrastructure for e-Social Sciences project leverages Grid computing technology to provide an integrated platform which enables social science researchers to securely access a variety of e-Science resources. Security underpins the e-Infrastructure and a security framework with authentication and authorization functionality is a core component of the e-Infrastructure for social sciences. To build the security framework, we adopt Shibboleth as the basic authentication and authorization i...

  3. Hybrid Terrestrial-Satellite DVB/IP Infrastructure in Overlay Constellations for Triple-Play Services Access in Rural Areas

    Directory of Open Access Journals (Sweden)

    E. Pallis

    2010-01-01

    Full Text Available This paper discusses the convergence of digital broadcasting and Internet technologies by elaborating on the design, implementation, and performance evaluation of a hybrid terrestrial/satellite networking infrastructure enabling triple-play services access in rural areas. At the local/district level, the paper proposes the exploitation of DVB-T platforms in regenerative configurations, creating a terrestrial DVB/IP backhaul between the core backbone (in urban areas) and a number of intermediate communication nodes distributed within the DVB-T broadcasting footprint (in rural areas). In this way, triple-play services that are available at the core backbone are transferred via the regenerative DVB-T/IP backhaul to the entire district and can be accessed by rural users via the corresponding intermediate node. At the regional/national level, the paper proposes the exploitation of a satellite interactive digital video broadcasting platform (DVB-S2/RCS) as an overlay network that interconnects the regenerative DVB-T/IP platforms, as well as individual users and service providers, to each other. Performance of the proposed hybrid terrestrial/satellite networking environment is validated through experimental tests conducted under real transmission/reception conditions (for the terrestrial segment) and via simulation experiments (for the satellite segment) on a prototype network infrastructure.

  4. Common Ambient Assisted Living Home Platform for Seamless Care

    DEFF Research Database (Denmark)

    Wagner, Stefan Rahr; Stenner, Rene; Memon, Mukhtiar

    The CareStore project is investigating the feasibility of creating an open and flexible infrastructure for facilitating seamless deployment of assisted living devices and applications on heterogeneous platforms. The Common Ambient Assisted Living Home Platform (CAALHP) is intended to be the main ...

  5. SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology.

    Science.gov (United States)

    Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E; Troein, Carl; Millar, Andrew J; Goryanin, Igor; Gilmore, Stephen

    2013-03-01

    Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI's use of standard data formats. All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials.
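    SBSI's core task is fitting model parameters to experimental data. To illustrate the idea only (SBSINumerics itself uses parallelized, far more sophisticated algorithms), here is a minimal stdlib sketch that fits a decay rate k in the toy model y = exp(-k·t) to synthetic observations by brute-force minimization of the sum of squared errors; all names and the model are hypothetical.

    ```python
    import math

    def sse(k: float, data: list) -> float:
        """Sum of squared errors between model exp(-k*t) and observations (t, y)."""
        return sum((y - math.exp(-k * t)) ** 2 for t, y in data)

    def fit_decay_rate(data, k_min=0.0, k_max=5.0, steps=5000) -> float:
        """Brute-force grid search for the best-fitting decay rate."""
        candidates = (k_min + i * (k_max - k_min) / steps for i in range(steps + 1))
        return min(candidates, key=lambda k: sse(k, data))

    # Synthetic observations generated with a "true" rate of k = 1.3:
    true_k = 1.3
    data = [(t / 10, math.exp(-true_k * t / 10)) for t in range(20)]
    k_hat = fit_decay_rate(data)
    print(round(k_hat, 2))  # recovers a value close to 1.3
    ```

    Real parameter-fitting problems in systems biology involve ODE models with many parameters, where grid search is infeasible and the global optimization heuristics a tool like SBSI provides become necessary.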

  6. INTEGRAL: ICT-platform based Distributed Control in electricity grids with a large share of Distributed Energy Resources and Renewable Energy Sources

    OpenAIRE

    Peppink , G.; Kamphuis , R.; Kok , K.; Dimeas , A.; Karfopoulos , E.; Hatziargyriou , Nikos D.; Hadjsaid , Nouredine; Caire , Raphaël; Gustavsson , René; Salas , Josep M.; Niesing , Hugo; Van Der Velde , J.; Tena , Llani; Bliek , Frits; Eijgelaar , Marcel

    2010-01-01

    International audience; The European project INTEGRAL aims to build and demonstrate an industry-quality reference solution for DER aggregation-level control and coordination, based on commonly available ICT components, standards, and platforms. To achieve this, the Integrated ICT-platform based Distributed Control (IIDC) is introduced. The project also includes three field test site installations, in the Netherlands, Spain and France, covering normal, critical and emergency grid conditions.

  7. Wireless Infrastructure M2M Network For Distributed Power Grid Monitoring.

    Science.gov (United States)

    Gharavi, Hamid; Hu, Bin

    2017-01-01

    With the massive integration of distributed renewable energy sources (RESs) into the power system, the demand for timely and reliable network quality monitoring, control, and fault analysis is rapidly growing. Following the successful deployment of Phasor Measurement Units (PMUs) in transmission systems for power monitoring, a new opportunity to utilize PMU measurement data for power quality assessment in distribution grid systems is emerging. The main problem, however, is that a distribution grid system does not normally have the support of an infrastructure network. Therefore, the main objective in this paper is to develop a Machine-to-Machine (M2M) communication network that can support wide-ranging sensory data, including high-rate synchrophasor data, for real-time communication. In particular, we evaluate the suitability of the emerging IEEE 802.11ah standard by exploiting its important features, such as classifying the power grid sensory data into different categories according to their traffic characteristics. For performance evaluation, we use our hardware-in-the-loop grid communication network testbed to assess the performance of the network.

  8. COOPEUS - connecting research infrastructures in environmental sciences

    Science.gov (United States)

    Koop-Jakobsen, Ketil; Waldmann, Christoph; Huber, Robert

    2015-04-01

    , the first steps were taken to implement the GCI as a platform for documenting the capabilities of the COOPEUS research infrastructures. COOPEUS recognizes the potential for the GCI to become an important platform promoting cross-disciplinary approaches in the study of multifaceted environmental challenges. Recommendations from the workshop participants also revealed that, in order to attract research infrastructures to use the GCI, the registration process must be simplified and accelerated. However, the data policies of the individual research infrastructures, or the lack thereof, can also prevent the use of the GCI or other portals, owing to a lack of clarity regarding data management authority and data ownership. COOPEUS shall continue to promote cross-disciplinary data exchange in the environmental field and will in the future expand to include other geographical areas.

  9. Recovery Act-SmartGrid regional demonstration transmission and distribution (T&D) Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Hedges, Edward T. [Kansas City Power & Light Company, Kansas City, MO (United States)

    2015-01-31

    This document represents the Final Technical Report for the Kansas City Power & Light Company (KCP&L) Green Impact Zone SmartGrid Demonstration Project (SGDP). The KCP&L project is partially funded by Department of Energy (DOE) Regional Smart Grid Demonstration Project cooperative agreement DE-OE0000221 in the Transmission and Distribution Infrastructure application area. This Final Technical Report summarizes the KCP&L SGDP as of April 30, 2015 and includes summaries of the project design, implementation, operations, and analysis performed as of that date.

  10. The EGEE user support infrastructure

    CERN Document Server

    Antoni, Torsten

    2008-01-01

    Grid user support is a challenging task due to the distributed nature of the Grid. The variety of users and Virtual Organisations adds further to the challenge. Support requests come from Grid beginners, from users with specific applications, from site administrators, or from Grid monitoring operators. With the GGUS infrastructure, EGEE provides a portal where users can find support in their daily use of the Grid. The current use of the system shows that the goal has been achieved with success. The Grid user support model in EGEE can be captioned "regional support with central coordination". This model is realised through a support process which is clearly defined and involves all the parties that are needed to run a project-wide support service. This process is sustained by a help desk system which consists of a central platform integrated with several satellite systems belonging to the Regional Operations Centres (ROCs) and the Virtual Organisations (VOs). The central system (Global Grid User Support, GGUS)...

  11. Modeling complexity in engineered infrastructure system: Water distribution network as an example

    Science.gov (United States)

    Zeng, Fang; Li, Xiang; Li, Ke

    2017-02-01

    The complex topology and adaptive behavior of infrastructure systems are driven by both self-organization of the demand and rigid engineering solutions. Therefore, engineering complex systems requires a method balancing holism and reductionism. To model the growth of water distribution networks, a complex network model was developed combining local optimization rules with engineering considerations. The demand node generation is dynamic and follows the scaling law of urban growth. The proposed model can generate a water distribution network (WDN) similar to reported real-world WDNs on several structural properties. Comparison with different modeling approaches indicates that a realistic demand node distribution and the co-evolution of demand nodes and the network are important for simulating real complex networks. The simulation results indicate that the efficiency of water distribution networks is exponentially affected by the urban growth pattern. In contrast, the improvement of efficiency through engineering optimization is limited and relatively insignificant. The redundancy and robustness, on the other hand, can be significantly improved through engineering methods.
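    The growth mechanism described, demand-driven node placement plus a local optimization rule, can be caricatured in a few lines. This is a toy sketch under simplified assumptions (uniform random demand locations, a single source, nearest-neighbor attachment as the "local" pipe-cost rule), not the authors' model:

    ```python
    import math
    import random

    def grow_network(n: int, seed: int = 0):
        """Grow a spatial tree: each new demand node connects to its nearest existing node."""
        rng = random.Random(seed)
        nodes = [(0.5, 0.5)]                    # initial source node (e.g. a reservoir)
        edges = []
        for i in range(1, n):
            p = (rng.random(), rng.random())    # new demand node location
            nearest = min(range(len(nodes)), key=lambda j: math.dist(p, nodes[j]))
            nodes.append(p)
            edges.append((nearest, i))          # local rule: minimize new pipe length
        return nodes, edges

    nodes, edges = grow_network(100)
    total_len = sum(math.dist(nodes[a], nodes[b]) for a, b in edges)
    print(len(edges))          # a tree over 100 nodes has 99 edges
    ```

    Even this caricature shows the paper's point that network structure emerges from where demand appears: changing the spatial distribution of new nodes changes total pipe length far more than any local rewiring could.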

  12. Load Segmentation for Convergence of Distribution Automation and Advanced Metering Infrastructure Systems

    Science.gov (United States)

    Pamulaparthy, Balakrishna; KS, Swarup; Kommu, Rajagopal

    2014-12-01

    Distribution automation (DA) applications are limited to the feeder level today and have zero visibility outside of the substation feeder, down to the low-voltage distribution network level. This has become a major obstacle to realizing many automated functions and enhancing existing DA capabilities. Advanced metering infrastructure (AMI) systems are being widely deployed by utilities across the world, creating system-wide communications access to every monitoring and service point; these systems collect data from smart meters and sensors at short time intervals, in response to utility needs. The convergence of DA and AMI systems provides unique opportunities and capabilities for distribution grid modernization, with the DA system acting as a controller and the AMI system acting as feedback to the DA system, for which DA applications have to understand and use the AMI data selectively and effectively. In this paper, we propose a load segmentation method that helps the DA system accurately understand and use the AMI data for various automation applications, with a suitable case study on power restoration.

  13. Technology Trends in Cloud Infrastructure

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Cloud computing is growing at an exponential pace with an increasing number of workloads being hosted in mega-scale public clouds such as Microsoft Azure. Designing and operating such large infrastructures requires not only a significant capital spend for provisioning datacenters, servers, networking and operating systems, but also R&D investments to capitalize on disruptive technology trends and emerging workloads such as AI/ML. This talk will cover the various infrastructure innovations being implemented in large scale public clouds and opportunities/challenges ahead to deliver the next generation of scale computing. About the speaker Kushagra Vaid is the general manager and distinguished engineer for Hardware Infrastructure in the Microsoft Azure division. He is accountable for the architecture and design of compute and storage platforms, which are the foundation for Microsoft’s global cloud-scale services. He and his team have successfully delivered four generations of hyperscale cloud hardwar...

  14. The Role of Distribution Infrastructure and Equipment in the Life-cycle Air Emissions of Liquid Transportation Fuels

    Science.gov (United States)

    Strogen, Bret Michael

    Production of fuel ethanol in the United States has increased ten-fold since 1993, largely as a result of government programs motivated by goals to improve domestic energy security, economic development, and environmental impacts. Over the next decade, the growth, and eventually the total production, of second-generation cellulosic biofuels is projected to exceed that of first-generation (e.g., corn-based) biofuels, which will require continued expansion of infrastructure for producing and distributing ethanol and perhaps other biofuels. In addition to identifying potential differences in tailpipe emissions from vehicles operating with ethanol-blended or ethanol-free gasoline, environmental comparison of ethanol to petroleum fuels requires a comprehensive accounting of life-cycle environmental effects. Hundreds of published studies evaluate the life-cycle emissions from biofuels and petroleum, but the operation and maintenance of storage, handling, and distribution infrastructure and equipment for fuels and fuel feedstocks had not been adequately addressed. Little attention has been paid to estimating and minimizing emissions from these complex systems, presumably because they are believed to contribute a small fraction of total emissions for petroleum and first-generation biofuels. This research aims to quantify the environmental impacts associated with the major components of fuel distribution infrastructure, and the impacts that will be introduced by expanding the parallel infrastructure needed to accommodate more biofuels in our existing systems. First, the components used in handling, storing, and transporting feedstocks and fuels are physically characterized by typical operating throughput, utilization, and lifespan. US-specific life-cycle GHG emission and water withdrawal factors are developed for each major distribution chain activity by applying a hybrid life-cycle assessment methodology to the manufacturing, construction, maintenance and operation of each

  15. N2R vs. DR Network Infrastructure Evaluation

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Roost, Lars Jessen; Toft, Per Nesager

    2007-01-01

    Recent development of Internet-based services has set higher requirements to network infrastructures in terms of more bandwidth, lower delays and more reliability. Theoretical research within the area of Structural Quality of Service (SQoS) has introduced a new type of infrastructure which meet...... these requirements: N2R infrastructures. This paper contributes to the ongoing research with a case study from North Jutland. An evaluation of three N2R infrastructures compared to a Double Ring (DR) infrastructure will provide valuable information of the practical applicability of N2R infrastructures. In order...... to study if N2R infrastructures perform better than the DR infrastructure, a distribution network was established based on geographical information system (GIS) data. Nodes were placed with respect to demographic and geographical factors. The established distribution network was investigated with respect...

  16. Towards trustworthy health platform cloud

    NARCIS (Netherlands)

    Deng, M.; Nalin, M.; Petkovic, M.; Baroni, I.; Marco, A.; Jonker, W.; Petkovic, M.

    2012-01-01

    To address today’s major concerns of health service providers regarding security, resilience and data protection when moving on the cloud, we propose an approach to build a trustworthy healthcare platform cloud, based on a trustworthy cloud infrastructure. This paper first highlights the main

  17. Citizen Sensors for SHM: Towards a Crowdsourcing Platform

    Science.gov (United States)

    Ozer, Ekin; Feng, Maria Q.; Feng, Dongming

    2015-01-01

    This paper presents an innovative structural health monitoring (SHM) platform in terms of how it integrates smartphone sensors, the web, and crowdsourcing. The ubiquity of smartphones has provided an opportunity to create low-cost sensor networks for SHM. Crowdsourcing has given rise to citizen initiatives becoming a vast source of inexpensive, valuable but heterogeneous data. Previously, the authors have investigated the reliability of smartphone accelerometers for vibration-based SHM. This paper takes a step further to integrate mobile sensing and web-based computing for a prospective crowdsourcing-based SHM platform. An iOS application was developed to enable citizens to measure structural vibration and upload the data to a server with smartphones. A web-based platform was developed to collect and process the data automatically and store the processed data, such as modal properties of the structure, for long-term SHM purposes. Finally, the integrated mobile and web-based platforms were tested to collect the low-amplitude ambient vibration data of a bridge structure. Possible sources of uncertainties related to citizens were investigated, including the phone location, coupling conditions, and sampling duration. The field test results showed that the vibration data acquired by smartphones operated by citizens without expertise are useful for identifying structural modal properties with high accuracy. This platform can be further developed into an automated, smart, sustainable, cost-free system for long-term monitoring of structural integrity of spatially distributed urban infrastructure. Citizen Sensors for SHM will be a novel participatory sensing platform in the way that it offers hybrid solutions to transitional crowdsourcing parameters. PMID:26102490

  18. Citizen Sensors for SHM: Towards a Crowdsourcing Platform

    Directory of Open Access Journals (Sweden)

    Ekin Ozer

    2015-06-01

    Full Text Available This paper presents an innovative structural health monitoring (SHM platform in terms of how it integrates smartphone sensors, the web, and crowdsourcing. The ubiquity of smartphones has provided an opportunity to create low-cost sensor networks for SHM. Crowdsourcing has given rise to citizen initiatives becoming a vast source of inexpensive, valuable but heterogeneous data. Previously, the authors have investigated the reliability of smartphone accelerometers for vibration-based SHM. This paper takes a step further to integrate mobile sensing and web-based computing for a prospective crowdsourcing-based SHM platform. An iOS application was developed to enable citizens to measure structural vibration and upload the data to a server with smartphones. A web-based platform was developed to collect and process the data automatically and store the processed data, such as modal properties of the structure, for long-term SHM purposes. Finally, the integrated mobile and web-based platforms were tested to collect the low-amplitude ambient vibration data of a bridge structure. Possible sources of uncertainties related to citizens were investigated, including the phone location, coupling conditions, and sampling duration. The field test results showed that the vibration data acquired by smartphones operated by citizens without expertise are useful for identifying structural modal properties with high accuracy. This platform can be further developed into an automated, smart, sustainable, cost-free system for long-term monitoring of structural integrity of spatially distributed urban infrastructure. Citizen Sensors for SHM will be a novel participatory sensing platform in the way that it offers hybrid solutions to transitional crowdsourcing parameters.
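The modal-identification step described in the two records above can be sketched as simple spectral peak-picking on a crowdsourced acceleration record. This is a minimal illustration under stated assumptions, not the paper's processing chain; the function name and interface are hypothetical.

```python
import numpy as np

def dominant_frequencies(accel, fs, n_peaks=1):
    """Estimate structural natural frequencies by spectral peak-picking.

    A minimal sketch of the kind of server-side processing an SHM
    pipeline might apply to smartphone accelerometer records; the name
    and signature are illustrative, not the paper's API.
    accel: 1-D acceleration record; fs: sampling rate in Hz.
    """
    accel = np.asarray(accel, dtype=float)
    accel = accel - accel.mean()              # remove sensor DC bias
    spectrum = np.abs(np.fft.rfft(accel))
    freqs = np.fft.rfftfreq(accel.size, d=1.0 / fs)
    spectrum[0] = 0.0                         # ignore the zero-frequency bin
    strongest = np.argsort(spectrum)[::-1][:n_peaks]
    return np.sort(freqs[strongest])
```

Note that the frequency resolution is fs divided by the record length, so sampling duration directly limits accuracy, which is one reason the paper lists duration among the citizen-related sources of uncertainty.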

  19. Infrastructure for the design and fabrication of MEMS for RF/microwave and millimeter wave applications

    Science.gov (United States)

    Nerguizian, Vahe; Rafaf, Mustapha

    2004-08-01

    This article describes and provides valuable information for companies and universities with strategies to start fabricating MEMS for RF/Microwave and millimeter wave applications. The present work shows the infrastructure developed for RF/Microwave and millimeter wave MEMS platforms, which supports the identification, evaluation and selection of design tools and fabrication foundries, taking packaging and testing into account. The selected and implemented simple infrastructure models, based on surface and bulk micromachining, yield inexpensive and innovative approaches for distributed choices of MEMS operating tools. To meet different educational or industrial institution needs, these models may be modified for specific resource changes using a carefully analyzed iteration process. The inputs of the project are evaluation selection criteria and information sources such as financial, technical, availability, accessibility, simplicity, versatility and practical considerations. The outputs of the project are the selection of different MEMS design tools or software (solid modeling, electrostatic/electromagnetic and others, compatible with existing standard RF/Microwave design tools) and different MEMS manufacturing foundries. Typical RF/Microwave and millimeter wave MEMS solutions are introduced on the platform during the evaluation and development phases of the project for the validation of realistic results and operational decision-making choices. The challenges encountered during the investigation and development steps are identified, and the dynamic behavior of the infrastructure is emphasized. The inputs (resources) and the outputs (demonstrated solutions) are presented in tables and flow-chart diagrams.

  20. The web as platform: Data flows in social media

    NARCIS (Netherlands)

    Helmond, A.

    2015-01-01

    This dissertation looks into the history of Web 2.0 as "the web as platform" (O’Reilly 2004) and traces the transition of social network sites into social media platforms to examine how social media has transformed the web. In order to understand this process from an infrastructural perspective, I

  1. Cloud Computing as Evolution of Distributed Computing – A Case Study for SlapOS Distributed Cloud Computing Platform

    Directory of Open Access Journals (Sweden)

    George SUCIU

    2013-01-01

    Full Text Available The cloud computing paradigm has been defined from several points of view, the main two directions being either as an evolution of the grid and distributed computing paradigm, or, on the contrary, as a disruptive revolution in the classical paradigms of operating systems, network layers and web applications. This paper presents a distributed cloud computing platform called SlapOS, which unifies technologies and communication protocols into a new technology model for offering any application as a service. Both cloud and distributed computing can be efficient methods for optimizing resources that are aggregated from a grid of standard PCs hosted in homes, offices and small data centers. The paper fills a gap in the existing distributed computing literature by providing a distributed cloud computing model which can be applied for deploying various applications.

  2. ADMS Evaluation Platform

    Energy Technology Data Exchange (ETDEWEB)

    2018-01-23

    Deploying an ADMS or looking to optimize its value? NREL offers a low-cost, low-risk evaluation platform for assessing ADMS performance. The National Renewable Energy Laboratory (NREL) has developed a vendor-neutral advanced distribution management system (ADMS) evaluation platform and is expanding its capabilities. The platform uses actual grid-scale hardware, large-scale distribution system models, and advanced visualization to simulate real-world conditions for the most accurate ADMS evaluation and experimentation.

  3. Proba-V Mission Exploitation Platform

    Science.gov (United States)

    Goor, E.

    2017-12-01

    VITO and partners developed the Proba-V Mission Exploitation Platform (MEP) as an end-to-end solution to drastically improve the exploitation of the Proba-V (an EC Copernicus contributing mission) EO-data archive, the past mission SPOT-VEGETATION and derived vegetation parameters by researchers, service providers (e.g. the EC Copernicus Global Land Service) and end-users. The analysis of time series of data (PB range) is addressed, as well as the large-scale on-demand processing of near real-time data on a powerful and scalable processing environment. New features are still being developed, but the platform has been fully operational since November 2016 and offers: a time series viewer (browser web client and API), showing the evolution of Proba-V bands and derived vegetation parameters for any country, region, pixel or polygon defined by the user; full-resolution viewing services for the complete data archive; and on-demand processing chains on a powerful Hadoop/Spark backend. Virtual Machines can be requested by users, with access to the complete data archive mentioned above and pre-configured tools to work with this data, e.g. various toolboxes and support for R and Python. This allows users to immediately work with the data without having to install tools or download data, as well as to design, debug and test applications on the platform. Jupyter Notebooks are available, with example Python and R projects worked out to show the potential of the data. Today the platform is already used by several international third-party projects to perform R&D activities on the data and to develop/host data analysis toolboxes. From the Proba-V MEP, access to other data sources such as Sentinel-2 and Landsat data is also addressed. Selected components of the MEP are also deployed on public cloud infrastructures in various R&D projects. 
Users can make use of powerful Web based tools and can self-manage virtual machines to perform their work on the infrastructure at VITO with access to

  4. An open source platform for multi-scale spatially distributed simulations of microbial ecosystems

    Energy Technology Data Exchange (ETDEWEB)

    Segre, Daniel [Boston Univ., MA (United States)

    2014-08-14

    The goal of this project was to develop a tool for facilitating simulation, validation and discovery of multiscale dynamical processes in microbial ecosystems. This led to the development of an open-source software platform for Computation Of Microbial Ecosystems in Time and Space (COMETS). COMETS performs spatially distributed time-dependent flux balance based simulations of microbial metabolism. Our plan involved building the software platform itself, calibrating and testing it through comparison with experimental data, and integrating simulations and experiments to address important open questions on the evolution and dynamics of cross-feeding interactions between microbial species.
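A spatially distributed, time-dependent simulation of the kind COMETS performs can be caricatured on a 2-D lattice. The sketch below substitutes a simple Monod uptake term for the flux-balance linear program that COMETS actually solves in each grid box, so it only illustrates the grow-consume-diffuse loop; all names and parameter values are illustrative assumptions.

```python
import numpy as np

def step(biomass, nutrient, mu_max=0.5, yield_=0.4, diff=0.1):
    """One explicit time step of a toy spatial growth-plus-diffusion model.

    A schematic stand-in for a COMETS-style lattice: real COMETS solves a
    flux-balance linear program per grid box each step; here Monod uptake
    plays that role. Assumes a periodic domain for simplicity.
    """
    uptake = mu_max * biomass * nutrient / (nutrient + 1.0)
    biomass = biomass + yield_ * uptake              # growth from uptake
    nutrient = np.clip(nutrient - uptake, 0.0, None) # nutrient consumed
    # 4-neighbour biomass diffusion (discrete Laplacian on a torus)
    lap = (np.roll(biomass, 1, 0) + np.roll(biomass, -1, 0)
           + np.roll(biomass, 1, 1) + np.roll(biomass, -1, 1)
           - 4.0 * biomass)
    return biomass + diff * lap, nutrient
```

Iterating this step from a heterogeneous initial biomass field produces the kind of spatial colony dynamics the platform is designed to compare against experiments.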

  5. Cloud Based Earth Observation Data Exploitation Platforms

    Science.gov (United States)

    Romeo, A.; Pinto, S.; Loekken, S.; Marin, A.

    2017-12-01

    and the Amazon Web Services cloud. This work will present an overview of the TEPs and the multi-cloud EO data processing platform, and discuss their main achievements and their impacts in the context of distributed Research Infrastructures such as EPOS and EOSC.

  6. Research on distributed heterogeneous data PCA algorithm based on cloud platform

    Science.gov (United States)

    Zhang, Jin; Huang, Gang

    2018-05-01

    Principal component analysis (PCA) of heterogeneous data sets can address the limited scalability of centralized data analysis. In order to reduce the generation of intermediate data and the error components of distributed heterogeneous data sets, a principal component analysis algorithm for heterogeneous data sets on a cloud platform is proposed. The algorithm performs eigenvalue processing using Householder tridiagonalization and QR factorization to calculate the error component of the heterogeneous database associated with the public key, obtaining the intermediate data set and the lost information. Experiments on distributed DBM heterogeneous datasets show that the method is feasible and reliable in terms of execution time and accuracy.
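The share-only-summaries idea behind distributed PCA can be sketched as follows: each site sends its scatter matrix rather than its raw rows, and the pooled covariance is eigendecomposed centrally (LAPACK's symmetric solver, which NumPy calls, itself proceeds via Householder tridiagonalization followed by QR-type iteration, the building blocks the abstract names). This is an illustrative sketch, not the paper's error-component algorithm; the function name is hypothetical.

```python
import numpy as np

def distributed_pca(partitions, k=2):
    """PCA over horizontally partitioned data without pooling raw rows.

    Each 'site' holds a slice of the rows; only small summary statistics
    (row counts, sums, scatter matrices) leave each site.
    """
    n = sum(p.shape[0] for p in partitions)
    mean = sum(p.sum(axis=0) for p in partitions) / n
    # Pooled scatter from per-site contributions around the global mean.
    scatter = sum((p - mean).T @ (p - mean) for p in partitions)
    cov = scatter / (n - 1)
    # Symmetric eigenproblem (Householder tridiagonalization + QR
    # iteration inside LAPACK); keep the k largest components.
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:k]
    return vals[order], vecs[:, order]
```

Because the scatter matrices are d-by-d, communication scales with the feature dimension rather than with the number of rows at each site.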

  7. Virtual health platform for medical tourism purposes.

    Science.gov (United States)

    Martinez, Debora; Ferriol, Pedro; Tous, Xisco; Cabrer, Miguel; Prats, Mercedes

    2008-01-01

    This paper introduces an overview of the Virtual Health Platform (VHP), an alternative approach to create a functional PHR system in a medical tourism environment. The proposed platform has been designed in order to be integrated with EHR infrastructures and in this way it expects to be useful and more advantageous to the patient or tourist. Use cases of the VHP and its potential benefits summarize the analysis.

  8. DSF: A Common Platform for Distributed Systems Research and Development

    Science.gov (United States)

    Tang, Chunqiang

    This paper presents Distributed Systems Foundation (DSF), a common platform for distributed systems research and development. It can run a distributed algorithm written in Java under multiple execution modes—simulation, massive multi-tenancy, and real deployment. DSF provides a set of novel features to facilitate testing and debugging, including chaotic timing test and time travel debugging with mutable replay. Unlike existing research prototypes that offer advanced debugging features by hacking programming tools, DSF is written entirely in Java, without modifications to any external tools such as JVM, Java runtime library, compiler, linker, system library, OS, or hypervisor. This simplicity stems from our goal of making DSF not only a research prototype but more importantly a production tool. Experiments show that DSF is efficient and easy to use. DSF's massive multi-tenancy mode can run 4,000 OS-level threads in a single JVM to concurrently execute (as opposed to simulate) 1,000 DHT nodes in real-time.
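The massive multi-tenancy mode described above, with many logical nodes executing (not simulating) concurrently inside one process, can be sketched in a few lines. DSF itself is written in Java; this Python sketch only illustrates the scheduling idea, and all names are hypothetical, not DSF's API.

```python
import threading
import queue

class EchoNode:
    """A trivial 'distributed node' whose logic knows nothing about how
    it is scheduled (a hypothetical stand-in for a DHT node)."""
    def __init__(self, node_id):
        self.node_id = node_id

    def handle(self, message):
        return (self.node_id, message)

def run_multitenant(nodes, message):
    """Run every node on its own OS thread inside a single process.

    The same node objects could instead be driven by a single-threaded
    event loop for deterministic simulation: that is the mode-switching
    idea, with node logic unchanged across execution modes.
    """
    results = queue.Queue()

    def worker(node):
        results.put(node.handle(message))

    threads = [threading.Thread(target=worker, args=(n,)) for n in nodes]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(results.get() for _ in range(results.qsize()))
```

Keeping node logic independent of the scheduler is what lets one codebase serve simulation, multi-tenant execution, and real deployment.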

  9. Platform capitalism: The intermediation and capitalization of digital economic circulation

    Directory of Open Access Journals (Sweden)

    Paul Langley

    2017-10-01

    Full Text Available A new form of digital economic circulation has emerged, wherein ideas, knowledge, labour and use rights for otherwise idle assets move between geographically distributed but connected and interactive online communities. Such circulation is apparent across a number of digital economic ecologies, including social media, online marketplaces, crowdsourcing, crowdfunding and other manifestations of the so-called ‘sharing economy’. Prevailing accounts deploy concepts such as ‘co-production’, ‘prosumption’ and ‘peer-to-peer’ to explain digital economic circulation as networked exchange relations characterised by their disintermediated, collaborative and democratising qualities. Building from the neologism of platform capitalism, we place ‘the platform’ – understood as a distinct mode of socio-technical intermediary and business arrangement that is incorporated into wider processes of capitalisation – at the centre of the critical analysis of digital economic circulation. To create multi-sided markets and coordinate network effects, platforms enrol users through a participatory economic culture and mobilise code and data analytics to compose immanent infrastructures. Platform intermediation is also nested in the ex-post construction of a replicable business model. Prioritising rapid up-scaling and extracting revenues from circulations and associated data trails, the model performs the structure of venture capital investment which capitalises on the potential of platforms to realise monopoly rents.

  10. Preparation of CAD infrastructure for ITER procurement packages allocated to Korea

    International Nuclear Information System (INIS)

    Kim, G.H.; Hwang, H.S.; Choi, J.H.; Lee, H.G.; Thomas, E.; Redon, F.; Wilhelm, B.

    2013-01-01

    Highlights: ► It is necessary to use the same CAD platform between project partners for efficient design collaboration. ► Several unexpected problems were found during preparation of the CAD infrastructure. ► The problems have resulted in a waste of time and cost. ► The approach using the same configurations is effective to avoid IT-related problems. ► The design activities are steadily being performed on the prepared ITER CAD platform. -- Abstract: It is necessary to use the same CAD platform for standardization and efficient design collaboration between project partners. ITER has selected CATIA with ENOVIA as the primary CAD and integration tool. During the preparation of the CAD infrastructure, there were several difficulties with respect to information technology (IT). ITER design is classified as mechanical and plant. The procurement arrangement is divided into three types; functional specification, detailed design, and build to print. Therefore, it is important to prepare the suitable prerequisites according to the design type, and to comply with CAD methodologies to avoid trial and error. This paper presents how to overcome the difficulties and how to perform the CAD activities for ITER Korea procurement packages including important matters on a CAD infrastructure in a big project

  11. Distribution of green infrastructure along walkable roads

    Science.gov (United States)

    Low-income and minority neighborhoods frequently lack healthful resources to which wealthier communities have access. Though important, the addition of facilities such as recreation centers can be costly and take time to implement. Urban green infrastructure, such as street trees...

  12. System Architecture Development for Energy and Water Infrastructure Data Management and Geovisual Analytics

    Science.gov (United States)

    Berres, A.; Karthik, R.; Nugent, P.; Sorokine, A.; Myers, A.; Pang, H.

    2017-12-01

    Building an integrated data infrastructure that can meet the needs of sustainable energy-water resource management requires a robust data management and geovisual analytics platform, capable of cross-domain scientific discovery and knowledge generation. Such a platform can facilitate the investigation of diverse, complex research and policy questions for emerging priorities in Energy-Water Nexus (EWN) science areas. Using advanced data analytics, machine learning techniques, multi-dimensional statistical tools, and interactive geovisualization components, such a multi-layered federated platform, the Energy-Water Nexus Knowledge Discovery Framework (EWN-KDF), is being developed. This platform utilizes several enterprise-grade software design concepts and standards such as extensible service-oriented architecture, open standard protocols, an event-driven programming model, an enterprise service bus, and adaptive user interfaces to provide strategic value to the integrative computational and data infrastructure. EWN-KDF is built on the Compute and Data Environment for Science (CADES) environment at Oak Ridge National Laboratory (ORNL).

  13. Transport Infrastructure Surveillance and Monitoring by Electromagnetic Sensing: The ISTIMES Project

    Directory of Open Access Journals (Sweden)

    Marie Bost

    2010-11-01

    Full Text Available The ISTIMES project, funded by the European Commission in the frame of a joint Call “ICT and Security” of the Seventh Framework Programme, is presented and preliminary research results are discussed. The main objective of the ISTIMES project is to design, assess and promote an Information and Communication Technologies (ICT-based system, exploiting distributed and local sensors, for non-destructive electromagnetic monitoring of critical transport infrastructures. The integration of electromagnetic technologies with new ICT information and telecommunications systems enables remotely controlled monitoring and surveillance and real time data imaging of the critical transport infrastructures. The project exploits different non-invasive imaging technologies based on electromagnetic sensing (optic fiber sensors, Synthetic Aperture Radar satellite platform based, hyperspectral spectroscopy, Infrared thermography, Ground Penetrating Radar-, low-frequency geophysical techniques, Ground based systems for displacement monitoring. In this paper, we show the preliminary results arising from the GPR and infrared thermographic measurements carried out on the Musmeci bridge in Potenza, located in a highly seismic area of the Apennine chain (Southern Italy and representing one of the test beds of the project.

  14. The Osseus platform: a prototype for advanced web-based distributed simulation

    Science.gov (United States)

    Franceschini, Derrick; Riecken, Mark

    2016-05-01

    Recent technological advances in web-based distributed computing and database technology have made possible a deeper and more transparent integration of some modeling and simulation applications. Despite these advances towards true integration of capabilities, disparate systems, architectures, and protocols will remain in the inventory for some time to come. These disparities present interoperability challenges for distributed modeling and simulation whether the application is training, experimentation, or analysis. Traditional approaches call for building gateways to bridge between disparate protocols and retaining interoperability specialists. Challenges in reconciling data models also persist. These challenges and their traditional mitigation approaches directly contribute to higher costs, schedule delays, and frustration for the end users. Osseus is a prototype software platform originally funded as a research project by the Defense Modeling & Simulation Coordination Office (DMSCO) to examine interoperability alternatives using modern, web-based technology and taking inspiration from the commercial sector. Osseus provides tools and services for nonexpert users to connect simulations, targeting the time and skillset needed to successfully connect disparate systems. The Osseus platform presents a web services interface to allow simulation applications to exchange data using modern techniques efficiently over Local or Wide Area Networks. Further, it provides Service Oriented Architecture capabilities such that finer granularity components such as individual models can contribute to simulation with minimal effort.

  15. NHERI: Advancing the Research Infrastructure of the Multi-Hazard Community

    Science.gov (United States)

    Blain, C. A.; Ramirez, J. A.; Bobet, A.; Browning, J.; Edge, B.; Holmes, W.; Johnson, D.; Robertson, I.; Smith, T.; Zuo, D.

    2017-12-01

    The Natural Hazards Engineering Research Infrastructure (NHERI), supported by the National Science Foundation (NSF), is a distributed, multi-user national facility that provides the natural hazards research community with access to an advanced research infrastructure. Components of NHERI comprise a Network Coordination Office (NCO), a cloud-based cyberinfrastructure (DesignSafe-CI), a computational modeling and simulation center (SimCenter), and eight Experimental Facilities (EFs), including a post-disaster, rapid response research facility (RAPID). Ultimately, NHERI enables researchers to explore and test ground-breaking concepts to protect homes, businesses and infrastructure lifelines from earthquakes, windstorms, tsunamis, and surge, enabling innovations that help prevent natural hazards from becoming societal disasters. When coupled with education and community outreach, NHERI will facilitate research and educational advances that contribute knowledge and innovation toward improving the resiliency of the nation's civil infrastructure to withstand natural hazards. The unique capabilities and coordinating activities over Year 1 between NHERI's DesignSafe-CI, the SimCenter, and individual EFs will be presented. Basic descriptions of each component are also found at https://www.designsafe-ci.org/facilities/. Additionally to be discussed are the various roles of the NCO in leading development of a 5-year multi-hazard science plan, coordinating facility scheduling and fostering the sharing of technical knowledge and best practices, leading education and outreach programs such as the recent Summer Institute and multi-facility REU program, ensuring a platform for technology transfer to practicing engineers, and developing strategic national and international partnerships to support a diverse multi-hazard research and user community.

  16. Infrastructure Model and Management of a Cloud Computing-Based Server Platform

    Directory of Open Access Journals (Sweden)

    Mulki Indana Zulfa

    2017-11-01

    Full Text Available Cloud computing is a new technology that is still growing very rapidly. This technology makes the Internet the main medium for managing data and applications remotely. Cloud computing allows users to run an application without having to think about the infrastructure and its platforms. Other technical aspects, such as memory, storage, and backup and restore, can be handled very easily. This research models the infrastructure and platform management in the computer network of the Faculty of Engineering, Jenderal Soedirman University. The first stage of this research is a literature study to identify implementation models from previous research. The results are then combined with a new approach to the existing resources and implemented directly on the existing server network. The results show that the implementation of cloud computing technology can replace the existing network platform.

  17. Spatial Data Infrastructures (SDIs): Improving the Scientific Environmental Data Management and Visualization with ArcGIS Platform

    Science.gov (United States)

    Shrestha, S. R.; Hogeweg, M.; Rose, B.; Turner, A.

    2017-12-01

    The requirement for quality, authoritatively sourced data can often be challenging when working with scientific data. In addition, the lack of a standard mechanism to discover, access, and use such data can be cumbersome. This results in slow research, poor dissemination and missed opportunities for research to positively impact policy and knowledge. There is widespread recognition that authoritative datasets are maintained by multiple organizations following various standards, and addressing these challenges will involve multiple stakeholders as well. The bottom line is that organizations need a mechanism to efficiently create, share, catalog, and discover data, and the ability to apply these to create authoritative information products and applications is powerful and provides value. In real-world applications, individual organizations develop, modify, finalize, and support foundational data for distributed users across the system and thus require an efficient method of data management. For this, the SDI (Spatial Data Infrastructure) framework can be applied for GIS users to facilitate efficient and powerful decision making based on strong visualization and analytics. Working with research institutions, governments, and organizations across the world, we have developed a Hub framework for data and analysis sharing that is driven by outcome-centric goals which apply common methodologies and standards. SDIs are an operational capability that should be equitably accessible to policy-makers, analysts, departments and public communities. These SDIs need to align with operational workflows and support social communications and collaboration. The Hub framework integrates data across agencies, projects and organizations to support interoperability and drive coordination. We will present and share how Esri has been supporting the development of local, state, and national SDIs for many years and show some use cases for applications of planetary SDI. 
We will also share what

  18. Next generation ATCA control infrastructure for the CMS Phase-2 upgrades

    CERN Document Server

    Smith, Wesley; Svetek, Aleš; Tikalsky, Jes; Fobes, Robert; Dasu, Sridhara; Smith, Wesley; Vicente, Marcelo

    2017-01-01

    A next-generation control infrastructure to be used in Advanced TCA (ATCA) blades at the CMS experiment is being designed and tested. Several ATCA systems are being prepared for the High-Luminosity LHC (HL-LHC) and will be installed at CMS during technical stops. The next-generation control infrastructure will provide all the necessary hardware, firmware and software required in these systems, decreasing development time and increasing flexibility. The complete infrastructure includes an Intelligent Platform Management Controller (IPMC), a Module Management Controller (MMC) and an Embedded Linux Mezzanine (ELM) processing card.

  19. Adding temporal data enhancements to the advanced spatial data infrastructure platform

    CSIR Research Space (South Africa)

    Sibolla, B

    2014-10-01

    Full Text Available Users of Spatial Data Infrastructure (SDI) increasingly require provision to data holdings beyond the traditional static raster, map or vector based data sets within their organisations. The modern GIS practitioner and Spatial Data Scientist...

  20. Spatial policy, planning and infrastructure investment: Lessons from ...

    African Journals Online (AJOL)

    More and better evidence, together with an understanding of spatial trends and the underlying forces that shape them, is needed to support planning and infrastructure investment. Urban simulation platforms offer valuable tools in this regard. Findings of simulation work in three metropolitan areas (eThekwini, Nelson Mandela ...

  1. INFRASTRUCTURE

    CERN Multimedia

    A. Gaddi and P. Tropea

    2011-01-01

    Most of the work relating to Infrastructure has been concentrated in the new CSC and RPC manufacturing facility at building 904, on the Prevessin site. Brand new gas distribution, powering and HVAC infrastructures are being deployed and the production of the first CSC chambers has started. Other activities at the CMS site concern the installation of a new small crane bridge in the Cooling technical room in USC55, in order to facilitate the intervention of the maintenance team in case of major failures of the chilled water pumping units. The laser barrack in USC55 has also been the object of a study, requested by the ECAL community, for the new laser system that shall be delivered in a few months. In addition, ordinary maintenance works have been performed during the short machine stops on all the main infrastructures at Point 5 and in preparation for the Year-End Technical Stop (YETS), when most of the systems will be carefully inspected in order to ensure smooth running through the crucial year 2012. After the incide...

  2. Designing for Co-located Social Media Use in the Home - Using the CASOME Infrastructure

    DEFF Research Database (Denmark)

    Petersen, Marianne Graves; Ludvigsen, Martin; Grønbæk, Kaj

    2007-01-01

    A range of research has pointed to empirical studies of the use of domestic materials as a useful insight when designing future interactive systems for homes. In this paper we describe how we designed a system from the basis of lessons from such studies. Our system applies the CASOME infrastructure...... (context-aware interactive media platform for social computing in the home) to construct a system supporting distributed and collaborative handling of digital materials in a domestic context. It contains a collective platform for handling digital materials in the home and also contains a range of connected...... interactive surfaces supporting the flow of digital materials around the physical home. We discuss applications and use scenarios of the system, and finally, we present experiences from lab and field tests of the system. The main contribution of the paper is that it illustrates how insights from empirical...

  3. Towards a renewal of transmission & distribution infrastructures to meet EU 2020 goals

    Energy Technology Data Exchange (ETDEWEB)

    Monizza, Giuliano; Delfino, F.; Denegri, G.B.; Invernizzi, M.; Pampararo, F.; Amann, G.; Bessede, J.-L.; Luxa, A.

    2010-09-15

    This paper is the outcome of a collaboration between T&D Europe - The European Association of the Electricity Transmission and Distribution Equipment and Services Industry and the Electrical Engineering Department of the University of Genoa. It presents a scientific analysis of how much modern products and systems from the electric industry contribute to the European Union's efforts to mitigate climate change. A methodology is proposed in order to quantify the environmental benefits in terms of efficiency increase, CO2 reduction and a wider employ of renewable energy resources and of power quality improvement provided by a renewal of the T&D infrastructures.

  4. Smart Cities Intelligence System (SMACiSYS) Integrating Sensor Web with Spatial Data Infrastructures (sensdi)

    Science.gov (United States)

    Bhattacharya, D.; Painho, M.

    2017-09-01

    The paper endeavours to enhance the Sensor Web with crucial geospatial analysis capabilities through integration with Spatial Data Infrastructure. The objective is development of automated smart cities intelligence system (SMACiSYS) with sensor-web access (SENSDI) utilizing geomatics for sustainable societies. There has been a need to develop automated integrated system to categorize events and issue information that reaches users directly. At present, no web-enabled information system exists which can disseminate messages after events evaluation in real time. Research work formalizes a notion of an integrated, independent, generalized, and automated geo-event analysing system making use of geo-spatial data under popular usage platform. Integrating Sensor Web With Spatial Data Infrastructures (SENSDI) aims to extend SDIs with sensor web enablement, converging geospatial and built infrastructure, and implement test cases with sensor data and SDI. The other benefit, conversely, is the expansion of spatial data infrastructure to utilize sensor web, dynamically and in real time for smart applications that smarter cities demand nowadays. Hence, SENSDI augments existing smart cities platforms utilizing sensor web and spatial information achieved by coupling pairs of otherwise disjoint interfaces and APIs formulated by Open Geospatial Consortium (OGC) keeping entire platform open access and open source. SENSDI is based on Geonode, QGIS and Java, that bind most of the functionalities of Internet, sensor web and nowadays Internet of Things superseding Internet of Sensors as well. In a nutshell, the project delivers a generalized real-time accessible and analysable platform for sensing the environment and mapping the captured information for optimal decision-making and societal benefit.

  5. SMART CITIES INTELLIGENCE SYSTEM (SMACiSYS) INTEGRATING SENSOR WEB WITH SPATIAL DATA INFRASTRUCTURES (SENSDI)

    Directory of Open Access Journals (Sweden)

    D. Bhattacharya

    2017-09-01

    Full Text Available The paper endeavours to enhance the Sensor Web with crucial geospatial analysis capabilities through integration with Spatial Data Infrastructure. The objective is development of automated smart cities intelligence system (SMACiSYS) with sensor-web access (SENSDI) utilizing geomatics for sustainable societies. There has been a need to develop automated integrated system to categorize events and issue information that reaches users directly. At present, no web-enabled information system exists which can disseminate messages after events evaluation in real time. Research work formalizes a notion of an integrated, independent, generalized, and automated geo-event analysing system making use of geo-spatial data under popular usage platform. Integrating Sensor Web With Spatial Data Infrastructures (SENSDI) aims to extend SDIs with sensor web enablement, converging geospatial and built infrastructure, and implement test cases with sensor data and SDI. The other benefit, conversely, is the expansion of spatial data infrastructure to utilize sensor web, dynamically and in real time for smart applications that smarter cities demand nowadays. Hence, SENSDI augments existing smart cities platforms utilizing sensor web and spatial information achieved by coupling pairs of otherwise disjoint interfaces and APIs formulated by Open Geospatial Consortium (OGC), keeping entire platform open access and open source. SENSDI is based on Geonode, QGIS and Java, that bind most of the functionalities of Internet, sensor web and nowadays Internet of Things superseding Internet of Sensors as well. In a nutshell, the project delivers a generalized real-time accessible and analysable platform for sensing the environment and mapping the captured information for optimal decision-making and societal benefit.

  6. METHODS OF MANAGING TRAFFIC DISTRIBUTION IN INFORMATION AND COMMUNICATION NETWORKS OF CRITICAL INFRASTRUCTURE SYSTEMS

    OpenAIRE

    Kosenko, Viktor; Persiyanova, Elena; Belotskyy, Oleksiy; Malyeyeva, Olga

    2017-01-01

    The subject matter of the article is information and communication networks (ICN) of critical infrastructure systems (CIS). The goal of the work is to create methods for managing the data flows and resources of the ICN of CIS to improve the efficiency of information processing. The following tasks were solved in the article: the data flow model of multi-level ICN structure was developed, the method of adaptive distribution of data flows was developed, the method of network resource assignment...

  7. The Czech National Grid Infrastructure

    Science.gov (United States)

    Chudoba, J.; Křenková, I.; Mulač, M.; Ruda, M.; Sitera, J.

    2017-10-01

    The Czech National Grid Infrastructure is operated by MetaCentrum, a CESNET department responsible for coordinating and managing activities related to distributed computing. CESNET, as the Czech National Research and Education Network (NREN), provides many e-infrastructure services, which are used by 94% of the scientific and research community in the Czech Republic. Computing and storage resources owned by different organizations are connected by a sufficiently fast network to provide transparent access to all resources. We describe in more detail the computing infrastructure, which is based on several different technologies and covers grid, cloud and map-reduce environments. While the largest part of CPUs is still accessible via distributed torque servers, providing an environment for long batch jobs, part of the infrastructure is available via standard EGI tools, a subset of NGI resources is provided into the EGI FedCloud environment with a cloud interface, and there is also a Hadoop cluster provided by the same e-infrastructure. A broad spectrum of computing servers is offered; users can choose from standard 2-CPU servers to large SMP machines with up to 6 TB of RAM or servers with GPU cards. Different groups have different priorities on various resources, and resource owners can even have exclusive access. The software is distributed via AFS. Storage servers offering up to tens of terabytes of disk space to individual users are connected via NFS4 on top of GPFS, and access to long-term HSM storage with petabyte capacity is also provided. An overview of available resources and recent statistics of usage will be given.

  8. National software infrastructure for lattice gauge theory

    International Nuclear Information System (INIS)

    Brower, Richard C

    2005-01-01

    The current status of the SciDAC software infrastructure project for lattice gauge theory is summarized. This includes the design of a QCD application programming interface (API) that allows existing and future codes to be run efficiently on Terascale hardware facilities and to be rapidly ported to new dedicated or commercial platforms. The critical components of the API have been implemented and are in use on the US QCDOC hardware at BNL and on both the switched and mesh architecture Pentium 4 clusters at Fermi National Accelerator Laboratory (FNAL) and Thomas Jefferson National Accelerator Facility (JLab). Future software infrastructure requirements and research directions are also discussed

  9. Educational process in modern climatology within the web-GIS platform "Climate"

    Science.gov (United States)

    Gordova, Yulia; Gorbatenko, Valentina; Gordov, Evgeny; Martynova, Yulia; Okladnikov, Igor; Titov, Alexander; Shulgina, Tamara

    2013-04-01

    These days, the problem of training scientists in the environmental sciences, common to all scientific fields, is exacerbated by the need to develop new computational and information technology skills in distributed multi-disciplinary teams. To address this and other pressing problems of Earth system sciences, a software infrastructure for information support of integrated research in the geosciences was created based on modern information and computational technologies, and a software and hardware platform "Climate" (http://climate.scert.ru/) was developed. In addition to the direct analysis of geophysical data archives, the platform is aimed at teaching the basics of the study of changes in regional climate. The educational component of the platform includes a series of lectures on climate, environmental and meteorological modeling and laboratory work cycles on the basics of analysis of current and potential future regional climate change, using the territory of Siberia as an example. The educational process within the Platform is implemented using the distance learning system Moodle (www.moodle.org). This work is partially supported by the Ministry of education and science of the Russian Federation (contract #8345), SB RAS project VIII.80.2.1, RFBR grant #11-05-01190a, and integrated project SB RAS #131.

  10. Pi-Sat: A Low Cost Small Satellite and Distributed Spacecraft Mission System Test Platform

    Science.gov (United States)

    Cudmore, Alan

    2015-01-01

    Current technology and budget trends indicate a shift in satellite architectures from large, expensive single-satellite missions to small, low-cost distributed spacecraft missions. At the center of this shift is the SmallSat/Cubesat architecture. The primary goal of the Pi-Sat project is to create a low-cost and easy-to-use Distributed Spacecraft Mission (DSM) test bed to facilitate the research and development of next-generation DSM technologies and concepts. This test bed also serves as a realistic software development platform for Small Satellite and Cubesat architectures. The Pi-Sat is based on the popular $35 Raspberry Pi single-board computer featuring a 700 MHz ARM processor, 512 MB of RAM, a flash memory card, and a wealth of IO options. The Raspberry Pi runs the Linux operating system and can easily run Code 582's Core Flight System flight software architecture. The low cost and high availability of the Raspberry Pi make it an ideal platform for Distributed Spacecraft Mission and Cubesat software development. The Pi-Sat models currently include a Pi-Sat 1U Cube, a Pi-Sat Wireless Node, and a Pi-Sat Cubesat processor card. The Pi-Sat project takes advantage of many popular trends in the Maker community, including low-cost electronics, 3D printing, and rapid prototyping, in order to provide a realistic platform for flight software testing, training, and technology development. The Pi-Sat has also provided fantastic hands-on training opportunities for NASA summer interns and Pathways students.

  11. Distributed Fracturing Affecting the Isolated Carbonate Platforms, the Latemar Platform (Dolomites, North Italy).

    NARCIS (Netherlands)

    Boro, H.; Bertotti, G.V.; Hardebol, N.J.

    2012-01-01

    Isolated carbonate platforms are highly heterogeneous bodies and are typically composed of laterally juxtaposed first order domains with different sedimentological composition and organization, i.e. a well-stratified platform interior, a massive margin and a slope with steeply dipping and poorly

  12. A novel infrastructure modularity index for the segmentation of water distribution networks

    Science.gov (United States)

    Giustolisi, O.; Ridolfi, L.

    2014-10-01

    The search for suitable segmentations is a challenging and urgent issue for the analysis, planning and management of complex water distribution networks (WDNs). In fact, complex and large-size hydraulic systems require division into modules in order to simplify the analysis and management tasks. In complex network theory, the modularity index has been proposed as a measure of the strength of a network's division into modules, and its maximization is used to identify communities of nodes (i.e., modules) that are characterized by strong interconnections. Nevertheless, the modularity index needs to be revised considering the specificity of hydraulic systems as infrastructure systems. To this aim, the classic modularity index has recently been modified and tailored for WDNs. Nevertheless, the WDN-oriented modularity is affected by the resolution limit stemming from the classic modularity index. Such a limit hampers the identification/design of small modules, and this is a major drawback for technical tasks requiring a detailed resolution of the network segmentation. In order to get over this problem, we propose a novel infrastructure modularity index that is not affected by the resolution limit of the classic one. The rationale and good features of the proposed index are theoretically demonstrated and discussed using two real hydraulic networks.
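
    The abstract above starts from the classic modularity index of complex network theory before tailoring it to WDNs. As a rough illustration of the quantity being maximized, the following pure-Python sketch evaluates Newman's Q = (1/2m) Σ_ij [A_ij − k_i·k_j/(2m)] δ(c_i, c_j) for a toy pipe network; the network and its two-module split are invented for illustration and are not from the paper, whose WDN-oriented, resolution-limit-free variant modifies the null-model term.

    ```python
    # Classic (Newman) modularity of a partition of an undirected graph.
    # Toy example only; the paper's infrastructure modularity index differs.

    def modularity(edges, communities):
        """edges: list of (u, v) pairs; communities: dict node -> module label."""
        m = len(edges)                          # total number of edges
        degree = {}                             # node degrees k_i
        for u, v in edges:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        # Fraction of edges falling inside modules: sum_ij A_ij delta / (2m).
        q = sum(1 for u, v in edges if communities[u] == communities[v]) / m
        # Subtract the expected intra-module fraction under the configuration model.
        for c in set(communities.values()):
            d_c = sum(degree.get(n, 0) for n in communities if communities[n] == c)
            q -= (d_c / (2 * m)) ** 2
        return q

    # Toy 6-node network: two triangles of pipes joined by a single bridge pipe.
    edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
    split = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
    print(round(modularity(edges, split), 3))   # 6/7 - 1/2 -> 0.357
    ```

    Splitting the two triangles apart scores Q ≈ 0.357, while the trivial one-module partition always scores 0; maximization over partitions is what drives the segmentation.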

  13. E-Infrastructure Concertation Meeting

    CERN Multimedia

    Katarina Anthony

    2010-01-01

    The 8th e-Infrastructure Concertation Meeting was held in the Globe from 4 to 5 November to discuss the development of Europe’s distributed computing and storage resources.   Project leaders attend the E-Concertation Meeting at the Globe on 5 November 2010. © Corentin Chevalier E-Infrastructures have become an indispensable tool for scientific research, linking researchers to virtually unlimited e-resources like the grid. The recent e-Infrastructure Concertation Meeting brought together e-Science project leaders to discuss the development of this tool in the European context. The meeting was part of an ongoing initiative to develop a world-class e-infrastructure resource that would establish European leadership in e-Science. The e-Infrastructure Concertation Meeting was organised by the Commission Services (EC) with the support of e-ScienceTalk. “The Concertation meeting at CERN has been a great opportunity for e-ScienceTalk to meet many of the 38 new proje...

  14. Software Engineering Infrastructure in a Large Virtual Campus

    Science.gov (United States)

    Cristobal, Jesus; Merino, Jorge; Navarro, Antonio; Peralta, Miguel; Roldan, Yolanda; Silveira, Rosa Maria

    2011-01-01

    Purpose: The design, construction and deployment of a large virtual campus are a complex issue. Present virtual campuses are made of several software applications that complement e-learning platforms. In order to develop and maintain such virtual campuses, a complex software engineering infrastructure is needed. This paper aims to analyse the…

  15. Science gateways for distributed computing infrastructures development framework and exploitation by scientific user communities

    CERN Document Server

    Kacsuk, Péter

    2014-01-01

    The book describes the science gateway building technology developed in the SCI-BUS European project and its adoption and customization method, by which user communities, such as biologists, chemists, and astrophysicists, can build customized, domain-specific science gateways. Many aspects of the core technology are explained in detail, including its workflow capability, job submission mechanism to various grids and clouds, and its data transfer mechanisms among several distributed infrastructures. The book will be useful for scientific researchers and IT professionals engaged in the develop

  16. The Fermilab data storage infrastructure

    International Nuclear Information System (INIS)

    Jon A Bakken et al.

    2003-01-01

    Fermilab, in collaboration with the DESY laboratory in Hamburg, Germany, has created a petabyte-scale data storage infrastructure to meet the requirements of experiments to store and access large data sets. The Fermilab data storage infrastructure consists of the following major storage and data transfer components: the Enstore mass storage system, the dCache distributed data cache, and FTP and GridFTP for primarily external data transfers. This infrastructure provides a data throughput sufficient for transferring data from experiments' data acquisition systems. It also allows access to data in the Grid framework

  17. Incorporating Human Movement Behavior into the Analysis of Spatially Distributed Infrastructure.

    Directory of Open Access Journals (Sweden)

    Lihua Wu

    Full Text Available For the first time in human history, the majority of the world's population resides in urban areas. Therefore, city managers are faced with new challenges related to the efficiency, equity and quality of the supply of resources, such as water, food and energy. Infrastructure in a city can be viewed as service points providing resources. These service points function together as a spatially collaborative system to serve an increasing population. To study the spatial collaboration among service points, we propose a shared network according to human's collective movement and resource usage based on data usage detail records (UDRs) from the cellular network in a city in western China. This network is shown to be not scale-free, but exhibits an interesting triangular property governed by two types of nodes with very different link patterns. Surprisingly, this feature is consistent with the urban-rural dualistic context of the city. Another feature of the shared network is that it consists of several spatially separated communities that characterize local people's active zones but do not completely overlap with administrative areas. According to these features, we propose the incorporation of human movement into infrastructure classification. The presence of well-defined spatially separated clusters confirms the effectiveness of this approach. In this paper, our findings reveal the spatial structure inside a city, and the proposed approach provides a new perspective on integrating human movement into the study of a spatially distributed system.

  18. Incorporating Human Movement Behavior into the Analysis of Spatially Distributed Infrastructure.

    Science.gov (United States)

    Wu, Lihua; Leung, Henry; Jiang, Hao; Zheng, Hong; Ma, Li

    2016-01-01

    For the first time in human history, the majority of the world's population resides in urban areas. Therefore, city managers are faced with new challenges related to the efficiency, equity and quality of the supply of resources, such as water, food and energy. Infrastructure in a city can be viewed as service points providing resources. These service points function together as a spatially collaborative system to serve an increasing population. To study the spatial collaboration among service points, we propose a shared network according to human's collective movement and resource usage based on data usage detail records (UDRs) from the cellular network in a city in western China. This network is shown to be not scale-free, but exhibits an interesting triangular property governed by two types of nodes with very different link patterns. Surprisingly, this feature is consistent with the urban-rural dualistic context of the city. Another feature of the shared network is that it consists of several spatially separated communities that characterize local people's active zones but do not completely overlap with administrative areas. According to these features, we propose the incorporation of human movement into infrastructure classification. The presence of well-defined spatially separated clusters confirms the effectiveness of this approach. In this paper, our findings reveal the spatial structure inside a city, and the proposed approach provides a new perspective on integrating human movement into the study of a spatially distributed system.
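
    The shared network described in both versions of this abstract links service points through the users they have in common. A minimal sketch of that construction, assuming each usage record reduces to a (user, service point) pair; the actual UDR schema, weighting and filtering in the paper are richer, so treat this only as an outline of the idea.

    ```python
    # Hypothetical shared-network construction: two service points are linked,
    # with weight equal to the number of distinct users whose records touch both.

    from collections import defaultdict
    from itertools import combinations

    def build_shared_network(udrs):
        """udrs: iterable of (user_id, service_point_id) usage records.
        Returns a dict mapping sorted node pairs to shared-user counts."""
        visits = defaultdict(set)               # user -> set of service points used
        for user, point in udrs:
            visits[user].add(point)
        weights = defaultdict(int)              # (point_a, point_b) -> shared users
        for points in visits.values():
            for a, b in combinations(sorted(points), 2):
                weights[(a, b)] += 1
        return dict(weights)

    records = [("u1", "P1"), ("u1", "P2"), ("u2", "P1"), ("u2", "P2"), ("u2", "P3")]
    net = build_shared_network(records)
    print(net[("P1", "P2")])   # both users visited P1 and P2 -> weight 2
    ```

    Community detection on the resulting weighted graph is then what yields the spatially separated clusters the abstract reports.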

  19. Integration Platform As Central Service Of Data Replication In Distributed Medical System

    Directory of Open Access Journals (Sweden)

    Wiesław Wajs

    2007-01-01

    Full Text Available The paper presents the application of the Java Integration Platform (JIP) to data replication in the distributed medical system. After an introductory part on the medical system's architecture, the focus shifts to a comparison of different approaches that exist with regard to transferring data between the system's components. A description is given of the historical data processing and of the whole area of the JIP application to the medical system.

  20. A wireless computational platform for distributed computing based traffic monitoring involving mixed Eulerian-Lagrangian sensing

    KAUST Repository

    Jiang, Jiming

    2013-06-01

    This paper presents a new wireless platform designed for an integrated traffic monitoring system based on combined Lagrangian (mobile) and Eulerian (fixed) sensing. The sensor platform is built around a 32-bit ARM Cortex M4 micro-controller and a 2.4 GHz 802.15.4 ISM-compliant radio module, and can be interfaced with fixed traffic sensors, or receive data from vehicle transponders. The platform is specially designed and optimized to be integrated in a solar-powered wireless sensor network in which traffic flow maps are computed by the nodes directly using distributed computing. An MPPT circuit is proposed to increase the power output of the attached solar panel. A self-recovering unit is designed to increase reliability and allow periodic hard resets, an essential requirement for sensor networks. A radio monitoring circuit is proposed to monitor incoming and outgoing transmissions, simplifying software debugging. An ongoing implementation is briefly discussed, and compared with existing platforms used in wireless sensor networks. © 2013 IEEE.

  1. Opening the SMS platform to users : Deliverable D7.2 - RISIS project

    NARCIS (Netherlands)

    van den Besselaar, P.A.A.; Khalili, A.; de Graaf, K.A.; Idrissou, O.A.K.; van Harmelen, Frank

    2017-01-01

    In this deliverable we describe the SMS (Semantically Mapping Science) data integration platform (http://sms.risis.eu), the technical core within the RISIS data infrastructure for Science, Technology and Innovation Studies (STI). The aim of the platform is to produce richer data to be used in social

  2. Testing Situation Awareness Network for the Electrical Power Infrastructure

    Directory of Open Access Journals (Sweden)

    Rafał Leszczyna

    2016-09-01

    Full Text Available The contemporary electrical power infrastructure is exposed to new types of threats. The cause of such threats is related to the large number of new vulnerabilities and architectural weaknesses introduced by the extensive use of Information and Communication Technologies (ICT) in such complex critical systems. The power grid's interconnection with the Internet exposes the grid to new types of attacks, such as Advanced Persistent Threats (APT) or Distributed Denial-of-Service (DDoS) attacks. When addressing this situation, the usual cyber security technologies are prerequisite, but not sufficient. To counter evolved and highly sophisticated threats such as APT or DDoS, state-of-the-art technologies including Security Incident and Event Management (SIEM) systems, extended Intrusion Detection/Prevention Systems (IDS/IPS) and Trusted Platform Modules (TPM) are required. Developing and deploying an extensive ICT infrastructure that supports wide situational awareness and allows precise command and control is also necessary. In this paper the results of testing the Situational Awareness Network (SAN) designed for the energy sector are presented. The purpose of the tests was to validate the selection of SAN components and check their operational capability in a complex test environment. During the tests' execution, appropriate interaction between the components was verified.

  3. The Microbial Resource Research Infrastructure MIRRI: Strength through Coordination

    Directory of Open Access Journals (Sweden)

    Erko Stackebrandt

    2015-11-01

    Full Text Available Microbial resources have been recognized as essential raw materials for the advancement of health and later for biotechnology, agriculture, food technology and for research in the life sciences, as their enormous abundance and diversity offer an unparalleled source of unexplored solutions. Microbial domain biological resource centres (mBRC) provide live cultures and associated data to foster and support the development of basic and applied science in countries worldwide and especially in Europe, where the density of highly advanced mBRCs is high. The not-for-profit and distributed project MIRRI (Microbial Resource Research Infrastructure) aims to coordinate access to hitherto individually managed resources by developing a pan-European platform which takes the interoperability and accessibility of resources and data to a higher level. Providing a wealth of additional information and linking to datasets such as literature, environmental data, sequences and chemistry will enable researchers to select organisms suitable for their research and enable innovative solutions to be developed. The current independent policies and managed processes will be adapted by partner mBRCs to harmonize holdings, services, training, and accession policy and to share expertise. The infrastructure will improve access to enhanced quality microorganisms in an appropriate legal framework and to resource-associated data in a more interoperable way.

  4. SeaDataNet - Pan-European infrastructure for marine and ocean data management: Unified access to distributed data sets

    Science.gov (United States)

    Schaap, D. M. A.; Maudire, G.

    2009-04-01

    SeaDataNet is an Integrated research Infrastructure Initiative (I3) in EU FP6 (2006 - 2011) to provide the data management system adapted both to the fragmented observation system and the users' need for integrated access to data, metadata, products and services. Therefore SeaDataNet ensures the long-term archiving of the large number of multidisciplinary data (i.e. temperature, salinity, current, sea level, chemical, physical and biological properties) collected by many different sensors installed on board research vessels, satellites and the various platforms of the marine observing system. The SeaDataNet project started in 2006, but builds upon earlier data management infrastructure projects, undertaken over a period of 20 years by an expanding network of oceanographic data centres from the countries around all European seas. Its predecessor project Sea-Search had a strict focus on metadata. SeaDataNet maintains significant interest in the further development of the metadata infrastructure, but its primary objective is the provision of easy data access and generic data products. SeaDataNet is a distributed infrastructure that provides transnational access to marine data, metadata, products and services through 40 interconnected Trans National Data Access Platforms (TAP) from 35 countries around the Black Sea, Mediterranean, North East Atlantic, North Sea, Baltic and Arctic regions. These include National Oceanographic Data Centres (NODCs) and Satellite Data Centres. Furthermore, the SeaDataNet consortium comprises a number of expert modelling centres, SMEs expert in IT, and 3 international bodies (ICES, IOC and JRC). Planning: The SeaDataNet project is delivering and operating the infrastructure in 3 versions. Version 0: maintenance and further development of the metadata systems developed by the Sea-Search project, plus the development of a new metadata system for indexing and accessing individual data objects managed by the SeaDataNet data centres.

  5. Controlling Infrastructure Costs: Right-Sizing the Mission Control Facility

    Science.gov (United States)

    Martin, Keith; Sen-Roy, Michael; Heiman, Jennifer

    2009-01-01

    Johnson Space Center's Mission Control Center is a space vehicle, space program agnostic facility. The current operational design is essentially identical to the original facility architecture that was developed and deployed in the mid-1990s. In an effort to streamline the support costs of the mission critical facility, the Mission Operations Division (MOD) of Johnson Space Center (JSC) has sponsored an exploratory project to evaluate and inject current state-of-the-practice Information Technology (IT) tools, processes and technology into legacy operations. The IT industry has been trending towards a data-centric computing infrastructure for the past several years. Organizations facing challenges with facility operations costs are turning to creative solutions combining hardware consolidation, virtualization and remote access to meet and exceed performance, security, and availability requirements. The Operations Technology Facility (OTF) organization at the Johnson Space Center has been chartered to build and evaluate a parallel Mission Control infrastructure, replacing the existing thick-client distributed computing model and network architecture with a data center model utilizing virtualization to provide the MCC Infrastructure as a Service. The OTF will design a replacement architecture for the Mission Control Facility, leveraging hardware consolidation through the use of blade servers, increasing utilization rates for compute platforms through virtualization while expanding connectivity options through the deployment of secure remote access. The architecture demonstrates the maturity of the technologies generally available in industry today and the ability to successfully abstract the tightly coupled relationship between thick-client software and legacy hardware into a hardware-agnostic "Infrastructure as a Service" capability that can scale to meet future requirements of new space programs and spacecraft. This paper discusses the benefits

  6. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chase Qishi [New Jersey Inst. of Technology, Newark, NJ (United States); Univ. of Memphis, TN (United States); Zhu, Michelle Mengxia [Southern Illinois Univ., Carbondale, IL (United States)

    2016-06-06

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to achieve unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows, as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project aims to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific

  7. Development of a cloud-based Bioinformatics Training Platform.

    Science.gov (United States)

    Revote, Jerico; Watson-Haigh, Nathan S; Quenette, Steve; Bethwaite, Blair; McGrath, Annette; Shang, Catherine A

    2017-05-01

    The Bioinformatics Training Platform (BTP) has been developed to provide access to the computational infrastructure required to deliver sophisticated hands-on bioinformatics training courses. The BTP is a cloud-based solution that is in active use for delivering next-generation sequencing training to Australian researchers at geographically dispersed locations. The BTP was built to provide an easy, accessible, consistent and cost-effective approach to delivering workshops at host universities and organizations with a high demand for bioinformatics training but lacking the dedicated bioinformatics training suites required. To support broad uptake of the BTP, the platform has been made compatible with multiple cloud infrastructures. The BTP is an open-source and open-access resource. To date, 20 training workshops have been delivered to over 700 trainees at over 10 venues across Australia using the BTP. © The Author 2016. Published by Oxford University Press.

  8. The Earth System Grid Federation : an Open Infrastructure for Access to Distributed Geospatial Data

    Science.gov (United States)

    Cinquini, Luca; Crichton, Daniel; Mattmann, Chris; Harney, John; Shipman, Galen; Wang, Feiyi; Ananthakrishnan, Rachana; Miller, Neill; Denvil, Sebastian; Morgan, Mark; hide

    2012-01-01

    The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).

  9. Effect of different types of prosthetic platforms on stress-distribution in dental implant-supported prostheses

    Energy Technology Data Exchange (ETDEWEB)

    Minatel, Lurian [Pró-Reitoria de Pesquisa e Pós-graduação (PRPPG), Universidade do Sagrado Coração, USC, 10–50 Irmã Armindal, Jardim Brasil, Bauru, 17011–160, SP (Brazil); Verri, Fellippo Ramos [Department of Dental Materials and Prosthodontics, Araçatuba Dental School, UNESP - Univ Estadual Paulista, 1193 José Bonifácio Street, Vila Mendonça, Araçatuba 16015–050 (Brazil); Kudo, Guilherme Abu Halawa [Pró-Reitoria de Pesquisa e Pós-graduação (PRPPG), Universidade do Sagrado Coração, USC, 10–50 Irmã Armindal, Jardim Brasil, Bauru, 17011–160, SP (Brazil); Faria Almeida, Daniel Augusto de; Souza Batista, Victor Eduardo de; Aparecido Araujo Lemos, Cleidiel; Piza Pellizzer, Eduardo [Department of Dental Materials and Prosthodontics, Araçatuba Dental School, UNESP - Univ Estadual Paulista, 1193 José Bonifácio Street, Vila Mendonça, Araçatuba 16015–050 (Brazil); and others

    2017-02-01

    A biomechanical analysis of different types of implant connections is relevant to clinical practice because it may impact the longevity of the rehabilitation treatment. Therefore, the objective of this study is to evaluate Morse taper connections and the stress distribution of structures associated with the platform switching (PSW) concept, by obtaining data on the biomechanical behavior of the main structure in relation to the dental implant using the 3-dimensional finite element methodology. Four models were simulated (each containing a single prosthesis over the implant) in the molar region, with the following specifications: M1 and M2 are external hexagonal implants on a regular platform; M3 is an external hexagonal implant using the PSW concept; and M4 is a Morse taper implant. The modeling process involved the use of images from InVesalius CT (computed tomography) processing software, which were refined using Rhinoceros 4.0 and SolidWorks 2011 CAD software. The models were then exported into the finite element program (FEMAP 11.0) to configure the meshes. The models were processed using NEiNastran software. The main results are that M1 (regular diameter 4 mm) had the highest stress concentration area and highest microstrain concentration for bone tissue, dental implants, and the retaining screw (P < 0.05). Using the PSW concept increases the stress concentration in the retaining screw (P < 0.05) more than in the regular platform implant. It was concluded that an increase in diameter is beneficial for stress distribution, and that the PSW concept produced higher stress concentrations in the retaining screw and the crown compared to the regular platform implant. - Highlights: • The external hexagon implants were biomechanically unfavorable. • The Morse taper implant presented the best biomechanical result. • The platform switching concept increased stress in screw-retained prostheses.

  10. Effect of different types of prosthetic platforms on stress-distribution in dental implant-supported prostheses

    International Nuclear Information System (INIS)

    Minatel, Lurian; Verri, Fellippo Ramos; Kudo, Guilherme Abu Halawa; Faria Almeida, Daniel Augusto de; Souza Batista, Victor Eduardo de; Aparecido Araujo Lemos, Cleidiel; Piza Pellizzer, Eduardo

    2017-01-01

    A biomechanical analysis of different types of implant connections is relevant to clinical practice because it may impact the longevity of the rehabilitation treatment. Therefore, the objective of this study is to evaluate Morse taper connections and the stress distribution of structures associated with the platform switching (PSW) concept, by obtaining data on the biomechanical behavior of the main structure in relation to the dental implant using the 3-dimensional finite element methodology. Four models were simulated (each containing a single prosthesis over the implant) in the molar region, with the following specifications: M1 and M2 are external hexagonal implants on a regular platform; M3 is an external hexagonal implant using the PSW concept; and M4 is a Morse taper implant. The modeling process involved the use of images from InVesalius CT (computed tomography) processing software, which were refined using Rhinoceros 4.0 and SolidWorks 2011 CAD software. The models were then exported into the finite element program (FEMAP 11.0) to configure the meshes. The models were processed using NEiNastran software. The main results are that M1 (regular diameter 4 mm) had the highest stress concentration area and highest microstrain concentration for bone tissue, dental implants, and the retaining screw (P < 0.05). Using the PSW concept increases the stress concentration in the retaining screw (P < 0.05) more than in the regular platform implant. It was concluded that an increase in diameter is beneficial for stress distribution, and that the PSW concept produced higher stress concentrations in the retaining screw and the crown compared to the regular platform implant. - Highlights: • The external hexagon implants were biomechanically unfavorable. • The Morse taper implant presented the best biomechanical result. • The platform switching concept increased stress in screw-retained prostheses.

  11. Processing-Efficient Distributed Adaptive RLS Filtering for Computationally Constrained Platforms

    Directory of Open Access Journals (Sweden)

    Noor M. Khan

    2017-01-01

    Full Text Available In this paper, a novel processing-efficient architecture of a group of inexpensive, computationally limited small platforms is proposed for parallel distributed adaptive signal processing (PDASP) operation. The proposed architecture runs computationally expensive procedures, such as the complex adaptive recursive least squares (RLS) algorithm, cooperatively. The PDASP architecture operates properly even if perfect time alignment among the participating platforms is not available. An RLS algorithm applied to MIMO channel estimation is deployed on the proposed architecture. The complexity and processing time of the PDASP scheme with the MIMO RLS algorithm are compared with those of a sequentially operated MIMO RLS algorithm and a linear Kalman filter. It is observed that the PDASP scheme exhibits much lower computational complexity than the sequential MIMO RLS algorithm as well as the Kalman filter. Moreover, at low Doppler rates the proposed architecture reduces processing time by 95.83% and 82.29% compared to the sequentially operated Kalman filter and MIMO RLS algorithm, respectively. Likewise, at high Doppler rates, the proposed architecture reduces processing time by 94.12% and 77.28% compared to the Kalman and RLS algorithms, respectively.
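
    The core recursion that such a scheme distributes across platforms is the standard RLS update. The sketch below is the textbook single-node recursion, not the paper's PDASP partitioning; the forgetting factor, channel taps, and noise level are illustrative values.

```python
import numpy as np

def rls_identify(X, d, lam=0.99, delta=100.0):
    """Estimate weights w so that d[n] ~ w @ X[n] via recursive least squares."""
    n_taps = X.shape[1]
    w = np.zeros(n_taps)
    P = delta * np.eye(n_taps)           # estimate of the inverse correlation matrix
    for x, dn in zip(X, d):
        Px = P @ x
        k = Px / (lam + x @ Px)          # gain vector
        e = dn - w @ x                   # a priori estimation error
        w = w + k * e                    # weight update
        P = (P - np.outer(k, Px)) / lam  # Riccati-style covariance update
    return w

# Identify a hypothetical 3-tap channel from noisy observations
rng = np.random.default_rng(0)
h = np.array([0.5, -0.2, 0.1])
X = rng.standard_normal((500, 3))
d = X @ h + 0.01 * rng.standard_normal(500)
w = rls_identify(X, d)
```

    The O(M^2) cost per sample of the `P` update is exactly the burden that the PDASP architecture spreads across cooperating platforms.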

  12. The Satellite Data Thematic Core Service within the EPOS Research Infrastructure

    Science.gov (United States)

    Manunta, Michele; Casu, Francesco; Zinno, Ivana; De Luca, Claudio; Buonanno, Sabatino; Zeni, Giovanni; Wright, Tim; Hooper, Andy; Diament, Michel; Ostanciaux, Emilie; Mandea, Mioara; Walter, Thomas; Maccaferri, Francesco; Fernandez, Josè; Stramondo, Salvatore; Bignami, Christian; Bally, Philippe; Pinto, Salvatore; Marin, Alessandro; Cuomo, Antonio

    2017-04-01

    EPOS, the European Plate Observing System, is a long-term plan to facilitate the integrated use of data, data products, software and services, available from distributed Research Infrastructures (RI), for solid Earth science in Europe. Indeed, EPOS integrates a large number of existing European RIs belonging to several fields of the Earth science, from seismology to geodesy, near fault and volcanic observatories as well as anthropogenic hazards. The EPOS vision is that the integration of the existing national and trans-national research infrastructures will increase access and use of the multidisciplinary data recorded by the solid Earth monitoring networks, acquired in laboratory experiments and/or produced by computational simulations. The establishment of EPOS will foster the interoperability of products and services in the Earth science field to a worldwide community of users. Accordingly, the EPOS aim is to integrate the diverse and advanced European Research Infrastructures for solid Earth science, and build on new e-science opportunities to monitor and understand the dynamic and complex solid-Earth System. One of the EPOS Thematic Core Services (TCS), referred to as Satellite Data, aims at developing, implementing and deploying advanced satellite data products and services, mainly based on Copernicus data (namely Sentinel acquisitions), for the Earth science community. This work intends to present the technological enhancements, fostered by EPOS, to deploy effective satellite services in a harmonized and integrated way. In particular, the Satellite Data TCS will deploy five services, EPOSAR, GDM, COMET, 3D-Def and MOD, which are mainly based on the exploitation of SAR data acquired by the Sentinel-1 constellation and designed to provide information on Earth surface displacements. 
In particular, the planned services will provide both advanced DInSAR products (deformation maps, velocity maps, deformation time series) and value-added measurements (source model

  13. Requirements for an evaluation infrastructure for reliable pervasive healthcare research

    DEFF Research Database (Denmark)

    Wagner, Stefan Rahr; Toftegaard, Thomas Skjødeberg; Bertelsen, Olav W.

    2012-01-01

    The need for a non-intrusive evaluation infrastructure platform to support research on reliable pervasive healthcare in the unsupervised setting is analyzed and challenges and possibilities are identified. A list of requirements is presented and a solution is suggested that would allow researchers...

  14. Dynamic Collaboration Infrastructure for Hydrologic Science

    Science.gov (United States)

    Tarboton, D. G.; Idaszak, R.; Castillo, C.; Yi, H.; Jiang, F.; Jones, N.; Goodall, J. L.

    2016-12-01

    Data and modeling infrastructure is becoming increasingly accessible to water scientists. HydroShare is a collaborative environment that currently offers water scientists the ability to access modeling and data infrastructure in support of data-intensive modeling and analysis. It supports the sharing of and collaboration around "resources," which are social objects defined to include both data and models in a structured, standardized format. Users collaborate around these objects via comments, ratings, and groups. HydroShare also supports web services and cloud-based computation for the execution of hydrologic models and the analysis and visualization of hydrologic data. However, the quantity and variety of data and modeling infrastructure that can be accessed from environments like HydroShare is increasing. Storage infrastructure can range from one's local PC to campus or organizational storage to storage in the cloud. Modeling or computing infrastructure can range from one's desktop to departmental clusters to national HPC resources to grid and cloud computing resources. How does one orchestrate this growing variety of data and computing infrastructure without having to learn each new system? A common limitation across these systems is the lack of efficient integration between data transport mechanisms and the corresponding high-level services to support large distributed data and compute operations. A scientist running a hydrology model from their desktop may require processing a large collection of files across the aforementioned storage and compute resources and various national databases. To address these community challenges, a proof-of-concept prototype was created integrating HydroShare with RADII (Resource Aware Data-centric collaboration Infrastructure) to provide software infrastructure that enables the comprehensive and rapid dynamic deployment of what we refer to as "collaborative infrastructure." In this presentation we discuss the

  15. Distributed analysis using GANGA on the EGEE/LCG infrastructure

    International Nuclear Information System (INIS)

    Elmsheuser, J; Brochu, F; Harrison, K; Egede, U; Gaidioz, B; Liko, D; Maier, A; Moscicki, J; Muraru, A; Lee, H-C; Romanovsky, V; Soroko, A; Tan, C L

    2008-01-01

    The distributed data analysis using Grid resources is one of the fundamental applications in high energy physics to be addressed and realized before the start of LHC data taking. The need to facilitate access to the resources is very high. In every experiment, up to a thousand physicists will be submitting analysis jobs to the Grid. Appropriate user interfaces and helper applications have to be made available to ensure that all users can use the Grid without much expertise in Grid technology. These tools enlarge the number of Grid users from a few production administrators to potentially all participating physicists. The GANGA job management system (http://cern.ch/ganga), developed as a common project between the ATLAS and LHCb experiments, provides and integrates these kinds of tools. GANGA provides a simple and consistent way of preparing, organizing and executing analysis tasks within the experiment analysis framework, implemented through a plug-in system. It allows trivial switching between running test jobs on a local batch system and running large-scale analyses on the Grid, hiding Grid technicalities. We report on the plug-ins and our experiences of distributed data analysis using GANGA within the ATLAS experiment and the EGEE/LCG infrastructure. The integration of the ATLAS data management system DQ2 into GANGA is a key functionality. In combination with the job splitting mechanism, large numbers of jobs can be sent to the locations of the data, following the ATLAS computing model. GANGA supports tasks of user analysis with reconstructed data and small-scale production of Monte Carlo data

  16. Distributed Database Access in the LHC Computing Grid with CORAL

    CERN Document Server

    Molnár, Z; Düllmann, D; Giacomo, G; Kalkhof, A; Valassi, A; CERN. Geneva. IT Department

    2009-01-01

    The CORAL package is the LCG Persistency Framework foundation for accessing relational databases. From the start, CORAL has been designed to facilitate the deployment of the LHC experiment database applications in a distributed computing environment. In particular we cover: improvements to database service scalability through client connection management; platform-independent, multi-tier scalable database access through connection multiplexing and caching; and a secure authentication and authorisation scheme integrated with existing grid services. We summarize the deployment experience from several experiment productions using the distributed database infrastructure, which is now available in LCG. Finally, we present perspectives for future developments in this area.
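
    The client connection management and multiplexing mentioned above can be illustrated with a minimal pool that shares a bounded set of physical connections among many logical clients. This is a generic sketch of the technique only, not CORAL's actual API; the class and parameter names are hypothetical.

```python
import threading

class ConnectionPool:
    """Share at most max_open physical connections among many logical clients."""
    def __init__(self, connect, max_open=2):
        self._connect = connect              # factory for a physical connection
        self._max_open = max_open
        self._idle = []                      # cached connections ready for reuse
        self._open = 0                       # physical connections created so far
        self._available = threading.Condition()

    def acquire(self):
        with self._available:
            # Block until a cached connection exists or we may open a new one
            while not self._idle and self._open >= self._max_open:
                self._available.wait()
            if self._idle:
                return self._idle.pop()      # multiplex: reuse an idle connection
            self._open += 1
        return self._connect()               # open a new physical connection

    def release(self, conn):
        with self._available:
            self._idle.append(conn)          # keep it cached instead of closing
            self._available.notify()

# Demo with dummy "connections": the third acquire reuses the first connection
pool = ConnectionPool(connect=lambda: object(), max_open=2)
c1, c2 = pool.acquire(), pool.acquire()
pool.release(c1)
c3 = pool.acquire()
```

    Bounding the number of physical connections is what protects the database service from the load of thousands of Grid clients, which is the scalability concern the abstract describes.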

  17. Progress and Challenges in Developing Reference Data Layers for Human Population Distribution and Built Infrastructure

    Science.gov (United States)

    Chen, R. S.; Yetman, G.; de Sherbinin, A. M.

    2015-12-01

    Understanding the interactions between environmental and human systems, and in particular supporting the applications of Earth science data and knowledge in place-based decision making, requires systematic assessment of the distribution and dynamics of human population and the built human infrastructure in conjunction with environmental variability and change. The NASA Socioeconomic Data and Applications Center (SEDAC) operated by the Center for International Earth Science Information Network (CIESIN) at Columbia University has had a long track record in developing reference data layers for human population and settlements and is expanding its efforts on topics such as intercity roads, reservoirs and dams, and energy infrastructure. SEDAC has set as a strategic priority the acquisition, development, and dissemination of data resources derived from remote sensing and socioeconomic data on urban land use change, including temporally and spatially disaggregated data on urban change and rates of change, the built infrastructure, and critical facilities. We report here on a range of past and ongoing activities, including the Global Human Settlements Layer effort led by the European Commission's Joint Research Centre (JRC), the Global Exposure Database for the Global Earthquake Model (GED4GEM) project, the Global Roads Open Access Data Working Group (gROADS) of the Committee on Data for Science and Technology (CODATA), and recent work with ImageCat, Inc. to improve estimates of the exposure and fragility of buildings, road and rail infrastructure, and other facilities with respect to selected natural hazards. New efforts such as the proposed Global Human Settlement indicators initiative of the Group on Earth Observations (GEO) could help fill critical gaps and link potential reference data layers with user needs. We highlight key sectors and themes that require further attention, and the many significant challenges that remain in developing comprehensive, high quality

  18. A microfluidic platform for the rapid determination of distribution coefficients by gravity assisted droplet-based liquid-liquid extraction

    DEFF Research Database (Denmark)

    Poulsen, Carl Esben; Wootton, Robert C. R.; Wolff, Anders

    2015-01-01

    The determination of pharmacokinetic properties of drugs, such as the distribution coefficient, D, is a crucial measurement in pharmaceutical research. Surprisingly, the conventional (gold standard) technique used for D measurements, the shake-flask method, is antiquated and unsuitable...... for the testing of valuable and scarce drug candidates. Herein we present a simple microfluidic platform for the determination of distribution coefficients using droplet-based liquid-liquid extraction. For simplicity, this platform makes use of gravity to enable phase separation for analysis and is 48 times...... the apparent acid dissociation constant, pK', as a proxy for inter-system comparison. Our platform determines a pK' value of 7.24 ± 0.15, compared to 7.25 ± 0.58 for the shake-flask method in our hands and 7.21 for the shake-flask method in the literature. Devices are fabricated using injection moulding, the batch...
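
    The link between the distribution coefficient D and the apparent pK' used above for inter-system comparison follows the standard monoprotic-acid relation, log D(pH) = log P − log10(1 + 10^(pH − pKa)). The sketch below uses illustrative values (a hypothetical log P, and a pK' chosen merely to mirror the scale reported in the abstract), not the paper's measured data.

```python
import math

def log_d(ph: float, log_p: float, pka: float) -> float:
    """log D of a monoprotic acid: only the neutral fraction partitions
    into the organic phase (standard Henderson-Hasselbalch treatment)."""
    return log_p - math.log10(1.0 + 10.0 ** (ph - pka))

log_p, pk = 2.0, 7.24                  # illustrative, not the paper's compound
at_pk = log_d(pk, log_p, pk)           # at pH = pK', half the compound is ionised
well_below = log_d(2.0, log_p, pk)     # far below pK': fully neutral, log D ~ log P
```

    Fitting this curve to log D measured at several pH values yields the apparent pK', which is why it serves as a convenient proxy when comparing the droplet platform against the shake-flask method.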

  19. Development of Distributed Simulation Platform for Power Systems and Wind Farms

    DEFF Research Database (Denmark)

    Hu, Rui; Hu, Weihao; Chen, Zhe

    2015-01-01

    The study of wind power systems strongly relies on simulations of all kinds. In industry, the feasibility and efficiency of wind power projects are also first verified by simulation. However, taking time cost and economy into consideration, large-scale simulations often...... sacrifice model detail or computing precision in order to obtain acceptable results at higher simulation speed and lower hardware cost. To balance this trade-off between cost and performance, in this paper, a novel distributed simulation platform based on a PC network and Matlab is proposed. Compared......

  20. Staghorn: An Automated Large-Scale Distributed System Analysis Platform

    Energy Technology Data Exchange (ETDEWEB)

    Gabert, Kasimir [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Burns, Ian [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Elliott, Steven [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kallaher, Jenna [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vail, Adam [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-09-01

    Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.

  1. A platform independent communication library for distributed computing

    NARCIS (Netherlands)

    Groen, D.; Rieder, S.; Grosso, P.; de Laat, C.; Portegies Zwart, S.

    2010-01-01

    We present MPWide, a platform-independent communication library for performing message passing between supercomputers. Our library couples several local MPI applications through a long-distance network using, for example, optical links. The implementation is deliberately kept light-weight, platform
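
    The basic primitive a library of this kind builds on is framed message passing over a long-distance TCP stream. The sketch below shows length-prefixed framing with a loopback echo demo; it is an assumed illustration of the technique, not MPWide's actual API.

```python
import socket
import struct
import threading

def send_msg(sock: socket.socket, payload: bytes) -> None:
    """Frame a message with a 4-byte big-endian length prefix."""
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """TCP delivers a byte stream, so loop until exactly n bytes arrive."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the stream")
        buf += chunk
    return buf

def recv_msg(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)

# Loopback demo: one "site" echoes a message back to the other
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

def echo_once():
    conn, _ = server.accept()
    send_msg(conn, recv_msg(conn))
    conn.close()

t = threading.Thread(target=echo_once)
t.start()
client = socket.create_connection(server.getsockname())
send_msg(client, b"coupled MPI payload")
reply = recv_msg(client)
t.join()
client.close()
server.close()
```

    In a real deployment the socket would cross a wide-area link between clusters, typically with multiple parallel streams to fill high-latency, high-bandwidth paths.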

  2. Second report of the national platform electromobility; Zweiter Bericht der Nationalen Plattform Elektromobilitaet

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-05-15

    In the National Platform Electromobility, representatives from industry, science, politics, unions and societies in Germany agreed on a systematic, market-oriented and technology-open approach to make Germany the lead supplier of, and lead market for, electromobility by 2020. A central component of the targeted German lead market for electromobility is an intelligent energy system: integration of electricity from renewable energy sources, demand-oriented development of a public charging infrastructure, and development of an innovative charging structure. The common goal of the National Electromobility Platform is to build a self-sustaining market for electric vehicles, and the Platform will support the implementation of these intentions and measures. The underlying assumptions are reviewed annually and the derived recommendations adapted if necessary; to this end, the National Electromobility Platform produces an annual progress report. At the end of the market preparation phase in 2014, the market ramp-up, demand for public infrastructure, costs and funding approaches for market acceleration, as well as research and development, will be re-evaluated.

  3. HEYCITY: A SOCIAL-ORIENTED APPLICATION AND PLATFORM ON THE CLOUD

    Directory of Open Access Journals (Sweden)

    Maximiliano Rasguido

    2015-07-01

    Full Text Available Every city has problems related to infrastructure, services or security. Unfortunately, most cities lack an on-line service or platform where citizens can report those problems. Typically there is only an email address or a phone number that can be called only during working hours, which often results in frustration and long processing times before local authorities can solve the problems. In this article, we present HeyCity, a social-oriented application and platform that addresses this issue by making the citizen an active part of the solution, thereby increasing responsibility. HeyCity provides a technological answer where users can report problems using their smartphones and collaborate with other citizens and local authorities to solve them. To be able to handle a large number of users distributed across different cities, HeyCity was deployed on the Cloud. We describe the design, development, deployment and execution of HeyCity on state-of-the-art Cloud services and tools, and the technical choices that we made.

  4. An E-government Interoperability Platform Supporting Personal Data Protection Regulations

    OpenAIRE

    González, Laura; Echevarría, Andrés; Morales, Dahiana; Ruggia, Raúl

    2016-01-01

    Public agencies are increasingly required to collaborate with each other in order to provide high-quality e-government services. This collaboration is usually based on the service-oriented approach and supported by interoperability platforms. Such platforms are specialized middleware-based infrastructures enabling the provision, discovery and invocation of interoperable software services. In turn, given that personal data handled by governments are often very sensitive, most governments have ...

  5. Two-Dimensional Key Table-Based Group Key Distribution in Advanced Metering Infrastructure

    OpenAIRE

    Woong Go; Jin Kawk

    2014-01-01

    A smart grid provides two-way communication by using information and communication technology. In order to establish two-way communication, the advanced metering infrastructure (AMI) is used in the smart grid as the core infrastructure. This infrastructure consists of smart meters, data collection units, maintenance data management systems, and so on. However, potential security problems of the AMI increase owing to its use of the public network. This is because the transmitted in...

  6. Next-Generation Navigational Infrastructure and the ATLAS Event Store

    CERN Document Server

    van Gemmeren, P; The ATLAS collaboration; Nowak, M

    2014-01-01

    The ATLAS event store employs a persistence framework with extensive navigational capabilities. These include real-time back navigation to upstream processing stages, externalizable data object references, navigation from any data object to any other both within a single file and across files, and more. The 2013-2014 shutdown of the Large Hadron Collider provides an opportunity to enhance this infrastructure in several ways that both extend these capabilities and allow the collaboration to better exploit emerging computing platforms. Enhancements include redesign with efficient file merging in mind, content-based indices in optimized reference types, and support for forward references. The latter provide the potential to construct valid references to data before those data are written, a capability that is useful in a variety of multithreading, multiprocessing, distributed processing, and deferred processing scenarios. This paper describes the architecture and design of the next generation of ATLAS navigation...

  7. Next-Generation Navigational Infrastructure and the ATLAS Event Store

    CERN Document Server

    van Gemmeren, P; The ATLAS collaboration; Nowak, M

    2013-01-01

    The ATLAS event store employs a persistence framework with extensive navigational capabilities. These include real-time back navigation to upstream processing stages, externalizable data object references, navigation from any data object to any other both within a single file and across files, and more. The 2013-2014 shutdown of the Large Hadron Collider provides an opportunity to enhance this infrastructure in several ways that both extend these capabilities and allow the collaboration to better exploit emerging computing platforms. Enhancements include redesign with efficient file merging in mind, content-based indices in optimized reference types, and support for forward references. The latter provide the potential to construct valid references to data before those data are written, a capability that is useful in a variety of multithreading, multiprocessing, distributed processing, and deferred processing scenarios. This paper describes the architecture and design of the next generation of ATLAS navigation...

  8. BrainBrowser: distributed, web-based neurological data visualization

    Directory of Open Access Journals (Sweden)

    Tarek eSherif

    2015-01-01

Full Text Available Recent years have seen massive, distributed datasets become the norm in neuroimaging research, and the methodologies used to analyze them have, in response, become more collaborative and exploratory. Tools and infrastructure are continuously being developed and deployed to facilitate research in this context: grid computation platforms to process the data, distributed data stores to house and share them, high-speed networks to move them around and collaborative, often web-based, platforms to provide access to and sometimes manage the entire system. BrainBrowser is a lightweight, high-performance JavaScript visualization library built to provide easy-to-use, powerful, on-demand visualization of remote datasets in this new research environment. BrainBrowser leverages modern Web technologies, such as WebGL, HTML5 and Web Workers, to visualize 3D surface and volumetric neuroimaging data in any modern web browser without requiring any browser plugins. It is thus trivial to integrate BrainBrowser into any web-based platform. BrainBrowser is simple enough to produce a basic web-based visualization in a few lines of code, while at the same time being robust enough to create full-featured visualization applications. BrainBrowser can dynamically load the data required for a given visualization, so no network bandwidth needs to be wasted on data that will not be used. BrainBrowser's integration into the standardized web platform also allows users to consider using 3D data visualization in novel ways, such as for data distribution, data sharing and dynamic online publications. BrainBrowser is already being used in two major online platforms, CBRAIN and LORIS, and has been used to make the 1TB MACACC dataset openly accessible.
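The on-demand loading the abstract describes can be sketched as a simple lazy-fetch pattern: a dataset object holds only a URL until its data are first accessed, so unused datasets cost no bandwidth. This is an illustration of the pattern only, not BrainBrowser's actual API; `LazyDataset` and `fake_fetch` are invented names.

```python
class LazyDataset:
    """Fetches its payload only on first access, so datasets that are
    never visualized are never downloaded (a sketch of the pattern)."""
    def __init__(self, url, fetch):
        self.url = url
        self._fetch = fetch   # callable that performs the actual download
        self._data = None     # nothing is transferred at construction time

    @property
    def data(self):
        if self._data is None:          # first access triggers one fetch
            self._data = self._fetch(self.url)
        return self._data

fetch_log = []

def fake_fetch(url):
    """Stand-in for an HTTP request; records which URLs were fetched."""
    fetch_log.append(url)
    return {"url": url, "vertices": []}   # placeholder surface data

surface = LazyDataset("https://example.org/brain.obj", fake_fetch)
volume = LazyDataset("https://example.org/brain.mnc", fake_fetch)
_ = surface.data   # only the surface is actually downloaded; the volume is not
```

Repeated accesses hit the cached copy, so each dataset is transferred at most once.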

  9. Using and Designing Platforms for In Vivo Education Experiments

    OpenAIRE

Williams, Joseph Jay; Ostrow, Korinn; Xiong, Xiaolu; Glassman, Elena; Kim, Juho; Maldonado, Samuel G.; Li, Na; Reich, Justin; Heffernan, Neil

    2015-01-01

In contrast to typical laboratory experiments, the everyday use of online educational resources by large populations and the prevalence of software infrastructure for A/B testing lead us to consider how platforms can embed in vivo experiments that do not merely support research, but ensure practical improvements to their educational components. Examples are presented of randomized experimental comparisons conducted by subsets of the authors in three widely used online educational platforms K...

  10. Flexible Description and Adaptive Processing of Earth Observation Data through the BigEarth Platform

    Science.gov (United States)

    Gorgan, Dorian; Bacu, Victor; Stefanut, Teodor; Nandra, Cosmin; Mihon, Danut

    2016-04-01

Earth Observation data repositories, extending periodically by several terabytes, are becoming a critical issue for organizations. The management of the storage capacity of such big datasets, the access policy, data protection, searching, and complex processing entail high costs, which calls for efficient solutions that balance the cost and value of data. Data create value only when they are used, and data protection has to be oriented toward allowing innovation, which sometimes depends on creative people achieving unexpectedly valuable results in a flexible and adaptive manner. Users need to describe and experiment with different complex algorithms themselves, through analytics, in order to valorize the data. Analytics uses descriptive and predictive models to gain valuable knowledge and information from data analysis. Possible solutions for advanced processing of big Earth Observation data are offered by HPC platforms such as the cloud. With platforms becoming more complex and heterogeneous, developing applications is even harder, and efficiently mapping these applications to a suitable and optimal platform, working on huge distributed data repositories, is just as challenging and complex, even with specialized software services. From the user's point of view, an optimal environment gives acceptable execution times, offers a high level of usability by hiding the complexity of the computing infrastructure, and supports open accessibility and control over application entities and functionality. The BigEarth platform [1] supports the entire flow of flexible description of processing by basic operators and adaptive execution over cloud infrastructure [2]. The basic modules of the pipeline, such as the KEOPS [3] set of basic operators, the WorDeL language [4], the Planner for sequential and parallel processing, and the Executor through virtual machines, are detailed as the main components of the BigEarth platform [5]. 
The presentation exemplifies the development
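The describe-plan-execute flow the abstract outlines (basic operators, a workflow description, a planner for sequential/parallel processing, an executor) can be sketched in a few lines. The operator names, the tiny workflow format, and the NDVI example below are invented for illustration, not the actual KEOPS/WorDeL design.

```python
# Stand-ins for a KEOPS-like basic-operator set
OPERATORS = {
    "ndvi": lambda nir, red: [(n - r) / (n + r) for n, r in zip(nir, red)],
    "threshold": lambda xs: [x > 0.3 for x in xs],
}

def plan(steps, initial_inputs):
    """Group steps into sequential stages; every step in a stage has all of
    its inputs available, so steps within a stage could run in parallel."""
    available, stages, remaining = set(initial_inputs), [], list(steps)
    while remaining:
        ready = [s for s in remaining if set(s["inputs"]) <= available]
        if not ready:
            raise ValueError("cyclic or unsatisfiable workflow description")
        stages.append(ready)
        available |= {s["output"] for s in ready}
        remaining = [s for s in remaining if s not in ready]
    return stages

def execute(stages, data):
    for stage in stages:        # stages run in order...
        for step in stage:      # ...steps inside a stage are independent
            fn = OPERATORS[step["op"]]
            data[step["output"]] = fn(*(data[k] for k in step["inputs"]))
    return data

# A two-step description; the planner reorders it by data dependencies.
workflow = [
    {"op": "threshold", "inputs": ["ndvi"], "output": "veg_mask"},
    {"op": "ndvi", "inputs": ["nir", "red"], "output": "ndvi"},
]
data = execute(plan(workflow, ["nir", "red"]),
               {"nir": [0.8, 0.4], "red": [0.2, 0.3]})
```

The planner's stage grouping is where a real system would dispatch work to virtual machines; here the executor simply runs stages in order.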

  11. CERN printing infrastructure

    International Nuclear Information System (INIS)

    Otto, R; Sucik, J

    2008-01-01

For many years CERN had a very sophisticated print server infrastructure [13] which supported several different protocols (AppleTalk, IPX and TCP/IP) and many different printing standards. Today's situation differs a lot: we have a much more homogeneous network infrastructure, where TCP/IP is used everywhere, and we have fewer printer models, almost all of which work using current standards (i.e. they all provide PostScript drivers). This change gave us the opportunity to review the printing architecture with the aim of simplifying the infrastructure in order to achieve full automation of the service. The new infrastructure offers both an LPD service, exposing print queues to Linux and Mac OS X computers, and native printing for Windows-based clients. Printer driver distribution is automatic and native on Windows, and automated by custom mechanisms on Linux, where the appropriate Foomatic drivers are configured. The process of printer registration and queue creation is also completely automated, following the printer registration in the network database. At the end of 2006 we moved all (∼1200) CERN printers and all users' connections at CERN to the new service. This paper will describe the new architecture and summarize the process of migration
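The automation described (queue creation driven by printer registrations in a network database, with a per-model driver choice and a PostScript fallback) might look roughly like the sketch below. The registration fields, driver map, and queue format are assumptions for illustration, not CERN's actual schema.

```python
# Hypothetical model -> driver mapping (Foomatic names on Linux)
DRIVER_MAP = {
    "HP LaserJet 4250": "foomatic:HP-LaserJet_4250",
    "Xerox Phaser 6360": "foomatic:Xerox-Phaser_6360",
}

def build_queue(registration):
    """Derive an LPD print-queue definition from one network-database entry."""
    return {
        "queue": registration["name"].lower(),
        "uri": "lpd://{host}/{name}".format(host=registration["hostname"],
                                            name=registration["name"]),
        # current printers all speak PostScript, so unknown models get
        # a generic PostScript driver rather than failing registration
        "driver": DRIVER_MAP.get(registration["model"], "generic-postscript"),
    }

q = build_queue({"name": "40-1D-COR",
                 "hostname": "printer-40-1d.cern.ch",
                 "model": "HP LaserJet 4250"})
```

Running this for every row of the registration database would yield the full set of queues with no manual steps, which is the point of the redesign.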

  12. Policy Framework for the Next Generation Platform as a Service

    DEFF Research Database (Denmark)

    Kentis, Angelos Mimidis; Ollora Zaballa, Eder; Soler, José

    2018-01-01

The Platform-as-a-Service (PaaS) model allows service providers to build and deploy their services following streamlined work-flows. However, platforms deployed through the PaaS model can be very diverse in terms of technologies and involved subsystems (e.g. infrastructure, orchestration). Thus, the means for deploying and managing a service can significantly vary depending on the deployed platform. To address this issue, this paper proposes a policy-based framework designed for the Next Generation Platform-as-a-Service (NGPaaS). This framework allows service providers to define platform-wide and technology-agnostic policies to NGPaaS, by means of abstraction of the underlying platforms and the use of generic interfaces. The paper also presents a specific use case for the proposed framework, which targets network-oriented policies.
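The abstraction the abstract describes (one technology-agnostic policy, applied through generic interfaces to heterogeneous platforms) is essentially the adapter pattern. The sketch below illustrates that idea; all class and method names are invented, and the "actions" are just returned strings standing in for real platform API calls.

```python
from abc import ABC, abstractmethod

class PlatformAdapter(ABC):
    """Generic interface: policies are written once against this class."""
    @abstractmethod
    def apply_bandwidth_limit(self, service: str, mbps: int) -> str: ...

class KubernetesAdapter(PlatformAdapter):
    def apply_bandwidth_limit(self, service, mbps):
        # a real adapter would call the platform's API; we return the intent
        return f"k8s: limit {service} to {mbps} Mbps"

class OpenStackAdapter(PlatformAdapter):
    def apply_bandwidth_limit(self, service, mbps):
        return f"openstack: limit {service} to {mbps} Mbps"

def enforce(policy, adapters):
    """Fan one platform-wide, technology-agnostic policy out to every
    underlying platform through its adapter."""
    return [a.apply_bandwidth_limit(policy["service"], policy["mbps"])
            for a in adapters]

actions = enforce({"service": "video-gw", "mbps": 100},
                  [KubernetesAdapter(), OpenStackAdapter()])
```

A network-oriented policy like the paper's use case then needs only one definition, regardless of how many platform technologies sit underneath.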

  13. Public-private collaboration in spatial data infrastructure: Overview of exposure, acceptance and sharing platform in Malaysia

    Science.gov (United States)

    Othman, Raha binti; Bakar, Muhamad Shahbani Abu; Mahamud, Ku Ruhana Ku

    2017-10-01

While the Spatial Data Infrastructure (SDI) has been established in Malaysia, its full potential can be further realized. To a large degree, geospatial industry users are hopeful that they can easily get access to the system and start utilizing the data. Some users expect the SDI to provide them with readily available data without the necessary steps of requesting the data from the data providers, or the steps of processing and preparing the data for their use. Some further argue that the usability of the system can be improved if an appropriate combination of data sharing and focused applications is found within the services. In order to address the current challenges and to enhance the effectiveness of the SDI in Malaysia, there is the possibility of establishing a collaborative business venture between public and private entities, which can help address these issues and expectations. In this paper, we discuss the possibility of collaboration between these two types of entities. Interviews with seven entities were held to collect information on exposure, acceptance and the sharing platform. The outcomes indicate that though the growth of GIS technology and the high level of technology acceptance provide a solid basis for utilizing geospatial data, the absence of a concrete policy on data sharing, of quality geospatial data, and of an authoritative coordinating agency leaves a vacuum for the successful implementation of the SDI initiative.

  14. Developing a grid infrastructure in Cuba

    Energy Technology Data Exchange (ETDEWEB)

    Lopez Aldama, D.; Dominguez, M.; Ricardo, H.; Gonzalez, A.; Nolasco, E.; Fernandez, E.; Fernandez, M.; Sanchez, M.; Suarez, F.; Nodarse, F.; Moreno, N.; Aguilera, L.

    2007-07-01

A grid infrastructure was deployed at the Centro de Gestion de la Informacion y Desarrollo de la Energia (CUBAENERGIA) in the frame of the EELA project and of a national initiative for developing a Cuban Network for Science. A stand-alone model was adopted to overcome connectivity limitations. The e-infrastructure is based on gLite-3.0 middleware and is fully compatible with the EELA infrastructure. Afterwards, the work focused on grid applications. The application GATE was deployed from the very beginning for biomedical users. Further, two applications were deployed on the local grid infrastructure: MOODLE for e-learning and AERMOD for assessment of local dispersion of atmospheric pollutants. Additionally, our local grid infrastructure was made interoperable with a Java-based distributed system for bioinformatics calculations. This experience could be considered a suitable approach for national networks with weak Internet connections. (Author)

  15. Using Distributed Data over HBase in Big Data Analytics Platform for Clinical Services

    Directory of Open Access Journals (Sweden)

    Dillon Chrimes

    2017-01-01

Full Text Available Big data analytics (BDA) is important to reduce healthcare costs. However, there are many challenges of data aggregation, maintenance, integration, translation, analysis, and security/privacy. The study objective to establish an interactive BDA platform with simulated patient data using open-source software technologies was achieved by construction of a platform framework with Hadoop Distributed File System (HDFS) using HBase (key-value NoSQL database). Distributed data structures were generated from benchmarked hospital-specific metadata of nine billion patient records. At optimized iteration, HDFS ingestion of HFiles to HBase store files revealed sustained availability over hundreds of iterations; however, to complete MapReduce to HBase required a week (for 10 TB) and a month for three billion (30 TB) indexed patient records, respectively. Found inconsistencies of MapReduce limited the capacity to generate and replicate data efficiently. Apache Spark and Drill showed high performance with high usability for technical support but poor usability for clinical services. Hospital system based on patient-centric data was challenging in using HBase, whereby not all data profiles were fully integrated with the complex patient-to-hospital relationships. However, we recommend using HBase to achieve secured patient data while querying entire hospital volumes in a simplified clinical event model across clinical services.
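The "simplified clinical event model" queried over HBase relies on a key idea worth illustrating: because an HBase table keeps row keys sorted, a patient-centric key layout lets one prefix scan retrieve a patient's whole history. The sketch below mimics that with a tiny sorted key-value store; the key format and event fields are invented, not the study's actual schema.

```python
from bisect import insort, bisect_left

class TinyKVStore:
    """Stand-in for an HBase table: keys kept sorted, supports prefix scans."""
    def __init__(self):
        self._keys, self._rows = [], {}

    def put(self, key, value):
        if key not in self._rows:
            insort(self._keys, key)   # keep keys sorted, as HBase does
        self._rows[key] = value

    def scan_prefix(self, prefix):
        i = bisect_left(self._keys, prefix)
        while i < len(self._keys) and self._keys[i].startswith(prefix):
            yield self._keys[i], self._rows[self._keys[i]]
            i += 1

def event_key(patient_id, iso_time, event_type):
    # sorts by patient, then time: one prefix scan = one patient's history
    return f"{patient_id}|{iso_time}|{event_type}"

store = TinyKVStore()
store.put(event_key("P0001", "2017-03-01T10:00", "admission"), {"ward": "ER"})
store.put(event_key("P0001", "2017-03-02T09:30", "lab"), {"test": "CBC"})
store.put(event_key("P0002", "2017-03-01T11:15", "admission"), {"ward": "ICU"})
history = list(store.scan_prefix("P0001|"))   # P0001's events, time-ordered
```

The same prefix-scan idiom is what makes "querying entire hospital volumes" per patient cheap in a key-value store, while cross-patient relational queries remain the hard part, as the study found.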

  16. Using Distributed Data over HBase in Big Data Analytics Platform for Clinical Services.

    Science.gov (United States)

    Chrimes, Dillon; Zamani, Hamid

    2017-01-01

    Big data analytics (BDA) is important to reduce healthcare costs. However, there are many challenges of data aggregation, maintenance, integration, translation, analysis, and security/privacy. The study objective to establish an interactive BDA platform with simulated patient data using open-source software technologies was achieved by construction of a platform framework with Hadoop Distributed File System (HDFS) using HBase (key-value NoSQL database). Distributed data structures were generated from benchmarked hospital-specific metadata of nine billion patient records. At optimized iteration, HDFS ingestion of HFiles to HBase store files revealed sustained availability over hundreds of iterations; however, to complete MapReduce to HBase required a week (for 10 TB) and a month for three billion (30 TB) indexed patient records, respectively. Found inconsistencies of MapReduce limited the capacity to generate and replicate data efficiently. Apache Spark and Drill showed high performance with high usability for technical support but poor usability for clinical services. Hospital system based on patient-centric data was challenging in using HBase, whereby not all data profiles were fully integrated with the complex patient-to-hospital relationships. However, we recommend using HBase to achieve secured patient data while querying entire hospital volumes in a simplified clinical event model across clinical services.

  17. Using Distributed Data over HBase in Big Data Analytics Platform for Clinical Services

    Science.gov (United States)

    Zamani, Hamid

    2017-01-01

    Big data analytics (BDA) is important to reduce healthcare costs. However, there are many challenges of data aggregation, maintenance, integration, translation, analysis, and security/privacy. The study objective to establish an interactive BDA platform with simulated patient data using open-source software technologies was achieved by construction of a platform framework with Hadoop Distributed File System (HDFS) using HBase (key-value NoSQL database). Distributed data structures were generated from benchmarked hospital-specific metadata of nine billion patient records. At optimized iteration, HDFS ingestion of HFiles to HBase store files revealed sustained availability over hundreds of iterations; however, to complete MapReduce to HBase required a week (for 10 TB) and a month for three billion (30 TB) indexed patient records, respectively. Found inconsistencies of MapReduce limited the capacity to generate and replicate data efficiently. Apache Spark and Drill showed high performance with high usability for technical support but poor usability for clinical services. Hospital system based on patient-centric data was challenging in using HBase, whereby not all data profiles were fully integrated with the complex patient-to-hospital relationships. However, we recommend using HBase to achieve secured patient data while querying entire hospital volumes in a simplified clinical event model across clinical services. PMID:29375652

  18. Office of Aviation Safety Infrastructure -

    Data.gov (United States)

    Department of Transportation — The Office of Aviation Safety Infrastructure (AVS INF) provides authentication and access control to AVS network resources for users. This is done via a distributed...

  19. Web-GIS platform for monitoring and forecasting of regional climate and ecological changes

    Science.gov (United States)

    Gordov, E. P.; Krupchatnikov, V. N.; Lykosov, V. N.; Okladnikov, I.; Titov, A. G.; Shulgina, T. M.

    2012-12-01

The growing volume of environmental data from sensors and model outputs makes the development of a software infrastructure, based on modern information and telecommunication technologies, for the support of integrated scientific research in the Earth sciences an urgent and important task (Gordov et al, 2012; van der Wel, 2005). It should be considered that the inherent heterogeneity of datasets obtained from different sources and institutions not only hampers the interchange of data and analysis results but also complicates their intercomparison, leading to a decrease in the reliability of analysis results. However, modern geophysical data processing techniques allow different technological solutions to be combined for organizing such information resources. It is now generally accepted that an information-computational infrastructure should rely on the potential of combined usage of web and GIS technologies for creating applied information-computational web systems (Titov et al, 2009; Gordov et al, 2010; Gordov, Okladnikov and Titov, 2011). Using these approaches for the development of internet-accessible thematic information-computational systems, and arranging data and knowledge interchange between them, is a very promising way of creating a distributed information-computational environment for supporting multidisciplinary regional and global research in the field of Earth sciences, including analysis of climate changes and their impact on the spatial-temporal distribution and state of vegetation. An experimental software and hardware platform providing the operation of a web-oriented production and research center for regional climate change investigations, which combines a modern Web 2.0 approach, GIS functionality and capabilities of running climate and meteorological models, large geophysical dataset processing, visualization, joint software development by distributed research groups, scientific analysis and the organization of student and post-graduate education, is

  20. Elements for a Geopolitics of Infrastructure Megaprojects in Latin America and Colombia

    Directory of Open Access Journals (Sweden)

    Fabio Vladimir Sánchez Calderón

    2008-01-01

Full Text Available This paper develops a critical approach to projects and initiatives of supranational physical connection through infrastructure improvement for transport and energy, focusing specifically on their socio-spatial implications at the national and local levels. To this end, we examine how infrastructure projects in Colombia are linked to two sub-continental initiatives that seek to build a physical platform for the region: the Plan Puebla Panama (PPP) for Central America (including Colombia), and the Initiative for Regional Integration of South America (IIRSA).

  1. Towards a single seismological service infrastructure in Europe

    Science.gov (United States)

    Spinuso, A.; Trani, L.; Frobert, L.; Van Eck, T.

    2012-04-01

In the last five years, services and data providers within the seismological community in Europe have focused their efforts on migrating their archives towards a Service Oriented Architecture (SOA). This process pragmatically follows technological trends and available solutions, aiming at effectively improving all data stewardship activities. These advancements are possible thanks to the cooperation and follow-ups of several EC infrastructural projects that, by looking at general-purpose techniques, combine their developments, envisioning a multidisciplinary platform for Earth observation as the final common objective (EPOS, the European Plate Observing System). One of the first results of this effort is the Earthquake Data Portal (http://www.seismicportal.eu), which provides a collection of tools to discover, visualize and access a variety of seismological data sets like seismic waveforms, accelerometric data, earthquake catalogs and parameters. The Portal offers a cohesive distributed search environment, linking data search and access across multiple data providers through interactive web services, map-based tools and diverse command-line clients. Our work continues under other EU FP7 projects; here we address initiatives in two of them. The NERA (Network of European Research Infrastructures for Earthquake Risk Assessment and Mitigation) project will implement a Common Services Architecture based on OGC service APIs, in order to provide Resource-Oriented common interfaces across the data access and processing services. This will improve interoperability between tools and across projects, enabling the development of higher-level applications that can uniformly access the data and processing services of all participants. This effort will be conducted jointly with the VERCE project (Virtual Earthquake and Seismology Research Community for Europe). 
VERCE aims to enable seismologists to exploit the wealth of seismic data

  2. Health-e-Child: a grid platform for european paediatrics

    International Nuclear Information System (INIS)

    Skaburskas, K; Estrella, F; Shade, J; Manset, D; Revillard, J; Rios, A; Anjum, A; Branson, A; Bloodsworth, P; Hauer, T; McClatchey, R; Rogulin, D

    2008-01-01

    The Health-e-Child (HeC) project [1], [2] is an EC Framework Programme 6 Integrated Project that aims to develop a grid-based integrated healthcare platform for paediatrics. Using this platform biomedical informaticians will integrate heterogeneous data and perform epidemiological studies across Europe. The resulting Grid enabled biomedical information platform will be supported by robust search, optimization and matching techniques for information collected in hospitals across Europe. In particular, paediatricians will be provided with decision support, knowledge discovery and disease modelling applications that will access data in hospitals in the UK, Italy and France, integrated via the Grid. For economy of scale, reusability, extensibility, and maintainability, HeC is being developed on top of an EGEE/gLite [3] based infrastructure that provides all the common data and computation management services required by the applications. This paper discusses some of the major challenges in bio-medical data integration and indicates how these will be resolved in the HeC system. HeC is presented as an example of how computer science (and, in particular Grid infrastructures) originating from high energy physics can be adapted for use by biomedical informaticians to deliver tangible real-world benefits

  3. Strategy for sustainability of the Joint European Research Infrastructure Network for Coastal Observatories - JERICO

    OpenAIRE

    Puillat, Ingrid; Farcy, Patrick; Durand, Dominique; Petihakis, George; Morin, Pascal; Kriegger, Magali; Petersen, Wilhelm; Tintoré, Joaquin; Sorensen, Kai; Sparnocchia, Stefania; Wehde, Henning

    2015-01-01

    The JERICO European research infrastructure (RI) is integrating several platform types i.e. fixed buoys, piles, moorings, drifters, Ferryboxes, gliders, HF radars, coastal cable observatories and the associated technologies dedicated to the observation and monitoring of the European coastal seas. The infrastructure is to serve both the implementation of European marine policies and the elucidation of key scientific questions through dedicated observation and monitoring plans. It includes obse...

  4. The Milan Project: A New Method for High-Assurance and High-Performance Computing on Large-Scale Distributed Platforms

    National Research Council Canada - National Science Library

    Kedem, Zvi

    2000-01-01

...: Calypso, Chime, and Charlotte, which enable applications developed for ideal, shared-memory parallel machines to execute on distributed platforms that are subject to failures, slowdowns, and changing resource availability...

  5. A virtual computing infrastructure for TS-CV SCADA systems

    CERN Document Server

    Poulsen, S

    2008-01-01

In modern data centres, it is an emerging trend to operate and manage computers as software components or logical resources and not as physical machines. This technique is known as “virtualisation” and the new computers are referred to as “virtual machines” (VMs). Multiple VMs can be consolidated on a single hardware platform and managed in ways that are not possible with physical machines. However, this is not yet widely practiced for control system deployment. In TS-CV, a collection of VMs, or a “virtual infrastructure”, has been installed since 2005 for SCADA systems, PLC program development, and alarm transmission. This makes it possible to consolidate distributed, heterogeneous operating systems and applications on a limited number of standardised high-performance servers in the Central Control Room (CCR). More generally, virtualisation assists in offering continuous computing services for controls, maintaining performance and assuring quality. Implementing our systems in a vi...

  6. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    Science.gov (United States)

    Demir, I.; Agliamzanov, R.

    2014-12-01

Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them towards running large-scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable their sites so that visitors can volunteer their computer resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.
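The relational queue management the abstract mentions can be sketched with a single task table from which volunteer nodes claim pending work. The schema and column names below are assumptions for illustration, not the authors' actual design, and a real deployment would wrap the claim in a transaction to prevent two nodes taking the same task.

```python
import sqlite3

def make_queue(conn):
    conn.execute("""CREATE TABLE tasks (
        id INTEGER PRIMARY KEY,
        payload TEXT,
        state TEXT NOT NULL DEFAULT 'pending',
        node TEXT)""")

def submit(conn, payload):
    """Enqueue one unit of simulation work (e.g. a sub-basin to model)."""
    conn.execute("INSERT INTO tasks (payload) VALUES (?)", (payload,))

def claim(conn, node):
    """Hand the oldest pending task to a volunteer node, or None if empty."""
    row = conn.execute("SELECT id, payload FROM tasks "
                       "WHERE state = 'pending' ORDER BY id LIMIT 1").fetchone()
    if row is None:
        return None
    conn.execute("UPDATE tasks SET state = 'running', node = ? WHERE id = ?",
                 (node, row[0]))
    return row

conn = sqlite3.connect(":memory:")
make_queue(conn)
submit(conn, "subbasin-17")
submit(conn, "subbasin-18")
first = claim(conn, "browser-a")    # one volunteer browser picks up a task
second = claim(conn, "browser-b")   # another picks up the next
```

In the web-based setting, `claim` would sit behind an HTTP endpoint polled by the visitor's browser, and a companion `complete` update would record returned results.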

  7. Designing a graph-based approach to landscape ecological assessment of linear infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Girardet, Xavier, E-mail: xavier.girardet@univ-fcomte.fr; Foltête, Jean-Christophe, E-mail: jean-christophe.foltete@univ-fcomte.fr; Clauzel, Céline, E-mail: celine.clauzel@univ-fcomte.fr

    2013-09-15

    The development of major linear infrastructures contributes to landscape fragmentation and impacts natural habitats and biodiversity in various ways. To anticipate and minimize such impacts, landscape planning needs to be capable of effective strategic environmental assessment (SEA) and of supporting environmental impact assessment (EIA) decisions. To this end, species distribution models (SDMs) are an effective way of making predictive maps of the presence of a given species. In this paper, we propose to combine SDMs and graph-based representation of landscape networks to integrate the potential long-distance effect of infrastructures on species distribution. A diachronic approach, comparing distribution before and after the linear infrastructure is constructed, leads to the design of a species distribution assessment (SDA), taking into account population isolation. The SDA makes it possible (1) to estimate the local variation in probability of presence and (2) to characterize the impact of the infrastructure in terms of global variation in presence and of distance of disturbance. The method is illustrated by assessing the impact of the construction of a high-speed railway line on the distribution of several virtual species in Franche-Comté (France). The study shows the capacity of the SDA to characterize the impact of a linear infrastructure either as a research concern or as a spatial planning challenge. SDAs could be helpful in deciding among several scenarios for linear infrastructure routes or for the location of mitigation measures. -- Highlights: • Graph connectivity metrics were integrated into a species distribution model. • SDM was performed before and after the implementation of linear infrastructure. • The local variation of presence provides spatial indicators of the impact.
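The diachronic idea in the abstract (score each habitat patch before and after the infrastructure severs graph edges, then look at the local variation) can be illustrated with a toy model. The graph, the distance-decay scoring, and the patch names below are invented; they stand in for the paper's species distribution model and landscape graph.

```python
from collections import deque

def bfs_dist(adj, source):
    """Unweighted graph distance from the source population patch."""
    dist, q = {source: 0}, deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def presence(adj, source):
    """Toy probability of presence: halves with each graph step from the
    source population; patches disconnected from it score 0."""
    d = bfs_dist(adj, source)
    return {n: 2.0 ** -d[n] if n in d else 0.0 for n in adj}

def sda(adj, cut_edges, source):
    """Species distribution assessment: local variation in presence once
    the linear infrastructure severs cut_edges."""
    after = {n: set(nb) for n, nb in adj.items()}
    for a, b in cut_edges:
        after[a].discard(b)
        after[b].discard(a)
    before_p, after_p = presence(adj, source), presence(after, source)
    return {n: after_p[n] - before_p[n] for n in adj}

# Three patches in a chain; a new railway line cuts the B-C link,
# isolating patch C from the source population in A.
patches = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
delta = sda(patches, [("B", "C")], source="A")
```

The per-patch deltas are the "local variation in probability of presence"; summing them (or mapping them) gives the global impact and distance of disturbance the paper characterizes.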

  8. Designing a graph-based approach to landscape ecological assessment of linear infrastructures

    International Nuclear Information System (INIS)

    Girardet, Xavier; Foltête, Jean-Christophe; Clauzel, Céline

    2013-01-01

    The development of major linear infrastructures contributes to landscape fragmentation and impacts natural habitats and biodiversity in various ways. To anticipate and minimize such impacts, landscape planning needs to be capable of effective strategic environmental assessment (SEA) and of supporting environmental impact assessment (EIA) decisions. To this end, species distribution models (SDMs) are an effective way of making predictive maps of the presence of a given species. In this paper, we propose to combine SDMs and graph-based representation of landscape networks to integrate the potential long-distance effect of infrastructures on species distribution. A diachronic approach, comparing distribution before and after the linear infrastructure is constructed, leads to the design of a species distribution assessment (SDA), taking into account population isolation. The SDA makes it possible (1) to estimate the local variation in probability of presence and (2) to characterize the impact of the infrastructure in terms of global variation in presence and of distance of disturbance. The method is illustrated by assessing the impact of the construction of a high-speed railway line on the distribution of several virtual species in Franche-Comté (France). The study shows the capacity of the SDA to characterize the impact of a linear infrastructure either as a research concern or as a spatial planning challenge. SDAs could be helpful in deciding among several scenarios for linear infrastructure routes or for the location of mitigation measures. -- Highlights: • Graph connectivity metrics were integrated into a species distribution model. • SDM was performed before and after the implementation of linear infrastructure. • The local variation of presence provides spatial indicators of the impact

  9. CERN printing infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Otto, R; Sucik, J [CERN, Geneva (Switzerland)], E-mail: Rafal.Otto@cern.ch, E-mail: Juraj.Sucik@cern.ch

    2008-07-15

    For many years CERN had a very sophisticated print server infrastructure [13] which supported several different protocols (AppleTalk, IPX and TCP/IP) and many different printing standards. Today's situation differs a lot: we have a much more homogenous network infrastructure, where TCP/IP is used everywhere and we have less printer models, which almost all work using current standards (i.e. they all provide PostScript drivers). This change gave us the possibility to review the printing architecture aiming at simplifying the infrastructure in order to achieve full automation of the service. The new infrastructure offers both: LPD service exposing print queues to Linux and Mac OS X computers and native printing for Windows based clients. The printer driver distribution is automatic and native on Windows and automated by custom mechanisms on Linux, where the appropriate Foomatic drivers are configured. Also the process of printer registration and queue creation is completely automated following the printer registration in the network database. At the end of 2006 we have moved all ({approx}1200) CERN printers and all users' connections at CERN to the new service. This paper will describe the new architecture and summarize the process of migration.

  10. Multi-platform Automated Software Building and Packaging

    International Nuclear Information System (INIS)

    Rodriguez, A Abad; Gomes Gouveia, V E; Meneses, D; Capannini, F; Aimar, A; Di Meglio, A

    2012-01-01

One of the major goals of the EMI (European Middleware Initiative) project is the integration of several components of the pre-existing middleware (ARC, gLite, UNICORE and dCache) into a single consistent set of packages with uniform distributions and repositories. Those individual middleware projects were developed over the last decade by tens of development teams and, before EMI, were all built and tested using different tools and dedicated services. The software, millions of lines of code, is written in several programming languages and supports multiple platforms. Therefore a viable solution ought to be able to build and test applications written in multiple programming languages, using common dependencies, on all selected platforms. It should, in addition, package the resulting software in formats compatible with the popular Linux distributions, such as Fedora and Debian, and store them in repositories from which all EMI software can be accessed and installed in a uniform way. Despite this highly heterogeneous initial situation, a single common solution, with the aim of quickly automating the integration of the middleware products, had to be selected and implemented within a few months of the beginning of the EMI project. Because of the previous knowledge and the short time available in which to provide this common solution, the ETICS service, where the gLite middleware had already been built for years, was selected. This contribution describes how the team in charge of providing a common EMI build and packaging infrastructure to the whole project has developed a homogeneous solution for releasing and packaging the EMI components from the initial set of tools used by the earlier middleware projects. An important element of the presentation is the developers' experience and feedback on converging on ETICS, and the ongoing work to add the more widely used and supported build and packaging solutions of the Linux platforms

  11. ICT-infrastructures for hydrometeorology science and natural disaster societal impact assessment: the DRIHMS project

    Science.gov (United States)

    Parodi, A.; Craig, G. C.; Clematis, A.; Kranzlmueller, D.; Schiffers, M.; Morando, M.; Rebora, N.; Trasforini, E.; D'Agostino, D.; Keil, K.

    2010-09-01

Hydrometeorological science has made strong progress over the last decade at the European and worldwide level: new modeling tools, post-processing methodologies, observational data, and the corresponding ICT (Information and Communication Technology) capabilities are available. Recent European efforts in developing platforms for e-Science, such as EGEE (Enabling Grids for E-sciencE), SEEGRID-SCI (South East Europe GRID e-Infrastructure for regional e-Science), and the German C3-Grid, have demonstrated their ability to provide an ideal basis for the sharing of complex hydrometeorological data sets and tools. Despite these early initiatives, however, awareness of the potential of Grid technology as a catalyst for future hydrometeorological research is still low, and both adoption and exploitation have been astonishingly slow, not only within individual EC member states but also on a European scale. With this background in mind, and given that European ICT-infrastructures are in the process of transforming into a sustainable and permanent service utility, as underlined by the European Grid Initiative (EGI) and the Partnership for Advanced Computing in Europe (PRACE), the Distributed Research Infrastructure for Hydro-Meteorology Study (DRIHMS, co-funded by the EC under the 7th Framework Programme) project has been initiated. The goal of DRIHMS is the promotion of Grids in particular and e-Infrastructures in general within the European hydrometeorological research (HMR) community, through the diffusion of a Grid platform for e-collaboration in this earth science sector: the idea is to further boost European research excellence and competitiveness in the fields of hydrometeorological research and Grid research by bridging the gaps between these two scientific communities. Furthermore, the project is intended to transfer the results to areas beyond strict hydrometeorological science, as a support for the assessment of the effects of extreme

  12. The Importance of Biodiversity E-infrastructures for Megadiverse Countries.

    Science.gov (United States)

    Canhos, Dora A L; Sousa-Baena, Mariane S; de Souza, Sidnei; Maia, Leonor C; Stehmann, João R; Canhos, Vanderlei P; De Giovanni, Renato; Bonacelli, Maria B M; Los, Wouter; Peterson, A Townsend

    2015-07-01

Addressing the challenges of biodiversity conservation and sustainable development requires global cooperation, support structures, and new governance models to integrate diverse initiatives and achieve massive, open exchange of data, tools, and technology. The traditional paradigm of sharing scientific knowledge through publications is not sufficient to meet contemporary demands, which require not only the results but also the data, knowledge, and skills to analyze them. E-infrastructures are key in facilitating access to data and providing the framework for collaboration. Here we discuss the importance of e-infrastructures of public interest and the lack of long-term funding policies. We present the example of Brazil's speciesLink network, an e-infrastructure that provides free and open access to biodiversity primary data and associated tools. SpeciesLink currently integrates 382 datasets from 135 national institutions and 13 institutions from abroad, openly sharing ~7.4 million records, 94% of which are associated with voucher specimens. Just as important as the data is the network of data providers and users. In 2014, more than 95% of its users were from Brazil, demonstrating the importance of local e-infrastructures in enabling and promoting local use of biodiversity data and knowledge. From the outset, speciesLink has been sustained through project-based funding, normally public grants for 2-4-year periods. In between projects, there are short-term crises in trying to keep the system operational, a fact that has also been observed in global biodiversity portals, as well as in social and physical sciences platforms and even in computing services portals. In the last decade, the open access movement propelled the development of many web platforms for sharing data. Unfortunately, adequate policies did not follow at the same tempo, and now many initiatives may perish.

  13. Design of a Distributed Food Traceability Platform and Its Application in Food Traceability at Guangdong Province

    Directory of Open Access Journals (Sweden)

    Luo Haibiao

    2017-01-01

Full Text Available Food traceability is an important measure to secure food safety. This paper designed a food traceability platform based on a distributed framework and implemented it in Guangdong province. The platform provides traceability, production, and management services for food enterprises, provides forward and backward traceability over the whole cycle of food production and circulation, and offers the public various methods of food traceability. One characteristic of the platform is that it opens up the data flow among production, circulation, and supervising departments, and builds a unified commodity circulation data pool. Based on this data pool, not only the production and circulation information of a food product can be traced, but also its inspection and quarantine information. Another characteristic of the platform is that its database and data interface were developed based on the food electronic traceability standards formulated by the National Food and Drug Administration. Its interface standardization and compatibility with other food traceability platforms can thus be guaranteed. The platform is running in Guangdong province for key supervised products: infant formula foods (including milk powder, rice flour, farina, etc.), edible oil, and liquor. The public can use the Guangdong food traceability portal, a mobile app, WeChat, or self-service terminals in supermarkets to trace a food product by scanning or entering its traceability code or product code, and to verify its authenticity. This will help to promote consumer confidence in food safety.
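The forward/backward traceability over a unified data pool can be illustrated with a toy chain of records keyed by trace code. The record layout and all data below are invented; the actual platform's schema is not given in this abstract.

```python
# Illustrative sketch (all data hypothetical): each trace code points to the
# next stage in the unified circulation data pool, so a product's history can
# be walked forward from production or backward from the shelf.

records = {
    "T100": {"stage": "production",  "item": "infant milk powder", "next": "T101"},
    "T101": {"stage": "inspection",  "item": "infant milk powder", "next": "T102"},
    "T102": {"stage": "circulation", "item": "infant milk powder", "next": None},
}

def trace_forward(code):
    """Follow a traceability code through every later stage of the cycle."""
    chain = []
    while code is not None:
        rec = records[code]
        chain.append(rec["stage"])
        code = rec["next"]
    return chain

stages = trace_forward("T100")
```

Backward traceability would simply invert the `next` links; the key design point in the record above is that inspection and quarantine stages sit in the same pool as production and circulation data.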

  14. Geographic Concentration of Oil Infrastructure: Issues and Options

    Science.gov (United States)

    2007-03-24

    along the Gulf of Mexico. Figure 3 reveals the travel pattern of the two hurricanes and the scope of impact on oil and natural gas platforms... road to travel before this level of cooperation is achieved. Consideration of this issue of geographic concentration of oil infrastructure... million), Port Security Grants ($201 million), Intercity Bus Security Grants ($12 million), Trucking Security Grants ($12 million), and Buffer Zone

  15. A Shared Infrastructure for Federated Search Across Distributed Scientific Metadata Catalogs

    Science.gov (United States)

    Reed, S. A.; Truslove, I.; Billingsley, B. W.; Grauch, A.; Harper, D.; Kovarik, J.; Lopez, L.; Liu, M.; Brandt, M.

    2013-12-01

The vast amount of science metadata can be overwhelming and highly complex. Comprehensive analysis and sharing of metadata is difficult since institutions often publish to their own repositories. There are many disjoint standards used for publishing scientific data, making it difficult to discover and share information from different sources. Services that publish metadata catalogs often have different protocols, formats, and semantics. The research community is limited by the exclusivity of separate metadata catalogs, and thus it is desirable to have federated search interfaces capable of unified search queries across multiple sources. Aggregation of metadata catalogs also enables users to critique metadata more rigorously. With these motivations in mind, the National Snow and Ice Data Center (NSIDC) and the Advanced Cooperative Arctic Data and Information Service (ACADIS) implemented two search interfaces for the community. Both NSIDC Search and the ACADIS Arctic Data Explorer (ADE) use a common infrastructure, which keeps maintenance costs low. The search clients are designed to make OpenSearch requests against Solr, an open-source search platform. Solr applies indexes to specific fields of the metadata, which in this instance optimizes queries containing keywords, spatial bounds, and temporal ranges. NSIDC metadata is reused by both search interfaces, but the ADE also brokers additional sources. Users can quickly find relevant metadata with minimal effort, which ultimately lowers costs for research. This presentation will highlight the reuse of data and code between NSIDC and ACADIS, discuss challenges and milestones for each project, and identify the creation and use of Open Source libraries.
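The keyword/spatial/temporal query pattern described above can be sketched as the parameter set such a Solr-backed interface might issue. The field names (`keywords`, `spatial_coverage`, `temporal_start`) are assumptions for illustration, not NSIDC's actual schema; the `{!field}` and `ENVELOPE` syntax is standard Solr spatial filtering.

```python
# Hedged sketch of a Solr query combining free-text keywords with a
# bounding-box filter and a temporal-range filter, as the federated
# search clients above do. Field names are hypothetical.
from urllib.parse import urlencode

def build_solr_params(keywords, bbox, start, end):
    """bbox = (min_lon, min_lat, max_lon, max_lat); start/end are ISO-8601."""
    min_lon, min_lat, max_lon, max_lat = bbox
    return {
        "q": f"keywords:({keywords})",
        "fq": [
            # Solr spatial filter: ENVELOPE(minX, maxX, maxY, minY)
            f"{{!field f=spatial_coverage}}Intersects(ENVELOPE({min_lon},{max_lon},{max_lat},{min_lat}))",
            f"temporal_start:[{start} TO {end}]",
        ],
        "wt": "json",
    }

params = build_solr_params("sea ice", (-180, 60, 180, 90),
                           "2010-01-01T00:00:00Z", "2012-12-31T23:59:59Z")
query_string = urlencode(params, doseq=True)  # ready to append to /select?
```

Indexing `spatial_coverage` and the temporal fields is what makes these filter queries cheap, which is the optimization the abstract alludes to.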

  16. A multi VO Grid infrastructure at DESY

    International Nuclear Information System (INIS)

    Gellrich, Andreas

    2010-01-01

    As a centre for research with particle accelerators and synchrotron light, DESY operates a Grid infrastructure in the context of the EU-project EGEE and the national Grid initiative D-GRID. All computing and storage resources are located in one Grid infrastructure which supports a number of Virtual Organizations of different disciplines, including non-HEP groups such as the Photon Science community. Resource distribution is based on fair share methods without dedicating hardware to user groups. Production quality of the infrastructure is guaranteed by embedding it into the DESY computer centre.

  17. Contributing to global computing platform: gliding, tunneling standard services and high energy physics application

    International Nuclear Information System (INIS)

    Lodygensky, O.

    2006-09-01

Centralized computers have been replaced by 'client/server' distributed architectures, which are in turn in competition with new distributed systems known as 'peer to peer'. These new technologies are widespread, and trade, industry and the research world have understood the new goals involved and are investing massively in these new technologies, known as 'grids'. One of the fields concerned is computing, which is the subject of the work presented here. At the Paris Orsay University, a synergy emerged between the Computing Science Laboratory (LRI) and the Linear Accelerator Laboratory (LAL) on grid infrastructure, opening new fields of investigation for the former and new high-performance computing perspectives for the latter. The work presented here is the result of this multi-disciplinary collaboration. It is based on XtremWeb, the LRI global computing platform. We first present a state of the art of large-scale distributed systems, their principles, and their service-based architecture. We then introduce XtremWeb and detail the modifications and improvements we had to specify and implement to achieve our goals. We present two different studies, first on interconnecting grids in order to generalize resource sharing, and second on using legacy services on such platforms. We finally explain how a research community such as the high-energy cosmic radiation detection community can gain access to these services, and detail the Monte Carlo and data analysis processes run over the grids. (author)

  18. Modeling the effect of urban infrastructure on hydrologic processes within i-Tree Hydro, a statistically and spatially distributed model

    Science.gov (United States)

    Taggart, T. P.; Endreny, T. A.; Nowak, D.

    2014-12-01

Gray and green infrastructure in urban environments alters many natural hydrologic processes, creating an urban water balance unique to the developed environment. A common way to assess the consequences of impervious cover and gray infrastructure is by measuring runoff hydrographs. This focus on the watershed outlet masks the spatial variation of hydrologic process alterations across the urban environment in response to localized landscape characteristics. We attempt to represent this spatial variation in the urban environment using the statistically and spatially distributed i-Tree Hydro model, a scoping-level urban forest effects water balance model. i-Tree Hydro has undergone expansion and modification to include the effects of green infrastructure processes, road network attributes, and urban pipe system leakages. These additions to the model are intended to increase understanding of the altered urban hydrologic cycle by examining the effects of the location of these structures on the water balance, specifically on the spatially varying properties of interception, soil moisture, and runoff generation. Differences in predicted properties and optimized parameter sets between the two models are examined and related to the recent landscape modifications. Datasets used in this study consist of watersheds and sewersheds within the Syracuse, NY metropolitan area, an urban area that has integrated green and gray infrastructure practices to alleviate stormwater problems.
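The kind of cell-level water balance the abstract describes can be caricatured with a single-bucket model: rain falling on the impervious fraction runs off directly, while the pervious share fills a soil store that spills as saturation excess. All parameters below are illustrative; this is not i-Tree Hydro's actual formulation.

```python
# Toy single-cell urban water balance (parameters hypothetical): impervious
# cover produces direct runoff; pervious input fills a soil-moisture store
# whose overflow becomes saturation-excess runoff.

def step(rain_mm, soil_mm, soil_cap_mm=50.0, impervious=0.6):
    """Return (runoff_mm, new_soil_mm) for one timestep of one cell."""
    impervious_runoff = rain_mm * impervious         # direct runoff from pavement/roofs
    pervious_input = rain_mm * (1.0 - impervious)    # infiltrates into the soil store
    new_soil = soil_mm + pervious_input
    saturation_excess = max(0.0, new_soil - soil_cap_mm)
    return impervious_runoff + saturation_excess, min(new_soil, soil_cap_mm)

# 20 mm of rain on a 60%-impervious cell whose soil store is nearly full:
runoff, soil = step(rain_mm=20.0, soil_mm=45.0)
```

Running this per cell, with `impervious` and `soil_cap_mm` varying across the landscape, is what lets a spatially distributed model expose the variation that a single outlet hydrograph masks.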

  19. Implementation of a European e-Infrastructure for the 21st Century

    CERN Document Server

    Jones, Bob; Bird, Ian; Hemmer, Frédéric

    2013-01-01

This document proposes an implementation plan for the vision of an e-infrastructure as described in “A Vision for a European e-Infrastructure for the 21st Century”. The objective of the implementation plan is to put in place the e-infrastructure commons that will enable digital science by introducing IT as a service to the public research sector in Europe. The rationale calls for a hybrid model that brings together public and commercial service suppliers to build a network of Centres of Excellence offering a range of services to a wide user base. The platform will make use of and cooperate with existing European e-infrastructures by jointly offering integrated services to the end-user. This hybrid model represents a significant change from the status quo and will bring benefits for the stakeholders: end-users, research organisations, service providers (public and commercial) and funding agencies. Centres of Excellence can be owned and operated by a mixture of commercial companies and public organisations...

  20. Distributed Sensor Network Software Development Testing through Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Brennan, Sean M. [Univ. of New Mexico, Albuquerque, NM (United States)

    2003-12-01

The distributed sensor network (DSN) presents a novel and highly complex computing platform with difficulties and opportunities that are just beginning to be explored. The potential of sensor networks extends from monitoring for threat reduction, to conducting instant and remote inventories, to ecological surveys. Developing and testing for robust and scalable applications is currently practiced almost exclusively in hardware. The Distributed Sensors Simulator (DSS) is an infrastructure that allows the user to debug and test software for DSNs independent of hardware constraints. The flexibility of DSS allows developers and researchers to investigate topological, phenomenological, networking, robustness and scaling issues, to explore arbitrary algorithms for distributed sensors, and to defeat those algorithms through simulated failure. The user specifies the topology, the environment, the application, and any number of arbitrary failures; DSS provides the virtual environmental embedding.
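The kind of run DSS enables (user-specified topology, application, and injected failures) can be illustrated in miniature with a flooding application over a small graph. This is an illustration of the concept, not DSS's actual API.

```python
# Minimal sketch of a DSS-style experiment: a topology, an application
# (message flooding), and an injected node failure that partitions the net.
from collections import deque

def flood(topology, source, failed):
    """Return the set of live nodes reached by flooding from source."""
    if source in failed:
        return set()
    reached, frontier = {source}, deque([source])
    while frontier:
        node = frontier.popleft()
        for neighbour in topology[node]:
            if neighbour not in failed and neighbour not in reached:
                reached.add(neighbour)
                frontier.append(neighbour)
    return reached

# A 5-node line topology; failing node 2 partitions the network.
line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
healthy = flood(line, 0, failed=set())
partitioned = flood(line, 0, failed={2})
```

Sweeping over failure sets like this, entirely in software, is precisely the robustness exploration that hardware-only testing makes expensive.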

  1. SeaDataNet - Pan-European infrastructure for marine and ocean data management: Unified access to distributed data sets (www.seadatanet.org)

    Science.gov (United States)

    Schaap, Dick M. A.; Maudire, Gilbert

    2010-05-01

SeaDataNet is a leading infrastructure in Europe for marine & ocean data management. It is actively operating and further developing a Pan-European infrastructure for managing, indexing and providing access to ocean and marine data sets and data products, acquired via research cruises and other observational activities, in situ and by remote sensing. The basis of SeaDataNet is the interconnection of 40 National Oceanographic Data Centres (NODCs) and marine data centres from 35 countries around the European seas into a distributed network of data resources with common standards for metadata, vocabularies, data transport formats, quality control methods and flags, and access. Most of the NODCs also operate or are developing national networks with other institutes in their countries, to ensure national coverage and long-term stewardship of the available data sets. The majority of data managed by SeaDataNet partners concerns physical oceanography, marine chemistry, hydrography, and a substantial volume of marine biology, geology, and geophysics. These are partly owned by the partner institutes themselves and for a major part owned by other organizations in their countries. The SeaDataNet infrastructure is implemented with the support of the EU, via the EU FP6 SeaDataNet project, to provide a Pan-European data management system adapted both to the fragmented observation system and to the users' need for integrated access to data, metadata, products and services. The SeaDataNet project has a duration of 5 years and started in 2006, but builds upon earlier data management infrastructure projects, undertaken over a period of 20 years by an expanding network of oceanographic data centres from the countries around all European seas. Its predecessor project Sea-Search had a strict focus on metadata. SeaDataNet maintains a significant interest in the further development of the metadata infrastructure, extending its services with the provision of easy data access and generic data products

  2. Digital intermediaries and cultural industries: the developing influence of distribution platforms

    Directory of Open Access Journals (Sweden)

    Thomas Guignard

    2014-12-01

Full Text Available The last few years have seen the generalization of a communicative device consisting of the technical and organizational integration of a terminal, an operating system, a network connection, and an online platform for accessing applications (contents and services). Indeed, smartphones, connected TVs, tablet computers, and game consoles are structured by this model, which has developed thanks to the rise in computing performance and in the capacity of telecommunication networks. A single technical interface for users, these devices reveal, for the industrial actors involved, a mode of organization that cannot be reduced to commercial intermediation. At the heart of this configuration are the "platforms", which constitute a new form of distribution of goods and services, drive the renewal of uses and related practices, and change the value chain; they are at the core of the effective changes, hopes, and fears aroused by the devices presented in our contribution. Our paper aims to expose the evolution of these devices in several fields of the cultural industries, especially the mobile and audiovisual sectors, with a focus on the smartphone and connected-TV industries. It thereby questions the double dialectic at work in these reconfiguring sectors: integration/disintegration of activities on the one hand, and disintermediation/re-intermediation on the other.

  3. Safety-critical Java for low-end embedded platforms

    DEFF Research Database (Denmark)

    Søndergaard, Hans; Korsholm, Stephan E.; Ravn, Anders Peter

    2012-01-01

We present an implementation of the Safety-Critical Java profile (SCJ), targeted for low-end embedded platforms with as little as 16 kB RAM and 256 kB flash. The distinctive features of the implementation are a combination of a lean Java virtual machine (HVM) with a bare metal kernel implementing hardware objects, first level interrupt handlers, and native variables, and an infrastructure written in Java which is minimized through program specialization. The HVM allows the implementation to be easily ported to embedded platforms which have a C compiler as part of the development environment...

  4. A Multifunctional Public Lighting Infrastructure, Design and Experimental Test

    Directory of Open Access Journals (Sweden)

    Marco Beccali

    2017-12-01

Full Text Available Nowadays, the installation of efficient lighting sources and Information and Communications Technologies can provide economic benefits and energy efficiency and meet visual comfort requirements. More advantages can be derived if the public lighting infrastructure integrates a smart grid. This study presents an experimental multifunctional infrastructure for public lighting, installed in Palermo. The system is able to provide smart lighting functions (hotspot Wi-Fi, video surveillance, car and pedestrian access control, car parking monitoring) and support for environmental monitoring. A remote control and monitoring platform called "Centro Servizi" processes the information coming from the different installations, as well as their status, in real time, and sends commands to the devices (e.g. to control the luminous flux), each one provided with a machine-to-machine interface. Data can be reported either on the web or in a customised app. The study has shown the efficient operation of this new infrastructure and its capability to provide new functions and benefits to citizens, tourists, and the public administration. Thus, this system represents a starting point for the implementation of many other lighting infrastructure features typical of a "smart city."

  5. A central continuous integration platform: Agile Infrastructure use case and future plans

    CERN Multimedia

    CERN. Geneva; ANDERSEN, Terje; GEORGIOU, Stefanos

    2014-01-01

    We shall describe the use of Jenkins as a CI solution by the Configuration Team and present the requirements and plans for a central CI platform, as well as the associated challenges and possible solutions.

  6. Towards A Grid Infrastructure For Hydro-Meteorological Research

    Directory of Open Access Journals (Sweden)

    Michael Schiffers

    2011-01-01

Full Text Available The Distributed Research Infrastructure for Hydro-Meteorological Study (DRIHMS) is a coordinated action co-funded by the European Commission. DRIHMS analyzes the main issues that arise when designing and setting up a pan-European Grid-based e-Infrastructure for research activities in the hydrologic and meteorological fields. The main outcome of the project is represented first by a set of Grid usage patterns to support innovative hydro-meteorological research activities, and second by the implications that such patterns define for a dedicated Grid infrastructure and the respective Grid architecture.

  7. Energy-Efficient Cooperative Techniques for Infrastructure-to-Vehicle Communications

    OpenAIRE

    Nguyen , Tuan-Duc; Berder , Olivier; Sentieys , Olivier

    2011-01-01

    International audience; In wireless distributed networks, cooperative relay and cooperative Multi-Input Multi-Output (MIMO) techniques can be used to exploit the spatial and temporal diversity gain in order to increase the performance or reduce the transmission energy consumption. The energy efficiency of cooperative MIMO and relay techniques is then very useful for the Infrastructure to Vehicle (I2V) and Infrastructure to Infrastructure (I2I) communications in Intelligent Transport Systems (I...

  8. Auscope: Australian Earth Science Information Infrastructure using Free and Open Source Software

    Science.gov (United States)

    Woodcock, R.; Cox, S. J.; Fraser, R.; Wyborn, L. A.

    2013-12-01

Since 2005 the Australian Government has supported a series of initiatives providing researchers with access to major research facilities and information networks necessary for world-class research. Starting with the National Collaborative Research Infrastructure Strategy (NCRIS) the Australian earth science community established an integrated national geoscience infrastructure system called AuScope. AuScope is now in operation, providing a number of components to assist in understanding the structure and evolution of the Australian continent. These include the acquisition of subsurface imaging, earth composition and age analysis, a virtual drill core library, geological process simulation, and a high resolution geospatial reference framework. To draw together information from across the earth science community in academia, industry and government, AuScope includes a nationally distributed information infrastructure. Free and Open Source Software (FOSS) has been a significant enabler in building the AuScope community and providing a range of interoperable services for accessing data and scientific software. A number of FOSS components have been created, adopted or upgraded to create a coherent, OGC compliant Spatial Information Services Stack (SISS). SISS is now deployed at all Australian Geological Surveys, many Universities and the CSIRO. Comprising a set of OGC catalogue and data services, and augmented with new vocabulary and identifier services, the SISS provides a comprehensive package for organisations to contribute their data to the AuScope network. This packaging and a variety of software testing and documentation activities enabled greater trust and notably reduced barriers to adoption. FOSS selection was important, not only for technical capability and robustness, but also for appropriate licensing and community models to ensure sustainability of the infrastructure in the long term. Government agencies were sensitive to these issues and Au

  9. Approche SIG pour une analyse spatiale des infrastructures ...

    African Journals Online (AJOL)

    SARAH

    31 Jan. 2014 ... GIS play a key role in the siting, monitoring, and management of hydraulic infrastructure. The use of these tools can mitigate difficulties in water supply. Keywords: GIS, spatial distribution, hydraulic infrastructure, Zè, Benin. ... access to a sanitation facility

  10. A Distributed, Open Source based Data Infrastructure for the Megacities Carbon Project

    Science.gov (United States)

    Verma, R.; Crichton, D. J.; Duren, R. M.; Salameh, P.; Sparks, A.; Sloop, C.

    2014-12-01

    With the goal of assessing the anthropogenic carbon-emission impact of urban centers on local and global climates, the Megacities Carbon Project has been building carbon-monitoring capabilities for the past two years around the Los Angeles metropolitan area. Hundreds of megabytes (MB) of data are generated daily, and distributed among data centers local to the sensor networks involved. We automatically pull this remotely generated data into a centralized data infrastructure local to the Jet Propulsion Laboratory (JPL), seeking to (1) provide collaboration opportunities on the data, and (2) generate refined data products through community-requested centralized data processing pipelines. The goal of this informatics effort is to ensure near real-time access to generated data products across the Los Angeles carbon monitoring sensor network and meet the data analysis needs of carbon researchers through the production of customized products. We discuss the goals of the informatics effort, its uniqueness, and assess its effectiveness in providing an insight into the carbon sphere of Los Angeles.
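The pull-based ingestion described above (remote data centres generate files; the central infrastructure fetches only what it lacks) can be sketched as a listing-diff loop. File names and the `fetch` callable below are stand-ins; the project's actual transfer mechanism is not described in this abstract.

```python
# Hedged sketch of an automated data-pull step: compare a remote site's
# file listing against the local archive and fetch only the new files.

def sync(remote_listing, local_archive, fetch):
    """Pull files present remotely but absent locally; return the names pulled."""
    new_files = sorted(set(remote_listing) - set(local_archive))
    for name in new_files:
        local_archive[name] = fetch(name)   # e.g. an HTTP/SFTP download in practice
    return new_files

# Hypothetical daily CO2 files from one sensor-network data centre.
remote = ["co2_2014_07_01.dat", "co2_2014_07_02.dat"]
archive = {"co2_2014_07_01.dat": b"<archived>"}
pulled = sync(remote, archive, fetch=lambda name: b"<data>")
```

Run on a schedule against each remote data centre, a loop like this keeps the central archive within one polling interval of the sources, which is what "near real-time access" amounts to here.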

  11. FOSS Tools for Research Infrastructures - A Success Story?

    Science.gov (United States)

    Stender, V.; Schroeder, M.; Wächter, J.

    2015-12-01

    Established initiatives and mandated organizations, e.g. the Initiative for Scientific Cyberinfrastructures (NSF, 2007) or the European Strategy Forum on Research Infrastructures (ESFRI, 2008), promote and foster the development of sustainable research infrastructures. The basic idea behind these infrastructures is the provision of services supporting scientists to search, visualize and access data, to collaborate and exchange information, as well as to publish data and other results. Especially the management of research data is gaining more and more importance. In geosciences these developments have to be merged with the enhanced data management approaches of Spatial Data Infrastructures (SDI). The Centre for GeoInformationTechnology (CeGIT) at the GFZ German Research Centre for Geosciences has the objective to establish concepts and standards of SDIs as an integral part of research infrastructure architectures. In different projects, solutions to manage research data for land- and water management or environmental monitoring have been developed based on a framework consisting of Free and Open Source Software (FOSS) components. The framework provides basic components supporting the import and storage of data, discovery and visualization as well as data documentation (metadata). In our contribution, we present our data management solutions developed in three projects, Central Asian Water (CAWa), Sustainable Management of River Oases (SuMaRiO) and Terrestrial Environmental Observatories (TERENO) where FOSS components build the backbone of the data management platform. The multiple use and validation of tools helped to establish a standardized architectural blueprint serving as a contribution to Research Infrastructures. We examine the question of whether FOSS tools are really a sustainable choice and whether the increased efforts of maintenance are justified. Finally it should help to answering the question if the use of FOSS for Research Infrastructures is a

  12. Geospatial Applications on Different Parallel and Distributed Systems in enviroGRIDS Project

    Science.gov (United States)

    Rodila, D.; Bacu, V.; Gorgan, D.

    2012-04-01

    The execution of Earth Science applications and services on parallel and distributed systems has become a necessity, especially due to the large amounts of Geospatial data these applications require and the large geographical areas they cover. The parallelization of these applications addresses important performance issues and can range from task parallelism to data parallelism. Parallel and distributed architectures such as Grid, Cloud, Multicore, etc. seem to offer the necessary functionalities to solve important problems in the Earth Science domain: storing, distribution, management, processing and security of Geospatial data, execution of complex processing through task and data parallelism, etc. A main goal of the FP7-funded project enviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is the development of a Spatial Data Infrastructure targeting this catchment region, but also the development of standardized and specialized tools for storing, analyzing, processing and visualizing the Geospatial data concerning this area. For achieving these objectives, enviroGRIDS deals with the execution of different Earth Science applications, such as hydrological models, Geospatial Web services standardized by the Open Geospatial Consortium (OGC), and others, on parallel and distributed architectures to maximize the obtained performance. This presentation analyses the integration and execution of Geospatial applications on different parallel and distributed architectures and the possibility of choosing among these architectures, based on application characteristics and user requirements, through a specialized component. Versions of the proposed platform have been used in the enviroGRIDS project on different use cases, such as the execution of Geospatial Web services both on Web and Grid infrastructures [2] and the execution of SWAT hydrological models both on Grid and Multicore architectures [3]. The current

  13. Design and simulation of material-integrated distributed sensor processing with a code-based agent platform and mobile multi-agent systems.

    Science.gov (United States)

    Bosse, Stefan

    2015-02-16

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.
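    The activity-transition graph model lends itself to a compact illustration. The sketch below is a minimal, hypothetical rendering of an ATG agent in Python; the activity names and data layout are invented for illustration, and the paper's platform executes compiled stack-machine code rather than Python.

```python
# Minimal sketch of an activity-transition-graph (ATG) agent:
# each activity is a function over the agent's data state that
# returns the name of the next activity to fire (or None to stop).
# Hypothetical model, not the paper's instruction set.

class ATGAgent:
    def __init__(self, data, activities, start):
        self.data = data              # agent data state
        self.activities = activities  # name -> callable(data) -> next name
        self.current = start

    def step(self):
        """Fire the current activity and follow its transition."""
        self.current = self.activities[self.current](self.data)

    def run(self, max_steps=100):
        steps = 0
        while self.current is not None and steps < max_steps:
            self.step()
            steps += 1
        return self.data

# Example: a sensing agent that accumulates readings, then reports.
def sense(d):
    d["readings"].append(d["next_value"](len(d["readings"])))
    return "check"

def check(d):
    return "report" if len(d["readings"]) >= 3 else "sense"

def report(d):
    d["mean"] = sum(d["readings"]) / len(d["readings"])
    return None  # terminal activity

agent = ATGAgent(
    data={"readings": [], "next_value": lambda i: 10.0 + i},
    activities={"sense": sense, "check": check, "report": report},
    start="sense",
)
result = agent.run()
```

    Code morphing would correspond to an activity rewriting entries of the `activities` table; migration would mean serializing the whole container and resuming it on another node.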

  14. Design and Simulation of Material-Integrated Distributed Sensor Processing with a Code-Based Agent Platform and Mobile Multi-Agent Systems

    Directory of Open Access Journals (Sweden)

    Stefan Bosse

    2015-02-01

    Full Text Available Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.

  15. Distributed situation awareness in complex collaborative systems: A field study of bridge operations on platform supply vessels.

    Science.gov (United States)

    Sandhåland, Hilde; Oltedal, Helle A; Hystad, Sigurd W; Eid, Jarle

    2015-06-01

    This study provides empirical data about shipboard practices in bridge operations on board a selection of platform supply vessels (PSVs). Using the theoretical concept of distributed situation awareness, the study examines how situation awareness (SA)-related information is distributed and coordinated at the bridge. This study thus favours a systems approach to studying SA, viewing it not as a phenomenon that solely happens in each individual's mind but rather as something that happens between individuals and the tools that they use in a collaborative system. Thus, this study adds to our understanding of SA as a distributed phenomenon. Data were collected in four field studies that lasted between 8 and 14 days on PSVs that operate on the Norwegian continental shelf and UK continental shelf. The study revealed pronounced variations in shipboard practices regarding how the bridge team attended to operational planning, communication procedures, and distracting/interrupting factors during operations. These findings shed new light on how SA might decrease in bridge teams during platform supply operations. The findings from this study emphasize the need to assess and establish shipboard practices that support the bridge teams' SA needs in day-to-day operations. Provides insights into how shipboard practices that are relevant to planning, communication and the occurrence of distracting/interrupting factors are realized in bridge operations. Notes possible areas for improvement to enhance distributed SA in bridge operations.

  16. Building for Biology: A Gene Therapy Trial Infrastructure

    Directory of Open Access Journals (Sweden)

    Samuel Taylor-Alexander

    2017-06-01

    Full Text Available In this article, we examine the construction of the infrastructure for a Phase II gene therapy trial for Cystic Fibrosis (CF). Tracing the development of the material technologies and physical spaces used in the trial, we show how the trial infrastructure took form at the uncertain intersection of scientific norms, built environments, regulatory negotiations, patienthood, and the biologies of both disease and therapy. We define infrastructures as material and immaterial (including symbols and affect) composites that serve a selective distributive purpose and facilitate projects of making and doing. There is a politics to this distributive action, which is itself twofold, because whilst infrastructures enable and delimit the movement of matter, they also mediate the very activity for which they provide the grounds. An infrastructural focus allows us to show how purposeful connections are made in a context of epistemic and regulatory uncertainty. The gene therapy researchers were working in a context of multiple uncertainties, regarding not only how to do gene therapy, but also how to anticipate and enact ambiguous regulatory requirements in a context of limited resources (technical, spatial, and financial). At the same time, the trial infrastructure had to accommodate Cystic Fibrosis biology by bridging the gap between pathology and therapy. The consortium’s approach to treating CF required that they address concerns about contamination and safety while finding a way of getting a modified gene product into the lungs of the trial participants.

  17. What's My Lane? Identifying the State Government Role in Critical Infrastructure Protection

    OpenAIRE

    Donnelly, Timothy S.

    2012-01-01

    Approved for public release; distribution is unlimited. What constitutes an effective Critical Infrastructure and Key Resources (CIKR) protection program for Massachusetts? This study evaluates existing literature regarding CIKR to extrapolate an infrastructure protection role for Massachusetts. By reviewing historical events and government strategies regarding infrastructure protection, Chapters I and II will provide scope and context for issues surrounding critical infrastructure. Chapter ...

  18. An Experimental Platform for Autonomous Bus Development

    Directory of Open Access Journals (Sweden)

    Héctor Montes

    2017-11-01

    Full Text Available Nowadays, with highly developed instrumentation, sensing and actuation technologies, it is possible to foresee an important advance in the field of autonomous and/or semi-autonomous transportation systems. Intelligent Transport Systems (ITS) have been subjected to very active research for many years, and Bus Rapid Transit (BRT) is one area of major interest. Among the most promising transport infrastructures, the articulated bus is an interesting option: low cost, high occupancy capacity and user friendly. In this paper, an experimental platform for research on the automatic control of an articulated bus is presented. The aim of the platform is to allow full experimentation in real conditions for testing technological developments and control algorithms. The experimental platform consists of a mobile component (a fully instrumented commercial articulated bus) and a ground test area composed of asphalt roads inside the Consejo Superior de Investigaciones Científicas (CSIC) premises. This paper also focuses on the development of a human machine interface to ease progress in control system evaluation. Some experimental results are presented in order to show the potential of the proposed platform.

  19. A virtual laboratory for micro-grid information and communication infrastructures

    OpenAIRE

    Weimer, James; Xu, Yuzhe; Fischione, Carlo; Johansson, Karl Henrik; Ljungberg, Per; Donovan, Craig; Sutor, Ariane; Fahlén, Lennart E.

    2012-01-01

    Testing smart grid information and communication (ICT) infrastructures is imperative to ensure that they meet industry requirements and standards and do not compromise grid reliability. Within the micro-grid, this requires identifying and testing ICT infrastructures for communication between distributed energy resources, buildings, substations, etc. To evaluate various ICT infrastructures for micro-grid deployment, this work introduces the Virtual Micro-Grid Laboratory (VMGL) and provides ...

  20. YARP: Yet Another Robot Platform

    Directory of Open Access Journals (Sweden)

    Lorenzo Natale

    2008-11-01

    Full Text Available We describe YARP, Yet Another Robot Platform, an open-source project that encapsulates lessons from our experience in building humanoid robots. The goal of YARP is to minimize the effort devoted to infrastructure-level software development by facilitating code reuse and modularity, and so to maximize research-level development and collaboration. Humanoid robotics is a "bleeding edge" field of research, with constant flux in sensors, actuators, and processors. Code reuse and maintenance are therefore a significant challenge. We describe the main problems we faced and the solutions we adopted. In short, the main features of YARP include support for inter-process communication and image processing, as well as a class hierarchy to ease code reuse across different hardware platforms. YARP is currently used and tested on Windows, Linux and QNX6, which are common operating systems used in robotics.
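    The inter-process communication YARP provides is organised around named ports connected through a name server. The following pure-Python sketch mimics only that pattern in-process; it is not the YARP API, and all class and port names here are invented for illustration.

```python
# Toy model of named publish/subscribe ports, the communication
# pattern YARP is built around (illustrative only; not the YARP API).

class PortRegistry:
    """Stands in for a name server: maps port names to port objects."""
    def __init__(self):
        self.ports = {}

    def open(self, name):
        port = Port(name)
        self.ports[name] = port
        return port

    def connect(self, src, dst):
        """Route messages written to src into dst's inbox."""
        self.ports[src].subscribers.append(self.ports[dst])

class Port:
    def __init__(self, name):
        self.name = name
        self.subscribers = []
        self.inbox = []

    def write(self, message):
        for sub in self.subscribers:
            sub.inbox.append(message)

    def read(self):
        """Return the oldest queued message, or None if the inbox is empty."""
        return self.inbox.pop(0) if self.inbox else None

registry = PortRegistry()
camera = registry.open("/camera/out")
tracker = registry.open("/tracker/in")
registry.connect("/camera/out", "/tracker/in")

camera.write({"frame": 1})
msg = tracker.read()
```

    Decoupling producers and consumers through named connections like this is what lets modules be swapped or moved between machines without touching the code on either side.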

  1. Towards an advanced e-Infrastructure for Civil Protection applications: Research Strategies and Innovation Guidelines

    Science.gov (United States)

    Mazzetti, P.; Nativi, S.; Verlato, M.; Angelini, V.

    2009-04-01

    In the context of the EU co-funded project CYCLOPS (http://www.cyclops-project.eu) the problem of designing an advanced e-Infrastructure for Civil Protection (CP) applications has been addressed. As a preliminary step, some studies of European CP systems and operational applications were performed in order to define their specific system requirements. At a higher level it was verified that CP applications are usually conceived to map CP Business Processes involving different levels of processing, including data access, data processing, and output visualization. At their core they usually run one or more Earth Science models for information extraction. The traditional approach based on the development of monolithic applications presents some limitations related to flexibility (e.g. the possibility of running the same models with different input data sources, or different models with the same data sources) and scalability (e.g. launching several runs for different scenarios, or implementing more accurate and computing-demanding models). Flexibility can be addressed by adopting a modular design based on a SOA and standard services and models, such as OWS and ISO for geospatial services. Distributed computing and storage solutions can improve scalability. Based on these considerations, an architectural framework has been defined. It consists of a Web Service layer providing advanced services for CP applications (e.g. standard geospatial data sharing and processing services) working on the underlying Grid platform. This framework has been tested through the development of prototypes as proof-of-concept. These theoretical studies and proofs-of-concept demonstrated that although Grid and geospatial technologies would be able to provide significant benefits to CP applications in terms of scalability and flexibility, current platforms are designed around requirements different from those of CP. In particular, CP applications have strict requirements in terms of: a) Real

  2. Optimized distribution network work management by grid services of a rapid loading infrastructure; Optimierte Verteilnetzbetriebsfuehrung durch Netzdienstleistungen einer Schnellladeinfrastruktur

    Energy Technology Data Exchange (ETDEWEB)

    Krasselt, P.; Uhrig, M.; Leibfried, T. [Karlsruher Institut fuer Technologie (KIT), Karlsruhe (Germany). Inst. fuer Elektroenergiesysteme und Hochspannungstechnik (IEH)

    2012-07-01

    The German Federal Government aims to reach one million electric vehicles by 2020 and up to five million by 2030 under its National Electromobility Development Plan. The integration of the necessary charging infrastructure into the distribution grid is considered in many research approaches, typically by regarding charging time slots controlled by information and communications technology (ICT). In this approach, strategies for reactive power management and grid-supporting functions in medium voltage networks through the integration of large charging stations, such as those in parking garages and public parking lots, are considered. An urban distribution network in 2030 is modelled to evaluate different centralized and decentralized reactive power control schemes. (orig.)

  3. Participatory Infrastructuring of Community Energy

    DEFF Research Database (Denmark)

    Capaccioli, Andrea; Poderi, Giacomo; Bettega, Mela

    2016-01-01

    Thanks to renewable energies, the decentralized energy system model is becoming more relevant to the production and distribution of energy. This scenario is important for achieving a successful energy transition. This paper presents a reflection on the ongoing experience of infrastructuring a

  4. Nuclear platform research and development - 2008-09 highlights

    International Nuclear Information System (INIS)

    Sadhankar, R.R.

    2009-08-01

    The Nuclear Platform R and D Program has lead responsibility for the maintenance and further development of the CANDU intellectual property covering the safety, licensing and design basis for nuclear facilities. The Nuclear Platform R and D Program is part of the Research and Technology Operation (RTO) unit of AECL and is managed through the Research and Development division, which has responsibility for maintaining and enhancing the knowledge and technology base. The RTO is also responsible for managing AECL's nuclear facilities and infrastructure (including laboratories and R and D facilities), the nuclear waste management program and other legacy liabilities (e.g., decommissioning) to demonstrate and grow shareholder value. The Nuclear Platform also provides the technology base from which new products and services can be developed to meet customer needs (including ACR and commercial products and services). (author)

  5. Understanding the infrastructure of European Research Infrastructures

    DEFF Research Database (Denmark)

    Lindstrøm, Maria Duclos; Kropp, Kristoffer

    2017-01-01

    European Research Infrastructure Consortia (ERIC) are a new form of legal and financial framework for the establishment and operation of research infrastructures in Europe. Despite their scope, ambition, and novelty, the topic has received limited scholarly attention. This article analyses how one research infrastructure became an ERIC, using Bowker and Star’s sociology of infrastructures. We conclude that focusing on ERICs as a European standard for organising and funding research collaboration gives new insights into the problems of membership, durability, and standardisation faced by research infrastructures. It is also a promising theoretical framework for addressing the relationship between the ERIC construct and the large diversity of European Research Infrastructures.

  6. LXtoo: an integrated live Linux distribution for the bioinformatics community.

    Science.gov (United States)

    Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu

    2012-07-19

    Recent advances in high-throughput technologies have dramatically increased biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of effort in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.

  7. Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Anisenkov, A; Belov, S; Kaplin, V; Korol, A; Skovpen, K; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2012-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, several computing facilities are currently hosted by NSC institutes, each optimized for a particular set of tasks; the largest of these are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. A dedicated optical network with an initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on the XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.

  8. SenSyF Experience on Integration of EO Services in a Generic, Cloud-Based EO Exploitation Platform

    Science.gov (United States)

    Almeida, Nuno; Catarino, Nuno; Gutierrez, Antonio; Grosso, Nuno; Andrade, Joao; Caumont, Herve; Goncalves, Pedro; Villa, Guillermo; Mangin, Antoine; Serra, Romain; Johnsen, Harald; Grydeland, Tom; Emsley, Stephen; Jauch, Eduardo; Moreno, Jose; Ruiz, Antonio

    2016-08-01

    SenSyF is a cloud-based data processing framework for EO-based services. It has been a pioneer in addressing Big Data issues from the Earth Observation point of view, and is a precursor of several of the technologies and methodologies that will be deployed in ESA's Thematic Exploitation Platforms and other related systems. The SenSyF system focuses on developing fully automated data management, together with access to a processing and exploitation framework, including Earth Observation specific tools. SenSyF is both a development and validation platform for data-intensive applications using Earth Observation data. With SenSyF, scientific, institutional or commercial institutions developing EO-based applications and services can take advantage of distributed computational and storage resources, tailored for applications dependent on big Earth Observation data, without resorting to deep infrastructure and technological investments. This paper describes the integration process and the experience gathered from different EO Service providers during the project.

  9. Critical Infrastructures: Background, Policy, and Implementation

    National Research Council Canada - National Science Library

    Moteff, John D

    2005-01-01

    .... electricity, the power plants that generate it, and the electric grid upon which it is distributed). The national security community has been concerned for some time about the vulnerability of critical infrastructure to both physical and cyber attack...

  10. Managing Sustainable Data Infrastructures: The Gestalt of EOSDIS

    Science.gov (United States)

    Behnke, Jeanne; Lowe, Dawn; Lindsay, Francis; Lynnes, Chris; Mitchell, Andrew

    2016-01-01

    EOSDIS epitomizes a System of Systems, whose many varied and distributed parts are integrated into a single, highly functional, organized science data system. A distributed architecture was adopted to ensure discipline-specific support for the science data, while also leveraging standards and establishing policies and tools to enable interdisciplinary research and analysis across multiple scientific instruments. The EOSDIS is composed of system elements such as geographically distributed archive centers used to manage the stewardship of data. The infrastructure consists of underlying capabilities and connections that enable the primary system elements to function together. For example, one key infrastructure component is the common metadata repository, which enables discovery of all data within the EOSDIS system. EOSDIS employs processes and standards to ensure partners can work together effectively and provide coherent services to users.
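    The common metadata repository is exposed through a search API, so a client can discover data across all archive centers with one query. The sketch below only builds such a query URL; the endpoint and parameter names follow NASA's public CMR search interface as an assumption, and should be checked against its documentation before use.

```python
# Sketch of building a granule search against a common metadata
# repository. The endpoint and parameter names are assumed to match
# NASA's public CMR search API; verify against its documentation.

from urllib.parse import urlencode

CMR_GRANULE_SEARCH = "https://cmr.earthdata.nasa.gov/search/granules.json"

def granule_query(short_name, start, end, page_size=10):
    """Return the search URL for granules of one collection in a time window."""
    params = {
        "short_name": short_name,      # collection short name
        "temporal": f"{start},{end}",  # ISO-8601 interval
        "page_size": page_size,        # results per page
    }
    return CMR_GRANULE_SEARCH + "?" + urlencode(params)

url = granule_query("MOD021KM", "2016-01-01T00:00:00Z", "2016-01-02T00:00:00Z")
```

    The point of a single repository is exactly this: one query shape, regardless of which distributed archive center holds the data.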

  11. Enabling Real-Time Video Services over Ad-Hoc Networks Opens the Gates for E-learning in Areas Lacking Infrastructure

    Directory of Open Access Journals (Sweden)

    Johannes Karlsson

    2009-10-01

    Full Text Available In this paper we suggest a promising solution to overcome the problems of delivering e-learning to areas that lack, or have deficiencies in, infrastructure for Internet and mobile communication. We present a simple, reasonably priced and efficient communication platform for providing e-learning. This platform is based on wireless ad-hoc networks. We also present a preemptive routing protocol suitable for real-time video communication over wireless ad-hoc networks. Our results show that this routing protocol can significantly improve the quality of the received video. This makes our suggested system not only able to overcome the infrastructure barrier, but also capable of delivering high-quality e-learning material.

  12. CONCEPTION OF THE ARDUINO PLATFORM AS A BASE FOR THE CONSTRUCTION OF DISTRIBUTED DIAGNOSTIC SYSTEMS

    Directory of Open Access Journals (Sweden)

    Tomasz HANISZEWSKI

    2016-12-01

    Full Text Available Systems for distributed parameter measurement are very expensive solutions; however, they offer many possibilities in terms of real-time verification of machine status. Ready-made, complex and easy-to-use measuring systems can of course be used, but the cost of such a solution may be prohibitive. For research carried out in the experimental sphere of an object, e.g., using a research measurement system, it is possible to design a system based mainly on the Arduino platform. As an example, the concept of a distributed measurement system is presented, with possible applications on cranes and conveyors, i.e., the most common machines in industrial plants.
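    On the host side, such an Arduino-based distributed measurement system reduces to parsing simple frames arriving from each node. The sketch below assumes a hypothetical line-based frame format `node_id;channel;value`; the format is invented for illustration and is not taken from the article.

```python
# Host-side parser for a hypothetical line-based measurement frame
# "node_id;channel;value" as an Arduino node might emit it over
# serial. The frame format is illustrative, not from the article.

def parse_frame(line):
    """Parse one frame; return (node_id, channel, value) or None if malformed."""
    parts = line.strip().split(";")
    if len(parts) != 3:
        return None
    try:
        return int(parts[0]), parts[1], float(parts[2])
    except ValueError:
        return None

def collect(lines):
    """Group readings per (node, channel), silently dropping malformed frames."""
    readings = {}
    for line in lines:
        frame = parse_frame(line)
        if frame is None:
            continue
        node, channel, value = frame
        readings.setdefault((node, channel), []).append(value)
    return readings

# e.g. two vibration readings from node 1, with one corrupted frame
data = collect(["1;vibration;0.42\n", "garbage", "1;vibration;0.45\n"])
```

    Tolerating malformed frames matters in this setting: cheap serial links on industrial machinery drop and corrupt bytes routinely.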

  13. Ocean Data Interoperability Platform (ODIP): developing a common framework for marine data management on a global scale

    Science.gov (United States)

    Schaap, Dick M. A.; Glaves, Helen

    2016-04-01

    Europe, the USA, and Australia are making significant progress in facilitating the discovery, access and long-term stewardship of ocean and marine data through the development, implementation, population and operation of national, regional or international distributed ocean and marine observing and data management infrastructures such as SeaDataNet, EMODnet, IOOS, R2R, and IMOS. All of these developments are resulting in standards and services implemented and used by their regional communities. The Ocean Data Interoperability Platform (ODIP) project is supported by the EU FP7 Research Infrastructures programme, the National Science Foundation (USA) and the Australian government, and was initiated on 1 October 2012. The project has recently been continued as ODIP II for another three years with EU HORIZON 2020 funding. ODIP includes all the major organisations engaged in ocean data management in the EU, the US, and Australia. ODIP is also supported by the IOC-IODE, closely linking this activity with its Ocean Data Portal (ODP) and Ocean Data Standards Best Practices (ODSBP) projects. The ODIP platform aims to ease interoperability between the regional marine data management infrastructures. To this end, it facilitates an organised dialogue between the key infrastructure representatives by publishing best practice, organising a series of international workshops and fostering the development of common standards and interoperability solutions. These are evaluated and tested by means of prototype projects. The presentation will give further background on the ODIP projects and the latest information on the progress of three prototype projects addressing: 1. establishing interoperability between the regional EU, USA and Australia data discovery and access services (SeaDataNet CDI, US NODC, and IMOS MCP) and contributing to the global GEOSS and IODE-ODP portals; 2. establishing interoperability between cruise summary reporting systems in Europe, the USA and

  14. Earth Observation-Supported Service Platform for the Development and Provision of Thematic Information on the Built Environment - the Tep-Urban Project

    Science.gov (United States)

    Esch, T.; Asamer, H.; Boettcher, M.; Brito, F.; Hirner, A.; Marconcini, M.; Mathot, E.; Metz, A.; Permana, H.; Soukop, T.; Stanek, F.; Kuchar, S.; Zeidler, J.; Balhar, J.

    2016-06-01

    The Sentinel fleet will provide so-far unique coverage with Earth observation data and, with it, new opportunities for the implementation of methodologies to generate innovative geo-information products and services. This is where the TEP Urban project is meant to initiate a step change by providing an open and participatory platform, based on modern ICT technologies and services, that enables any interested user to easily exploit Earth observation data pools, in particular those of the Sentinel missions, and derive thematic information on the status and development of the built environment from these data. A key component of the TEP Urban project is the implementation of a web-based platform employing distributed high-level computing infrastructures and providing key functionalities for i) high-performance access to satellite imagery and derived thematic data, ii) modular and generic state-of-the-art pre-processing, analysis, and visualization techniques, iii) customized development and dissemination of algorithms, products and services, and iv) networking and communication. This contribution introduces the main facts about the TEP Urban project, including a description of the general objectives, the platform system design and functionalities, and the preliminary portfolio of products and services available on the TEP Urban platform.

  15. ZIVIS: A City Computing Platform Based on Volunteer Computing

    International Nuclear Information System (INIS)

    Antoli, B.; Castejon, F.; Giner, A.; Losilla, G.; Reynolds, J. M.; Rivero, A.; Sangiao, S.; Serrano, F.; Tarancon, A.; Valles, R.; Velasco, J. L.

    2007-01-01

    Volunteer computing has emerged as a new form of distributed computing. Unlike other computing paradigms like Grids, which tend to be based on complex architectures, volunteer computing has demonstrated a great ability to integrate dispersed, heterogeneous computing resources with ease. This article presents ZIVIS, a project which aims to deploy a city-wide computing platform in Zaragoza (Spain). ZIVIS is based on BOINC (Berkeley Open Infrastructure for Network Computing), a popular open source framework to deploy volunteer and desktop grid computing systems. A scientific code which simulates the trajectories of particles moving inside a stellarator fusion device has been chosen as the pilot application of the project. In this paper we describe the approach followed to port the code to the BOINC framework, as well as some novel techniques, based on standard Grid protocols, that we have used to access the output data present in the BOINC server from a remote visualizer. (Author)
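    Porting a trajectory code to a volunteer-computing framework hinges on cutting the run into independent work units that volunteers can compute in any order. The sketch below illustrates only that partitioning step; the names and unit size are arbitrary, and the actual BOINC work-generation machinery is different.

```python
# Sketch of splitting a particle-trajectory run into independent work
# units, the key step when porting a code to a volunteer-computing
# framework. Unit size and field names are illustrative; the actual
# BOINC work-generation API differs.

def make_work_units(n_particles, unit_size):
    """Partition particle indices [0, n_particles) into contiguous units."""
    units = []
    for start in range(0, n_particles, unit_size):
        end = min(start + unit_size, n_particles)
        units.append({"id": len(units), "first": start, "last": end - 1})
    return units

# 10 000 independent trajectories in units of 512 particles each
units = make_work_units(10_000, 512)
```

    Because each trajectory is independent of the others, no unit needs results from any other, which is precisely what makes the problem a good fit for dispersed, heterogeneous volunteer resources.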

  16. Complete distributed computing environment for a HEP experiment: experience with ARC-connected infrastructure for ATLAS

    International Nuclear Information System (INIS)

    Read, A; Taga, A; O-Saada, F; Pajchel, K; Samset, B H; Cameron, D

    2008-01-01

    Computing and storage resources connected by the Nordugrid ARC middleware in the Nordic countries, Switzerland and Slovenia are part of the ATLAS computing Grid. This infrastructure is being commissioned with the ongoing ATLAS Monte Carlo simulation production in preparation for the commencement of data taking in 2008. The unique non-intrusive architecture of ARC, its straightforward interplay with the ATLAS Production System via the Dulcinea executor, and its performance during the commissioning exercise are described. ARC support for flexible and powerful end-user analysis within the GANGA distributed analysis framework is also shown. Whereas the storage solution for this Grid was earlier based on a large, distributed collection of GridFTP servers, the ATLAS computing design includes a structured SRM-based system with a limited number of storage endpoints. The characteristics, integration and performance of the old and new storage solutions are presented. Although the hardware resources in this Grid are quite modest, it has provided more than double the agreed contribution to ATLAS production with an efficiency above 95% during long periods of stable operation.

  17. Complete distributed computing environment for a HEP experiment: experience with ARC-connected infrastructure for ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Read, A; Taga, A; O-Saada, F; Pajchel, K; Samset, B H; Cameron, D [Department of Physics, University of Oslo, P.b. 1048 Blindern, N-0316 Oslo (Norway)], E-mail: a.l.read@fys.uio.no

    2008-07-15

    Computing and storage resources connected by the Nordugrid ARC middleware in the Nordic countries, Switzerland and Slovenia are part of the ATLAS computing Grid. This infrastructure is being commissioned with the ongoing ATLAS Monte Carlo simulation production in preparation for the commencement of data taking in 2008. The unique non-intrusive architecture of ARC, its straightforward interplay with the ATLAS Production System via the Dulcinea executor, and its performance during the commissioning exercise are described. ARC support for flexible and powerful end-user analysis within the GANGA distributed analysis framework is also shown. Whereas the storage solution for this Grid was earlier based on a large, distributed collection of GridFTP servers, the ATLAS computing design includes a structured SRM-based system with a limited number of storage endpoints. The characteristics, integration and performance of the old and new storage solutions are presented. Although the hardware resources in this Grid are quite modest, it has provided more than double the agreed contribution to ATLAS production with an efficiency above 95% during long periods of stable operation.

  18. Improving linear transport infrastructure efficiency by automated learning and optimised predictive maintenance techniques (INFRALERT)

    Science.gov (United States)

    Jiménez-Redondo, Noemi; Calle-Cordón, Alvaro; Kandler, Ute; Simroth, Axel; Morales, Francisco J.; Reyes, Antonio; Odelius, Johan; Thaduri, Aditya; Morgado, Joao; Duarte, Emmanuele

    2017-09-01

    The on-going H2020 project INFRALERT aims to increase rail and road infrastructure capacity, in the current framework of increased transportation demand, by developing and deploying solutions to optimise the planning of maintenance interventions. It includes two real pilots for road and railway infrastructure. INFRALERT develops an ICT platform (the expert-based Infrastructure Management System, eIMS) which follows a modular approach including several expert-based toolkits. This paper presents the methodologies and preliminary results of the toolkits for i) nowcasting and forecasting of asset condition, ii) alert generation, iii) RAMS & LCC analysis and iv) decision support. The results of these toolkits, applied to a meshed road network in Portugal under the jurisdiction of Infraestruturas de Portugal (IP), are presented, showing the capabilities of the approaches.

  19. Stratospheric Platforms for Monitoring Purposes

    International Nuclear Information System (INIS)

    Konigorski, D.; Gratzel, U.; Obersteiner, M.; Schneidereit, M.

    2010-01-01

    ) incrementally, e.g. by deploying additional High Altitude Platforms (HAPs); Ability to service / update / reconfigure payload; Close range - for both monitoring and communications: 1. Monitoring from e.g. 20 km distance, compared with about 450 km for a Low Earth Orbit (LEO) satellite. Furthermore the platforms facilitate very high spatial resolution. 2. Extremely high overall payload and power capacity and can replace extensive ground infrastructure (e.g. telecoms masts); Stratospheric platforms can also make use of low temperatures (superconducting), large antennas (high resolution), less atmosphere (high power microwave) and energy collection and transmission (electric power support of high altitude UAVs). The paper will discuss the potential of stratospheric platforms for safeguards applications. (author)

  20. A technological infrastructure to sustain Internetworked Enterprises

    Science.gov (United States)

    La Mattina, Ernesto; Savarino, Vincenzo; Vicari, Claudia; Storelli, Davide; Bianchini, Devis

    In the Web 3.0 scenario, where information and services are connected by means of their semantics, organizations can improve their competitive advantage by publishing their business and service descriptions. In this scenario, Semantic Peer-to-Peer (P2P) can play a key role in defining dynamic and highly reconfigurable infrastructures. Organizations can share knowledge and services, using this infrastructure to move towards value networks, an emerging organizational model characterized by fluid boundaries and complex relationships. This chapter collects and defines the technological requirements and architecture of a modular, multi-layer Peer-to-Peer infrastructure for SOA-based applications. This technological infrastructure, based on the combination of Semantic Web and P2P technologies, is intended to sustain Internetworked Enterprise configurations, defining a distributed registry and enabling more expressive queries and efficient routing mechanisms. The following sections focus on the overall architecture, while describing the layers that form it.

  1. ACTRIS Aerosol, Clouds and Trace Gases Research Infrastructure

    OpenAIRE

    Pappalardo Gelsomina

    2018-01-01

    The Aerosols, Clouds and Trace gases Research Infrastructure (ACTRIS) is a distributed infrastructure dedicated to high-quality observation of aerosols, clouds, trace gases and exploration of their interactions. It will deliver precision data, services and procedures regarding the 4D variability of clouds, short-lived atmospheric species and the physical, optical and chemical properties of aerosols to improve the current capacity to analyse, understand and predict past, current and future evo...

  2. LLVM Infrastructure and Tools Project Summary

    Energy Technology Data Exchange (ETDEWEB)

    McCormick, Patrick Sean [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-11-06

    This project works with the open source LLVM Compiler Infrastructure (http://llvm.org) to provide tools and capabilities that address needs and challenges faced by the ECP community (applications, libraries, and other components of the software stack). Our focus is on providing a more productive development environment that enables (i) improved compilation times and code generation for parallelism, (ii) additional features/capabilities within the design and implementation of LLVM components for improved platform/performance portability, and (iii) improved aspects related to the composition of the underlying implementation details of the programming environment, capturing resource utilization, overheads, etc. -- including runtime systems that are often not easily addressed by application and library developers.

  3. Proba-V Mission Exploitation Platform

    Science.gov (United States)

    Goor, Erwin; Dries, Jeroen

    2017-04-01

    be available as well soon. Users can make use of powerful web-based tools and can self-manage virtual machines to perform their work on the infrastructure at VITO, with access to the complete data archive. To realise this, private cloud technology (OpenStack) is used and a distributed processing environment is built based on Hadoop. The Hadoop ecosystem offers a lot of technologies (Spark, Yarn, Accumulo, etc.) which we integrate with several open-source components (e.g. Geotrellis). The impact of this MEP on the user community will be high: it will completely change the way of working with the data and hence open the large time series to a larger community of users. The presentation will address these benefits for the users and discuss the technical challenges in implementing this MEP. Furthermore, demonstrations will be given. Platform URL: https://proba-v-mep.esa.int/
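
    The kind of per-pixel time-series aggregation that such a Hadoop/Spark environment parallelises can be illustrated framework-free. The sketch below (toy data and hypothetical names, not the MEP API) shows the map and reduce phases that a distributed back end would run over the full archive:

```python
from collections import defaultdict

# Toy observations: (pixel_id, value) pairs from a satellite time series.
observations = [("p1", 0.2), ("p2", 0.5), ("p1", 0.4), ("p2", 0.7), ("p1", 0.6)]

def map_phase(records):
    """Map: emit (key, value) pairs. Per-record work such as cloud
    masking or rescaling would happen here, one record at a time."""
    for pixel, value in records:
        yield pixel, value

def reduce_phase(pairs):
    """Reduce: group all values by key and aggregate -- here a temporal
    mean per pixel, a classic time-series reduction."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return {key: sum(vals) / len(vals) for key, vals in grouped.items()}

means = reduce_phase(map_phase(observations))  # one mean per pixel
```

    In the real environment the map phase runs in parallel on the nodes that hold the data, and the framework handles the grouping between the two phases.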

  4. Chiaro Networks' Enstara IP/MPLS platform selected by CERN for trans-Atlantic trial

    CERN Multimedia

    2004-01-01

    "Chiaro Networks, the developer of true infrastructure-class Internet Protocol/Multi-Protocol Label Switching (IP/MPLS) platforms, today announced that its Enstara router has been selected by the European Organization for Nuclear Research (CERN) for its DataTAG project" (1 page)

  5. Seqcrawler: biological data indexing and browsing platform.

    Science.gov (United States)

    Sallou, Olivier; Bretaudeau, Anthony; Roult, Aurelien

    2012-07-24

    Seqcrawler takes its roots in software like SRS or Lucegene. It provides an indexing platform to ease the search of data and meta-data in biological banks, and it can scale to face the current flow of data. While many biological bank search tools are available on the Internet, mainly provided by large organizations to search their data, there is a lack of free and open source solutions to browse one's own set of data with a flexible query system, able to scale from a single computer to a cloud system. A personal index platform will help labs and bioinformaticians to search their meta-data but also to build a larger information system with custom subsets of data. The software is scalable from a single computer to a cloud-based infrastructure. It has been successfully tested in a private cloud with 3 index shards (pieces of index) hosting ~400 million sequence records (whole GenBank, UniProt, PDB and others) for a total size of 600 GB in a fault-tolerant architecture (high availability). It has also been successfully integrated with software to add extra meta-data from BLAST results to enhance users' result analysis. Seqcrawler provides a complete open source search and store solution for labs or platforms needing to manage large amounts of data/meta-data with a flexible and customizable web interface. All components (search engine, visualization and data storage), though independent, share a common and coherent data system that can be queried with a simple HTTP interface. The solution scales easily and can also provide a high-availability infrastructure.
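
    The scatter-gather pattern behind a sharded index of this kind — send the query to every shard, then merge the partial hits — can be sketched in a few lines. All identifiers and data below are illustrative, not Seqcrawler's actual API:

```python
# Three toy "index shards", each of which could live on a separate node.
shards = [
    {"seq001": "GenBank: human insulin mRNA"},
    {"seq002": "UniProt: P01308 insulin precursor"},
    {"seq003": "PDB: 4INS insulin structure"},
]

def search_shard(shard, term):
    """Run the query against a single shard."""
    return {sid: desc for sid, desc in shard.items()
            if term.lower() in desc.lower()}

def search(term):
    """Scatter the query to every shard, then gather and merge the
    partial results -- the pattern a single HTTP search endpoint hides."""
    merged = {}
    for shard in shards:
        merged.update(search_shard(shard, term))
    return merged

hits = search("insulin")   # matches are found in all three shards
```

    Adding capacity then amounts to adding shards; the merge step is unchanged, which is what lets such a design grow from one machine to a cloud deployment.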

  6. Seqcrawler: biological data indexing and browsing platform

    Directory of Open Access Journals (Sweden)

    Sallou Olivier

    2012-07-01

    Full Text Available Abstract Background Seqcrawler takes its roots in software like SRS or Lucegene. It provides an indexing platform to ease the search of data and meta-data in biological banks, and it can scale to face the current flow of data. While many biological bank search tools are available on the Internet, mainly provided by large organizations to search their data, there is a lack of free and open source solutions to browse one’s own set of data with a flexible query system, able to scale from a single computer to a cloud system. A personal index platform will help labs and bioinformaticians to search their meta-data but also to build a larger information system with custom subsets of data. Results The software is scalable from a single computer to a cloud-based infrastructure. It has been successfully tested in a private cloud with 3 index shards (pieces of index) hosting ~400 million sequence records (whole GenBank, UniProt, PDB and others) for a total size of 600 GB in a fault-tolerant architecture (high availability). It has also been successfully integrated with software to add extra meta-data from BLAST results to enhance users’ result analysis. Conclusions Seqcrawler provides a complete open source search and store solution for labs or platforms needing to manage large amounts of data/meta-data with a flexible and customizable web interface. All components (search engine, visualization and data storage), though independent, share a common and coherent data system that can be queried with a simple HTTP interface. The solution scales easily and can also provide a high-availability infrastructure.

  7. ARIADNE: A Research Infrastructure for Archaeology

    NARCIS (Netherlands)

    Hollander, H.S.; Meghini, Carlo; Scopigno, Roberto; Richards, Julian; Wright, Holly; Geser, Guntram; Cuy, Sebastian; Fihn, Johan; Fanini, Bruno; Niccolucci, Franco; Felicetti, Achille; Ronzino, Paola; Nurra, Federico; Papatheodorou, Christos; Gavrilis, Dimitris; Theodoridou, Maria; Doerr, Martin; Tudhope, Douglas; Binding, Ceri; Vlachidis, Andreas

    Research e-infrastructures, digital archives and data services have become important pillars of scientific enterprise that in recent decades has become ever more collaborative, distributed and data-intensive. The archaeological research community has been an early adopter of digital tools for data

  8. Cloud-Based Software Platform for Smart Meter Data Management

    DEFF Research Database (Denmark)

    Liu, Xiufeng; Nielsen, Per Sieverts

    of the so-called big data possible. This can improve energy management, e.g., help utility companies to forecast energy loads and improve services, and help households to manage energy usage and save money. In this regard, the proposed paper focuses on building an innovative software platform for smart...... their knowledge; a scalable data analytics platform for data mining over big data sets for energy demand forecasting and consumption discovery; data as a service for other applications using smart meter data; and a portal for visualizing data analytics results. The design will incorporate hybrid clouds......, including Infrastructure as a Service (IaaS) and Platform as a Service (PaaS), which are suitable for on-demand provisioning, massive scaling, and manageability. Besides, the design will impose extensibility, efficiency, and high availability on the system. The paper will evaluate the system comprehensively...
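
    As a minimal illustration of the energy demand forecasting such an analytics platform might host, the sketch below implements a moving-average baseline over hourly readings. The data and names are invented for the example; real platforms would use far richer models:

```python
def moving_average_forecast(loads, window=3):
    """Forecast the next reading as the mean of the last `window`
    readings -- a deliberately simple demand-forecasting baseline."""
    if len(loads) < window:
        raise ValueError("need at least `window` readings")
    return sum(loads[-window:]) / window

hourly_kwh = [1.2, 1.4, 1.3, 1.5, 1.6, 1.7]     # invented meter readings
forecast = moving_average_forecast(hourly_kwh)  # mean of the last three
```

    A baseline like this is also a useful yardstick: a more elaborate model earns its complexity only if it beats the moving average on held-out data.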

  9. The design and implementation of an infrastructure for multimedia digital libraries

    NARCIS (Netherlands)

    de Vries, A.P.; Eberman, B.; Kovalcin, D.E.

    We develop an infrastructure for managing, indexing and serving multimedia content in digital libraries. This infrastructure follows the model of the web, and thereby is distributed in nature. We discuss the design of the Librarian, the component that manages meta data about the content. The

  10. Information-computational platform for collaborative multidisciplinary investigations of regional climatic changes and their impacts

    Science.gov (United States)

    Gordov, Evgeny; Lykosov, Vasily; Krupchatnikov, Vladimir; Okladnikov, Igor; Titov, Alexander; Shulgina, Tamara

    2013-04-01

    Analysis of the growing volume of climate-change-related data from sensors and model outputs requires collaborative multidisciplinary efforts of researchers. To do this in a timely and reliable way, one needs a modern information-computational infrastructure supporting integrated studies in the field of environmental sciences. The recently developed experimental software and hardware platform Climate (http://climate.scert.ru/) provides the required environment for regional climate change investigations. The platform combines a modern web 2.0 approach, GIS functionality and capabilities to run climate and meteorological models, process large geophysical datasets and support relevant analysis. It also supports joint software development by distributed research groups, and the organization of thematic education for students and post-graduate students. In particular, the platform software includes dedicated modules for numerical processing of regional and global modeling results for subsequent analysis and visualization. Runs of the integrated WRF and «Planet Simulator» models, preprocessing of modeling results and visualization are also provided. All functions of the platform are accessible to users through a web portal using a common graphical web browser, in the form of an interactive graphical user interface which provides, in particular, capabilities for selecting a geographical region of interest (pan and zoom), manipulating data layers (order, enable/disable, feature extraction) and visualizing results. The platform provides users with capabilities for heterogeneous geophysical data analysis, including high-resolution data, and for discovering tendencies in climatic and ecosystem changes in the framework of different multidisciplinary researches. Using it, even an unskilled user without specific knowledge can perform reliable computational processing and visualization of large meteorological, climatic and satellite monitoring datasets through

  11. Infrastructure Requirements for an Expanded Fuel Ethanol Industry

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, Robert E. [Downstream Alternatives, Inc., South Bend, IN (United States)

    2002-01-15

    This report provides technical information specifically related to ethanol transportation, distribution, and marketing issues, and analyses the infrastructure requirements for an expanded ethanol industry.

  12. ACTRIS Aerosol, Clouds and Trace Gases Research Infrastructure

    Science.gov (United States)

    Pappalardo, Gelsomina

    2018-04-01

    The Aerosols, Clouds and Trace gases Research Infrastructure (ACTRIS) is a distributed infrastructure dedicated to high-quality observation of aerosols, clouds, trace gases and exploration of their interactions. It will deliver precision data, services and procedures regarding the 4D variability of clouds, short-lived atmospheric species and the physical, optical and chemical properties of aerosols to improve the current capacity to analyse, understand and predict past, current and future evolution of the atmospheric environment.

  13. Self-* and Adaptive Mechanisms for Large Scale Distributed Systems

    Science.gov (United States)

    Fragopoulou, P.; Mastroianni, C.; Montero, R.; Andrjezak, A.; Kondo, D.

    Large-scale distributed computing systems and infrastructure, such as Grids, P2P systems and desktop Grid platforms, are decentralized, pervasive, and composed of a large number of autonomous entities. The complexity of these systems is such that human administration is nearly impossible and centralized or hierarchical control is highly inefficient. These systems need to run on highly dynamic environments, where content, network topologies and workloads are continuously changing. Moreover, they are characterized by the high degree of volatility of their components and the need to provide efficient service management and to handle efficiently large amounts of data. This paper describes some of the areas for which adaptation emerges as a key feature, namely, the management of computational Grids, the self-management of desktop Grid platforms and the monitoring and healing of complex applications. It also elaborates on the use of bio-inspired algorithms to achieve self-management. Related future trends and challenges are described.

  14. Concept of a spatial data infrastructure for web-mapping, processing and service provision for geo-hazards

    Science.gov (United States)

    Weinke, Elisabeth; Hölbling, Daniel; Albrecht, Florian; Friedl, Barbara

    2017-04-01

    Geo-hazards and their effects are distributed geographically over wide regions. Effective mapping and monitoring is essential for hazard assessment and mitigation, and is often best achieved using satellite imagery and new object-based image analysis approaches to identify and delineate geo-hazard objects (landslides, floods, forest fires, storm damages, etc.). At the moment, several local/national databases and platforms provide and publish data on different types of geo-hazards, as well as web-based risk maps and decision support systems. The European Commission also implemented the Copernicus Emergency Management Service (EMS) in 2015, which publishes information about natural and man-made disasters and risks. Currently, no platform for landslides or geo-hazards as such exists that enables the integration of the user in the mapping and monitoring process. In this study we introduce the concept of a spatial data infrastructure for object delineation, web processing and service provision of landslide information, with a focus on user interaction in all processes. A first prototype for the processing and mapping of landslides in Austria and Italy has been developed within the project Land@Slide, funded by the Austrian Research Promotion Agency (FFG) in the Austrian Space Applications Program (ASAP). The spatial data infrastructure and its services for the mapping, processing and analysis of landslides can be extended to other regions and to all types of geo-hazards for analysis and delineation based on Earth Observation (EO) data. The architecture of the first prototypical spatial data infrastructure includes four main areas of technical components. The data tier consists of a file storage system and the spatial data catalogue for the management of EO data, other geospatial data on geo-hazards, as well as descriptions and protocols for the data processing and analysis. An interface to extend the data integration from external sources (e.g. Sentinel-2 data) is planned

  15. Developing research career indicators using open data: the RISIS infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Cañibano, C.; Woolley, R.; Iversen, E.; Hinze, S.; Hornbostel, S.; Tesch, J.

    2016-07-01

    This paper introduces the research infrastructure for research and innovation policy studies (RISIS) and its ongoing work on the development of indicators for research careers. The paper first describes the rationale for developing an information system on research careers. It then uses an example to demonstrate the possibilities arising from aggregating open data from different datasets within the RISIS platform to create new information and monitoring possibilities with regard to research careers. (Author)

  16. The computing and data infrastructure to interconnect EEE stations

    Science.gov (United States)

    Noferini, F.; EEE Collaboration

    2016-07-01

    The Extreme Energy Event (EEE) experiment is devoted to the search for high-energy cosmic rays through a network of telescopes installed in about 50 high schools distributed throughout the Italian territory. This project requires a peculiar data management infrastructure to collect data registered in stations very far from each other and to allow a coordinated analysis. Such an infrastructure is realized at INFN-CNAF, which operates a Cloud facility based on the OpenStack open-source Cloud framework and provides Infrastructure as a Service (IaaS) for its users. In 2014 EEE started to use it for collecting, monitoring and reconstructing the data acquired in all the EEE stations. For the synchronization between the stations and the INFN-CNAF infrastructure we used BitTorrent Sync, a free peer-to-peer software package designed to optimize data synchronization between distributed nodes. All data folders are synchronized with the central repository in real time to allow an immediate reconstruction of the data and their publication on a monitoring webpage. We present the architecture and the functionalities of this data management system, which provides a flexible environment for the specific needs of the EEE project.
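
    The synchronization logic described above — station folders mirrored into a central repository so that new files can be reconstructed immediately — can be modelled in miniature. The sketch uses plain dictionaries (file name to version) in place of BitTorrent Sync and real folders; all names are hypothetical:

```python
def sync(station_folder, central_repo):
    """Mirror a station's data folder into the central repository and
    report which files are new or updated, so the reconstruction job
    can be triggered immediately. Folders are modelled as dicts mapping
    file name -> version; BitTorrent Sync plays this role in the real
    system, continuously and peer-to-peer."""
    changed = []
    for name, version in station_folder.items():
        if central_repo.get(name) != version:
            central_repo[name] = version
            changed.append(name)
    return changed

central = {"run001.dat": 1}                    # already synchronized
station = {"run001.dat": 1, "run002.dat": 1}   # a new run has arrived
to_reconstruct = sync(station, central)        # -> ["run002.dat"]
```

    A second call with nothing new returns an empty list, which is why the real system can run the comparison continuously without re-triggering reconstructions.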

  17. The computing and data infrastructure to interconnect EEE stations

    Energy Technology Data Exchange (ETDEWEB)

    Noferini, F., E-mail: noferini@bo.infn.it [Museo Storico della Fisica e Centro Studi e Ricerche “Enrico Fermi”, Rome (Italy); INFN CNAF, Bologna (Italy)

    2016-07-11

    The Extreme Energy Event (EEE) experiment is devoted to the search for high-energy cosmic rays through a network of telescopes installed in about 50 high schools distributed throughout the Italian territory. This project requires a peculiar data management infrastructure to collect data registered in stations very far from each other and to allow a coordinated analysis. Such an infrastructure is realized at INFN-CNAF, which operates a Cloud facility based on the OpenStack open-source Cloud framework and provides Infrastructure as a Service (IaaS) for its users. In 2014 EEE started to use it for collecting, monitoring and reconstructing the data acquired in all the EEE stations. For the synchronization between the stations and the INFN-CNAF infrastructure we used BitTorrent Sync, a free peer-to-peer software package designed to optimize data synchronization between distributed nodes. All data folders are synchronized with the central repository in real time to allow an immediate reconstruction of the data and their publication on a monitoring webpage. We present the architecture and the functionalities of this data management system, which provides a flexible environment for the specific needs of the EEE project.

  18. The computing and data infrastructure to interconnect EEE stations

    International Nuclear Information System (INIS)

    Noferini, F.

    2016-01-01

    The Extreme Energy Event (EEE) experiment is devoted to the search for high-energy cosmic rays through a network of telescopes installed in about 50 high schools distributed throughout the Italian territory. This project requires a peculiar data management infrastructure to collect data registered in stations very far from each other and to allow a coordinated analysis. Such an infrastructure is realized at INFN-CNAF, which operates a Cloud facility based on the OpenStack open-source Cloud framework and provides Infrastructure as a Service (IaaS) for its users. In 2014 EEE started to use it for collecting, monitoring and reconstructing the data acquired in all the EEE stations. For the synchronization between the stations and the INFN-CNAF infrastructure we used BitTorrent Sync, a free peer-to-peer software package designed to optimize data synchronization between distributed nodes. All data folders are synchronized with the central repository in real time to allow an immediate reconstruction of the data and their publication on a monitoring webpage. We present the architecture and the functionalities of this data management system, which provides a flexible environment for the specific needs of the EEE project.

  19. Federated data storage and management infrastructure

    International Nuclear Information System (INIS)

    Zarochentsev, A; Kiryanov, A; Klimentov, A; Krasnopevtsev, D; Hristov, P

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate storage needs growing by orders of magnitude; this will require new approaches to data storage organization and data handling. In our project we address the fundamental problem of designing an architecture to integrate distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications, and to provide access to data from heterogeneous computing facilities. We have prototyped a federated storage for the Russian T1 and T2 centers located in Moscow, St. Petersburg and Gatchina, as well as a Russian/CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing platforms, cloud computing and the Grid for the ALICE and ATLAS experiments. We will present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to efficiently use federated data storage experiment-wide within national academic facilities for High Energy and Nuclear Physics, as well as for other data-intensive science applications such as bioinformatics. (paper)
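
    One design question in such a federation is where to route a read when several heterogeneous endpoints hold a replica. A minimal sketch of latency-based endpoint selection; the endpoint names and numbers are purely illustrative, not measurements from the paper:

```python
def pick_endpoint(latencies_ms):
    """Route a read to the lowest-latency replica in the federation."""
    return min(latencies_ms, key=latencies_ms.get)

# Invented per-endpoint latency measurements (milliseconds).
measured = {"T1-Moscow": 12.0, "T2-StPetersburg": 25.0, "T2-Gatchina": 18.0}
best = pick_endpoint(measured)   # -> "T1-Moscow"
```

    Real federations weigh more than latency (load, cost, data locality), but the selection step reduces to the same pattern: score each endpoint, pick the best.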

  20. International Conference of Applied Science and Technology for Infrastructure Engineering

    Science.gov (United States)

    Elvina Santoso, Shelvy; Hardianto, Ekky

    2017-11-01

    Preface: International Conference of Applied Science and Technology for Infrastructure Engineering (ICASIE) 2017. The International Conference of Applied Science and Technology for Infrastructure Engineering (ICASIE) 2017 was scheduled and successfully held at the Swiss-Bell Inn Hotel, Surabaya, Indonesia, on August 5th 2017, organized by the Department of Civil Infrastructure Engineering, Faculty of Vocation, Institut Teknologi Sepuluh Nopember (ITS). This annual event aims to create synergies between government, private sectors, employers, practitioners, and academics. The conference has a different theme each year, and “MATERIAL FOR INFRASTRUCTURE ENGINEERING” is this year’s main theme. In addition, we also provide a platform for various other sub-theme topics, including but not limited to Geopolymer Concrete and Materials Technology; Structural Dynamics, Engineering, and Sustainability; Seismic Design and Control of Structural Vibrations; Innovative and Green Buildings; Project Management; Transportation and Highway Engineering; Geotechnical Engineering; Water Engineering and Resources Management; Surveying and Geospatial Engineering; Coastal Engineering; Geophysics; Energy; Electronic and Mechatronic; Industrial Process; and Data Mining. The list of Organizers, Journal Editors, Steering Committee, International Scientific Committee, Chairman, and Keynote Speakers is available in this pdf.

  1. Infrastructure: A technology battlefield in the 21st century

    Energy Technology Data Exchange (ETDEWEB)

    Drucker, H.

    1997-12-31

    A major part of technological advancement has involved the development of complex infrastructure systems, including electric power generation, transmission, and distribution networks; oil and gas pipeline systems; highway and rail networks; and telecommunication networks. Dependence on these infrastructure systems renders them attractive targets for conflict in the twenty-first century. Hostile governments, domestic and international terrorists, criminals, and mentally distressed individuals will inevitably find some part of the infrastructure an easy target for theft, for making political statements, for disruption of strategic activities, or for making a nuisance. The current situation regarding the vulnerability of the infrastructure can be summarized in three major points: (1) our dependence on technology has made our infrastructure more important and vital to our everyday lives; this, in turn, makes us much more vulnerable to disruption in any infrastructure system; (2) technologies available for attacking infrastructure systems have changed substantially and have become much easier to obtain and use; easy accessibility to information on how to disrupt or destroy various infrastructure components means that almost anyone can be involved in this destructive process; (3) technologies for defending infrastructure systems and preventing damage have not kept pace with the capability for destroying such systems. A brief review of these points will illustrate the significance of infrastructure and the growing dangers to its various elements.

  2. Zone-Aware Service Platform: A New Concept of Context-Aware Networking and Communications for Smart-Home Sustainability

    Directory of Open Access Journals (Sweden)

    Jinsung Byun

    2018-01-01

    Full Text Available Recent advances in networking and communications have removed the restrictions of time and space in information services. Context-aware service systems can support predefined services in accordance with user requests regardless of time and space. However, due to their architectural limitations, recent systems are not flexible enough to provide device-independent services through multiple service providers. Recently, researchers have focused on a new service paradigm characterized by high mobility, service continuity, and green characteristics. In line with these efforts, improved context-aware service platforms have been suggested that enable the platform to manage contexts and provide adaptive services for multiple users and locations. However, such platforms can only support limited continuity and mobility. In other words, the existing systems cannot support seamless service provision among different service providers with respect to changes in mobility, situation, device, and network. Furthermore, the existing context-aware service platform relies heavily on always-on infrastructure, which inevitably leads to high energy consumption. We therefore propose a new concept of context-aware networking and communications, namely a zone-aware service platform. The proposed platform autonomously reconfigures the infrastructure and maintains a service session interacting with the middleware to support cost- and energy-efficient pervasive services for smart-home sustainability.

  3. Building analytical platform with Big Data solutions for log files of PanDA infrastructure

    Science.gov (United States)

    Alekseev, A. A.; Barreiro Megino, F. G.; Klimentov, A. A.; Korchuganova, T. A.; Maendo, T.; Padolski, S. V.

    2018-05-01

    The paper describes the implementation of a high-performance system for the processing and analysis of log files for the PanDA infrastructure of the ATLAS experiment at the Large Hadron Collider (LHC), which is responsible for the workload management of on the order of 2M daily jobs across the Worldwide LHC Computing Grid. The solution is based on the ELK technology stack, which includes several components: Filebeat, Logstash, ElasticSearch (ES), and Kibana. Filebeat is used to collect data from logs. Logstash processes the data and exports it to Elasticsearch. ES is responsible for centralized data storage. Data accumulated in ES can be viewed using Kibana. These components were integrated with the PanDA infrastructure and replaced previous log processing systems, increasing scalability and usability. The authors describe all the components and their configuration tuning for the current tasks and the scale of the actual system, and give several real-life examples of how this centralized log processing and storage service is used, to showcase the advantages for daily operations.
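
    As a rough illustration of the parsing stage that Logstash performs before documents are indexed into Elasticsearch, the sketch below turns a raw log line into a structured document. The log-line format and field names are invented for the example; the real PanDA log layout differs.

    ```python
    import json
    import re

    # Hypothetical log-line format; the real PanDA logs differ.
    LOG_PATTERN = re.compile(
        r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
        r"(?P<level>[A-Z]+) "
        r"jobid=(?P<jobid>\d+) site=(?P<site>\S+) msg=(?P<msg>.*)"
    )

    def parse_log_line(line):
        """Turn one raw log line into a JSON-serializable document,
        much as a Logstash grok filter would before shipping to ES."""
        m = LOG_PATTERN.match(line.strip())
        if m is None:
            return None          # unparseable lines would be tagged or dropped
        doc = m.groupdict()
        doc["jobid"] = int(doc["jobid"])
        return doc

    line = "2018-05-01 12:00:00 ERROR jobid=4242 site=CERN-PROD msg=stage-out failed"
    print(json.dumps(parse_log_line(line)))
    ```

    In the real stack, each such document would then be indexed into ES and become queryable from Kibana.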

  4. An E-government Interoperability Platform Supporting Personal Data Protection Regulations

    Directory of Open Access Journals (Sweden)

    Laura González

    2016-08-01

    Full Text Available Public agencies are increasingly required to collaborate with each other in order to provide high-quality e-government services. This collaboration is usually based on the service-oriented approach and supported by interoperability platforms. Such platforms are specialized middleware-based infrastructures enabling the provision, discovery and invocation of interoperable software services. In turn, given that personal data handled by governments are often very sensitive, most governments have developed some sort of legislation focusing on data protection. This paper proposes solutions for monitoring and enforcing data protection laws within an E-government Interoperability Platform. In particular, the proposal addresses requirements posed by the Uruguayan Data Protection Law and the Uruguayan E-government Platform, although it can also be applied in similar scenarios. The solutions are based on well-known integration mechanisms (e.g. the Enterprise Service Bus) as well as recognized security standards (e.g. the eXtensible Access Control Markup Language) and were completely prototyped leveraging the SwitchYard ESB product.
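
    The enforcement idea can be sketched as an attribute-based access decision of the kind an XACML policy decision point returns. The rules, attribute names and first-applicable combining below are illustrative assumptions, not the Uruguayan platform's actual policy set.

    ```python
    # Minimal attribute-based access check, loosely modelled on the
    # XACML request/decision flow (subject, resource, action -> effect).
    # The policy content here is invented for illustration.
    POLICIES = [
        {"agency": "health-ministry", "resource": "medical-record",
         "action": "read", "effect": "Permit"},
        {"agency": "*", "resource": "medical-record",
         "action": "*", "effect": "Deny"},
    ]

    def decide(request):
        """Return the effect of the first matching rule (first-applicable
        combining), or 'NotApplicable' if no rule matches."""
        for rule in POLICIES:
            if all(rule[k] in ("*", request[k])
                   for k in ("agency", "resource", "action")):
                return rule["effect"]
        return "NotApplicable"

    print(decide({"agency": "health-ministry",
                  "resource": "medical-record", "action": "read"}))  # Permit
    print(decide({"agency": "tax-office",
                  "resource": "medical-record", "action": "read"}))  # Deny
    ```

    In the platform described, such decisions would be taken at the ESB so that every inter-agency message is checked before delivery.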

  5. Assessment of online public opinions on large infrastructure projects: A case study of the Three Gorges Project in China

    International Nuclear Information System (INIS)

    Jiang, Hanchen; Qiang, Maoshan; Lin, Peng

    2016-01-01

    Public opinion becomes increasingly salient in the ex post evaluation stage of large infrastructure projects, which have significant impacts on the environment and society. However, traditional survey methods are inefficient for collecting and assessing public opinion because of its large quantity and diversity. Recently, social media platforms have provided a rich data source for monitoring and assessing public opinion on controversial infrastructure projects. This paper proposes an assessment framework to transform unstructured online public opinions on large infrastructure projects into sentiment and topic indicators for enhancing practices of ex post evaluation and public participation. The framework uses web crawlers to collect online comments related to a large infrastructure project and employs two natural language processing technologies, sentiment analysis and topic modeling, together with spatio-temporal analysis, to transform these comments into indicators for assessing online public opinion on the project. Based on the framework, we investigate the online public opinion of the Three Gorges Project on China's largest microblogging site, Weibo. The assessment results present spatio-temporal distributions of post intensity and sentiment polarity, reveal major topics with different sentiments, and summarize managerial implications for ex post evaluation of the world's largest hydropower project. The proposed assessment framework is expected to be widely applied as a methodological strategy to assess public opinion in the ex post evaluation stage of large infrastructure projects. - Highlights: • We developed a framework to assess online public opinion on large infrastructure projects with environmental impacts. • Indicators were built to assess post intensity, sentiment polarity and major topics of the public opinion. • We took the Three Gorges Project (TGP) as an example to demonstrate the effectiveness of the proposed framework.
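
    The sentiment-polarity step of such a framework can be sketched with a toy lexicon-based scorer. The lexicon and the English comments below are invented for illustration; the study itself analyses Chinese-language Weibo posts with full NLP models.

    ```python
    # Toy lexicon-based sentiment scoring, illustrating the
    # sentiment-polarity indicator.  Lexicon and comments are invented.
    POSITIVE = {"clean", "reliable", "impressive", "benefit"}
    NEGATIVE = {"flood", "damage", "costly", "pollution"}

    def polarity(comment):
        """Score a comment as +1 (positive), -1 (negative) or 0 (neutral)
        by counting lexicon hits."""
        words = comment.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return (score > 0) - (score < 0)   # sign of the score

    comments = [
        "reliable power and real benefit for the region",
        "costly project with pollution downstream",
        "the dam opened in 2003",
    ]
    print([polarity(c) for c in comments])  # [1, -1, 0]
    ```

    Aggregating such per-comment scores over time and location yields the post-intensity and sentiment-polarity distributions the framework reports.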

  6. Assessment of online public opinions on large infrastructure projects: A case study of the Three Gorges Project in China

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Hanchen, E-mail: jhc13@mails.tsinghua.edu.cn; Qiang, Maoshan, E-mail: qiangms@tsinghua.edu.cn; Lin, Peng, E-mail: celinpe@mail.tsinghua.edu.cn

    2016-11-15

    Public opinion becomes increasingly salient in the ex post evaluation stage of large infrastructure projects, which have significant impacts on the environment and society. However, traditional survey methods are inefficient for collecting and assessing public opinion because of its large quantity and diversity. Recently, social media platforms have provided a rich data source for monitoring and assessing public opinion on controversial infrastructure projects. This paper proposes an assessment framework to transform unstructured online public opinions on large infrastructure projects into sentiment and topic indicators for enhancing practices of ex post evaluation and public participation. The framework uses web crawlers to collect online comments related to a large infrastructure project and employs two natural language processing technologies, sentiment analysis and topic modeling, together with spatio-temporal analysis, to transform these comments into indicators for assessing online public opinion on the project. Based on the framework, we investigate the online public opinion of the Three Gorges Project on China's largest microblogging site, Weibo. The assessment results present spatio-temporal distributions of post intensity and sentiment polarity, reveal major topics with different sentiments, and summarize managerial implications for ex post evaluation of the world's largest hydropower project. The proposed assessment framework is expected to be widely applied as a methodological strategy to assess public opinion in the ex post evaluation stage of large infrastructure projects. - Highlights: • We developed a framework to assess online public opinion on large infrastructure projects with environmental impacts. • Indicators were built to assess post intensity, sentiment polarity and major topics of the public opinion. • We took the Three Gorges Project (TGP) as an example to demonstrate the effectiveness of the proposed framework.

  7. ACTRIS Aerosol, Clouds and Trace Gases Research Infrastructure

    Directory of Open Access Journals (Sweden)

    Pappalardo Gelsomina

    2018-01-01

    Full Text Available The Aerosols, Clouds and Trace gases Research Infrastructure (ACTRIS) is a distributed infrastructure dedicated to high-quality observation of aerosols, clouds and trace gases and to the exploration of their interactions. It will deliver precision data, services and procedures regarding the 4D variability of clouds, short-lived atmospheric species and the physical, optical and chemical properties of aerosols, to improve the current capacity to analyse, understand and predict the past, current and future evolution of the atmospheric environment.

  8. Models of Financing and Available Financial Resources for Transport Infrastructure Projects

    Directory of Open Access Journals (Sweden)

    O. Pokorná

    2001-01-01

    Full Text Available A typical feature of transport infrastructure projects is that they are expensive and take a long time to construct. Transport infrastructure financing has traditionally lain in the public domain. A tightening of many countries' budgets in recent times has led to an exploration of alternative resources for financing transport infrastructures. A variety of models and methods can be used in transport infrastructure project financing. The selection of the appropriate model should be done taking into account not only financial resources but also the distribution of construction and operating risks and the contractual relations between the stakeholders.

  9. Security infrastructure for dynamically provisioned cloud infrastructure services

    NARCIS (Netherlands)

    Demchenko, Y.; Ngo, C.; de Laat, C.; Lopez, D.R.; Morales, A.; García-Espín, J.A.; Pearson, S.; Yee, G.

    2013-01-01

    This chapter discusses conceptual issues, basic requirements and practical suggestions for designing dynamically configured security infrastructure provisioned on demand as part of the cloud-based infrastructure. This chapter describes general use cases for provisioning cloud infrastructure services

  10. A GIS-based assessment of coal-based hydrogen infrastructure deployment in the state of Ohio

    International Nuclear Information System (INIS)

    Johnson, Nils; Yang, Christopher; Ogden, Joan

    2008-01-01

    Hydrogen infrastructure costs will vary by region as geographic characteristics and feedstocks differ. This paper proposes a method for optimizing regional hydrogen infrastructure deployment by combining detailed spatial data in a geographic information system (GIS) with a technoeconomic model of hydrogen infrastructure components. The method is applied to a case study in Ohio, in which coal-based hydrogen infrastructure with carbon capture and storage (CCS) is modeled for two distribution modes at several steady-state hydrogen vehicle market penetration levels. The paper identifies the optimal infrastructure design at each market penetration level, as well as the costs, CO2 emissions, and energy use associated with each infrastructure pathway. The results indicate that aggregating infrastructure at the regional scale yields lower levelized costs of hydrogen than at the city level at a given market penetration level, and that centralized production with pipeline distribution is the favored pathway even at low market penetration. Based upon the hydrogen infrastructure designs evaluated in this paper, coal-based hydrogen production with CCS can significantly reduce transportation-related CO2 emissions at a relatively low infrastructure cost and levelized fuel cost. (author)
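
    The pathway comparison rests on levelized cost, which can be sketched as annualized capital plus operating cost divided by annual hydrogen output. All figures and the capital recovery factor below are invented placeholders, not the paper's Ohio results.

    ```python
    # Illustrative levelized-cost comparison between two hydrogen supply
    # pathways.  All cost figures are invented placeholders.
    def levelized_cost(capital, annual_om, annual_kg, crf=0.12):
        """Levelized cost per kg: annualized capital (via a capital
        recovery factor) plus O&M, divided by annual hydrogen output."""
        return (capital * crf + annual_om) / annual_kg

    # Hypothetical regional plant with pipeline distribution vs. a
    # smaller city-scale plant serving the same market share.
    central = levelized_cost(capital=500e6, annual_om=40e6, annual_kg=60e6)
    city    = levelized_cost(capital=200e6, annual_om=25e6, annual_kg=20e6)
    print(f"central+pipeline: ${central:.2f}/kg, city-scale: ${city:.2f}/kg")
    ```

    The economy-of-scale effect the paper reports shows up here as the larger plant spreading its fixed costs over more output.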

  11. Getting the Most from Distributed Resources With an Analytics Platform for ATLAS Computing Services

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00225336; The ATLAS collaboration; Gardner, Robert; Bryant, Lincoln

    2016-01-01

    To meet a sharply increasing demand for computing resources for LHC Run 2, ATLAS distributed computing systems reach far and wide to gather CPU resources and storage capacity to execute an evolving ecosystem of production and analysis workflow tools. Indeed more than a hundred computing sites from the Worldwide LHC Computing Grid, plus many “opportunistic” facilities at HPC centers, universities, national laboratories, and public clouds, combine to meet these requirements. These resources have characteristics (such as local queuing availability, proximity to data sources and target destinations, network latency and bandwidth capacity, etc.) affecting the overall processing efficiency and throughput. To quantitatively understand and in some instances predict behavior, we have developed a platform to aggregate, index (for user queries), and analyze the more important information streams affecting performance. These data streams come from the ATLAS production system (PanDA), the distributed data management s...

  12. The StratusLab cloud distribution: Use-cases and support for scientific applications

    Science.gov (United States)

    Floros, E.

    2012-04-01

    The StratusLab project is integrating an open cloud software distribution that enables organizations to set up and provide their own private or public IaaS (Infrastructure as a Service) computing clouds. The StratusLab distribution capitalizes on popular infrastructure virtualization solutions like KVM, the OpenNebula virtual machine manager, the Claudia service manager and the SlipStream deployment platform, which are further enhanced and expanded with additional components developed within the project. The StratusLab distribution covers the core aspects of a cloud IaaS architecture, namely computing (life-cycle management of virtual machines), storage, appliance management and networking. The resulting software stack provides a packaged turn-key solution for deploying cloud computing services. The cloud computing infrastructures deployed using StratusLab can support a wide range of scientific and business use cases. Grid computing has been the primary use case pursued by the project, and for this reason the initial priority has been support for the deployment and operation of fully virtualized production-level grid sites; this goal has already been achieved by operating such a site as part of EGI's (European Grid Initiative) pan-European grid infrastructure. In this area the project is currently working to provide non-trivial capabilities like elastic and autonomic management of grid site resources. Although grid computing has been the motivating paradigm, StratusLab's cloud distribution can support a wider range of use cases. In this direction, we have developed and currently provide support for setting up general-purpose computing solutions like Hadoop, MPI and Torque clusters. As concerns scientific applications, the project is collaborating closely with the bioinformatics community in order to prepare VM appliances and deploy optimized services for bioinformatics applications. In a similar manner additional scientific disciplines like Earth Science can take

  13. The Urban Exploitation Platform - An instrument for the global provision of indicators related to sustainable cities and communities

    Science.gov (United States)

    Esch, Thomas; Asamer, Hubert; Hirner, Andreas; Marconcini, Mattia; Metz, Annekatrin; Uereyen, Soner; Zeidler, Julian; Boettcher, Martin; Permana, Hans; Boissier, Enguerran; Mathot, Emmanuel; Soukop, Tomas; Balhar, Jakub; Svaton, Vaclav; Kuchar, Stepan

    2017-04-01

    The Sentinel fleet will provide a so-far unique coverage with Earth Observation (EO) data and therewith new opportunities for implementing methodologies to generate innovative geo-information products and services supporting the SDG targets. This is where the TEP Urban project aims to initiate a step change by providing an open and participatory platform that allows any interested user to easily exploit large-volume EO data pools, in particular those of the European Sentinel and the US Landsat missions, and derive thematic geo-information, metrics and indicators related to the status and development of the built environment. The key component of the TEP Urban initiative is the implementation of a web-based platform (https://urban-tep.eo.esa.int) employing distributed high-level computing infrastructures and providing key functionalities for i) high-performance access to satellite imagery and other data sources such as statistics or topographic data, ii) state-of-the-art pre-processing, analysis, and visualization techniques, iii) customized development and dissemination of algorithms, products and services, and iv) networking and communication. This contribution introduces the main facts about the TEP Urban platform, including a description of the general objectives, the platform's system design and functionalities, and the available portfolio of products and services that can directly serve the global provision of indicators for SDG targets, in particular those related to SDG 11.

  14. FORMATION INNOVATIVELY FOCUSED INFRASTRUCTURE OF THE GRAIN MARKET

    Directory of Open Access Journals (Sweden)

    D. S. Latynin

    2014-01-01

    Full Text Available Summary. The proposed scheme for the infrastructure of the modern grain market is aimed at improving grain merchandising by eliminating material disproportions between its participants, thereby reducing logistics costs per tonne of grain, and at creating an alternative organized merchandising channel that gives grain producers direct access to the wholesale market and a share of the profit received from exports. Eliminating material disproportions along the entire chain, from the supplier to the end user, requires organizing merchandising on logistics principles; this would ensure an overall synergistic effect exceeding the sum of the effects achieved by individual participants in the chain. The proposed structure of an association of grain market participants is aimed at creating mutual interest through deeper specialization of each participant in merchandising, consolidating their investment resources for the development of the chain, and reducing logistics costs. A feature of the current period of grain market operation is the need to accelerate scientific and technical progress on the basis of innovation, which in turn requires faster development of the grain market infrastructure. One way to promote innovation is to develop techno-park formations in the regions; their advantage is that initiators of new technologies can carry out scientific and design development themselves and advance grain husbandry through commercialization and technology transfer. With a view to modernizing the regional infrastructure of the grain market under current conditions, the creation of an electronic trading platform and the introduction of a system of electronic commerce are highly relevant. By means of electronic technologies, economic relations in the market change substantially, giving them scale

  15. Qbox-Services: Towards a Service-Oriented Quality Platform

    Science.gov (United States)

    González, Laura; Peralta, Verónika; Bouzeghoub, Mokrane; Ruggia, Raúl

    The data quality market is characterized by a sparse offer of tools, each providing individual functionalities that have their own interest with respect to quality assessment. But interoperating among these tools remains a technical challenge because of the heterogeneity of their models and access patterns. On the other hand, quality analysts increasingly require integration facilities that allow them to consolidate and aggregate multiple quality measures acquired from different observations. The QBox platform, developed within the ANR Quadris project, aims at filling this gap by supplying a service-based integration infrastructure that allows interoperability among several quality tools and provides an OLAP-based quality model to support multidimensional analysis. This paper focuses on the architectural principles of this infrastructure and illustrates its use through specific examples of quality services.
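
    The kind of multidimensional consolidation an OLAP-based quality model supports can be sketched as grouping quality measures from several tools along a chosen dimension. Tool names, datasets, dimensions and scores below are invented, not QBox's actual schema.

    ```python
    # Sketch: consolidate quality measures from heterogeneous tools and
    # aggregate along one dimension, in the spirit of an OLAP roll-up.
    from collections import defaultdict
    from statistics import mean

    measures = [  # (source tool, dataset, quality dimension, score in [0, 1])
        ("profilerA", "customers", "completeness", 0.92),
        ("profilerA", "orders",    "completeness", 0.80),
        ("checkerB",  "customers", "accuracy",     0.97),
        ("checkerB",  "orders",    "accuracy",     0.88),
    ]

    def aggregate(measures, by):
        """Average scores grouped by one axis: 'tool', 'dataset' or 'dimension'."""
        key_index = {"tool": 0, "dataset": 1, "dimension": 2}[by]
        groups = defaultdict(list)
        for m in measures:
            groups[m[key_index]].append(m[3])
        return {k: round(mean(v), 3) for k, v in groups.items()}

    print(aggregate(measures, by="dataset"))
    ```

    Rolling up by other axes (per tool, per quality dimension) is the same operation with a different grouping key, which is what makes a multidimensional model convenient for analysts.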

  16. Health-e-Child a grid platform for european paediatrics

    CERN Document Server

    Skaburskas, K; Shade, J; Manset, D; Revillard, J; Rios, A; Anjum, A; Branson, A; Bloodsworth, P; Hauer, T; McClatchey, R; Rogulin, D

    2008-01-01

    The Health-e-Child (HeC) project [1], [2] is an EC Framework Programme 6 Integrated Project that aims to develop a grid-based integrated healthcare platform for paediatrics. Using this platform, biomedical informaticians will integrate heterogeneous data and perform epidemiological studies across Europe. The resulting Grid-enabled biomedical information platform will be supported by robust search, optimization and matching techniques for information collected in hospitals across Europe. In particular, paediatricians will be provided with decision support, knowledge discovery and disease modelling applications that will access data in hospitals in the UK, Italy and France, integrated via the Grid. For economy of scale, reusability, extensibility, and maintainability, HeC is being developed on top of an EGEE/gLite [3] based infrastructure that provides all the common data and computation management services required by the applications. This paper discusses some of the major challenges in bio-medical data integr...

  17. Comprehensive risk assessment for rail transportation of dangerous goods: a validated platform for decision support

    International Nuclear Information System (INIS)

    Gheorghe, Adrian V.; Birchmeier, Juerg; Vamanu, Dan; Papazoglou, Ioannis; Kroeger, Wolfgang

    2005-01-01

    Currently, the most advanced and well documented risk assessments for the transportation of dangerous goods by railway take into account: (i) statistics-based loss-of-containment frequencies, (ii) specification of potential consequences for given release situations, using event tree methodology as an organisational tool, and (iii) consequence calculation models to determine a risk figure known as the CCDF (Complementary Cumulative Distribution Function). Such procedures for risk assessment (including, for example, decision-making on preventive measures) may offer only limited insight into the causes and sequences leading to an accident and do not allow for any kind of predictive analysis. The present work introduces an enhanced solution, and a related software platform, which attempts to integrate loss-of-containment causes and consequences with the system's infrastructure and its environment. The solution features: (i) the use of a detailed Master Logical Diagram, including fault/event tree analysis, to determine a loss-of-containment frequency based on different initiating events, scenarios and specific basic data, (ii) the characterization of a resulting source term following a release situation, and (iii) the calculation of various potential impacts on the neighbouring site. Results are wrapped into a CCDF format for each selected traffic segment. The risk-related results are integrated on a software platform, structured as a decision support system using intelligent maps and a variety of GIS (Geographical Information System) data processing procedures. The introduction of the hot-spot approach allows us to focus on the most risk-relevant areas; information on various railway infrastructure elements (e.g. points, tunnels) is the basis of the new models employed. The software is applicable to any railway transportation system, comprising its technical infrastructure, rolling stock, human actions, and regulation and management procedures. It provides the
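
    A CCDF simply gives, for each consequence level, the probability of an outcome at least that severe. A minimal sketch, with invented per-scenario consequence values:

    ```python
    # Computing a Complementary Cumulative Distribution Function (CCDF)
    # from per-scenario consequence estimates.  The values are invented
    # for illustration (e.g. numbers of affected persons per scenario).
    def ccdf(samples, threshold):
        """P(X >= threshold): fraction of outcomes at least as severe."""
        return sum(x >= threshold for x in samples) / len(samples)

    consequences = [0, 0, 2, 5, 5, 10, 20, 50, 100, 400]
    for t in (1, 10, 100):
        print(f"P(N >= {t:3d}) = {ccdf(consequences, t):.2f}")
    ```

    In a full assessment each scenario would also carry a frequency from the fault/event tree analysis, so the curve would be weighted by scenario frequency rather than treating scenarios as equally likely.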

  18. Design and implementation of design content management platform

    International Nuclear Information System (INIS)

    Wang Yunfu; Zhang Jie

    2010-01-01

    Enterprise management is shifting from traditional database management to content management centred on unstructured data. Content management is becoming part of the enterprise information infrastructure; at the same time, companies need a high-performance, highly scalable content management platform able to support rapidly developing enterprise business and to meet the changing needs of that business. For a design institute, a document management system was developed based on Documentum software; it manages documents for all projects under construction and in the pre-project stage, project correspondence, records management, publishing management, library information management, authorization management, etc., and provides services to the design and production systems. Based on practical business applications, this paper summarizes the architecture design, and the strengths and weaknesses of the system design, of the content management platform at a design institute, analyses the supporting relationships between the content management platform and other design and production systems as well as the need for continuous improvement, and finally proposes the future development direction of the content management platform at the design institute. (authors)

  19. ForM@Ter: a French Solid Earth Research Infrastructure Project

    Science.gov (United States)

    Mandea, M.; Diament, M.; Jamet, O.; Deschamps-Ostanciaux, E.

    2017-12-01

    Recently, some noteworthy initiatives to develop efficient research e-infrastructures for the study of the Earth system have been set up. However, gaps between data availability and scientific use of the data still exist, either for technical reasons (big data issues) or because of the lack of dedicated support in terms of expert knowledge of the data, software availability, or data cost. The need for thematic cooperative platforms has been underlined over the last years, as has the need to create thematic centres designed to federate the scientific community of Earth observation. Four thematic data centres have been developed in France, covering the domains of ocean, atmosphere, land, and solid Earth sciences. For the solid Earth science community, a research infrastructure project named ForM@Ter was launched by the French Space Agency (CNES) and the National Centre for Scientific Research (CNRS), with the active participation of the National Institute for Geographical and Forestry Information (IGN). Currently, it relies on the contributions of scientists from more than 20 French Earth science laboratories. Preliminary analyses have shown that a focus on the determination of the shape and movements of the Earth's surface (ForM@Ter: Formes et Mouvements de la Terre) can federate a wide variety of scientific areas (earthquake cycle, tectonics, morphogenesis, volcanism, erosion dynamics, mantle rheology, geodesy) and offers many interfaces with other geoscience domains, such as glaciology or snow evolution. This choice motivates the design of an ambitious data distribution scheme, including a wide variety of sources - optical imagery, SAR, GNSS, gravity, satellite altimetry data, in situ observations (inclinometers, seismometers, etc.) - as well as a wide variety of processing techniques. In the evolving context of the current and forthcoming national and international e-infrastructures, the challenge of the project is to design a non

  20. Interoperability of remote handling control system software modules at Divertor Test Platform 2 using middleware

    International Nuclear Information System (INIS)

    Tuominen, Janne; Rasi, Teemu; Mattila, Jouni; Siuko, Mikko; Esque, Salvador; Hamilton, David

    2013-01-01

    Highlights: ► The prototype DTP2 remote handling control system is a heterogeneous collection of subsystems, each realizing a functional area of responsibility. ► Middleware provides well-known, reusable solutions to problems such as heterogeneity, interoperability, security and dependability. ► A middleware solution was selected and integrated with the DTP2 RH control system. The middleware was successfully used to integrate all relevant subsystems, and its functionality was demonstrated. -- Abstract: This paper focuses on the inter-subsystem communication channels in a prototype distributed remote handling control system at Divertor Test Platform 2 (DTP2). The subsystems are responsible for specific tasks and, over the years, their development has been carried out using various platforms and programming languages. The communication channels between subsystems have different priorities, e.g. a very high messaging rate with deterministic timing, or high reliability in terms of individual messages. Generally, a control system's communication infrastructure should provide interoperability, scalability, performance and maintainability. An attractive approach to accomplishing this is to use a standardized and proven middleware implementation. The selection of a middleware can have a major cost impact on future integration efforts. In this paper we present development done at DTP2 using the Object Management Group's (OMG) standard specification for the Data Distribution Service (DDS) to ensure communications interoperability. DDS has gained a stable foothold, especially in the military field. It lacks a centralized broker, thereby avoiding a single point of failure, and it includes an extensive set of Quality of Service (QoS) policies. The standard defines a platform- and programming-language-independent model and an interoperability wire protocol that enables DDS vendor interoperability, allowing software developers to avoid vendor lock-in situations.
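
    The data-centric publish/subscribe model of DDS, including a KEEP_LAST history QoS that lets late-joining readers receive retained samples, can be mimicked in-process as below. This is an illustrative sketch only, not the OMG DDS API; topic names and sample contents are invented.

    ```python
    # In-process sketch of DDS-style topic-based publish/subscribe with a
    # HISTORY keep-last QoS.  Mimics the data-centric model only.
    from collections import defaultdict, deque

    class Topic:
        def __init__(self, history_depth=1):
            self.samples = deque(maxlen=history_depth)  # KEEP_LAST depth
            self.readers = []

        def write(self, sample):                        # DataWriter side
            self.samples.append(sample)
            for cb in self.readers:
                cb(sample)

        def subscribe(self, callback):                  # DataReader side
            self.readers.append(callback)
            for s in self.samples:                      # replay retained history
                callback(s)

    bus = defaultdict(lambda: Topic(history_depth=2))
    bus["manipulator/pose"].write({"x": 1.0})
    bus["manipulator/pose"].write({"x": 2.0})

    received = []
    bus["manipulator/pose"].subscribe(received.append)  # late joiner gets history
    print(received)   # [{'x': 1.0}, {'x': 2.0}]
    ```

    Real DDS additionally handles discovery, reliability and deadline QoS over the wire protocol, which is precisely what makes a standard implementation attractive over hand-rolled channels.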

  1. Mission Exploitation Platform PROBA-V

    Science.gov (United States)

    Goor, Erwin

    2016-04-01

    VITO and partners developed an end-to-end solution to drastically improve the exploitation of the PROBA-V EO data archive (http://proba-v.vgt.vito.be/), of the past mission SPOT-VEGETATION, and of derived vegetation parameters by researchers, service providers and end-users. The analysis of time series of data (+1 PB) is addressed, as well as large-scale on-demand processing of near-real-time data. From November 2015, an operational Mission Exploitation Platform (MEP) PROBA-V, as an ESA pathfinder project, will be gradually deployed at the VITO data center with direct access to the complete data archive. Several applications will be released to the users, e.g.: - A time series viewer, showing the evolution of PROBA-V bands and derived vegetation parameters for any area of interest. - Full-resolution viewing services for the complete data archive. - On-demand processing chains, e.g. for the calculation of N-daily composites. - A Virtual Machine will be provided with access to the data archive and tools to work with this data, e.g. various toolboxes and support for R and Python. After an initial release in January 2016, a research platform will gradually be deployed, allowing users to design, debug and test applications on the platform. From the MEP PROBA-V, access to Sentinel-2 and Landsat data will be addressed as well, e.g. to support the Cal/Val activities of the users. Users can make use of powerful web-based tools and can self-manage virtual machines to perform their work on the infrastructure at VITO, with access to the complete data archive. To realise this, private cloud technology (OpenStack) is used and a distributed processing environment is built based on Hadoop. The Hadoop ecosystem offers many technologies (Spark, Yarn, Accumulo, etc.), which we integrate with several open-source components. The impact of this MEP on the user community will be high and will completely change the way of working with the data and hence open the large time series to a larger
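
    One of the on-demand chains mentioned, the calculation of N-daily composites, can be sketched as a maximum-value composite over fixed windows of a daily series. The NDVI values below are invented stand-ins for per-pixel observations; on the platform this logic would run distributed over the archive.

    ```python
    # Sketch of an N-daily maximum-value composite, a common compositing
    # rule for vegetation time series.  The daily values are invented.
    def n_daily_composite(daily_values, n):
        """Group a daily series into windows of n days and keep the
        per-window maximum (reduces cloud contamination in practice)."""
        return [max(daily_values[i:i + n]) for i in range(0, len(daily_values), n)]

    ndvi = [0.31, 0.35, 0.30, 0.42, 0.40, 0.44, 0.38, 0.37, 0.41, 0.39]
    print(n_daily_composite(ndvi, 5))   # [0.42, 0.44]
    ```

    Taking the maximum per window is a simple proxy for the radiometric compositing criteria used operationally; the point is that the computation parallelizes trivially per pixel, which suits a Hadoop/Spark back end.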

  2. NEON's Mobile Deployment Platform: A Resource for Community Research

    Science.gov (United States)

    Sanclements, M.

    2015-12-01

    Here we provide an update on the construction and validation of the NEON Mobile Deployment Platforms (MDPs), as well as a description of the infrastructure and sensors that will be available to researchers in the future. The MDPs will provide the means to observe stochastic or spatially important events, gradients, or quantities that cannot be reliably observed using fixed-location sampling (e.g. fires and floods). Because of the transient temporal and spatial nature of such events, the MDPs are designed for rapid deployment for periods of up to ~1 year. Broadly, the MDPs will comprise infrastructure and instrumentation capable of functioning individually or in conjunction with one another to support observations of ecological change, as well as education, training and outreach.

  3. Common Technologies for Environmental Research Infrastructures in ENVRIplus

    Science.gov (United States)

    Paris, Jean-Daniel

    2016-04-01

    Environmental and geoscientific research infrastructures (RIs) are dedicated to distinct aspects of ocean, atmosphere, ecosystem, or solid Earth research, yet there is significant commonality in the way they conceive, develop, operate and upgrade their observation systems and platforms. Many environmental RIs are distributed networks of observatories (drifting buoys, geophysical observatories, ocean-bottom stations, atmospheric measurement sites) with needs for remote operations. Most RIs have to deal with calibration and standardization issues. RIs use a variety of measurement technologies, but this variety rests on a small, common set of physical principles. All RIs have set their own research and development priorities and developed their own solutions to their own problems; however, many problems are common across RIs. Finally, RIs may overlap in scientific perimeter. In ENVRIplus we aim, for the first time, to identify common opportunities for innovation, to support common research and development across RIs on promising issues, and more generally to create a forum to spread state-of-the-art techniques among participants. ENVRIplus activities include: 1) measurement technologies: where are the common types of measurement for which we can share expertise or common development? 2) metrology: how do we tackle together the diversified challenge of quality assurance and standardization? 3) remote operations: can we address collectively the need for autonomy, robustness and distributed data handling? and 4) joint operations for research: are we able to demonstrate that, together, RIs can provide relevant information to support excellent research? In this process we need to nurture an ecosystem of key players. Can we involve all the key technologists of the European RIs for a greater mutual benefit? Can we pave the way to a growing common market for innovative European SMEs, with a common programmatic approach conducive to targeted R&D?
Can we

  4. Implementing an SIG based platform of application and service for city spatial information in Shanghai

    Science.gov (United States)

    Yu, Bailang; Wu, Jianping

    2006-10-01

    Spatial Information Grid (SIG) is an infrastructure that provides spatial information services according to users' needs by collecting, sharing, organizing and processing massive, distributed spatial information resources. This paper presents the architecture, technologies and implementation of the Shanghai City Spatial Information Application and Service System, a SIG-based platform that serves the administration, planning, construction and development of the city. The System covers ten categories of spatial information resources: city planning, land use, real estate, river system, transportation, municipal facility construction, environmental protection, sanitation, urban afforestation and basic geographic information data. In addition, spatial information processing services are offered as GIS Web Services. The resources and services are distributed across different web-based nodes. A single database stores the metadata of all the spatial information, and a portal site is published as the main user interface of the System. The portal site has three main functions. First, users can search the metadata and then acquire the distributed data from the search results. Second, spatial processing web applications developed with GIS Web Services, such as file-format conversion, coordinate transformation, cartographic generalization and spatial analysis, are offered. Third, the GIS Web Services currently available in the System can be searched and new ones can be registered. The System has been working efficiently on the Shanghai Government Network since 2005.
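    The metadata-search-then-acquire pattern described above can be illustrated with a minimal registry sketch. All class, field and URL names below are hypothetical and stand in for the System's actual single metadata database and distributed node endpoints.

```python
class MetadataRegistry:
    """Minimal sketch of a central metadata database over distributed nodes:
    each record points at the web node that actually hosts the data."""

    def __init__(self):
        self.records = []

    def register(self, title, category, node_url):
        # The data itself stays on its node; only metadata is centralized.
        self.records.append({"title": title, "category": category, "node": node_url})

    def search(self, category):
        # Users search metadata first, then fetch data from the listed node.
        return [r for r in self.records if r["category"] == category]
```

    A search for a category returns the matching records together with the node URLs from which the distributed data can then be acquired.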

  5. Cement Distribution and Diagenetic Pathway of the Miocene Sediments on Kardiva Platform, Maldives.

    Science.gov (United States)

    Laya, J. C.; Prince, K.; Betzler, C.; Eberli, G. P.; Blättler, C. L.; Swart, P. K.; Reolid, J.; Alvarez Zarikian, C. A.; Reijmer, J.

    2017-12-01

    The Maldives archipelago is an ideal example for understanding the dynamics of isolated carbonate platforms. While previous sedimentological studies have focused on oceanographic and climatic controls on deposition, there have been few studies of the diagenetic evolution of the Maldives archipelago. This project seeks to establish a relationship between the facies, cement distribution, and diagenetic evolution of the Kardiva Platform and the associated diagenetic fluids. Samples from cores of IODP Expedition 359 at Sites U1645, U1469, and U1470 were analyzed for stable-isotope geochemistry and by detailed petrography, including SEM, confocal and CL microscopy, to investigate variations in facies, cements, porosity and diagenetic products. The facies analyzed consist mainly of planktonic and benthic foraminifers, red coralline algae, echinoderms, corals and skeletal fragments. The main facies include foraminiferal grain-/packstone, red-algae-rich grain-/packstone, algal floatstone and coral floatstone. These facies show a cyclic, generally shallowing-upward trend and are interpreted as shallow platform deposits in areas proximal to the margin, associated with the oligophotic zone. Cement volume varies between 5% and 48%, and the cements are classified as isopachous, bladed to fibrous (dog tooth), drusy and equant. Equant and drusy cements show recognizable growth bands under CL and confocal microscopy. Evidence of intense dissolution is shown by extensive moldic porosity within the phreatic and limited vadose zones. In addition, dolomite appears as a replacement phase associated with red-algae-rich horizons and as cement on pore walls and in voids. These deposits experienced a variety of diagenetic processes driven by the evolution of diagenetic fluid chemistry and by the nature of the skeletal components. These processes can be tied to external controls such as climate (monsoonal effects), sea level and currents.

  6. Design of e-Science platform for biomedical imaging research cross multiple academic institutions and hospitals

    Science.gov (United States)

    Zhang, Jianguo; Zhang, Kai; Yang, Yuanyuan; Ling, Tonghui; Wang, Tusheng; Wang, Mingqing; Hu, Haibo; Xu, Xuemin

    2012-02-01

    More and more image informatics researchers and engineers are considering reconstructing their imaging and informatics infrastructure, or building new frameworks, to enable medical researchers, clinical physicians and biomedical engineers from multiple disciplines to work together in a secure, efficient, and transparent cooperative environment. In this presentation, we outline our preliminary design work on building an e-Science platform for biomedical imaging and informatics research and application in Shanghai. We present our design considerations and strategy for the platform, along with preliminary results, and discuss some challenges and solutions encountered in building it.

  7. The Geohazards Exploitation Platform: an advanced cloud-based environment for the Earth Science community

    Science.gov (United States)

    Manunta, Michele; Casu, Francesco; Zinno, Ivana; De Luca, Claudio; Pacini, Fabrizio; Caumont, Hervé; Brito, Fabrice; Blanco, Pablo; Iglesias, Ruben; López, Álex; Briole, Pierre; Musacchio, Massimo; Buongiorno, Fabrizia; Stumpf, Andre; Malet, Jean-Philippe; Brcic, Ramon; Rodriguez Gonzalez, Fernando; Elias, Panagiotis

    2017-04-01

    different EO applications have been selected: time-series stereo-photogrammetric processing of optical images for landslide and tectonic-movement monitoring with CNRS/EOST (FR), an optical processing method for volcanic hazard monitoring with INGV (IT), systematic generation of deformation time series from Sentinel-1 data with CNR-IREA (IT), systematic processing of Sentinel-1 interferometric imagery with DLR (DE), terrain-motion velocity map generation based on PSI processing by TRE-ALTAMIRA (ES), and a campaign to test and employ GEP applications with the Corinth Rift EPOS Near Fault Observatory. Finally, GEP is contributing significantly to the development of the satellite component of the European Plate Observing System (EPOS), a long-term plan to facilitate the integrated use of data, data products, and facilities from distributed research infrastructures for solid Earth science in Europe. In particular, GEP has been identified as the gateway for the EPOS Thematic Core Service "Satellite Data", namely the platform through which the satellite EPOS services will be delivered. In the current work, the latest activities and achievements of GEP, including its impact on distributed research infrastructures such as EPOS, will be presented and discussed.

  8. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Belov, S; Kaplin, V; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2011-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies (ICT), and the Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, several computing facilities are currently hosted by NSC institutes, each optimized for a particular set of tasks; the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. Recently, a dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share computing resources among the research communities of the participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on the XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of the KEDR detector experiment, which is being carried out at BINP, and is foreseen to be applied to other HEP experiments in the near future.

  9. Experimental Platform for Usability Testing of Secure Medical Sensor Network Protocols

    DEFF Research Database (Denmark)

    Andersen, Jacob; Lo, Benny P.; Yang, Guang-Zhong

    2008-01-01

    Implementing security mechanisms such as access control for clinical use is a challenging research issue in BSN due to its required heterogeneous operating responses, ranging from chronic disease management to emergency care. To ensure the clinical uptake of BSN technology, appropriately designed security mechanisms are essential. Several experimental sensor network platforms have emerged in recent years targeted for clinical use. However, few of them consider the importance of security issues such as privacy and access control, and how these can impact the usability of the platform, while others develop BSN security without considering how a prototype implementation would be received by clinicians in real-life situations. The purpose of this paper is to present our initial effort in building a flexible experimental platform for providing a basic infrastructure with symmetric AES...

  10. Virtual Labs (Science Gateways) as platforms for Free and Open Source Science

    Science.gov (United States)

    Lescinsky, David; Car, Nicholas; Fraser, Ryan; Friedrich, Carsten; Kemp, Carina; Squire, Geoffrey

    2016-04-01

    The Free and Open Source Software (FOSS) movement promotes community engagement in software development, as well as provides access to a range of sophisticated technologies that would be prohibitively expensive if obtained commercially. However, as geoinformatics and eResearch tools and services become more dispersed, it becomes more complicated to identify and interface between the many required components. Virtual Laboratories (VLs, also known as Science Gateways) simplify the management and coordination of these components by providing a platform linking many, if not all, of the steps in particular scientific processes. These enable scientists to focus on their science, rather than the underlying supporting technologies. We describe a modular, open source, VL infrastructure that can be reconfigured to create VLs for a wide range of disciplines. Development of this infrastructure has been led by CSIRO in collaboration with Geoscience Australia and the National Computational Infrastructure (NCI) with support from the National eResearch Collaboration Tools and Resources (NeCTAR) and the Australian National Data Service (ANDS). Initially, the infrastructure was developed to support the Virtual Geophysical Laboratory (VGL), and has subsequently been repurposed to create the Virtual Hazards Impact and Risk Laboratory (VHIRL) and the reconfigured Australian National Virtual Geophysics Laboratory (ANVGL). During each step of development, new capabilities and services have been added and/or enhanced. We plan on continuing to follow this model using a shared, community code base. The VL platform facilitates transparent and reproducible science by providing access to both the data and methodologies used during scientific investigations. This is further enhanced by the ability to set up and run investigations using computational resources accessed through the VL. 
Data is accessed using registries pointing to catalogues within public data repositories (notably including the

  11. WindS@UP: The e-Science Platform for WindScanner.eu

    Science.gov (United States)

    Gomes, Filipe; Correia Lopes, João; Laginha Palma, José; Frölén Ribeiro, Luís

    2014-06-01

    The WindScanner e-Science platform architecture and the underlying premises are discussed. It is a collaborative platform that will provide a repository for experimental data and metadata. Additional data processing capabilities will be incorporated, thus enabling in-situ data processing. Every resource in the platform is identified by a Uniform Resource Identifier (URI), enabling unequivocal identification of field-campaign data sets and of the metadata associated with each data set or experiment. This feature will allow the validation of field experiment results and conclusions, as all managed resources will be linked. A centralised node (Hub) will aggregate the contributions of 6 to 8 local nodes from EC countries and will manage the access of three types of users: data curator, data provider and researcher. This architecture was designed to ensure consistent and efficient research data access and preservation, and the exploitation of new research opportunities provided by this "Collaborative Data Infrastructure". The prototype platform, WindS@UP, enables the usage of the platform by humans via a Web interface or by machines using an internal API (Application Programming Interface). Future work will improve the vocabulary ("application profile") used to describe the resources managed by the platform.
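    The URI-per-resource premise can be sketched as follows; the hub address and path layout are illustrative assumptions, not the platform's documented scheme.

```python
from urllib.parse import quote

def dataset_uri(hub, campaign, dataset, version=1):
    """Mint a stable URI for a field-campaign data set so that results
    and metadata can be linked back to the exact data they came from.

    Path segments are percent-encoded so campaign names with spaces
    still yield a valid URI.
    """
    return f"{hub}/campaigns/{quote(campaign)}/datasets/{quote(dataset)}/v{version}"
```

    Bumping the version yields a new URI, so a conclusion drawn from v1 of a data set remains unambiguously linked to v1 even after reprocessing.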

  12. Information Management Platform for Data Analytics and Aggregation (IMPALA) System Design Document

    Science.gov (United States)

    Carnell, Andrew; Akinyelu, Akinyele

    2016-01-01

    The System Design Document (SDD) tracks the design activities performed to guide the integration, installation, verification, and acceptance testing of the IMPALA Platform. The inputs to the design document are derived from the activities recorded in Tasks 1 through 6 of the Statement of Work (SOW), with the proposed technical solution being the completion of Phase 1-A. With the documentation of the architecture of the IMPALA Platform and the installation steps taken, the SDD will be a living document, capturing the details of capability enhancements and system improvements to the IMPALA Platform that support users in developing accurate and precise analytical models. The IMPALA Platform infrastructure team, data architecture team, system integration team, security management team, project manager, and NASA data scientists and users are the intended audience of this document. The IMPALA Platform is an assembly of commercial-off-the-shelf (COTS) products installed on an Apache Hadoop platform. User-interface details for the COTS products are sourced from the COTS tool vendors' documentation. The SDD is a focused explanation of the inputs, design steps, and projected outcomes of every design activity for the IMPALA Platform through installation and validation.

  13. Drone Mission Definition and Implementation for Automated Infrastructure Inspection Using Airborne Sensors.

    Science.gov (United States)

    Besada, Juan A; Bergesio, Luca; Campaña, Iván; Vaquero-Melchor, Diego; López-Araquistain, Jaime; Bernardos, Ana M; Casar, José R

    2018-04-11

    This paper describes a Mission Definition System and the automated flight process it enables to implement measurement plans for discrete infrastructure inspections using aerial platforms, and specifically multi-rotor drones. The mission definition aims at improving planning efficiency with respect to state-of-the-art waypoint-based techniques, using high-level mission definition primitives and linking them with realistic flight models to simulate the inspection in advance. It also provides flight scripts and measurement plans which can be executed by commercial drones. Its user interfaces facilitate mission definition, pre-flight 3D synthetic mission visualisation and flight evaluation. Results are delivered for a set of representative infrastructure inspection flights, showing the accuracy of the flight prediction tools in actual operations using automated flight control.
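    As a sketch of how a high-level mission primitive can be expanded into the waypoint lists commercial drones execute, the snippet below turns a hypothetical "orbit the structure" primitive into georeferenced waypoints. The primitive name, the flat-Earth degree conversion and all parameters are illustrative assumptions, not the paper's actual Mission Definition System.

```python
import math

def orbit_waypoints(center, radius, altitude, n_points=8):
    """Expand a high-level 'orbit the structure' inspection primitive into
    a waypoint list (lat, lon, altitude_m, heading_deg) that a
    waypoint-based flight controller could execute."""
    lat0, lon0 = center
    # Rough metres-per-degree conversion; adequate for a local sketch.
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(lat0))
    wps = []
    for k in range(n_points):
        a = 2 * math.pi * k / n_points
        lat = lat0 + (radius * math.sin(a)) / m_per_deg_lat
        lon = lon0 + (radius * math.cos(a)) / m_per_deg_lon
        heading = (math.degrees(a) + 180.0) % 360.0  # camera faces the centre
        wps.append((round(lat, 7), round(lon, 7), altitude, round(heading, 1)))
    return wps
```

    One primitive (centre, radius, altitude) thus replaces eight hand-placed waypoints, which is the planning-efficiency gain over raw waypoint editing that the abstract describes.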

  14. A Model Collaborative Platform for Geoscience Education

    Science.gov (United States)

    Fox, S.; Manduca, C. A.; Iverson, E. A.

    2012-12-01

    Over the last decade SERC at Carleton College has developed a collaborative platform for geoscience education that has served dozens of projects, thousands of community authors and millions of visitors. The platform combines a custom technical infrastructure, the SERC Content Management System (CMS), with a set of strategies for building web resources that can be disseminated through a project site, reused by other projects (with attribution) or accessed via an integrated geoscience education resource drawing from all projects using the platform. The core tools of the CMS support geoscience education projects in building project-specific websites. Each project uses the CMS to engage its specific community in collecting, authoring and disseminating the materials of interest to it. At the same time, the use of a shared central infrastructure allows cross-fertilization among these project websites. Projects are encouraged to use common templates and common controlled vocabularies for organizing and displaying their resources. This standardization is then leveraged through cross-project search indexing, which allows projects to easily incorporate materials from other projects within their own collection in ways that are relevant and automated. A number of tools are also in place to help visitors move among project websites based on their personal interests. Related links help visitors discover content topically related to their current location that is in a 'separate' project. A 'best bets' feature in search helps guide visitors to pages that are good starting places to explore resources on a given topic across the entire range of hosted projects. In many cases these are 'site guide' pages created specifically to promote a cross-project view of the available resources. In addition to supporting cross-project exploration of specific themes, the CMS also allows visitors to view the combined suite of resources authored by any particular community member.
Automatically
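    The cross-project indexing strategy described above can be sketched as follows; the project names, resource titles and vocabulary terms are hypothetical, and the real CMS index is far richer than a term-to-page map.

```python
from collections import defaultdict

def build_index(projects):
    """Index resources from several project sites by shared controlled-
    vocabulary terms, so one query spans every project on the platform.

    `projects` maps project name -> list of (title, [vocabulary terms]).
    """
    index = defaultdict(list)
    for project, resources in projects.items():
        for title, terms in resources:
            for term in terms:
                index[term].append((project, title))
    return index
```

    Because every project tags resources with the same vocabulary, a single lookup in the shared index surfaces relevant pages from all hosted project sites.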

  15. Applications integration in a hybrid cloud computing environment: modelling and platform

    Science.gov (United States)

    Li, Qing; Wang, Ze-yuan; Li, Wei-hua; Li, Jun; Wang, Cheng; Du, Rui-yang

    2013-08-01

    With the development of application service providers and cloud computing, more and more small- and medium-sized business enterprises use software services, and even infrastructure services, provided by professional information service companies to replace all or part of their information systems (ISs). These information service companies provide applications such as data storage, computing processes, document sharing and even management information system services as public resources to support the business process management of their customers. However, no cloud computing service vendor can satisfy the full functional IS requirements of an enterprise. As a result, enterprises often have to simultaneously use systems distributed in different clouds together with their intra-enterprise ISs. Thus, this article presents a framework to integrate applications deployed in public clouds and in intra-enterprise ISs. A run-time platform is developed, and a cross-computing-environment process modelling technique is also developed, to improve the feasibility of ISs under hybrid cloud computing environments.
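    A minimal sketch of the integration idea follows, assuming each process step is mapped to the environment (public cloud service or intra-enterprise IS) that hosts it. The step and environment names are hypothetical, and the article's run-time platform is far more elaborate than this routing table.

```python
def route_steps(process, deployment):
    """Assign each step of a cross-environment business process to the
    environment (public cloud or intra-enterprise IS) that hosts it.

    `deployment` maps step name -> environment label; a step with no
    known environment is a modelling error and is reported as such.
    """
    plan = []
    for step in process:
        if step not in deployment:
            raise ValueError(f"no environment can execute step: {step}")
        plan.append((step, deployment[step]))
    return plan
```

    A hybrid process thus becomes an explicit execution plan, with some steps dispatched to cloud services and others kept inside the enterprise.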

  16. Raising Virtual Laboratories in Australia onto global platforms

    Science.gov (United States)

    Wyborn, L. A.; Barker, M.; Fraser, R.; Evans, B. J. K.; Moloney, G.; Proctor, R.; Moise, A. F.; Hamish, H.

    2016-12-01

    Across the globe, Virtual Laboratories (VLs), Science Gateways (SGs), and Virtual Research Environments (VREs) are being developed that enable users who are not co-located to actively work together at various scales to share data, models, tools, software, workflows, best practices, etc. Outcomes range from enabling 'long tail' researchers to more easily access specific data collections, to facilitating complex workflows on powerful supercomputers. In Australia, government funding has facilitated the development of a range of VLs through the National eResearch Collaboration Tools and Resources (NeCTAR) program. The VLs provide highly collaborative, research-domain-oriented, integrated software infrastructures that meet user community needs. Twelve VLs have been funded since 2012, including the Virtual Geophysics Laboratory (VGL); the Virtual Hazards, Impact and Risk Laboratory (VHIRL); the Climate and Weather Science Laboratory (CWSLab); the Marine Virtual Laboratory (MarVL); and the Biodiversity and Climate Change Virtual Laboratory (BCCVL). These VLs share similar technical challenges, with common issues emerging around the integration of tools, applications and access to data collections via both cloud-based environments and other distributed resources. While each VL began with a focus on a specific research domain, communities of practice have now formed across the VLs around common issues and facilitate the identification of best-practice case studies and new standards. As a result, tools are now being shared, with the VLs accessing data via data services using international standards such as ISO, OGC and W3C. The sharing of these approaches is starting to facilitate reusability of infrastructure and is a step towards supporting interdisciplinary research. Whilst the focus of the VLs is Australia-centric, by using standards these environments can be extended to analysis of other international datasets. Many VL datasets are subsets of global datasets and so extension to global is a
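    As an illustration of the standards-based data access mentioned above, the sketch below composes an OGC WFS 2.0 GetFeature request. The endpoint and feature-type names are hypothetical, but the parameter set follows the WFS key-value-pair convention; because every VL talks the same standard, the same call shape works against any compliant server.

```python
from urllib.parse import urlencode

def wfs_getfeature_url(endpoint, typename, bbox):
    """Compose an OGC WFS 2.0 GetFeature request URL (KVP encoding)
    for the named feature type within a bounding box."""
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": typename,
        "bbox": ",".join(str(v) for v in bbox),
    }
    return f"{endpoint}?{urlencode(params)}"
```

    Swapping the endpoint from an Australian service to an international one leaves the client code untouched, which is the interoperability point the abstract makes.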

  17. Fully distributed monitoring architecture supporting multiple trackees and trackers in indoor mobile asset management application.

    Science.gov (United States)

    Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju

    2014-03-21

    A tracking service such as asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished using the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure that must support high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes a real-time architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform. To verify the suggested platform, scalability performance with increasing numbers of concurrent lookups was evaluated in a real test bed. Tracking latency and the traffic load ratio in the proposed tracking architecture were also evaluated.
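    A minimal sketch of the fully distributed idea (no central geographic map; each zone node knows only its own trackees) might look as follows. The class, zone and asset names are hypothetical, and the simple query loop stands in for the platform's actual self-organizing lookup protocol.

```python
class ZoneNode:
    """One node of a fully distributed tracking service: it holds only
    the trackees currently inside its own zone, so no central server
    has to maintain a map of the whole building."""

    def __init__(self, zone):
        self.zone = zone
        self.trackees = {}

    def update(self, trackee_id, position):
        # Called when a trackee is detected inside this node's zone.
        self.trackees[trackee_id] = position

def lookup(nodes, trackee_id):
    """Resolve a trackee by asking the zone nodes instead of a central
    server; returns (zone, position) or None if the asset is unknown."""
    for node in nodes:
        if trackee_id in node.trackees:
            return node.zone, node.trackees[trackee_id]
    return None
```

    Because state lives at the zone nodes, adding zones scales the system out rather than concentrating lookup traffic on one server.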

  18. Chromium Renderserver: Scalable and Open Source Remote Rendering Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Paul, Brian; Ahern, Sean; Bethel, E. Wes; Brugger, Eric; Cook,Rich; Daniel, Jamison; Lewis, Ken; Owen, Jens; Southard, Dale

    2007-12-01

    Chromium Renderserver (CRRS) is software infrastructure that provides the ability for one or more users to run and view image output from unmodified, interactive OpenGL and X11 applications on a remote, parallel computational platform equipped with graphics hardware accelerators, via industry-standard Layer 7 network protocols and client viewers. The new contributions of this work include a solution to the problem of synchronizing X11 and OpenGL command streams, remote delivery of parallel hardware-accelerated rendering, and a performance analysis of several different optimizations that are generally applicable to a variety of rendering architectures. CRRS is fully operational, Open Source software.

  19. Infrastructure Joint Venture Projects in Malaysia: A Preliminary Study

    Science.gov (United States)

    Romeli, Norsyakilah; Muhamad Halil, Faridah; Ismail, Faridah; Sufian Hasim, Muhammad

    2018-03-01

    As in many developed countries, the function of infrastructure is to connect each region of Malaysia holistically; infrastructure comprises investment network projects such as transportation, water and sewerage, power, communication and irrigation systems. Hence, billions from government income are reserved for infrastructure development. Towards successful infrastructure development, a joint venture approach has been promoted since 2016 in one of the government thrusts of the Construction Industry Transformation Plan, which encourages internationalisation among contractors. However, there is a lack of information on the actual practice of infrastructure joint venture projects in Malaysia. Therefore, this study attempts to explore the real application of joint ventures in Malaysian infrastructure projects. Using a questionnaire survey, a set of survey questions was distributed to the targeted respondents. The survey contained three sections: respondent details, organisation background, and project capital in infrastructure joint venture projects. The results were recorded and analysed using SPSS software. The contractors stated that they have implemented the joint venture practice, mostly with the client, and that the usual construction period of the infrastructure projects is more than 5 years. In addition, the study indicates that there are problems in joint venture projects from the perspective of project capital, and that railway infrastructure should be highlighted in future studies due to its high significance in terms of cost and technical issues.

  20. Infrastructure Joint Venture Projects in Malaysia: A Preliminary Study

    Directory of Open Access Journals (Sweden)

    Romeli Norsyakilah

    2018-01-01

    Full Text Available As in many developed countries, the function of infrastructure is to connect each region of Malaysia holistically; infrastructure comprises investment network projects such as transportation, water and sewerage, power, communication and irrigation systems. Hence, billions from government income are reserved for infrastructure development. Towards successful infrastructure development, a joint venture approach has been promoted since 2016 in one of the government thrusts of the Construction Industry Transformation Plan, which encourages internationalisation among contractors. However, there is a lack of information on the actual practice of infrastructure joint venture projects in Malaysia. Therefore, this study attempts to explore the real application of joint ventures in Malaysian infrastructure projects. Using a questionnaire survey, a set of survey questions was distributed to the targeted respondents. The survey contained three sections: respondent details, organisation background, and project capital in infrastructure joint venture projects. The results were recorded and analysed using SPSS software. The contractors stated that they have implemented the joint venture practice, mostly with the client, and that the usual construction period of the infrastructure projects is more than 5 years. In addition, the study indicates that there are problems in joint venture projects from the perspective of project capital, and that railway infrastructure should be highlighted in future studies due to its high significance in terms of cost and technical issues.

  1. THE STUDY OF THE FORECASTING PROCESS INFRASTRUCTURAL SUPPORT BUSINESS

    Directory of Open Access Journals (Sweden)

    E. V. Sibirskaia

    2014-01-01

    Full Text Available Summary. When forecasting the necessary infrastructural support for entrepreneurship, one predicts the rational distribution of potential and expected results based on the development capacity of each component of infrastructural support, the efficient use of resources, the expertise and development of regional economies, the rationalization of administrative decisions, etc. According to the authors, the process of forecasting infrastructural support for business includes the following steps: analysis of the existing infrastructural support for business at the start of the forecast period, including the structure of resources, the identification of disparities and their causes, and the identification of positive trends; research on the components of infrastructural support for entrepreneurship, assessing the complex system of social relations, institutions, structures and objects, and drawing findings and conclusions; identification of areas of strategic change, of possibilities for eliminating weaknesses and imbalances, and of prospects for the development of entrepreneurship; identification of the set of factors and conditions affecting each component of infrastructural support, calculating the degree of influence of each and the total effect of all factors; and adjustment of the infrastructure forecast indicators. A review of the literature on strategic planning and forecasting shows that methods of strategic planning are considered separately from forecasting methods; a combination of strategic planning and forecasting methods applied to infrastructural support for business activity is not given in the literature. Nevertheless, the authors consider that this category should be defined in order to characterize the intrinsic and substantive nature of strategic planning and forecasting of infrastructural support for business activity.

  2. A Scalable Infrastructure for Lidar Topography Data Distribution, Processing, and Discovery

    Science.gov (United States)

    Crosby, C. J.; Nandigam, V.; Krishnan, S.; Phan, M.; Cowart, C. A.; Arrowsmith, R.; Baru, C.

    2010-12-01

    High-resolution topography data acquired with lidar (light detection and ranging) technology have emerged as a fundamental tool in the Earth sciences, and are also being widely utilized for ecological, planning, engineering, and environmental applications. Collected from airborne, terrestrial, and space-based platforms, these data are revolutionary because they permit analysis of geologic and biologic processes at resolutions essential for their appropriate representation. Public domain lidar data collected by federal, state, and local agencies are a valuable resource to the scientific community; however, the data pose significant distribution challenges because of the volume and complexity of data that must be stored, managed, and processed. Lidar data acquisition may generate terabytes of data in the form of point clouds, digital elevation models (DEMs), and derivative products. This massive volume of data is often challenging to host for resource-limited agencies. Furthermore, these data can be technically challenging for users who lack appropriate software, computing resources, and expertise. The National Science Foundation-funded OpenTopography Facility (www.opentopography.org) has developed a cyberinfrastructure-based solution to enable online access to Earth science-oriented high-resolution lidar topography data, online processing tools, and derivative products. OpenTopography provides access to terabytes of point cloud data, standard DEMs, and Google Earth image data, all co-located with computational resources for on-demand data processing. The OpenTopography portal is built upon a cyberinfrastructure platform that utilizes a Service-Oriented Architecture (SOA) to provide a modular system that is highly scalable and flexible enough to support the growing needs of the Earth science lidar community.
OpenTopography strives to host and provide access to datasets as soon as they become available, and also to expose greater application level functionalities to

  3. High-Altitude Platforms - Present Situation and Technology Trends

    Directory of Open Access Journals (Sweden)

    Flavio Araripe D'Oliveira

    2016-07-01

    Full Text Available High-altitude platforms (HAPs) are aircraft, usually unmanned airships or airplanes, positioned above 20 km, in the stratosphere, in order to compose a telecommunications network or perform remote sensing. In the 1990s and 2000s, several projects were launched, but very few continued. In 2014, two major Internet companies (Google and Facebook) announced investments in new HAP projects to provide Internet access in regions without communication infrastructure (terrestrial or satellite), bringing attention back to the development of HAPs. This article aims to survey the history of HAPs, the current state of the art (April 2016), technology trends and challenges. The main focus of this review is on technologies directly related to the aerial platform, within the aeronautical engineering field of knowledge, without detailing aspects of the telecommunications area.

  4. Integrated Approach to a Resilient City: Associating Social, Environmental and Infrastructure Resilience in its Whole

    Directory of Open Access Journals (Sweden)

    Birutė PITRĖNAITĖ-ŽILĖNIENĖ

    2014-12-01

    Full Text Available The rising complexity, frequency and severity of natural and man-made disasters enhance the importance of reducing the vulnerability, or, conversely, increasing the resilience, of different kinds of systems, including those of a social, engineering (infrastructure) and environmental (ecological) nature. The goal of this research is to explore urban resilience as an integral system of social, environmental, and engineering resilience. This report analyses the concepts of each kind of resilience and identifies key factors influencing social, ecological, and infrastructure resilience, discussing how these factors relate within urban systems. The resilience of urban and regional systems is achieved through the interaction of their different elements (social, psychological, physical, structural, environmental, etc.); therefore, a resilient city can be seen as the synergy of a resilient society, resilient infrastructure and a resilient environment in a given area. Based on a literature analysis, the current research provides some insights into a conceptual framework for assessing complex urban systems in terms of resilience. To be able to evaluate resilience, define effective measures for prevention and risk mitigation, and thereby strengthen resilience, we propose to develop an e-platform joining risk-parameter monitoring systems, which feed data into a Resiliency Index calculation domain. Both these elements result in a Multirisk Platform, which could serve awareness and shared decision making for resilient people in a resilient city.
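
The Resiliency Index the abstract proposes is an aggregation of monitored risk parameters. A minimal sketch of how such a composite index might combine normalized dimension scores; the dimension names, weights and readings are illustrative assumptions, not values from the paper:

```python
def resiliency_index(scores, weights):
    """Weighted aggregate of normalized (0-1) resilience scores.

    `scores` and `weights` map dimension name -> value; weights are
    renormalized so the index itself stays in [0, 1].
    """
    total_w = sum(weights.values())
    return sum(scores[d] * weights[d] for d in scores) / total_w

# Hypothetical monitoring-system readings for one urban district.
scores = {"social": 0.72, "infrastructure": 0.55, "environmental": 0.64}
weights = {"social": 1.0, "infrastructure": 1.5, "environmental": 1.0}

print(round(resiliency_index(scores, weights), 3))
```

A real Multirisk Platform would refresh `scores` continuously from the monitoring systems; the aggregation step itself stays this simple.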

  5. Space Transportation Infrastructure Supported By Propellant Depots

    Science.gov (United States)

    Smitherman, David; Woodcock, Gordon

    2012-01-01

    A space transportation infrastructure is described that utilizes propellant depot servicing platforms to support all foreseeable missions in the Earth-Moon vicinity and deep space out to Mars. The infrastructure utilizes current expendable launch vehicle (ELV) systems, such as the Delta IV Heavy, Atlas V, and Falcon 9, for all crew, cargo, and propellant launches to orbit. Propellant launches are made to a Low-Earth-Orbit (LEO) Depot and an Earth-Moon Lagrange Point 1 (L1) Depot to support new reusable in-space transportation vehicles. The LEO Depot supports missions to Geosynchronous Earth Orbit (GEO) for satellite servicing and to L1 for L1 Depot missions. The L1 Depot supports lunar, Earth-Sun L2 (ESL2), asteroid and Mars missions. New vehicle design concepts are presented that can be launched on current 5-meter-diameter ELV systems. These new reusable vehicle concepts include a Crew Transfer Vehicle (CTV) for crew transportation between the LEO Depot, the L1 Depot and missions beyond L1; a new reusable lunar lander for crew transportation between the L1 Depot and the lunar surface; and a Mars orbital depot based on International Space Station (ISS) heritage hardware. Data provided include the number of launches required for each mission utilizing current ELV systems (Delta IV Heavy or equivalent) and the approximate vehicle masses and propellant requirements. Also included is a discussion of affordability, with ideas on technologies that could reduce the number of launches required and thoughts on how this infrastructure could include competitive bidding for ELV flights and propellant services, development of new reusable in-space vehicles, and development of a multiuse infrastructure that can support many government and commercial missions simultaneously.
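
The launch counts the paper tabulates follow from simple mass arithmetic: propellant and crew/cargo mass divided by ELV payload capacity, rounded up. A hedged sketch of that bookkeeping; all masses and the ~25 t payload figure are illustrative assumptions, not the paper's numbers:

```python
import math

def launches_needed(propellant_kg, crew_cargo_kg, elv_payload_to_leo_kg):
    """Count ELV flights to stage one mission: propellant is delivered
    in payload-capacity slices, crew/cargo on separate flights
    (illustrative model, ignores tankage and boil-off)."""
    propellant_flights = math.ceil(propellant_kg / elv_payload_to_leo_kg)
    crew_cargo_flights = math.ceil(crew_cargo_kg / elv_payload_to_leo_kg)
    return propellant_flights + crew_cargo_flights

# Assumed values: ~25 t to LEO for a Delta IV Heavy-class ELV,
# 60 t of depot propellant and 12 t of crew/cargo for one sortie.
print(launches_needed(60_000, 12_000, 25_000))
```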

  6. PEEX Modelling Platform for Seamless Environmental Prediction

    Science.gov (United States)

    Baklanov, Alexander; Mahura, Alexander; Arnold, Stephen; Makkonen, Risto; Petäjä, Tuukka; Kerminen, Veli-Matti; Lappalainen, Hanna K.; Ezau, Igor; Nuterman, Roman; Zhang, Wen; Penenko, Alexey; Gordov, Evgeny; Zilitinkevich, Sergej; Kulmala, Markku

    2017-04-01

    The Pan-Eurasian EXperiment (PEEX) is a multidisciplinary, multi-scale research programme started in 2012 and aimed at resolving the major uncertainties in Earth System Science and global sustainability issues concerning the Arctic and boreal Northern Eurasian regions and China. Such challenges include climate change, air quality, biodiversity loss, chemicalization, food supply, and the use of natural resources by mining, industry, energy production and transport. The research infrastructure introduces the current state-of-the-art modeling platform and observation systems in the Pan-Eurasian region and presents the future baselines for coherent and coordinated research infrastructures in the PEEX domain. The PEEX Modelling Platform is characterized by a complex, seamless, integrated Earth System Modeling (ESM) approach, in combination with specific models of different processes and elements of the system acting on different temporal and spatial scales. An ensemble approach is taken to integrate modeling results from different models, participants and countries. PEEX utilizes the full potential of a hierarchy of models: scenario analysis, inverse modeling, and modeling based on measurement needs and processes. The models are validated and constrained by available in-situ and remote sensing data of various spatial and temporal scales using data assimilation and top-down modeling. The analysis of the anticipated large volumes of data produced by the available models and sensors will be supported by a dedicated virtual research environment developed for this purpose.

  7. Moving Virtual Research Environments from high maintenance Stovepipes to Multi-purpose Sustainable Service-oriented Science Platforms

    Science.gov (United States)

    Klump, Jens; Fraser, Ryan; Wyborn, Lesley; Friedrich, Carsten; Squire, Geoffrey; Barker, Michelle; Moloney, Glenn

    2017-04-01

    The researcher of today is likely to be part of a team distributed over multiple sites that will access data from an external repository and then process the data on a public or private cloud or even on a large centralised supercomputer. They are increasingly likely to use a mixture of their own code, third-party software and libraries, or even access global community codes. These components will be connected into a Virtual Research Environment (VRE) that will enable members of the research team who are not co-located to actively work together at various scales to share data, models, tools, software, workflows, best practices, infrastructures, etc. Many VREs are built in isolation: designed to meet a specific research program, with components tightly coupled and not capable of being repurposed for other use cases - they are becoming 'stovepipes'. The limited number of users of some VREs also means that the cost of maintenance per researcher can be unacceptably high. The alternative is to develop service-oriented Science Platforms that enable multiple communities to develop specialised solutions for specific research programs. The platforms can offer access to data, software tools and processing infrastructures (cloud, supercomputers) through globally distributed, interconnected modules. In Australia, the Virtual Geophysics Laboratory (VGL), initially built to give a specific set of researchers in government agencies access to specific data sets and a limited number of tools, is now rapidly evolving into a multi-purpose Earth science platform with access to an increased variety of data, a broader range of tools, users from more sectors and a diversity of computational infrastructures. The expansion has been relatively easy because of the architecture, whereby data, tools and compute resources are loosely coupled via interfaces that are built on international standards and accessed as services wherever possible. In recent years, investments in

  8. High-Altitude Platforms — Present Situation and Technology Trends

    OpenAIRE

    d’Oliveira, Flavio Araripe; Melo, Francisco Cristovão Lourenço de; Devezas, Tessaleno Campos

    2016-01-01

    ABSTRACT High-altitude platforms (HAPs) are aircraft, usually unmanned airships or airplanes, positioned above 20 km, in the stratosphere, in order to compose a telecommunications network or perform remote sensing. In the 1990s and 2000s, several projects were launched, but very few continued. In 2014, two major Internet companies (Google and Facebook) announced investments in new HAP projects to provide Internet access in regions without communication infrastructure (terrestrial or sa...

  9. Wireless Sensor Network-Based Service Provisioning by a Brokering Platform.

    Science.gov (United States)

    Guijarro, Luis; Pla, Vicent; Vidal, Jose R; Naldi, Maurizio; Mahmoodi, Toktam

    2017-05-12

    This paper proposes a business model for providing services based on the Internet of Things through a platform that intermediates between human users and Wireless Sensor Networks (WSNs). The platform seeks to maximize its profit by posting both the price charged to each user and the price paid to each WSN. A complete analysis of the profit maximization problem is performed in this paper. We show that the service provider maximizes its profit by incentivizing all users and all Wireless Sensor Infrastructure Providers (WSIPs) to join the platform. This is true not only when the number of users is high, but also when it is moderate, provided that the costs that the users bear do not exceed a cost ceiling. This cost ceiling depends on the number of WSIPs, on the intrinsic value of the service and on the externality that the WSIPs have on the user utility.
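
The two-sided pricing problem the abstract describes can be sketched numerically: the platform posts a user price and a WSIP payment and searches for the profit-maximizing pair. The linear participation rule and every parameter value below are illustrative assumptions, not the paper's model:

```python
def profit(p_user, p_wsip, n_users=100, n_wsips=5,
           intrinsic=10.0, externality=0.5, user_cost=2.0, wsip_cost=3.0):
    """Two-sided platform profit under a simple participation rule:
    a user joins if intrinsic value plus the per-WSIP externality
    covers price plus access cost; a WSIP joins if the posted
    payment covers its cost."""
    wsips = n_wsips if p_wsip >= wsip_cost else 0
    users = n_users if intrinsic + externality * wsips >= p_user + user_cost else 0
    return users * p_user - wsips * p_wsip

# Grid search over posted prices in 0.5 steps.
best = max(
    ((profit(pu / 2, pw / 2), pu / 2, pw / 2)
     for pu in range(0, 41) for pw in range(0, 41)),
    key=lambda t: t[0],
)
print(best)
```

Consistent with the paper's qualitative result, the optimum here has all WSIPs paid just enough to join, because their presence raises what users will pay.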

  10. Designing a concept for an IT-infrastructure for an integrated research and treatment center.

    Science.gov (United States)

    Stäubert, Sebastian; Winter, Alfred; Speer, Ronald; Löffler, Markus

    2010-01-01

    Healthcare and medical research in Germany are heading towards more interconnected systems. New initiatives funded by the German government encourage the development of Integrated Research and Treatment Centers (IFB). Within an IFB, new organizational structures and infrastructures for an interdisciplinary, translational and trans-sectoral working relationship between the existing, rigidly separated sectors are intended and needed. This paper describes what an IT-infrastructure for an IFB could look like, which major challenges have to be solved, and which methods can be used to plan such a complex IT-infrastructure in the field of healthcare. By means of project management, system analyses, process models, 3LGM2 models and resource plans, an appropriate concept with different views is created. This concept supports information management in its enterprise architecture planning activities and represents a first step towards implementing a connected healthcare and medical research platform.

  11. The occurrence of the cold-water coral Lophelia pertusa (Scleractinia) on oil and gas platforms in the North Sea: Colony growth, recruitment and environmental controls on distribution

    Energy Technology Data Exchange (ETDEWEB)

    Gass, S.E.; Roberts, J.M. [Scottish Association for Marine Science, Dunstaffnage Marine Laboratory, Oban, Argyll (United Kingdom)

    2006-05-15

    This study reports a newly established sub-population of Lophelia pertusa, the dominant reef-framework forming coral species in the north-east Atlantic, on oil and gas platforms in the northern North Sea. L. pertusa was positively identified on 13 of 14 platforms examined using existing oil and gas industry visual inspections. Two platforms were inspected in more detail to examine depth and colony size distributions. We recorded 947 colonies occurring between 59 and 132 m depth that coincides with cold Atlantic water at depths below the summer thermocline in the northern North Sea. We suggest that these colonies provide evidence for a planktonic larval stage of L. pertusa with recruits initially originating from populations in the north-east Atlantic and now self recruiting to the platforms. Size class distribution showed a continuous range of size classes, but with few outlying large colonies. The break between the largest colonies and the rest of the population is considered as the point when colonies began self recruiting to the platforms, resulting in greater colonization success. We present the first documented in situ colony growth rate estimate (26 ± 5 mm yr⁻¹) for L. pertusa based on 15 colonies from the Tern Alpha platform with evidence for yearly recruitment events starting the year the platform was installed. Evidence of contamination from drill muds and cuttings was observed on the Heather platform but appeared limited to regions close to drilling discharge points, where colonies experience partial as well as whole colony mortality. (author)
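
Given the reported in situ growth rate of 26 ± 5 mm yr⁻¹, a colony's approximate age can be bracketed from its measured size, which is how observed colonies can be tied back to the platform installation year. A small sketch; the 150 mm colony size is a hypothetical measurement, not a value from the study:

```python
def age_range_years(colony_size_mm, rate_mm_per_yr=26.0, rate_err=5.0):
    """Bracket colony age with the reported linear growth rate and its
    uncertainty: the slowest plausible growth gives the oldest estimate."""
    oldest = colony_size_mm / (rate_mm_per_yr - rate_err)
    youngest = colony_size_mm / (rate_mm_per_yr + rate_err)
    return round(youngest, 1), round(oldest, 1)

# Hypothetical 150 mm colony observed during a platform inspection.
print(age_range_years(150))
```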

  12. The National Mechatronic Platform. The basis of the educational programs in the knowledge society

    Science.gov (United States)

    Maties, V.

    2016-08-01

    The shift from the information society to the knowledge-based society, caused by the mechatronic revolution that took place in the ninth decade of the last century, launched many challenges for education and research activities as well. The development of knowledge production calls for new educational technologies that stimulate initiative and creativity as a basis for increasing productivity in knowledge production. The paper presents details of the innovative potential of mechatronics as an educational environment for transdisciplinary learning and integral education. The basic infrastructure of that environment is based on mechatronic platforms. In order to develop knowledge production at the national level, specific structures have to be developed. The paper presents details of the structure of the National Mechatronic Platform as a true knowledge factory. The benefits of the effort to develop the specific infrastructure for knowledge production in the field of mechatronics are outlined as well.

  13. Security infrastructure for on-demand provisioned Cloud infrastructure services

    NARCIS (Netherlands)

    Demchenko, Y.; Ngo, C.; de Laat, C.; Wlodarczyk, T.W.; Rong, C.; Ziegler, W.

    2011-01-01

    Providing consistent security services in on-demand provisioned Cloud infrastructure services is of primary importance due to the multi-tenant and potentially multi-provider nature of the Cloud Infrastructure as a Service (IaaS) environment. Cloud security infrastructure should address two aspects of the

  14. CERNBox: Petabyte-Scale Cloud Synchronisation and Sharing Platform

    OpenAIRE

    Hugo González Labrador

    2016-01-01

    CERNBox is a cloud synchronisation service for end-users: it allows syncing and sharing files on all major mobile and desktop platforms (Linux, Windows, MacOSX, Android, iOS) aiming to provide offline availability to any data stored in the CERN EOS infrastructure. There is a high demand in the community for an easily accessible cloud storage solution such as CERNBox. Integration of the CERNBox service with the EOS storage back-end is the next step towards providing ’synchronisation and sharin...

  15. Service-oriented advanced metering infrastructure for smart grids

    NARCIS (Netherlands)

    Chen, S.; Lukkien, J.J.; Zhang, L.

    2011-01-01

    Advanced Metering Infrastructure (AMI) enables smart grids to involve power consumers in the business process of power generation, transmission, distribution and consumption. However, the participation of consumers challenges the current power systems with system integration and cooperation and

  16. Service-oriented advanced metering infrastructure for smart grids

    NARCIS (Netherlands)

    Chen, S.; Lukkien, J.J.; Zhang, L.

    2010-01-01

    Advanced Metering Infrastructure (AMI) enables smart grids to involve power consumers in the business process of power generation, transmission, distribution and consumption. However, the participation of consumers challenges the current power systems with system integration and cooperation and

  17. Adapting Water Infrastructure to Non-stationary Climate Changes

    Science.gov (United States)

    Water supply and sanitation are carried out by three major types of water infrastructure: drinking water treatment and distribution, wastewater collection and treatment, and storm water collection and management. Their sustainability is measured by resilience against and adapta...

  18. Design challenges in nanoparticle-based platforms: Implications for targeted drug delivery systems

    Science.gov (United States)

    Mullen, Douglas Gurnett

    Characterization and control of heterogeneous distributions of nanoparticle-ligand components are major design challenges for nanoparticle-based platforms. This dissertation begins with an examination of a poly(amidoamine) (PAMAM) dendrimer-based targeted delivery platform. A folic acid-targeted modular platform was developed to target human epithelial cancer cells. Although active targeting was observed in vitro, it was not found in vivo using a mouse tumor model. A major flaw of this platform design was that it provided for neither characterization nor control of the component distribution. Motivated by the problems experienced with the modular design, the actual composition of nanoparticle-ligand distributions was examined using a model dendrimer-ligand system. High Pressure Liquid Chromatography (HPLC) resolved the distribution of components in samples with mean ligand/dendrimer ratios ranging from 0.4 to 13. A peak-fitting analysis enabled quantification of the component distribution. The quantified distributions were found to be significantly more heterogeneous than commonly expected, and standard analytical parameters, namely the mean ligand/nanoparticle ratio, failed to adequately represent the component heterogeneity. The distribution of components was also found to be sensitive to particle modifications that preceded the ligand conjugation. With the knowledge gained from this detailed distribution analysis, a new platform design was developed to provide a system with dramatically improved control over the number of components and with improved batch reproducibility. Using semi-preparative HPLC, individual dendrimer-ligand components were isolated. The isolated dendrimers with precise numbers of ligands were characterized by NMR and analytical HPLC. In total, nine different dendrimer-ligand components were obtained with degrees of purity ≥80%. This system has the potential to serve as a platform to which a precise number of functional molecules
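
The central point, that a mean ligand/nanoparticle ratio hides a broad component distribution, can be illustrated with a simple stochastic model: if conjugation events were independent, ligand counts would be roughly Poisson-distributed, so even a modest mean leaves most particles away from it. The Poisson assumption is ours for illustration, not the dissertation's fitted distribution:

```python
from math import exp, factorial

def poisson_pmf(k, mean):
    """P(a particle carries exactly k ligands) under an
    independent-conjugation model."""
    return mean**k * exp(-mean) / factorial(k)

mean_ratio = 4.5  # a mean within the studied 0.4-13 ligand/dendrimer range
frac_near_mean = poisson_pmf(4, mean_ratio) + poisson_pmf(5, mean_ratio)
frac_unconjugated = poisson_pmf(0, mean_ratio)

print(f"particles with 4-5 ligands: {frac_near_mean:.1%}")
print(f"particles with no ligand:   {frac_unconjugated:.1%}")
```

Even this idealized model puts only about a third of the particles at 4-5 ligands when the mean is 4.5; the chromatographically measured distributions were broader still.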

  19. Effect of platform connection and abutment material on stress distribution in single anterior implant-supported restorations: a nonlinear 3-dimensional finite element analysis.

    Science.gov (United States)

    Carvalho, Marco Aurélio; Sotto-Maior, Bruno Salles; Del Bel Cury, Altair Antoninha; Pessanha Henriques, Guilherme Elias

    2014-11-01

    Although various abutment connections and materials have recently been introduced, insufficient data exist regarding the effect of stress distribution on their mechanical performance. The purpose of this study was to investigate the effect of different abutment materials and platform connections on stress distribution in single anterior implant-supported restorations with the finite element method. Nine experimental groups were modeled from the combination of 3 platform connections (external hexagon, internal hexagon, and Morse tapered) and 3 abutment materials (titanium, zirconia, and hybrid) as follows: external hexagon-titanium, external hexagon-zirconia, external hexagon-hybrid, internal hexagon-titanium, internal hexagon-zirconia, internal hexagon-hybrid, Morse tapered-titanium, Morse tapered-zirconia, and Morse tapered-hybrid. Finite element models consisted of a 4×13-mm implant, anatomic abutment, and lithium disilicate central incisor crown cemented over the abutment. The 49 N occlusal loading was applied in 6 steps to simulate the incisal guidance. Equivalent von Mises stress (σvM) was used for both the qualitative and quantitative evaluation of the implant and abutment in all the groups and the maximum (σmax) and minimum (σmin) principal stresses for the numerical comparison of the zirconia parts. The highest abutment σvM occurred in the Morse-tapered groups and the lowest in the external hexagon-hybrid, internal hexagon-titanium, and internal hexagon-hybrid groups. The σmax and σmin values were lower in the hybrid groups than in the zirconia groups. The stress distribution concentrated in the abutment-implant interface in all the groups, regardless of the platform connection or abutment material. The platform connection influenced the stress on abutments more than the abutment material. The stress values for implants were similar among different platform connections, but greater stress concentrations were observed in internal connections
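
The equivalent von Mises stress used throughout the comparison is a standard scalar function of the principal stresses. A minimal sketch of the computation; the sample stress values are hypothetical, not results from the study:

```python
from math import sqrt

def von_mises(s1, s2, s3):
    """Equivalent von Mises stress from the three principal
    stresses (same units in, same units out, e.g. MPa)."""
    return sqrt(((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 2)

# Hypothetical principal stresses at an abutment-implant interface node.
print(round(von_mises(120.0, 35.0, -10.0), 1))
```

A finite element post-processor evaluates exactly this expression at every node, which is how the σvM fields compared across the nine groups are produced.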

  20. A Power Hardware-in-the-Loop Platform with Remote Distribution Circuit Cosimulation

    Energy Technology Data Exchange (ETDEWEB)

    Palmintier, Bryan; Lundstrom, Blake; Chakraborty, Sudipta; Williams, Tess L.; Schneider, Kevin P.; Chassin, David P.

    2015-04-01

    This paper demonstrates the use of a novel co-simulation architecture that integrates hardware testing using Power Hardware-in-the-Loop (PHIL) with larger-scale electric grid models using off-the-shelf, non-PHIL software tools. This architecture enables utilities to study the impacts of emerging energy technologies on their systems, and manufacturers to explore the interactions of new devices with existing and emerging devices on the power system, both without the need to convert existing grid models to a new platform or to conduct in-field trials. The paper describes an implementation of this architecture for testing two residential-scale advanced solar inverters at separate points of common coupling. The same hardware setup is tested with two different distribution feeders (IEEE 123 and 8500 node test systems) modeled using GridLAB-D. In addition to simplifying testing with multiple feeders, the architecture demonstrates additional flexibility, with hardware testing in one location linked via the Internet to software modeling in a remote location. In testing, inverter current, real and reactive power, and PCC voltage are well captured by the co-simulation platform. Testing of the inverter's advanced control features is currently somewhat limited by the software model time step (1 sec) and the tested communication latency (24 msec). Overshoot-induced oscillations are observed with volt/VAR control delays of 0 and 1.5 sec, while 3.4 sec and 5.5 sec delays produced little or no oscillation. These limitations could be overcome using faster modeling and communication within the same co-simulation architecture.
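
The core of such a PHIL co-simulation is a per-time-step exchange: the hardware side reports measured inverter power, the remote grid model solves the feeder and returns the point-of-common-coupling voltage. A toy sketch of that loop; the linear stub feeder and all numbers are illustrative, whereas the actual setup runs GridLAB-D over an Internet link:

```python
def feeder_model(p_inj_kw, v_nominal=240.0, sensitivity=0.004):
    """Stub for the remote distribution feeder: PCC voltage rises
    slightly with injected power (illustrative linear model)."""
    return v_nominal * (1 + sensitivity * p_inj_kw / 100.0)

def run_cosimulation(steps, pv_output_kw):
    """One-second co-simulation loop: exchange P and V each step."""
    trace = []
    for k in range(steps):
        p = pv_output_kw[k]   # hardware-side measurement for this step
        v = feeder_model(p)   # software-side feeder solution sent back
        trace.append((k, p, round(v, 2)))
    return trace

for step in run_cosimulation(3, [0.0, 2.5, 5.0]):
    print(step)
```

The paper's observed limits (1 sec model step, 24 msec latency) are properties of exactly this exchange loop: the controller under test can react no faster than one round trip.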

  1. SQoS as the Base for Next Generation Global Infrastructure

    DEFF Research Database (Denmark)

    Madsen, Ole Brun; Knudsen, Thomas Phillip; Pedersen, Jens Myrup

    2003-01-01

    The convergence towards a unified global WAN platform, providing both best effort services and guaranteed high quality services, sets the agenda for the design and implementation of the next generation global information infrastructure. The absence of design principles, allowing for smooth and cost-efficient scalability without loss of control over the structurally based properties may prevent or seriously delay the introduction of globally available new application and switching services. Reliability and scalability issues are addressed from a structural viewpoint. The concept of Structural Quality...

  3. INFORMATION INFRASTRUCTURE OF THE EDUCATIONAL ENVIRONMENT WITH VIRTUAL MACHINE TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    Artem D. Beresnev

    2014-09-01

    Full Text Available Subject of research. An information infrastructure for a training environment that applies virtual machine technology to small pedagogical systems (separate classes, authors' courses) is created and investigated. Research technique. A life-cycle model of the information infrastructure for small pedagogical systems using virtual machines is constructed in the ARIS methodology. A technique for forming the information infrastructure with virtual machines on the basis of a process approach is proposed. An event chain model, combined with an environment chart, is used as the basic model. For each function of the event chain, the necessary set of information and software support is defined. The application of the technique is illustrated by the design of an information infrastructure for an educational environment, taking into account the specific character of small pedagogical systems. The advantages of the designed information infrastructure are: maximum use of open or free components; use of standard protocols (mainly HTTP and HTTPS); maximum portability (application servers can be run on any widespread operating system); a uniform interface for managing various virtualization platforms; the possibility of inventorying the contents of a virtual machine without starting it; and flexible inventory management of the virtual machine by means of configurable rule chains. Approbation. The obtained results were tested at the training center "Institute of Informatics and Computer Facilities" (Tallinn, Estonia). Applying the technique within the course "Computer and Software Usage" halved the number of infrastructure component failures requiring the intervention of a technical specialist, as well as the time needed to eliminate such malfunctions. In addition, pupils who gained broader experience with computers and software showed better results

  4. Assessing the social sustainability contribution of an infrastructure project under conditions of uncertainty

    International Nuclear Information System (INIS)

    Sierra, Leonardo A.; Yepes, Víctor; Pellicer, Eugenio

    2017-01-01

    Assessing the viability of a public infrastructure includes economic, technical and environmental aspects; however, on many occasions, the social aspects are not adequately considered. This article proposes a procedure to estimate the social sustainability of infrastructure projects under conditions of uncertainty, based on a deterministic multicriteria method. The variability of the method's inputs is supplied by the decision-makers. Uncertain inputs are modeled through uniform and beta-PERT distributions, and the Monte Carlo method is used to propagate uncertainty through the method. A case study of a road infrastructure improvement in El Salvador illustrates this treatment. The main results determine the variability of the short- and long-term social improvement indices for the infrastructure and the probability of each alternative's position in the prioritization. The proposed mechanism improves the reliability of decision making early in infrastructure projects by taking their social contribution into account. The results can complement environmental and economic sustainability assessments. - Highlights: •Estimates the social sustainability of infrastructure projects under conditions of uncertainty •Uses multicriteria and Monte Carlo techniques and beta-PERT distributions •Determines the variability of the short- and long-term social improvement •Determines the probability of each alternative's position in the prioritization •Improves the reliability of decision making by considering the social contribution
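
Sampling a beta-PERT input for Monte Carlo propagation uses the standard transformation of (min, mode, max) into beta shape parameters. A minimal sketch; the min/mode/max values for the criterion score are hypothetical, not the case-study data:

```python
import random

def pert_sample(low, mode, high, rng):
    """Draw from a beta-PERT distribution via the usual beta
    reparameterization with shape parameter lambda = 4."""
    alpha = 1 + 4 * (mode - low) / (high - low)
    beta = 1 + 4 * (high - mode) / (high - low)
    return low + (high - low) * rng.betavariate(alpha, beta)

rng = random.Random(42)  # fixed seed for reproducibility
# Hypothetical expert judgment for one social criterion score.
draws = [pert_sample(0.2, 0.6, 0.9, rng) for _ in range(10_000)]
mean = sum(draws) / len(draws)
print(round(mean, 2))  # close to the PERT mean (low + 4*mode + high)/6
```

In the full procedure, each uncertain criterion would be sampled this way on every Monte Carlo iteration, and the social improvement indices recomputed per iteration to obtain their distributions.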

  5. Cloud Computing in Support of Applied Learning: A Baseline Study of Infrastructure Design at Southern Polytechnic State University

    Science.gov (United States)

    Conn, Samuel S.; Reichgelt, Han

    2013-01-01

    Cloud computing represents an architecture and paradigm of computing designed to deliver infrastructure, platforms, and software as constructible computing resources on demand to networked users. As campuses are challenged to better accommodate academic needs for applications and computing environments, cloud computing can provide an accommodating…

  6. KubeNow: A Cloud Agnostic Platform for Microservice-Oriented Applications

    OpenAIRE

    Capuccini, Marco; Larsson, Anders; Toor, Salman; Spjuth, Ola

    2018-01-01

    KubeNow is a platform for rapid and continuous deployment of microservice-based applications over cloud infrastructure. Within the field of software engineering, the microservice-based architecture is a methodology in which complex applications are divided into smaller, more narrow services. These services are independently deployable and compatible with each other like building blocks. These blocks can be combined in multiple ways, according to specific use cases. Microservices are designed ...

  7. Interoperability of remote handling control system software modules at Divertor Test Platform 2 using middleware

    Energy Technology Data Exchange (ETDEWEB)

    Tuominen, Janne, E-mail: janne.m.tuominen@tut.fi [Tampere University of Technology, Department of Intelligent Hydraulics and Automation, Tampere (Finland); Rasi, Teemu; Mattila, Jouni [Tampere University of Technology, Department of Intelligent Hydraulics and Automation, Tampere (Finland); Siuko, Mikko [VTT, Technical Research Centre of Finland, Tampere (Finland); Esque, Salvador [F4E, Fusion for Energy, Torres Diagonal Litoral B3, Josep Pla2, 08019, Barcelona (Spain); Hamilton, David [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France)

    2013-10-15

    Highlights: ► The prototype DTP2 remote handling control system is a heterogeneous collection of subsystems, each realizing a functional area of responsibility. ► Middleware provides well-known, reusable solutions to problems, such as heterogeneity, interoperability, security and dependability. ► A middleware solution was selected and integrated with the DTP2 RH control system. The middleware was successfully used to integrate all relevant subsystems and functionality was demonstrated. -- Abstract: This paper focuses on the inter-subsystem communication channels in a prototype distributed remote handling control system at Divertor Test Platform 2 (DTP2). The subsystems are responsible for specific tasks and, over the years, their development has been carried out using various platforms and programming languages. The communication channels between subsystems have different priorities, e.g. very high messaging rate and deterministic timing or high reliability in terms of individual messages. Generally, a control system's communication infrastructure should provide interoperability, scalability, performance and maintainability. An attractive approach to accomplish this is to use a standardized and proven middleware implementation. The selection of a middleware can have a major cost impact in future integration efforts. In this paper we present development done at DTP2 using the Object Management Group's (OMG) standard specification for Data Distribution Service (DDS) for ensuring communications interoperability. DDS has gained a stable foothold especially in the military field. It lacks a centralized broker, thereby avoiding a single-point-of-failure. It also includes an extensive set of Quality of Service (QoS) policies. The standard defines a platform- and programming language independent model and an interoperability wire protocol that enables DDS vendor interoperability, allowing software developers to avoid vendor lock-in situations.
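The core abstraction the paper relies on, DDS-style topic-based publish-subscribe without a central broker, can be sketched as a toy bus. This is an illustration of the model only, not the DDS API, and the topic names are invented:

```python
from collections import defaultdict

class Bus:
    """Toy topic bus: publishers and subscribers are decoupled and share
    only a topic name, as in the DDS publish-subscribe model."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, sample):
        # Deliver to every subscriber of the topic; no central broker involved.
        for cb in self._subs[topic]:
            cb(sample)

bus = Bus()
received = []
bus.subscribe("manipulator/joint_state", received.append)
bus.publish("manipulator/joint_state", {"joint1": 0.42})
bus.publish("hydraulics/pressure", {"bar": 210})  # no subscriber; ignored
```

In real DDS each topic additionally carries QoS policies (e.g. deterministic timing vs. reliable delivery), which is what lets one middleware serve the differently prioritized channels the abstract describes.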

  8. Information system for administrating and distributing color images through internet

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available The information system for administrating and distributing color images through the Internet ensures the consistent replication of color images, their storage in an on-line database, and predictable distribution by means of a digitally distributed flow, based on the Windows platform and POD (Print On Demand) technology. The consistent replication of color images, independently of the parameters of the processing equipment and the features of the programs composing the technological flow, is ensured by the standard color management system defined by the ICC (International Color Consortium), which is integrated into the Windows operating system and the POD technology. The latter minimize the noticeable differences between the colors captured, displayed or printed by various replication equipment and/or edited by various graphical applications. The system's integrated web application ensures the uploading of color images into an on-line database and their administration and distribution among users via the Internet. To preserve the data expressed by the color images during their transfer along a digitally distributed flow, the software application includes an original tool ensuring the accurate replication of colors on computer displays or when printing them by means of various color printers or presses. For development and use, this application employs a hardware platform based on PC support and a competitive software platform based on the Windows operating system, the .NET development environment and the C# programming language. This information system is beneficial for creators and users of color images, the success of printed or on-line (Internet) publications depending on the sizeable, predictable and accurate replication of the colors employed for the visual expression of information in every activity field of modern society. The herein introduced information system enables all interested persons to access the

  9. gProcess and ESIP Platforms for Satellite Imagery Processing over the Grid

    Science.gov (United States)

    Bacu, Victor; Gorgan, Dorian; Rodila, Denisa; Pop, Florin; Neagu, Gabriel; Petcu, Dana

    2010-05-01

    The Environment oriented Satellite Data Processing Platform (ESIP) is developed through SEE-GRID-SCI (SEE-GRID eInfrastructure for regional eScience), co-funded by the European Commission through FP7 [1]. The gProcess platform [2] is a set of tools and services supporting the development and execution over the Grid of workflow-based processing, particularly satellite imagery processing. The ESIP [3], [4] is built on top of the gProcess platform by adding a set of satellite image processing software modules and meteorological algorithms. Satellite images can reveal and supply important information on earth surface parameters, climate data, pollution levels and weather conditions that can be used in different research areas. Generally, the processing algorithms for satellite images can be decomposed into a set of modules that form a graph representation of the processing workflow. Two types of workflows can be defined in the gProcess platform: the abstract workflow (PDG - Process Description Graph), in which the user defines the algorithm conceptually, and the instantiated workflow (iPDG - instantiated PDG), which is the mapping of the PDG pattern onto particular satellite image and meteorological data [5]. The gProcess platform allows the definition of complex workflows by combining data resources, operators, services and sub-graphs. The gProcess platform is developed for the gLite middleware that is available in the EGEE and SEE-GRID infrastructures [6]. gProcess exposes its specific functionality through web services [7]. The Editor Web Service retrieves information on available resources that are used to develop complex workflows (available operators, sub-graphs, services, supported resources, etc.). The Manager Web Service deals with resource management (uploading new resources such as workflows, operators, services, data, etc.) and in addition retrieves information on workflows. The Executor Web Service manages the execution of the instantiated workflows
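The PDG/iPDG split described above can be sketched in a few lines: an abstract dependency graph of operators, a binding step that maps it onto concrete data, and a topological sort that yields an execution order. The dictionary layout, operator names and binding keys are invented for illustration and are not gProcess's actual representation:

```python
from collections import defaultdict

# Abstract workflow (PDG): operator nodes and their data dependencies.
pdg = {
    "nodes": {"ndvi": "compute_ndvi", "mask": "cloud_mask", "stats": "zonal_stats"},
    "edges": [("mask", "ndvi"), ("ndvi", "stats")],  # (runs-before, runs-after)
    "inputs": {"mask": ["image"]},
}

def instantiate(pdg, bindings):
    """Map the abstract PDG onto concrete data, yielding an iPDG."""
    needed = [p for params in pdg["inputs"].values() for p in params]
    missing = [p for p in needed if p not in bindings]
    if missing:
        raise ValueError(f"unbound inputs: {missing}")
    return {"pdg": pdg, "bindings": dict(bindings)}

def topo_order(edges, nodes):
    """Execution order respecting dependencies (Kahn's algorithm)."""
    indeg = {n: 0 for n in nodes}
    succ = defaultdict(list)
    for a, b in edges:
        succ[a].append(b)
        indeg[b] += 1
    queue = [n for n in nodes if indeg[n] == 0]
    order = []
    while queue:
        n = queue.pop(0)
        order.append(n)
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return order

ipdg = instantiate(pdg, {"image": "scene.tif"})
order = topo_order(pdg["edges"], pdg["nodes"])
```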

  10. An Open Source Software and Web-GIS Based Platform for Airborne SAR Remote Sensing Data Management, Distribution and Sharing

    Science.gov (United States)

    Changyong, Dou; Huadong, Guo; Chunming, Han; Ming, Liu

    2014-03-01

    With more and more Earth observation data available to the community, how to manage and share these valuable remote sensing datasets is becoming an urgent issue. Web-based Geographical Information Systems (GIS) technology provides a convenient way for users in different locations to share and make use of the same dataset. In order to efficiently use the airborne Synthetic Aperture Radar (SAR) remote sensing data acquired by the Airborne Remote Sensing Center of the Institute of Remote Sensing and Digital Earth (RADI), Chinese Academy of Sciences (CAS), a Web-GIS based platform for airborne SAR data management, distribution and sharing was designed and developed. The major features of the system include a map-based navigation search interface, full-resolution imagery shown overlaid on the map, and the exclusive use of Open Source Software (OSS) throughout the platform. The functions of the platform include browsing imagery on the map-based navigation interface, ordering and downloading data online, and image dataset and user management. At present, the system is under testing in RADI and will come into regular operation soon.

  11. Effects of hypothetical improvised nuclear detonation on the electrical infrastructure

    International Nuclear Information System (INIS)

    Barrett, Christopher L.; Eubank, Stephen; Evrenosoglu, C. Yaman; Marathe, Achla; Marathe, Madhav V.; Phadke, Arun; Thorp, James; Vullikanti, Anil

    2013-01-01

    We study the impacts of a hypothetical improvised nuclear detonation (IND) on the electrical infrastructure and its cascading effects on other urban inter-dependent infrastructures of a major metropolitan area in the US. We synthesize open source information, expert knowledge, commercial software and Google Earth data to derive a realistic electrical transmission and distribution network spanning the region. A dynamic analysis of the geo-located grid is carried out to determine the cause of malfunction of components, and their short-term and long-term effect on the stability of the grid. Finally a detailed estimate of the cost of damage to the major components of the infrastructure is provided.

  12. Effects of hypothetical improvised nuclear detonation on the electrical infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, Christopher L.; Eubank, Stephen; Evrenosoglu, C. Yaman; Marathe, Achla; Marathe, Madhav V.; Phadke, Arun; Thorp, James; Vullikanti, Anil [Virginia Tech, Blacksburg, VA (United States). Network Dynamics and Simulation Science Lab.

    2013-07-01

    We study the impacts of a hypothetical improvised nuclear detonation (IND) on the electrical infrastructure and its cascading effects on other urban inter-dependent infrastructures of a major metropolitan area in the US. We synthesize open source information, expert knowledge, commercial software and Google Earth data to derive a realistic electrical transmission and distribution network spanning the region. A dynamic analysis of the geo-located grid is carried out to determine the cause of malfunction of components, and their short-term and long-term effect on the stability of the grid. Finally a detailed estimate of the cost of damage to the major components of the infrastructure is provided.

  13. Monitoring WLCG with lambda-architecture: a new scalable data store and analytics platform for monitoring at petabyte scale.

    CERN Document Server

    Magnoni, L; Cordeiro, C; Georgiou, M; Andreeva, J; Khan, A; Smith, D R

    2015-01-01

    Monitoring the WLCG infrastructure requires the gathering and analysis of a high volume of heterogeneous data (e.g. data transfers, job monitoring, site tests) coming from different services and experiment-specific frameworks, to provide a uniform and flexible interface for scientists and sites. The current architecture, where relational database systems are used to store, process and serve monitoring data, has limitations in coping with the foreseen increase in the volume (e.g. higher LHC luminosity) and the variety (e.g. new data-transfer protocols and new resource types, such as cloud computing) of WLCG monitoring events. This paper presents a new scalable data store and analytics platform designed by the Support for Distributed Computing (SDC) group at the CERN IT department, which uses a variety of technologies, each one targeting specific aspects of big-scale distributed data processing (commonly referred to as the lambda-architecture approach). Results of data processing on Hadoop for WLCG data activities mon...
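The lambda-architecture approach mentioned above reduces to one idea: merge a precomputed batch view over the immutable master dataset with an incremental speed view over recent events at query time. A minimal sketch, with invented site names and counts:

```python
# Batch layer: a precomputed view over the immutable master dataset,
# e.g. completed transfers per site (values are made up).
batch_view = {"site_A": 120, "site_B": 45}

# Speed layer: an incremental view over events not yet absorbed by a batch run.
speed_view = {"site_A": 3, "site_C": 7}

def serve(site):
    """Serving layer: merge both views at query time."""
    return batch_view.get(site, 0) + speed_view.get(site, 0)
```

Periodically the batch layer recomputes its view over all data and the speed view is reset, which bounds the error any approximation in the speed layer can accumulate.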

  14. Intelligent Tools for Building a Scientific Information Platform

    CERN Document Server

    Skonieczny, Lukasz; Rybiński, Henryk; Niezgodka, Marek

    2012-01-01

    This book is a selection of results obtained within one year of research performed under SYNAT, a nation-wide scientific project aiming to create an infrastructure for scientific content storage and sharing for academia, education and an open knowledge society in Poland. The selection refers to research in artificial intelligence, knowledge discovery and data mining, information retrieval and natural language processing, addressing the problems of implementing intelligent tools for building a scientific information platform. The idea of this book is based on the very successful SYNAT Project Conference and the SYNAT Workshop accompanying the 19th International Symposium on Methodologies for Intelligent Systems (ISMIS 2011). The papers included in this book present an overview and insight into such topics as the architecture of scientific information platforms, semantic clustering, ontology-based systems, as well as multimedia data processing.

  15. Robust uncertainty evaluation for system identification on distributed wireless platforms

    Science.gov (United States)

    Crinière, Antoine; Döhler, Michael; Le Cam, Vincent; Mevel, Laurent

    2016-04-01

    Health monitoring of civil structures by system identification procedures from automatic control is now accepted as a valid approach. These methods provide frequencies and mode shapes of the structure over time. For continuous monitoring the excitation of a structure is usually ambient, thus unknown and assumed to be noise. Hence, all estimates from the vibration measurements are realizations of random variables with inherent uncertainty due to (unknown) process and measurement noise and finite data length. The underlying algorithms usually run under Matlab, assuming a large memory pool and considerable computational power. Even under these premises, computational and memory usage are heavy and not realistic for embedding in on-site sensor platforms such as the PEGASE platform. Moreover, the current push for distributed wireless systems calls for algorithmic adaptation to lower data exchanges and maximize local processing. Finally, a recent breakthrough in system identification allows us to process both frequency information and its related uncertainty together from one and only one data sequence, at the expense of a computational and memory explosion that requires even more careful attention than before. The current approach focuses on a system identification procedure called multi-setup subspace identification that processes both frequencies and their related variances from a set of interconnected wireless systems, with all computation running locally within the limited memory pool of each system before being merged on a host supervisor. Careful attention is given to data exchanges and I/O satisfying OGC standards, as well as to minimizing memory footprints and maximizing computational efficiency. These systems are built for autonomous operation in the field and could later be included in a wide distributed architecture such as the Cloud2SM project. The usefulness of these strategies is illustrated on
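As a sketch of the final merge step on the host supervisor, one simple fusion rule for per-node estimates that carry their own variances is inverse-variance weighting. This is an illustrative assumption, not the multi-setup subspace identification algorithm itself, and the numbers are hypothetical:

```python
def merge_estimates(freqs, variances):
    """Inverse-variance weighted fusion of per-node estimates: more certain
    nodes get more weight, and the fused variance is never worse than the
    best single node's."""
    weights = [1.0 / v for v in variances]
    fused = sum(f * w for f, w in zip(freqs, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

# Hypothetical first-mode frequency estimates (Hz) from three wireless nodes,
# each with the variance produced by its local identification run.
freq, var = merge_estimates([1.02, 0.98, 1.00], [0.004, 0.001, 0.002])
```

The payoff of shipping variances along with frequencies is visible here: the merge needs only two numbers per node, keeping data exchange between the wireless systems and the supervisor minimal.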

  16. WindS@UP: The e-Science Platform for WindScanner.eu

    International Nuclear Information System (INIS)

    Gomes, Filipe; Palma, José Laginha; Lopes, João Correia; Ribeiro, Luís Frölén

    2014-01-01

    The WindScanner e-Science platform architecture and the underlying premises are discussed. It is a collaborative platform that will provide a repository for experimental data and metadata. Additional data processing capabilities will be incorporated, thus enabling in-situ data processing. Every resource in the platform is identified by a Uniform Resource Identifier (URI), enabling unequivocal identification of the field campaign data sets and the metadata associated with each data set or experiment. This feature will allow the validation of field experiment results and conclusions, as all managed resources will be linked. A centralised node (Hub) will aggregate the contributions of 6 to 8 local nodes from EC countries and will manage the access of three types of users: data curator, data provider and researcher. This architecture was designed to ensure consistent and efficient research data access and preservation, and exploitation of new research opportunities provided by having this Collaborative Data Infrastructure. The prototype platform, WindS@UP, enables the usage of the platform by humans via a Web interface or by machines using an internal API (Application Programming Interface). Future work will improve the vocabulary (application profile) used to describe the resources managed by the platform

  17. Bike Infrastructures

    DEFF Research Database (Denmark)

    Silva, Victor; Harder, Henrik; Jensen, Ole B.

    Bike Infrastructures aims to identify bicycle infrastructure typologies and design elements that can help promote cycling significantly. It is structured as case-study-based research in which three cycling infrastructures with distinct typologies were analyzed and compared. The three cases......, the findings of this research project can also support bike-friendly design and planning, and cyclist advocacy....

  18. Real-Time Optimization and Control of Next-Generation Distribution

    Science.gov (United States)

    Real-Time Optimization and Control of Next-Generation Distribution Infrastructure | Grid Modernization | NREL. The project is developing a system-theoretic distribution network management framework that unifies real-time voltage and

  19. Engineering economics and finance for transportation infrastructure

    CERN Document Server

    Prassas, Elena S

    2013-01-01

    Transportation infrastructure is often referred to as society’s bloodstream.  It allows for the movement of people and goods to provide the ability to optimize the production and distribution of goods in an effective and efficient manner, and to provide personal opportunities for employment, recreation, education, health care, and other vital activities.   At the same time, the costs to provide, maintain, and operate this complex infrastructure are enormous.  Because so much of the economic resources to be invested come from public funds, it is critical that expenditures are made in a manner that provides society with the best possible return on the investment.  Further, it is important that sufficient investment is made available, and the costs of the investment are equitably borne by taxpayers.   This textbook provides a fundamental overview of the application of engineering economic principles to transportation infrastructure investments.  Basic theory is presented and illustrated with examples spe...

  20. A threat analysis framework as applied to critical infrastructures in the Energy Sector.

    Energy Technology Data Exchange (ETDEWEB)

    Michalski, John T.; Duggan, David Patrick

    2007-09-01

    The need to protect national critical infrastructure has led to the development of a threat analysis framework. The threat analysis framework can be used to identify the elements required to quantify threats against critical infrastructure assets and provide a means of distributing actionable threat information to critical infrastructure entities for the protection of infrastructure assets. This document identifies and describes five key elements needed to perform a comprehensive analysis of threat: the identification of an adversary, the development of generic threat profiles, the identification of generic attack paths, the discovery of adversary intent, and the identification of mitigation strategies.

  1. Executable research compendia in geoscience research infrastructures

    Science.gov (United States)

    Nüst, Daniel

    2017-04-01

    From generation through analysis and collaboration to communication, scientific research requires the right tools. Scientists create their own software using third-party libraries and platforms. Cloud computing, Open Science, public data infrastructures, and Open Source provide scientists with unprecedented opportunities, nowadays often in a field of "Computational X" (e.g. computational seismology) or X-informatics (e.g. geoinformatics) [0]. This increases complexity and generates more innovation, e.g. Environmental Research Infrastructures (environmental RIs [1]). Researchers in Computational X write their software relying on both source code (e.g. from https://github.com) and binary libraries (e.g. from package managers such as APT, https://wiki.debian.org/Apt, or CRAN, https://cran.r-project.org/). They download data from domain-specific (cf. https://re3data.org) or generic (e.g. https://zenodo.org) data repositories, and deploy computations remotely (e.g. European Open Science Cloud). The results themselves are archived, given persistent identifiers, connected to other works (e.g. using https://orcid.org/), and listed in metadata catalogues. A single researcher, intentionally or not, interacts with all sub-systems of RIs: data acquisition, data access, data processing, data curation, and community support [3]. To preserve computational research, [3] proposes the Executable Research Compendium (ERC), a container format that closes the gap of dependency preservation by encapsulating the runtime environment. ERCs and RIs can be integrated for different uses: (i) Coherence: ERC services validate completeness, integrity and results; (ii) Metadata: ERCs connect the different parts of a piece of research and facilitate discovery; (iii) Exchange and Preservation: ERCs as usable building blocks are the shared and archived entities; (iv) Self-consistency: ERCs remove dependence on ephemeral sources; (v) Execution: ERC services create and execute a packaged analysis but integrate with
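Use (i), coherence checking, can be sketched as a completeness test over a compendium manifest. The field names below are illustrative assumptions, not the actual ERC specification:

```python
# Hypothetical minimal ERC manifest fields: an identifier, the entry-point
# script, the packaged runtime environment, the expected result to validate
# against, and descriptive metadata. These names are assumptions.
REQUIRED = {"id", "main_script", "runtime_image", "expected_output", "metadata"}

def validate_erc(manifest):
    """Report which required parts of the compendium are missing."""
    return sorted(REQUIRED - manifest.keys())

erc = {
    "id": "example-compendium",
    "main_script": "analysis.R",
    "runtime_image": "runtime.tar",
    "metadata": {"title": "An example study"},
}
missing = validate_erc(erc)
```

A compendium that fails this check cannot support the later uses (execution, result validation), which is why coherence sits first in the list.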

  2. Distributed One Time Password Infrastructure for Linux Environments

    Directory of Open Access Journals (Sweden)

    Alberto Benito Peral

    2018-04-01

    Full Text Available Nowadays, a great deal of critical information and services are hosted on computer systems. Proper access control to these resources is essential to avoid malicious actions that could cause huge losses to home and professional users. Access control systems have evolved from the first password-based systems to modern mechanisms using smart cards, certificates, tokens, biometric systems, etc. However, when designing a system, it is necessary to take into account its particular limitations, such as connectivity, infrastructure or budget. In addition, one of the main objectives must be to ensure the system's usability, but this property is usually orthogonal to security. Thus, the use of passwords is still common. In this paper, we present a new password-based access control system that aims to improve password security with minimum impact on system usability.
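The abstract does not detail the paper's scheme, but a standard event-based one-time password algorithm, HOTP (RFC 4226), illustrates how plain passwords can be strengthened with little extra infrastructure:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Standard HOTP (RFC 4226): HMAC-SHA1 over a big-endian 8-byte counter,
    dynamically truncated to a short decimal code."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Each code is valid once: client and server both advance the counter after a successful login, so a captured code is useless to an attacker.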

  3. Activity-Driven Computing Infrastructure - Pervasive Computing in Healthcare

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Christensen, Henrik Bærbak; Olesen, Anders Konring

    In many work settings, and especially in healthcare, work is distributed among many cooperating actors, who are constantly moving around and are frequently interrupted. In line with other researchers, we use the term pervasive computing to describe a computing infrastructure that supports work...

  4. Enabling European Archaeological Research: The ARIADNE E-Infrastructure

    NARCIS (Netherlands)

    Hollander, H.S.; Aloia, Nicola; Binding, Ceri; Cuy, Sebastian; Doerr, Martin; Fanini, Bruno; Felicetti, Achille; Fihn, Johan; Gavrilis, Dimitris; Geser, Guntram; Meghini, Carlo; Niccolucci, Franco; Nurra, Federico; Papatheodorou, Christos; Richards, Julian; Ronzino, Paola; Scopigno, Roberto; Theodoridou, Maria; Theodoridou, Maria; Tudhope, Douglas; Vlachidis, Andreas; Wright, Holly

    2017-01-01

    Research e-infrastructures, digital archives and data services have become important pillars of scientific enterprise that in recent decades has become ever more collaborative, distributed and data-intensive. The archaeological research community has been an early adopter of digital tools for data

  5. MicROS-drt: supporting real-time and scalable data distribution in distributed robotic systems.

    Science.gov (United States)

    Ding, Bo; Wang, Huaimin; Fan, Zedong; Zhang, Pengfei; Liu, Hui

    A primary requirement in distributed robotic software systems is the dissemination of data to all interested collaborative entities in a timely and scalable manner. However, providing such a service in a highly dynamic and resource-limited robotic environment is a challenging task, and existing robot software infrastructure has limitations in this respect. This paper presents a novel robot software infrastructure, micROS-drt, which supports real-time and scalable data distribution. The solution is based on a loosely coupled data publish-subscribe model with the ability to support various time-related constraints. To realize this model, a mature data distribution standard, the Data Distribution Service for real-time systems (DDS), is adopted as the foundation of the transport layer of this software infrastructure. By elaborately adapting and encapsulating the capability of the underlying DDS middleware, micROS-drt can meet the requirement of real-time and scalable data distribution in distributed robotic systems. Evaluation results in terms of scalability, latency jitter and transport priority, as well as an experiment on real robots, validate the effectiveness of this work.
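One of the time-related constraints such a model can support, in the spirit of DDS's lifespan QoS, is dropping samples that are too old to be useful by delivery time. A toy sketch of the idea, not the DDS or micROS-drt API:

```python
class TimedTopic:
    """Toy model of a lifespan-style QoS: samples older than `lifespan`
    seconds at delivery time are discarded instead of delivered."""

    def __init__(self, lifespan):
        self.lifespan = lifespan
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, sample, sent_at, now):
        """Deliver `sample` unless it has outlived its lifespan."""
        if now - sent_at > self.lifespan:
            return False  # stale sample discarded
        for callback in self.subscribers:
            callback(sample)
        return True

delivered = []
topic = TimedTopic(lifespan=0.1)
topic.subscribe(delivered.append)
fresh = topic.publish({"joint": 0.42}, sent_at=0.00, now=0.05)
stale = topic.publish({"joint": 0.43}, sent_at=0.00, now=0.50)
```

For a robot, delivering a half-second-old joint state is often worse than delivering nothing, which is why dropping stale data is itself a real-time feature.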

  6. The CTTC 5G End-to-End Experimental Platform : Integrating Heterogeneous Wireless/Optical Networks, Distributed Cloud, and IoT Devices

    OpenAIRE

    Munoz, Raul; Mangues-Bafalluy, Josep; Vilalta, Ricard; Verikoukis, Christos; Alonso-Zarate, Jesus; Bartzoudis, Nikolaos; Georgiadis, Apostolos; Payaro, Miquel; Perez-Neira, Ana; Casellas, Ramon; Martinez, Ricardo; Nunez-Martinez, Jose; Requena Esteso, Manuel; Pubill, David; Font-Bach, Oriol

    2016-01-01

    The Internet of Things (IoT) will facilitate a wide variety of applications in different domains, such as smart cities, smart grids, industrial automation (Industry 4.0), smart driving, assistance of the elderly, and home automation. Billions of heterogeneous smart devices with different application requirements will be connected to the networks and will generate huge aggregated volumes of data that will be processed in distributed cloud infrastructures. On the other hand, there is also a gen...

  7. Distribution transformer lifetime analysis in the presence of demand response and rooftop PV integration

    Directory of Open Access Journals (Sweden)

    Behi Behnaz

    2017-01-01

    Full Text Available Many distribution transformers have already exceeded half of their expected service life of 35 years in the infrastructure of Western Power, the electric distribution company supplying the southwest of Western Australia, Australia. Therefore, significant investment in transformer replacement is anticipated in the near future. However, high renewable integration and demand response (DR) are promising resources to defer investment in infrastructure upgrades and extend the lifetime of transformers. This paper investigates the impact of rooftop photovoltaic (PV) integration and customer engagement through DR on the lifetime of transformers in electric distribution networks. To this aim, first, time-series models of load, DR and PV are built for each year over a planning period. This load model is applied to a typical distribution transformer, for which the hot-spot temperature rise is modelled based on the relevant standard. Using this calculation platform, the loss of life and the actual age of the distribution transformer are obtained. Then, various scenarios including different levels of PV penetration and DR contribution are examined, and their impacts on the age of the transformer are reported. Finally, the equivalent loss of net present value of the distribution transformer is formulated and discussed. This formulation gives major benefits to distribution network planners for analysing the contribution of PV and DR to lifetime extension of the distribution transformer. In addition, the provided model can be utilised in optimal investment analysis to find the best time for transformer replacement and the associated cost considering PV penetration and DR. The simulation results show that integration of PV and DR within a feeder can significantly extend the lifetime of transformers.
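The abstract does not name the standard used for the hot-spot model; assuming the commonly used IEEE C57.91 formulation, the loss-of-life calculation reduces to an aging acceleration factor referenced to a 110 °C hot spot:

```python
import math

def aging_factor(hotspot_c):
    """IEEE C57.91 aging acceleration factor for thermally upgraded paper
    insulation, referenced to a 110 degC hot-spot temperature."""
    return math.exp(15000 / 383 - 15000 / (hotspot_c + 273))

def equivalent_aging_hours(hotspot_series_c, step_h=1.0):
    """Equivalent insulation aging (hours) over a series of hot-spot
    temperatures sampled every `step_h` hours."""
    return sum(aging_factor(t) * step_h for t in hotspot_series_c)

# A transformer running 10 degC above reference ages markedly faster;
# PV (which lowers daytime loading) and DR (which shaves peaks) both
# reduce the hot-spot temperature and hence the accumulated aging.
fast = aging_factor(120)
slow = aging_factor(100)
```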

  8. An IHE-conform telecooperation platform supporting the treatment of dementia patients

    Directory of Open Access Journals (Sweden)

    Saleh K.

    2015-09-01

    Full Text Available Ensuring medical support for patients of advanced age in rural areas is a major challenge. Moreover, the number of registered doctors—medical specialists in particular—will decrease in such areas over the next years. These unmet medical needs, in combination with communication deficiencies among different types of health-care professionals, pose threats to the quality of patient treatment. This work presents a novel solution combining telemedicine, telecooperation, and IHE profiles to tackle these challenges. We present a telecooperation platform that supports longitudinal electronic patient records and allows for intersectoral cooperation based on shared electronic medication charts and other documents. Furthermore, the conceived platform allows for integration into the planned German telematics infrastructure.

  9. A layered and distributed approach to platform systems control

    NARCIS (Netherlands)

    Neef, R.M.; Lieburg, A. van; Gosliga, S.P. van; Gillis, M.P.W.

    2003-01-01

    With the increasing complexity of platform systems and the ongoing demand for reduced manning comes the need for novel approaches to ship control systems. Current control and management systems fall short on maintainability, robustness and scalability. From the operator's perspective information

  10. Research infrastructures of pan-European interest: The EU and Global issues

    Energy Technology Data Exchange (ETDEWEB)

    Pero, Herve, E-mail: Herve.Pero@ec.europa.e ['Research Infrastructures' Unit, DG Research, European Commission, Brussels (Belgium)

    2011-01-21

    Research Infrastructures act as 'knowledge industries' for society and as a source of attraction for world scientists. At the European level, the long-term objective is to support an efficient and world-class eco-system of Research Infrastructures, encompassing not only the large single-site facilities but also distributed research infrastructures, based on a network of 'regional partner facilities' with strong links to world-class universities and centres of excellence. The EC support activities help to promote the development of this fabric of research infrastructures of the highest quality and performance in Europe. Since 2002, ESFRI has also aimed at supporting a coherent approach to policy-making on research infrastructures. The European Roadmap for Research Infrastructures is ESFRI's most significant achievement to date, and KM3Net is one of its identified projects. The current Community support for the Preparatory Phase of this project aims at solving mainly governance, financial, organisational and legal issues. How can KM3Net contribute to an efficient Research Infrastructure eco-system? This is the question the KM3Net stakeholders need to answer very soon!

  11. Research infrastructures of pan-European interest: The EU and Global issues

    International Nuclear Information System (INIS)

    Pero, Herve

    2011-01-01

    Research Infrastructures act as 'knowledge industries' for society and as a source of attraction for scientists worldwide. At the European level, the long-term objective is to support an efficient and world-class ecosystem of Research Infrastructures, encompassing not only large single-site facilities but also distributed research infrastructures based on a network of 'regional partner facilities', with strong links to world-class universities and centres of excellence. The EC support activities help to promote the development of this fabric of research infrastructures of the highest quality and performance in Europe. Since 2002, ESFRI has also aimed at supporting a coherent approach to policy-making on research infrastructures. The European Roadmap for Research Infrastructures is ESFRI's most significant achievement to date, and KM3Net is one of its identified projects. The current Community support to the Preparatory Phase of this project aims mainly at solving governance, financial, organisational and legal issues. How can KM3Net contribute to an efficient Research Infrastructure ecosystem? This is the question the KM3Net stakeholders need to answer very soon!

  12. Research infrastructures of pan-European interest: The EU and Global issues

    Science.gov (United States)

    Pero, Hervé

    2011-01-01

    Research Infrastructures act as “knowledge industries” for society and as a source of attraction for scientists worldwide. At the European level, the long-term objective is to support an efficient and world-class ecosystem of Research Infrastructures, encompassing not only large single-site facilities but also distributed research infrastructures based on a network of “regional partner facilities”, with strong links to world-class universities and centres of excellence. The EC support activities help to promote the development of this fabric of research infrastructures of the highest quality and performance in Europe. Since 2002, ESFRI has also aimed at supporting a coherent approach to policy-making on research infrastructures. The European Roadmap for Research Infrastructures is ESFRI's most significant achievement to date, and KM3Net is one of its identified projects. The current Community support to the Preparatory Phase of this project aims mainly at solving governance, financial, organisational and legal issues. How can KM3Net contribute to an efficient Research Infrastructure ecosystem? This is the question the KM3Net stakeholders need to answer very soon!

  13. Advanced Metering Infrastructure based on Smart Meters

    Science.gov (United States)

    Suzuki, Hiroshi

    By surveying penetration rates of advanced meters and the associated communication technologies, devices and systems, this paper shows that the penetration of advanced metering is important for the future development of electric power system infrastructure. It examines the state of the technology and the economic benefits of advanced metering. One result of the survey is that advanced metering currently has a penetration of about six percent of total installed electric meters in the United States. Applications to the infrastructure differ by type of organization. Integrated with emerging communication technologies, smart meters enable several features beyond automatic meter reading, such as distribution management control, outage management, and remote switching.

  14. Distributed Resource Energy Analysis and Management System (DREAMS) Development for Real-time Grid Operations

    Energy Technology Data Exchange (ETDEWEB)

    Nakafuji, Dora [Hawaiian Electric Company, Honolulu, HI (United States); Gouveia, Lauren [Hawaiian Electric Company, Honolulu, HI (United States)

    2016-10-24

    This project supports development of the next-generation integrated energy management system (EMS) infrastructure, able to incorporate advanced visualization of behind-the-meter distributed resource information and probabilistic renewable energy generation forecasts to inform real-time operational decisions. The project involves end-users and active feedback from a Utility Advisory Team (UAT) to help inform how information can be used to enhance operational functions (e.g., unit commitment, load forecasting, Automatic Generation Control (AGC) reserve monitoring, ramp alerts) within two major EMS platforms. Objectives include: engaging utility operations personnel to develop user input on displays, set expectations, test and review; developing ease-of-use and timeliness metrics for measuring enhancements; developing prototype integrated capabilities within two operational EMS environments; demonstrating an integrated decision analysis platform with real-time wind and solar forecasting information and timely distributed resource information; seamlessly integrating new 4-dimensional information into operations without increasing workload and complexity; developing sufficient analytics to inform and confidently transform and adopt new operating practices and procedures; disseminating project lessons learned through industry-sponsored workshops and conferences; and building on collaborative utility-vendor partnerships and industry capabilities.
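    The record above describes using probabilistic renewable forecasts to inform operational functions such as reserve monitoring and ramp alerts. As a minimal illustrative sketch of that idea (not the DREAMS algorithm; the function name, the 10th-percentile rule and all numbers are invented for illustration), one can size the up-reserve against a conservative renewable forecast quantile and flag a ramp alert from scenario data:

    ```python
    import statistics

    def reserve_and_ramp_alert(scenarios, demand_mw, ramp_limit_mw):
        """Toy decision rule. `scenarios` holds Monte Carlo forecasts of
        renewable output for the current and next interval as (now_mw,
        next_mw) pairs. Up-reserve covers demand against the 10th
        percentile of current renewable output; a ramp alert is raised
        if the mean expected change exceeds ramp_limit_mw. Hypothetical
        logic, not an actual EMS product's method."""
        now = sorted(s[0] for s in scenarios)
        p10 = now[max(0, int(0.10 * len(now)) - 1)]  # conservative output
        reserve = max(0.0, demand_mw - p10)
        mean_ramp = statistics.mean(s[1] - s[0] for s in scenarios)
        return reserve, abs(mean_ramp) > ramp_limit_mw
    ```

    With ten scenarios whose current outputs range from 80 to 110 MW and demand of 150 MW, the rule holds 70 MW of up-reserve; a consistent downward ramp across scenarios trips the alert.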

  15. Comparison between beta radiation dose distribution due to LDR and HDR ocular brachytherapy applicators using GATE Monte Carlo platform.

    Science.gov (United States)

    Mostafa, Laoues; Rachid, Khelifi; Ahmed, Sidi Moussa

    2016-08-01

    Eye applicators with 90Sr/90Y and 106Ru/106Rh beta-ray sources are generally used in brachytherapy for the treatment of eye diseases such as uveal melanoma. Whenever radiation is used in treatment, dosimetry is essential, and knowledge of the exact dose distribution is critical to decision-making and to the outcome of the treatment. The Monte Carlo technique provides a powerful tool for calculating doses and dose distributions, which helps to predict and determine more accurately the doses from various types and shapes of eye applicators. The aim of this work was to use the Monte Carlo GATE platform to calculate the 3D dose distribution on a mathematical model of the human eye according to international recommendations. Mathematical models were developed for four ophthalmic applicators: two HDR 90Sr applicators, SIA.20 and SIA.6, and two LDR 106Ru applicators, a concave CCB model and a flat CCB model. In the present work, considering a heterogeneous eye phantom and the chosen tumor, the mean dose distributions obtained with GATE according to international recommendations show a discrepancy with respect to those specified by the manufacturers. The quality control of dosimetric parameters shows that, contrary to the other applicators, the SIA.20 applicator is consistent with the recommendations. The GATE simulations show that the SIA.20 applicator gives better results, namely lower doses delivered to critical structures compared with those obtained for the other applicators; the SIA.6 applicator, simulated with MCNPX, generates higher lens doses than those generated by GATE. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
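    The record above relies on Monte Carlo scoring of dose in a phantom. As a toy illustration of the underlying idea only (this is not GATE, the geometry is 1-D, and the effective attenuation coefficient is a made-up number rather than physics data; 2.28 MeV is merely used as a nominal beta endpoint energy), one can sample particle interaction depths and accumulate deposited energy per depth bin:

    ```python
    import random

    def simulate_depth_dose(n_particles=50_000, e0_mev=2.28, mu_per_mm=0.5,
                            n_bins=20, bin_mm=1.0, seed=42):
        """Toy 1-D Monte Carlo depth-dose scorer. Each particle's
        interaction depth is drawn from an exponential distribution with
        a hypothetical effective attenuation mu_per_mm, and its full
        energy is deposited locally in the corresponding depth bin
        (a crude simplification of real transport physics).
        Returns energy deposited per bin, in MeV."""
        rng = random.Random(seed)
        dose = [0.0] * n_bins
        for _ in range(n_particles):
            depth_mm = rng.expovariate(mu_per_mm)  # sampled depth (mm)
            b = int(depth_mm / bin_mm)
            if b < n_bins:                         # deeper hits leave the phantom
                dose[b] += e0_mev
        return dose
    ```

    Full codes such as GATE or MCNPX add real cross-section data, 3-D voxelized heterogeneous phantoms, secondary-particle transport and variance reduction; the sketch only shows the sample-and-score loop they share.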

  16. Structural and Infrastructural Underpinnings of International R&D Networks

    DEFF Research Database (Denmark)

    Niang, Mohamed; Sørensen, Brian Vejrum

    2009-01-01

    This paper explores the process of globally distributing R&D activities with an emphasis on the effects of network maturity. It discusses emerging configurations by asking how the structure and infrastructure of international R&D networks evolve along with the move from a strong R&D center...... to dispersed development. Drawing from case studies of two international R&D networks, it presents a capability maturity model and argues that understanding the interaction between new structures and infrastructures of the dispersed networks has become a key requirement for developing organizational...

  17. EMSODEV and EPOS-IP: key findings for effective management of EU research infrastructure projects

    Science.gov (United States)

    Materia, Paola; Bozzoli, Sabrina; Beranzoli, Laura; Cocco, Massimo; Favali, Paolo; Freda, Carmela; Sangianantoni, Agata

    2017-04-01

    EMSO (European Multidisciplinary Seafloor and water-column Observatory, http://www.emso-eu.org) and EPOS (European Plate Observing System, https://www.epos-ip.org) are pan-European Research Infrastructures (RIs) in the ESFRI 2016 Roadmap. EMSO has recently become an ERIC (European Research Infrastructure Consortium), whilst EPOS application is in progress. Both ERICs will be hosted in Italy and the "Representing Entity" is INGV. EMSO consists of oceanic environment observation systems spanning from the Arctic through the Atlantic and Mediterranean, to the Black Sea for long-term, high-resolution, real-time monitoring of natural and man-induced processes such as hazards, climate, and marine ecosystems changes to study their evolution and interconnections. EPOS aims at creating a pan-European infrastructure for solid Earth science to support a safe and sustainable society. EPOS will enable innovative multidisciplinary research for a better understanding of Earth's physical and chemical processes controlling earthquakes, volcanic eruptions, ground instability, tsunami, and all those processes driving tectonics and Earth's surface dynamics. Following the conclusion of their Preparatory Phases the two RIs are now in their Implementation Phase still supported by the EC through the EMSODEV and EPOS-IP projects, both run by dedicated Project Management Offices at INGV with sound experience in EU projects. EMSODEV (H2020 project, 2015-2018) involves 11 partners and 9 associate partners and aims at improving the harmonization among the EMSO ERIC observation systems through the realization of EMSO Generic Instrument Modules (EGIMs), and a Data Management Platform (DMP) to implement interoperability and standardization. The DMP will provide access to data from all EMSO nodes, providing a unified, homogeneous, infrastructure-scale and user-oriented platform integrated with the increased measurement capabilities and functions provided by the EGIMs. 
EPOS IP (H2020 project, 2015

  18. The obstacles for the investments in natural gas distribution infrastructure in Brazil; Os obstaculos aos investimentos na rede de distribuicao de gas natural no Brasil

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Edmar Luiz Fagundes de; Bueno, Salua Saud; Selles, Vitor [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Inst. de Economia

    2004-07-01

    This paper analyses the main obstacles to the expansion of the Brazilian gas distribution pipeline infrastructure. The paper examines the evolution of investments in the gas chain and highlights an important imbalance between the level of investment in the upstream and transportation segments and the level of investment in the distribution network. It is clear that investment in the distribution segment is not keeping pace with the expansion of the other segments. Given this conclusion, the paper examines the potential for increasing investment in the distribution segment by augmenting the debt level of distribution companies. By analyzing the main distribution companies' financial statements, the paper shows that there is room for an expansion of investments through financial leverage. Finally, the paper examines the main financing obstacles that prevent the companies from increasing their investment level. (author)

  19. Digital divide, biometeorological data infrastructures and human vulnerability definition

    Science.gov (United States)

    Fdez-Arroyabe, Pablo; Lecha Estela, Luis; Schimt, Falko

    2018-05-01

    The design and implementation of any climate-related health service nowadays implies avoiding the digital divide, as it requires having access to, and being able to use, complex technological devices, massive meteorological data, users' geographic locations and biophysical information. This article presents in detail the co-creation of a biometeorological data infrastructure, a complex platform formed by multiple components: a mainframe, a biometeorological model called Pronbiomet, a relational database management system, data procedures, communication protocols, different software packages, users, datasets and a mobile application. The system produces four daily world maps of the partial density of atmospheric oxygen and collects user feedback on their health condition. The infrastructure is shown to be a useful tool for delineating individual vulnerability to meteorological changes, a key factor in the definition of any biometeorological risk. This technological approach to studying weather-related health impacts is the initial seed for the definition of biometeorological profiles of persons and for the future development of customized climate services.

  20. Digital divide, biometeorological data infrastructures and human vulnerability definition

    Science.gov (United States)

    Fdez-Arroyabe, Pablo; Lecha Estela, Luis; Schimt, Falko

    2017-06-01

    The design and implementation of any climate-related health service nowadays implies avoiding the digital divide, as it requires having access to, and being able to use, complex technological devices, massive meteorological data, users' geographic locations and biophysical information. This article presents in detail the co-creation of a biometeorological data infrastructure, a complex platform formed by multiple components: a mainframe, a biometeorological model called Pronbiomet, a relational database management system, data procedures, communication protocols, different software packages, users, datasets and a mobile application. The system produces four daily world maps of the partial density of atmospheric oxygen and collects user feedback on their health condition. The infrastructure is shown to be a useful tool for delineating individual vulnerability to meteorological changes, a key factor in the definition of any biometeorological risk. This technological approach to studying weather-related health impacts is the initial seed for the definition of biometeorological profiles of persons and for the future development of customized climate services.

  1. Digital divide, biometeorological data infrastructures and human vulnerability definition.

    Science.gov (United States)

    Fdez-Arroyabe, Pablo; Lecha Estela, Luis; Schimt, Falko

    2018-05-01

    The design and implementation of any climate-related health service nowadays implies avoiding the digital divide, as it requires having access to, and being able to use, complex technological devices, massive meteorological data, users' geographic locations and biophysical information. This article presents in detail the co-creation of a biometeorological data infrastructure, a complex platform formed by multiple components: a mainframe, a biometeorological model called Pronbiomet, a relational database management system, data procedures, communication protocols, different software packages, users, datasets and a mobile application. The system produces four daily world maps of the partial density of atmospheric oxygen and collects user feedback on their health condition. The infrastructure is shown to be a useful tool for delineating individual vulnerability to meteorological changes, a key factor in the definition of any biometeorological risk. This technological approach to studying weather-related health impacts is the initial seed for the definition of biometeorological profiles of persons and for the future development of customized climate services.

  2. Online catalog access and distribution of remotely sensed information

    Science.gov (United States)

    Lutton, Stephen M.

    1997-09-01

    Remote sensing is providing voluminous data and value added information products. Electronic sensors, communication electronics, computer software, hardware, and network communications technology have matured to the point where a distributed infrastructure for remotely sensed information is a reality. The amount of remotely sensed data and information is making distributed infrastructure almost a necessity. This infrastructure provides data collection, archiving, cataloging, browsing, processing, and viewing for applications from scientific research to economic, legal, and national security decision making. The remote sensing field is entering a new exciting stage of commercial growth and expansion into the mainstream of government and business decision making. This paper overviews this new distributed infrastructure and then focuses on describing a software system for on-line catalog access and distribution of remotely sensed information.

  3. Joint industry planning platforms for coal export supply chains

    Energy Technology Data Exchange (ETDEWEB)

    Bridges, R.; Buckley, N.; Goodall, J.; Seeley, D. [InterDynamics Pty. Ltd. (Australia)

    1998-12-31

    Improving the performance and reducing the costs of the export logistics chain is critical to the competitiveness of export coal mines. The fundamental practices associated with the use of export logistics chains made up of mine, trucking, rail and port operations are being challenged by the advent of third-party operators on rail systems and the use of the Internet. Whilst individual mines can improve their processes to drive down their mining costs, they face major challenges in their endeavour to improve the performance of export logistics chains and reduce the significant logistics costs of moving coal from the mine to export ships via the shared infrastructure of rail systems and ports. There is an increasing realisation that global competition is not only between mines but between coal export regions that are defined by their rail system and port infrastructure. The development and use of a joint industry planning platform for the export logistics chains of the Western Australian Grain Industry has demonstrated that an industry facing significant restructuring and increased competitiveness can achieve major throughput and cost reduction gains when stakeholders in export logistics chains share key planning information using the Internet and state-of-the-art planning tools. Joint industry planning platforms for export logistics chains are being considered or are at initial stages of development for a number of Australasian coal export logistics chains. This paper addresses the key components of joint industry planning platforms, the key information that should be shared, the use of the Internet and information servers, and the contractual structures required to enable stakeholders of an export logistics chain, who are competitors or potential competitors, to work together to improve the competitiveness of a coal export region. 4 refs., 6 figs.

  4. Prioritising transport infrastructure projects: Towards a multi-criterion ...

    African Journals Online (AJOL)

    Southern African Business Review ... systematic framework for the appraisal of transport infrastructure projects of the type 'budget cycle projects with local ... Cost/benefit analysis, when applied in a classic sense, is not suitable for this purpose, given its ... (optimal allocation of resources), equity (impact distribution aspects),

  5. Temperature profiles from MBT casts from a World-Wide distribution from MULTIPLE PLATFORMS from 1948-04-08 to 1968-12-14 (NODC Accession 9300131)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Temperature profile data were collected from MBT casts from a World-Wide distribution. Data were collected from MULTIPLE PLATFORMS from 08 April 1948 to 14 December...

  6. Towards a Market Entry Framework for Digital Payment Platforms

    DEFF Research Database (Denmark)

    Kazan, Erol; Damsgaard, Jan

    2016-01-01

    This study presents a framework to understand and explain the design and configuration of digital payment platforms and how these platforms create conditions for market entries. By embracing the theoretical lens of platform envelopment, we employed a multiple and comparative-case study...... in a European setting by using our framework as an analytical lens to assess market-entry conditions. We found that digital payment platforms have acquired market entry capabilities, which is achieved through strategic platform design (i.e., platform development and service distribution) and technology design...... (i.e., issuing evolutionary and revolutionary payment instruments). The studied cases reveal that digital platforms leverage payment services as a means to bridge and converge core and adjacent platform markets. In so doing, platform envelopment strengthens firms’ market position in their respective...

  7. Nuclear Energy Infrastructure Database Description and User’s Manual

    Energy Technology Data Exchange (ETDEWEB)

    Heidrich, Brenden [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-11-01

    In 2014, the Deputy Assistant Secretary for Science and Technology Innovation initiated the Nuclear Energy (NE)–Infrastructure Management Project by tasking the Nuclear Science User Facilities, formerly the Advanced Test Reactor National Scientific User Facility, to create a searchable and interactive database of all pertinent NE-supported and -related infrastructure. This database, known as the Nuclear Energy Infrastructure Database (NEID), is used for analyses to establish needs, redundancies, efficiencies, distributions, etc., to best understand the utility of NE’s infrastructure and inform the content of infrastructure calls. The Nuclear Science User Facilities developed the database by utilizing data and policy direction from a variety of reports from the U.S. Department of Energy, the National Research Council, the International Atomic Energy Agency, and various other federal and civilian resources. The NEID currently contains data on 802 research and development instruments housed in 377 facilities at 84 institutions in the United States and abroad. The effort to maintain and expand the database is ongoing. Detailed information on many facilities must be gathered from associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements. This document provides a short tutorial on the navigation of the NEID web portal at NSUF-Infrastructure.INL.gov.
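    The record above describes NEID as a searchable, interactive inventory of institutions, facilities and instruments. As a minimal sketch of the kind of relational schema and query such an inventory implies (the table layout, sample rows and the `search` helper are invented for illustration and are not NEID's actual data model), one could use an embedded SQL database:

    ```python
    import sqlite3

    # Build an in-memory inventory: facilities belong to institutions,
    # instruments belong to facilities. Hypothetical schema and data.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE facility (id INTEGER PRIMARY KEY, name TEXT,
                           institution TEXT);
    CREATE TABLE instrument (id INTEGER PRIMARY KEY, name TEXT,
                             category TEXT,
                             facility_id INTEGER REFERENCES facility(id));
    """)
    conn.executemany("INSERT INTO facility VALUES (?, ?, ?)", [
        (1, "Hot Cell Lab", "Example National Lab"),
        (2, "Materials Test Wing", "Example University"),
    ])
    conn.executemany("INSERT INTO instrument VALUES (?, ?, ?, ?)", [
        (1, "SEM-01", "microscopy", 1),
        (2, "TEM-02", "microscopy", 2),
        (3, "Creep Frame A", "mechanical testing", 2),
    ])

    def search(category):
        """Return (instrument, facility, institution) rows for a category."""
        return conn.execute("""
            SELECT i.name, f.name, f.institution
            FROM instrument i JOIN FACILITY f ON i.facility_id = f.id
            WHERE i.category = ? ORDER BY i.name""", (category,)).fetchall()
    ```

    A query such as `search("microscopy")` then answers the kind of distribution and redundancy questions the database is meant to support (which institutions host which instrument classes).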

  8. Damage assessment of bridge infrastructure subjected to flood-related hazards

    Science.gov (United States)

    Michalis, Panagiotis; Cahill, Paul; Bekić, Damir; Kerin, Igor; Pakrashi, Vikram; Lapthorne, John; Morais, João Gonçalo Martins Paulo; McKeogh, Eamon

    2017-04-01

    Transportation assets represent a critical component of society's infrastructure systems. Flood-related hazards are considered one of the main climate change impacts on highway and railway infrastructure, threatening the security and functionality of transportation systems. Of such hazards, flood-induced scour is a primary cause of bridge collapses worldwide and one of the most complex and challenging water flow and erosion phenomena, leading to structural instability and ultimately catastrophic failure. Evaluating scour risk under severe flood events is particularly challenging, considering that the depth of foundations is very difficult to determine in the water environment. The continual inspection, assessment and maintenance of bridges and other hydraulic structures under extreme flood events requires a multidisciplinary approach, including knowledge and expertise in hydraulics, hydrology, structural engineering, geotechnics and infrastructure management. The large number of bridges under a single management unit also highlights the need for efficient management, information sharing and self-informing systems to provide reliable, cost-effective flood and scour risk management. The "Intelligent Bridge Assessment Maintenance and Management System" (BRIDGE SMS) is an EU/FP7-funded project which aims to couple state-of-the-art scientific expertise in multidisciplinary engineering sectors with industrial knowledge in infrastructure management. This involves the application of integrated low-cost structural health monitoring systems to provide real-time information towards the development of an intelligent decision support tool and a web-based platform to assess and efficiently manage bridge assets. This study documents the technological experience and presents results obtained from the application of sensing systems focusing on the damage assessment of water hazards at bridges over watercourses in Ireland.
The applied instrumentation is interfaced with an open

  9. Hydrogen infrastructure development in The Netherlands

    International Nuclear Information System (INIS)

    Smit, R.; Weeda, M.; De Groot, A.

    2007-08-01

    Increasingly, people are considering what a hydrogen energy supply system would look like and how to build up to such a system. This paper presents work on modelling and simulation of current ideas among Dutch hydrogen stakeholders for a transition towards the widespread use of hydrogen energy. Based mainly on economic considerations, the ideas about a transition seem viable. It appears that, following the introduction of hydrogen in niche applications, the use of locally produced hydrogen from natural gas in stationary and mobile applications can yield an economic advantage over the conventional system, and can thus generate a demand for hydrogen. The demand for hydrogen can develop to such an extent that the construction of a large-scale hydrogen pipeline infrastructure, for the transport and distribution of hydrogen produced in large-scale production facilities, becomes economically viable. In 2050, the economic viability of a large-scale hydrogen pipeline infrastructure extends over 20-25 of the 40 regions into which The Netherlands is divided for modelling purposes. Investments in hydrogen pipelines for a fully developed hydrogen infrastructure are estimated to be in the range of 12,000-20,000 million euros

  10. Temperature profiles from XBT casts from a World-Wide distribution from MULTIPLE PLATFORMS from 1979-06-03 to 1988-05-27 (NODC Accession 8800182)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Temperature profiles were collected from XBT casts from a World-Wide distribution. Data were collected from MULTIPLE PLATFORMS from 03 June 1979 to 27 May 1988. Data...

  11. Women in EPOS: the role of women in a large pan-European Research Infrastructure for Solid Earth sciences

    Science.gov (United States)

    Calignano, Elisa; Freda, Carmela; Baracchi, Laura

    2017-04-01

    Women are outnumbered by men in geosciences senior research positions, but what is the situation if we consider large pan-European Research Infrastructures? With this contribution we want to show an analysis of the role of women in the implementation of the European Plate Observing System (EPOS): a planned research infrastructure for European Solid Earth sciences, integrating national and transnational research infrastructures to enable innovative multidisciplinary research. EPOS involves 256 national research infrastructures, 47 partners (universities and research institutes) from 25 European countries and 4 international organizations. The EPOS integrated platform demands significant coordination between diverse solid Earth disciplinary communities, national research infrastructures and the policies and initiatives they drive, geoscientists and information technologists. The EPOS architecture takes into account governance, legal, financial and technical issues and is designed so that the enterprise works as a single, but distributed, sustainable research infrastructure. A solid management structure is vital for the successful implementation and sustainability of EPOS. The internal organization relies on community-specific Work Packages (WPs), transversal WPs in charge of the overall EPOS integration and implementation, several governing, executive and advisory bodies, a Project Management Office (PMO) and the Project Coordinator. Driven by the timely debate on gender balance and the commitment of the European Commission to promote gender equality in research and innovation, we decided to conduct a mapping exercise on a project that crosses European national borders and that brings together diverse geoscience disciplines under one management structure. We present an analysis of women's representation in decision-making positions in each EPOS Work Package (WP Leader, proxy, legal, financial and IT contact persons), in the Boards and Councils and in the PMO

  12. Security Services Lifecycle Management in on-demand infrastructure services

    NARCIS (Netherlands)

    Demchenko, Y.; de Laat, C.; Lopez, D.R.; García-Espín, J.A.; Qiu, J.; Zhao, G.; Rong, C.

    2010-01-01

    Modern e-Science and high technology industry require high-performance and complicated network and computer infrastructure to support distributed collaborating groups of researchers and applications that should be provisioned on-demand. The effective use and management of the dynamically provisioned

  13. Wireless intelligent network: infrastructure before services?

    Science.gov (United States)

    Chu, Narisa N.

    1996-01-01

    The Wireless Intelligent Network (WIN) intends to take advantage of the Advanced Intelligent Network (AIN) concepts and products developed for wireline communications. However, progress of AIN deployment has been slow due to the many barriers that exist in the traditional wireline carriers' deployment procedures and infrastructure. The success of AIN has not been truly demonstrated. The AIN objectives and directions are applicable to the wireless industry, although the plans and implementations could be significantly different. This paper points out WIN characteristics in architecture, flexibility, deployment, and value to customers. In order to succeed, the technology-driven AIN concept has to be reinforced by market-driven WIN services. An infrastructure suitable for the WIN will contain elements that are foreign to the wireline network. The deployment process is expected to be seeded with revenue-generating services. Standardization will be achieved by simplifying and incorporating the IS-41C, AIN, and Intelligent Network CS-1 recommendations. Integration of existing and future systems poses the biggest challenge of all. Service creation has to be complemented by a service deployment process, which heavily impacts the carriers' infrastructure. WIN deployment will likely start from an Intelligent Peripheral and a Service Control Point, and migrate to a Service Node when sufficient triggers are implemented in the mobile switch for distributed call control. The struggle to move forward will be based not on technology, but rather on the impact to existing infrastructure.

  14. Damage to offshore infrastructure in the Gulf of Mexico by hurricanes Katrina and Rita

    Science.gov (United States)

    Cruz, A. M.; Krausmann, E.

    2009-04-01

    The damage inflicted by hurricanes Katrina and Rita on the Gulf of Mexico's (GoM) oil and gas production, both onshore and offshore, has shown the proneness of industry to Natech accidents (natural-hazard-triggered hazardous-materials releases). In order to contribute towards a better understanding of Natech events, we assessed the damage to, and hazardous-materials releases from, offshore oil and natural-gas platforms and pipelines induced by hurricanes Katrina and Rita. Data were obtained through a review of published literature and interviews with government officials and industry representatives from the affected region. We also reviewed over 60,000 records of reported hazardous-materials releases from the National Response Center's (NRC) database to identify and analyze the hazardous-materials releases directly attributed to offshore oil and gas platforms and pipelines affected by the two hurricanes. Our results show that hurricanes Katrina and Rita destroyed at least 113 platforms and severely damaged at least 53 others. Sixty percent of the facilities destroyed were built 30 or more years earlier, prior to the adoption of the more stringent design standards that went into effect after 1977. The storms also destroyed 5 drilling rigs and severely damaged 19 mobile offshore drilling units (MODUs). Some 19 MODUs lost their moorings and became adrift during the storms; this not only posed a danger to existing facilities, but the dragging anchors also damaged pipelines and other infrastructure. Structural damage to platforms included toppling of sections and tilting or leaning of platforms. Possible causes of failure of structural and non-structural components of platforms included loading caused by wave inundation of the deck. Failure of rigs attached to platforms was also observed, resulting in significant damage to the platform or adjacent infrastructure, as well as damage to equipment, living quarters and helipads.
The failures are attributable to tie-down components

  15. The EGEE user support infrastructure

    CERN Document Server

    Antoni, T; Mills, A

    2007-01-01

    User support in a grid environment is a challenging task due to the distributed nature of the grid. The variety of users and VOs adds further to the challenge. One can find support requests from grid beginners, users with specific applications, site administrators, or grid monitoring operators. With the GGUS infrastructure, EGEE provides a portal where users can find support in their daily use of the grid. Current use of the system has shown that this goal has been achieved. The grid user support model in EGEE can be captioned ‘regional support with central coordination’. Users can submit a support request to the central GGUS service, to their Regional Operations Centre (ROC), or to their Virtual Organisation helpdesks. Within GGUS there are appropriate support groups for all support requests. The ROCs, VOs and the other project-wide groups such as middleware groups (JRA), network groups (NA), service groups (SA) and other grid infrastructures (OSG, NorduGrid, etc.) are connected via a...

  16. Cyberspace and Critical Information Infrastructures

    Directory of Open Access Journals (Sweden)

    Dan COLESNIUC

    2013-01-01

    Every economy of an advanced nation relies on information systems and interconnected networks; thus, in order to ensure the prosperity of a nation, making cyberspace a secure place becomes as crucial as securing society. Cyber security means ensuring the safety of this cyberspace from threats which can take different forms, such as stealing secret information from national companies and government institutions, attacking infrastructure vital for the functioning of the nation, or attacking the privacy of the single citizen. The critical information infrastructure (CII) represents the indispensable "nervous system" that allows modern societies to work and live. Without it, there would be no distribution of energy, no services like banking or finance, no air traffic control, and so on. At the same time, in the development process of CII, security was never considered a top priority, and for this reason it is exposed to a high risk from organized crime.

  17. Green(ing) infrastructure

    CSIR Research Space (South Africa)

    Van Wyk, Llewellyn V

    2014-03-01

    …the generation of electricity from renewable sources such as wind, water and solar. Grey infrastructure – in the context of storm water management, grey infrastructure can be thought of as the hard, engineered systems to capture and convey runoff…, pumps, and treatment plants. Green infrastructure reduces energy demand by reducing the need to collect and transport storm water to a suitable discharge location. In addition, green infrastructure such as green roofs, street trees and increased…

  18. CUMULVS: Collaborative infrastructure for developing distributed simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kohl, J.A.; Papadopoulos, P.M.; Geist, G.A. II

    1997-03-01

    The CUMULVS software environment provides remote collaboration among scientists by allowing them to dynamically attach to, view, and steer a running simulation. Users can interactively examine intermediate results on demand, saving effort for long-running applications gone awry. In addition, it provides fault tolerance to distributed applications via user-directed checkpointing, heterogeneous task migration and automatic restart. This talk describes CUMULVS and how this tool benefits scientists developing large distributed applications.

  19. Generic FPGA-Based Platform for Distributed IO in Proton Therapy Patient Safety Interlock System

    Science.gov (United States)

    Eichin, Michael; Carmona, Pablo Fernandez; Johansen, Ernst; Grossmann, Martin; Mayor, Alexandre; Erhardt, Daniel; Gomperts, Alexander; Regele, Harald; Bula, Christian; Sidler, Christof

    2017-06-01

    At the Paul Scherrer Institute (PSI) in Switzerland, cancer patients are treated with protons. Proton therapy at PSI has a long history, having started in the 1980s. More than 30 years later, a new gantry has recently been installed in the existing facility. This new machine has been delivered by an industry partner. A major challenge is the integration of the vendor's safety system into the existing PSI environment. Different interface standards and the complexity of the system made it necessary to find a technical solution connecting an industry system to the existing PSI infrastructure. A novel, very flexible distributed IO system based on field-programmable gate array (FPGA) technology was developed, supporting many different IO interface standards and high-speed communication links connecting the device to a PSI-standard VMEbus (VersaModule Eurocard bus) input/output controller. This paper summarizes the features of the hardware technology and the FPGA framework with its high-speed communication link protocol, and presents our first measurement results.

  20. Securing the United States' power infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Happenny, Sean F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-08-01

    The United States’ power infrastructure is aging, underfunded, and vulnerable to cyber attack. Emerging smart grid technologies may take some of the burden off of existing systems and make the grid as a whole more efficient, reliable, and secure. The Pacific Northwest National Laboratory (PNNL) is funding research into several aspects of smart grid technology and grid security, creating a software simulation tool that will allow researchers to test power distribution networks utilizing different smart grid technologies to determine how the grid and these technologies react under different circumstances. Demonstrating security in embedded systems is another research area PNNL is tackling. Many of the systems controlling the U.S. critical infrastructure, such as the power grid, lack integrated security and the networks protecting them are becoming easier to breach. Providing a virtual power substation network to each student team at the National Collegiate Cyber Defense Competition, thereby supporting the education of future cyber security professionals, is another way PNNL is helping to strengthen the security of the nation’s power infrastructure.

  1. The Danish transport infrastructure 2030. Report from the Danish Infrastructure Commission; Danmarks transportinfrastruktur 2030. Betaenkning fra Infrastrukturkommissionen

    Energy Technology Data Exchange (ETDEWEB)

    2008-01-15

    …production and distribution of goods become simpler and less expensive, because faster and more reliable delivery to the consumers is ensured. And high mobility contributes to businesses being able to attract the right manpower. At the same time, it is important to be aware that the development in the climate and environmental areas may influence our planning of infrastructure as well as urban planning. (au)

  2. The Impact of Distributed Generation on Distribution Networks ...

    African Journals Online (AJOL)

    Their advantages are the ability to reduce or postpone the need for investment in the transmission and distribution infrastructure when optimally located; the ability to reduce technical losses within the transmission and distribution networks as well as general improvement in power quality and system reliability. This paper ...

  3. A Survey of Software Infrastructures and Frameworks for Ubiquitous Computing

    Directory of Open Access Journals (Sweden)

    Christoph Endres

    2005-01-01

    In this survey, we discuss 29 software infrastructures and frameworks which support the construction of distributed interactive systems. They range from small projects with one implemented prototype to large-scale research efforts, and they come from the fields of Augmented Reality (AR), Intelligent Environments, and Distributed Mobile Systems. In their own way, they can all be used to implement various aspects of the ubiquitous computing vision as described by Mark Weiser [60]. This survey is meant as a starting point for new projects, in order to choose an existing infrastructure for reuse, or to get an overview before designing a new one. It tries to provide a systematic, relatively broad (and necessarily not very deep) overview, while pointing to relevant literature for in-depth study of the systems discussed.

  4. Computing platforms for software-defined radio

    CERN Document Server

    Nurmi, Jari; Isoaho, Jouni; Garzia, Fabio

    2017-01-01

    This book addresses Software-Defined Radio (SDR) baseband processing from the computer architecture point of view, providing a detailed exploration of different computing platforms by classifying different approaches, highlighting the common features related to SDR requirements and by showing pros and cons of the proposed solutions. Coverage includes architectures exploiting parallelism by extending single-processor environment (such as VLIW, SIMD, TTA approaches), multi-core platforms distributing the computation to either a homogeneous array or a set of specialized heterogeneous processors, and architectures exploiting fine-grained, coarse-grained, or hybrid reconfigurability. Describes a computer engineering approach to SDR baseband processing hardware; Discusses implementation of numerous compute-intensive signal processing algorithms on single and multicore platforms; Enables deep understanding of optimization techniques related to power and energy consumption of multicore platforms using several basic a...

  5. KeyWare: an open wireless distributed computing environment

    Science.gov (United States)

    Shpantzer, Isaac; Schoenfeld, Larry; Grindahl, Merv; Kelman, Vladimir

    1995-12-01

    Deployment of distributed applications in the wireless domain lacks the equivalent tools, methodologies, architectures, and network management that exist in LAN-based applications. A wireless distributed computing environment (KeyWareTM) based on intelligent agents within a multiple-client multiple-server scheme was developed to resolve this problem. KeyWare renders concurrent application services to wireline and wireless client nodes encapsulated in multiple paradigms such as message delivery, database access, e-mail, and file transfer. These services and paradigms are optimized to cope with temporal and spatial radio coverage, high latency, limited throughput and transmission costs. A unified network management paradigm for both wireless and wireline facilitates seamless extension of LAN-based management tools to include wireless nodes. A set of object-oriented tools and methodologies enables direct asynchronous invocation of agent-based services, supplemented by tool-sets matched to supported KeyWare paradigms. The open architecture embodiment of KeyWare enables a wide selection of client node computing platforms, operating systems, transport protocols, radio modems and infrastructures while maintaining application portability.

  6. EVALUATION OF FREE PLATFORMS FOR DELIVERY OF MASSIVE OPEN ONLINE COURSES (MOOCS)

    Directory of Open Access Journals (Sweden)

    Airton ZANCANARO

    2017-01-01

    For the hosting, management and delivery of Massive Open Online Courses (MOOCs), a technological infrastructure is necessary. Various educational institutions do not have, or do not wish to invest in, such a structure, possibly because MOOCs are not yet part of official university programs but rather initiatives of a particular teacher or a research group. Focusing on this problem, this study seeks to identify platforms that make it possible to create, host and provide courses free of charge for the offeror; to find, in the respective literature, the basic requirements for MOOC platforms; and to evaluate the platforms based on the requirements raised. To identify the platforms, information was sought in scientific articles and websites dealing with the comparison of platforms and listing the existing MOOC providers. For the definition of evaluation requirements, the Web of Science and Scopus databases were searched for the term "Massive Open Online Courses". After applying some filters, 62 works that address platforms and technology were selected for analysis. The results comprise the identification of six platforms that allow courses to be offered free of charge, a proposal of 14 requirements for reviewing them, and a framework containing the evaluation of the identified platforms. This assessment is important since it provides a knowledge basis for selecting the platform most suitable to the chosen structure and method to store, manage and deliver courses in MOOC format.

  7. Using Docker Compose for the Simple Deployment of an Integrated Drug Target Screening Platform

    Directory of Open Access Journals (Sweden)

    List Markus

    2017-06-01

    Docker virtualization allows software tools to be executed in an isolated and controlled environment referred to as a container. In Docker containers, dependencies are provided exactly as intended by the developer and, consequently, they simplify the distribution of scientific software and foster reproducible research. The Docker paradigm is that each container encapsulates one particular software tool. However, to analyze complex biomedical data sets, it is often necessary to combine several software tools into elaborate workflows. To address this challenge, several Docker containers need to be instantiated and properly integrated, which complicates the software deployment process unnecessarily. Here, we demonstrate how an extension to Docker, Docker Compose, can be used to mitigate these problems by providing a unified setup routine that deploys several tools in an integrated fashion. We demonstrate the power of this approach with the example of a Docker Compose setup for a drug target screening platform consisting of five integrated web applications and shared infrastructure, deployable in just two lines of code.
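
    The "two lines of code" deployment described above hinges on a single Compose file that declares every container and the shared infrastructure between them. As a hedged sketch (the service names and images below are hypothetical and not taken from the paper's platform), a Compose file wiring a web application to its database might look like:

    ```yaml
    # docker-compose.yml -- hypothetical two-service stack.
    # 'webapp' serves the user interface; 'db' is the shared database it needs.
    version: "3"
    services:
      webapp:
        image: example/webapp:latest   # placeholder image name
        ports:
          - "8080:80"                  # expose the app on the host
        depends_on:
          - db                         # start the database first
      db:
        image: postgres:9.6
        environment:
          POSTGRES_PASSWORD: example   # illustrative only; use secrets in practice
    ```

    With such a file in a repository, the whole stack starts with a `git clone` followed by `docker-compose up -d`, which is the kind of two-line deployment the authors describe.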

  8. Using Docker Compose for the Simple Deployment of an Integrated Drug Target Screening Platform.

    Science.gov (United States)

    List, Markus

    2017-06-10

    Docker virtualization allows software tools to be executed in an isolated and controlled environment referred to as a container. In Docker containers, dependencies are provided exactly as intended by the developer and, consequently, they simplify the distribution of scientific software and foster reproducible research. The Docker paradigm is that each container encapsulates one particular software tool. However, to analyze complex biomedical data sets, it is often necessary to combine several software tools into elaborate workflows. To address this challenge, several Docker containers need to be instantiated and properly integrated, which complicates the software deployment process unnecessarily. Here, we demonstrate how an extension to Docker, Docker Compose, can be used to mitigate these problems by providing a unified setup routine that deploys several tools in an integrated fashion. We demonstrate the power of this approach with the example of a Docker Compose setup for a drug target screening platform consisting of five integrated web applications and shared infrastructure, deployable in just two lines of code.

  9. Greening infrastructure

    CSIR Research Space (South Africa)

    Van Wyk, Llewellyn V

    2014-10-01

    The development and maintenance of infrastructure is crucial to improving economic growth and quality of life (WEF 2013). Urban infrastructure typically includes bulk services such as water, sanitation and energy (typically electricity and gas...

  10. Biophysical, infrastructural and social heterogeneities explain spatial distribution of waterborne gastrointestinal disease burden in Mexico City

    Science.gov (United States)

    Baeza, Andrés; Estrada-Barón, Alejandra; Serrano-Candela, Fidel; Bojórquez, Luis A.; Eakin, Hallie; Escalante, Ana E.

    2018-06-01

    Due to unplanned growth, large extension and limited resources, most megacities in the developing world are vulnerable to hydrological hazards and infectious diseases caused by waterborne pathogens. Here we aim to elucidate the extent of the relation between the spatial heterogeneity of physical and socio-economic factors associated with hydrological hazards (flooding and scarcity) and the spatial distribution of gastrointestinal disease in Mexico City, a megacity with more than 8 million people. We applied spatial statistics and multivariate regression analyses to high resolution records of gastrointestinal diseases during two time frames (2007–2009 and 2010–2014). Results show a pattern of significant association between water flooding events and disease incidence in the city center (lowlands). We also found that in the periphery (highlands), higher incidence is generally associated with household infrastructure deficiency. Our findings suggest the need for integrated and spatially tailored interventions by public works and public health agencies, aimed to manage socio-hydrological vulnerability in Mexico City.
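
    The study's core method, relating disease incidence to flooding and infrastructure covariates via multivariate regression, can be sketched on synthetic data. The variables and coefficients below are invented for illustration and do not reproduce the paper's estimates:

    ```python
    import numpy as np

    # Synthetic illustration of a multivariate regression linking disease
    # incidence to two spatial covariates (flooding exposure and household
    # infrastructure deficiency). All numbers here are made up.
    rng = np.random.default_rng(1)
    n = 500
    flooding = rng.normal(size=n)    # standardized flooding exposure
    deficiency = rng.normal(size=n)  # standardized infrastructure deficiency
    incidence = 0.5 + 2.0 * flooding + 1.5 * deficiency + rng.normal(scale=0.1, size=n)

    # Ordinary least squares fit with an intercept column in the design matrix
    X = np.column_stack([np.ones(n), flooding, deficiency])
    beta, *_ = np.linalg.lstsq(X, incidence, rcond=None)
    print(beta)  # recovers roughly [0.5, 2.0, 1.5]
    ```

    In the actual study, significance of such coefficients (together with spatial statistics) is what supports statements like "higher incidence is associated with household infrastructure deficiency" for particular districts.
    
    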

  11. A Screen Space GPGPU Surface LIC Algorithm for Distributed Memory Data Parallel Sort Last Rendering Infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Loring, Burlen; Karimabadi, Homa; Rortershteyn, Vadim

    2014-07-01

    The surface line integral convolution (LIC) visualization technique produces dense visualizations of vector fields on arbitrary surfaces. We present a screen-space surface LIC algorithm for use in distributed-memory data-parallel sort-last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer, and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL, we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen-space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high-performance computing systems using data from turbulent plasma simulations.
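
    Stripped of the distributed-memory and GPU machinery, the LIC operation itself is compact: each output pixel averages an input noise texture along the streamline traced through that pixel. A minimal serial NumPy sketch of the general technique (not the paper's screen-space implementation) follows:

    ```python
    import numpy as np

    def lic(vx, vy, noise, length=10):
        """Average `noise` along streamlines of the field (vx, vy), tracing
        `length` unit steps forward and backward from every pixel."""
        h, w = noise.shape
        out = np.zeros_like(noise)
        for y in range(h):
            for x in range(w):
                acc, n = noise[y, x], 1
                for sign in (1.0, -1.0):          # forward and backward trace
                    px, py = float(x), float(y)
                    for _ in range(length):
                        u = vx[int(py), int(px)]
                        v = vy[int(py), int(px)]
                        mag = np.hypot(u, v)
                        if mag < 1e-9:            # stagnation point: stop tracing
                            break
                        px += sign * u / mag      # one unit step along the field
                        py += sign * v / mag
                        if not (0.0 <= px < w and 0.0 <= py < h):
                            break                 # left the texture
                        acc += noise[int(py), int(px)]
                        n += 1
                out[y, x] = acc / n               # box-filter average along the line
        return out

    # A uniform horizontal field smears random noise into horizontal streaks.
    streaks = lic(np.ones((16, 16)), np.zeros((16, 16)),
                  np.random.default_rng(0).random((16, 16)))
    ```

    Because each output pixel is computed independently, the loops parallelize trivially, which is what makes GPU and sort-last-parallel variants attractive.
    
    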

  12. Flowscapes: Infrastructure as landscape, landscape as infrastructure. Graduation Lab Landscape Architecture 2012/2013

    NARCIS (Netherlands)

    Nijhuis, S.; Jauslin, D.; De Vries, C.

    2012-01-01

    Flowscapes explores infrastructure as a type of landscape and landscape as a type of infrastructure, and is focused on landscape architectonic design of transportation-, green- and water infrastructures. These landscape infrastructures are considered armatures for urban and rural development. With

  13. Temperature profile data from XBT casts in a world wide distribution from multiple platforms from 04 September 2002 to 18 November 2002 (NODC Accession 0000831)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Temperature profile data were collected using CTD casts from LYKES COMMANDER and other platforms in a world wide distribution from 04 September 2002 to 18 November...

  14. The impact of northern gas on North American gas infrastructure

    International Nuclear Information System (INIS)

    Letwin, S.

    2004-01-01

    The three business units that Enbridge operates are crude oil pipelines; natural gas liquids (NGL) transportation; and gas transmission and distribution. The need for more infrastructure will increase as the demand for natural gas increases. This presentation outlined the issues that surround and sometimes impede infrastructure development. It also emphasized the need for northern gas supply at a time when conventional natural gas supplies are decreasing and demand is growing. Additional LNG supply is required along with new supply from Alaska, Mackenzie Delta and the east coast. The issue of a secure source of supply was discussed along with northern gas expectations. It is expected that Mackenzie Delta gas (1.2 bcf/day) will be available by 2008 to 2010 and Alaska North Slope gas (4 bcf/day) will be available from 2012 to 2014. Gas demand by industrial, residential, commercial and power generation sectors across North America was illustrated. The challenge lies in creating infrastructure to move the supply to where it is most in demand. General infrastructure issues were reviewed, such as prices, regulatory streamlining, lead times, stakeholder issues and supporting infrastructure. 19 figs

  15. Morphological indicators of growth stages in carbonates platform evolution: comparison between present-day and Miocene platforms of Northern Borneo, Malaysia.

    Science.gov (United States)

    Pierson, B.; Menier, D.; Ting, K. K.; Chalabi, A.

    2012-04-01

    Satellite images of present-day reefs and carbonate platforms of the Celebes Sea, east of Sabah, Malaysia, exhibit large-scale features indicative of the recent evolution of the platforms. These include: (1) multiple, sub-parallel reef rims at the windward margin, suggestive of back-stepping of the platform margin; (2) contraction of the platform, possibly as a result of recent sea-level fluctuations; (3) colonization of the internal lagoons by polygonal reef structures; and (4) fragmentation of the platforms and creation of deep channels separating platforms that used to be part of a single entity. These features are analogous to what has been observed on seismic attribute maps of Miocene carbonate platforms of Sarawak. An analysis of several growth stages of a large Miocene platform, referred to as the Megaplatform, shows that the platform evolved as a function of syn-depositional tectonic movements and sea-level fluctuations. These resulted in back-stepping of the margin, illustrated by multiple reef rims; contraction of the platform; the development of polygonal structures currently interpreted as karstic in origin; and fragmentation of the Megaplatform into three sub-entities separated by deep channels, preceding the final demise of the whole platform. Comparing similar features on present-day and Miocene platforms leads to a better understanding of the growth history of Miocene platforms and to refined predictability of the distribution of reservoir and non-reservoir facies.

  16. Integrating sea floor observatory data: the EMSO data infrastructure

    Science.gov (United States)

    Huber, Robert; Azzarone, Adriano; Carval, Thierry; Doumaz, Fawzi; Giovanetti, Gabriele; Marinaro, Giuditta; Rolin, Jean-Francois; Beranzoli, Laura; Waldmann, Christoph

    2013-04-01

    The European research infrastructure EMSO is a European network of fixed-point, deep-seafloor and water-column observatories deployed at key sites of the European continental margin and the Arctic. It aims to provide the technological and scientific framework for the investigation of environmental processes related to the interaction between the geosphere, biosphere, and hydrosphere, and for their sustainable management by long-term monitoring, also with real-time data transmission. Since 2006, EMSO has been on the ESFRI (European Strategy Forum on Research Infrastructures) roadmap, and it entered its construction phase in 2012. Within this framework, EMSO is contributing to large infrastructure integration projects such as ENVRI and COOPEUS. The EMSO infrastructure is geographically distributed across key sites in European waters, spanning from the Arctic, through the Atlantic and Mediterranean Sea, to the Black Sea. It presently consists of thirteen sites which have been identified by the scientific community according to their importance with respect to marine ecosystems, climate change and marine geohazards. The data infrastructure for EMSO is being designed as a distributed system. Presently, EMSO data collected during experiments at each EMSO site are locally stored and organized in catalogues or relational databases run by the responsible regional EMSO nodes. Three major institutions and their data centers currently offer access to EMSO data: PANGAEA, INGV and IFREMER. In continuation of the IT activities performed during EMSO's twin project ESONET, EMSO is now implementing the ESONET data architecture within an operational EMSO data infrastructure. EMSO aims to be compliant with relevant marine initiatives such as MyOceans, EUROSITES, EuroARGO, SEADATANET and EMODNET, as well as to meet the requirements of international and interdisciplinary projects such as COOPEUS, ENVRI, EUDAT and iCORDI. A major focus is therefore set on standardization and

  17. Multi-Level Data-Security and Data-Protection in a Distributed Search Infrastructure for Digital Medical Samples.

    Science.gov (United States)

    Witt, Michael; Krefting, Dagmar

    2016-01-01

    Human sample data are stored in biobanks, with software managing digitally derived sample data. When these stand-alone components are connected and a search infrastructure is employed, users become able to collect required research data from different data sources. Data protection, patient rights, data heterogeneity and access control are major challenges for such an infrastructure. This dissertation will investigate concepts for a multi-level security architecture to comply with these requirements.

  18. NADIM-Travel: A Multiagent Platform for Travel Services Aggregation

    OpenAIRE

    Ben Ameur, Houssein; Bédard, François; Vaucher, Stéphane; Kropf, Peter; Chaib-draaa, Brahim; Gérin-Lajoie, Robert

    2010-01-01

    With the Internet as a growing channel for travel services distribution, sophisticated travel services aggregators are increasingly in demand. A travel services aggregation platform should be able to manage the heterogeneous characteristics of the many existing travel services. It should also be as scalable, robust, and flexible as possible. Using multiagent technology, we designed and implemented a multiagent platform for travel services aggregation called NADIM-Travel. In this platform, a p...

  19. Temperature profile data from XBT casts in a world wide distribution from multiple platforms from 20 February 2003 to 24 April 2003 (NODC Accession 0001019)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Temperature profile data were collected using CTD casts from LYKES RAIDER and other platforms in a world wide distribution from 20 February 2003 to 24 April 2003....

  20. Temperature profile data from XBT casts from MULTIPLE PLATFORMS from a World-Wide distribution from 02 January 1990 to 31 December 1995 (NODC Accession 0001268)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — XBT data were collected from MULTIPLE PLATFORMS from a World-Wide distribution from 02 January 1990 to 31 December 1995. Data were submitted by the UK Hydrographic...

  1. KubeNow: an On-Demand Cloud-Agnostic Platform for Microservices-Based Research Environments

    OpenAIRE

    Capuccini, Marco; Larsson, Anders; Carone, Matteo; Novella, Jon Ander; Sadawi, Noureddin; Gao, Jianliang; Toor, Salman; Spjuth, Ola

    2018-01-01

    Microservices platforms, such as Kubernetes, have received a great deal of attention lately, as they offer elastic and resilient orchestration for services implemented as portable, light-weight software containers. While current solutions for cloud-based Kubernetes deployments mainly focus on maintaining long-running infrastructure, aimed to sustain highly-available services, the scientific community is embracing software containers for more demanding data analysis. To this extent, the...

  2. Enabling European Archaeological Research: The ARIADNE E-Infrastructure

    Directory of Open Access Journals (Sweden)

    Nicola Aloia

    2017-03-01

    Research e-infrastructures, digital archives and data services have become important pillars of a scientific enterprise that in recent decades has become ever more collaborative, distributed and data-intensive. The archaeological research community has been an early adopter of digital tools for data acquisition, organisation, analysis and presentation of the research results of individual projects. However, the provision of e-infrastructure and services for data sharing, discovery, access and re-use has lagged behind. This situation is being addressed by ARIADNE: the Advanced Research Infrastructure for Archaeological Dataset Networking in Europe. This EU-funded network has developed an e-infrastructure that enables data providers to register and provide access to their resources (datasets, collections) through the ARIADNE data portal, facilitating discovery, access and other services across the integrated resources. This article describes the current landscape of data repositories and services for archaeologists in Europe, and the issues that make interoperability between them difficult to realise. The results of the ARIADNE surveys on users' expectations and requirements are also presented. The main section of the article describes the architecture of the e-infrastructure, core services (data registration, discovery and access) and various other extant or experimental services. The on-going evaluation of the data integration and services is also discussed. Finally, the article summarises lessons learned, and outlines the prospects for wider engagement of the archaeological research community in sharing data through ARIADNE.

  3. German crowd-investing platforms: Literature review and survey

    Directory of Open Access Journals (Sweden)

    David Grundy

    2016-12-01

    This article presents a comprehensive overview of the current German crowd-investing market, drawing on a data set of 31 crowd-investing platforms and the analysis of 265 completed projects. While the crowd-investing market still represents only a niche in the German venture capital market, there is potential for an increase in both market volume and average project investment. Market share is distributed among a few crowd-investing platforms, with high entry barriers for new platforms, although platforms that specialise in certain sectors have managed to enter the market successfully. German crowd-investing platforms are found to promote mainly internet-based enterprises (36%), followed by projects in real estate (24%) and green projects (19%), with a median of 100,000 euro raised per project.

  4. User and Machine Authentication and Authorization Infrastructure for Distributed Wireless Sensor Network Testbeds

    Directory of Open Access Journals (Sweden)

    Gerald Wagenknecht

    2013-03-01

    The intention of an authentication and authorization infrastructure (AAI) is to simplify and unify access to different web resources. With a single login, a user can access web applications at multiple organizations. The Shibboleth authentication and authorization infrastructure is a standards-based, open-source software package for web single sign-on (SSO) across or within organizational boundaries. It allows service providers to make fine-grained authorization decisions for individual access to protected online resources. The Shibboleth system is a widely used AAI, but only supports protection of browser-based web resources. We have implemented a Shibboleth AAI extension to protect web services using the Simple Object Access Protocol (SOAP). Besides user authentication for browser-based web resources, this extension also provides user and machine authentication for web-service-based resources. Although implemented for a Shibboleth AAI, the architecture can be easily adapted to other AAIs.
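
    The fine-grained authorization decisions mentioned above are, at their core, checks of identity-provider-asserted user attributes against per-resource rules. A deliberately simplified sketch of such a decision (the attribute names and rule format are invented for illustration and are not Shibboleth's actual API):

    ```python
    # Hypothetical attribute-based authorization check, in the spirit of what a
    # service provider does with attributes asserted by an AAI identity provider.
    # Attribute names and the rule format below are invented for illustration.
    def authorize(attributes: dict, rules: dict) -> bool:
        """Grant access only if every rule's attribute has an allowed value."""
        return all(attributes.get(attr) in allowed for attr, allowed in rules.items())

    # Per-resource rule set: staff or faculty affiliation AND a testbed entitlement.
    rules = {"affiliation": {"staff", "faculty"}, "entitlement": {"testbed-user"}}

    print(authorize({"affiliation": "staff", "entitlement": "testbed-user"}, rules))    # True
    print(authorize({"affiliation": "student", "entitlement": "testbed-user"}, rules))  # False
    ```

    In a real deployment, the attribute assertion arrives in a signed SAML message rather than a plain dictionary, but the service provider's decision step has this shape.
    
    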

  5. Temperature profile data from XBT casts in a world wide distribution from multiple platforms from 02 April 2003 to 21 May 2003 (NODC Accession 0001042)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Temperature profile data were collected using XBT casts from SEA-LAND DEFENDER and other platforms in a world wide distribution from 02 April 2003 to 21 May 2003....

  6. Oceanic Platform of the Canary Islands: an ocean testbed for ocean energy converters

    Science.gov (United States)

    González, Javier; Hernández-Brito, Joaquín.; Llinás, Octavio

    2010-05-01

    The Oceanic Platform of the Canary Islands (PLOCAN) is a governmental consortium created to build and operate an off-shore infrastructure that facilitates deep-sea research and speeds up the development of the associated technology. The consortium is overseen by the Spanish Ministry of Science and Innovation and the Canarian Agency for Research and Innovation. The infrastructure consists of an oceanic platform located in an area with depths between 50 and 100 meters, close to the continental slope and four kilometers off the coast of Gran Canaria, in the archipelago of the Canary Islands. Construction will start during the first months of 2010 and is expected to be finished in mid-2011. PLOCAN serves five strategic lines: an integral observatory able to explore from the deep ocean to the atmosphere, an ocean technology testbed, a base for underwater vehicles, an innovation platform and a highly specialized training centre. Ocean energy is a suitable source to complement the limited energy mix of the archipelago of the Canary Islands, whose total population of around 2 million people is unequally distributed across seven islands; Gran Canaria and Tenerife together account for 80% of the total population, with about 800,000 people each. PLOCAN will contribute to the development of the ocean energy sector by establishing a marine testbed that allows prototype testing at sea under the meticulous monitoring network provided by the integral observatory, generating valuable information for developers. Reducing costs through integral project management is an essential objective to be reached, providing services such as transportation, customs and administrative permits. The ocean surface available for testing activities is around 8 km2, with depths from 50 to 100 meters, 4 km off the coast. The selected test areas have off-shore wind power conditions of around 500-600 W/m2 and wave power conditions of around 6 kW/m on the East coast and 10 kW/m on the North coast. Marine currents in the Canary Islands are

  7. ATLAS Metadata Infrastructure Evolution for Run 2 and Beyond

    CERN Document Server

    van Gemmeren, Peter; The ATLAS collaboration; Malon, David; Vaniachine, Alexandre

    2015-01-01

    ATLAS developed and employed for Run 1 of the Large Hadron Collider a sophisticated infrastructure for metadata handling in event processing jobs. This infrastructure profits from a rich feature set provided by the ATLAS execution control framework, including standardized interfaces and invocation mechanisms for tools and services, segregation of transient data stores with concomitant object lifetime management, and mechanisms for handling occurrences asynchronous to the control framework’s state machine transitions. This metadata infrastructure is evolving and being extended for Run 2 to allow its use and reuse in downstream physics analyses, analyses that may or may not utilize the ATLAS control framework. At the same time, multiprocessing versions of the control framework and the requirements of future multithreaded frameworks are leading to redesign of components that use an incident-handling approach to asynchrony. The increased use of scatter-gather architectures, both local and distributed, requires ...

  8. Engineering Information Infrastructure for Product Lifecycle Management

    Science.gov (United States)

    Kimura, Fumihiko

    For proper management of the total product life cycle, it is fundamentally important to systematize design and engineering information about product systems. For example, maintenance operations could be performed more efficiently if the appropriate parts-design information were available at the maintenance site. Such information shall be available as an information infrastructure for various kinds of engineering operations, and it should be easily accessible during the whole product life cycle, covering transportation, marketing, usage, repair/upgrade, take-back and recycling/disposal. Unlike traditional engineering databases, life cycle support information has several characteristic requirements, such as flexible extensibility, distributed architecture, multiple viewpoints, long-time archiving, and product usage information. Basic approaches for managing an engineering information infrastructure are investigated, and various information contents and associated life cycle applications are discussed.

  9. Nuclear Energy Infrastructure Database Description and User's Manual

    International Nuclear Information System (INIS)

    Heidrich, Brenden

    2015-01-01

    In 2014, the Deputy Assistant Secretary for Science and Technology Innovation initiated the Nuclear Energy (NE)–Infrastructure Management Project by tasking the Nuclear Science User Facilities, formerly the Advanced Test Reactor National Scientific User Facility, to create a searchable and interactive database of all pertinent NE-supported and -related infrastructure. This database, known as the Nuclear Energy Infrastructure Database (NEID), is used for analyses to establish needs, redundancies, efficiencies, distributions, etc., to best understand the utility of NE's infrastructure and inform the content of infrastructure calls. The Nuclear Science User Facilities developed the database by utilizing data and policy direction from a variety of reports from the U.S. Department of Energy, the National Research Council, the International Atomic Energy Agency, and various other federal and civilian resources. The NEID currently contains data on 802 research and development instruments housed in 377 facilities at 84 institutions in the United States and abroad. The effort to maintain and expand the database is ongoing. Detailed information on many facilities must be gathered from associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements. This document provides a short tutorial on the navigation of the NEID web portal at NSUF-Infrastructure.INL.gov.

  10. A Big Data Platform for Storing, Accessing, Mining and Learning Geospatial Data

    Science.gov (United States)

    Yang, C. P.; Bambacus, M.; Duffy, D.; Little, M. M.

    2017-12-01

    Big Data is becoming the norm in the geoscience domains. A platform that can efficiently manage, access, analyze, mine, and learn from big data to produce new information and knowledge is desired. This paper introduces our latest effort to develop such a platform, based on our past years' experience with cloud and high performance computing, analyzing big data, comparing big data containers, and mining big geospatial data for new information. The platform includes four layers: a) the bottom layer is a computing infrastructure with proper network, computer, and storage systems; b) the 2nd layer is a cloud computing layer, based on virtualization, that provides on-demand computing services to the upper layers; c) the 3rd layer consists of big data containers customized for dealing with different types of data and functionalities; d) the 4th layer is a big data presentation layer that supports the efficient management, access, analysis, mining, and learning of big geospatial data.

  11. Water and Carbon Footprints for Sustainability Analysis of Urban Infrastructure

    Science.gov (United States)

    Water and transportation infrastructures define the spatial distribution of urban population and economic activities. In this context, energy and water consumed per capita are tangible measures of how efficiently water and transportation systems are constructed and operated. At a hig...

  12. A Statistical Approach to Planning Reserved Electric Power for Railway Infrastructure Administration

    OpenAIRE

    Brabec, M. (Marek); Pelikán, E. (Emil); Konár, O. (Ondřej); Kasanický, I. (Ivan); Juruš, P. (Pavel); Sadil, J.; Blažek, P.

    2013-01-01

    One of the requirements on railway infrastructure administration is to provide electricity for day-to-day operation of railways. We propose a statistically based approach for the estimation of maximum 15-minute power within a calendar month for a given region. This quantity serves as a basis of contracts between railway infrastructure administration and electricity distribution system operator. We show that optimization of the prediction is possible, based on underlying loss function deriv...
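
    The quantity at the heart of the abstract above, the maximum 15-minute power within a calendar month, can be sketched in a few lines. This is an illustrative computation only: the sample data, function name and units are assumptions, and the paper's actual statistical model (with its loss-function-based optimization) is not reproduced here.

    ```python
    from datetime import datetime, timedelta
    from collections import defaultdict

    def max_quarter_hour_power(samples):
        """Given (timestamp, power_kW) samples, return the maximum
        15-minute mean power observed in each calendar month.

        Assumes samples arrive at a regular sub-15-minute resolution;
        names and units are illustrative, not taken from the paper."""
        windows = defaultdict(list)  # 15-minute window start -> samples in it
        for ts, p in samples:
            slot = ts.replace(minute=(ts.minute // 15) * 15, second=0, microsecond=0)
            windows[slot].append(p)
        monthly_max = defaultdict(float)
        for slot, values in windows.items():
            mean = sum(values) / len(values)
            key = (slot.year, slot.month)
            monthly_max[key] = max(monthly_max[key], mean)
        return dict(monthly_max)

    # Example: one hour of 1-minute readings with a spike in the third quarter hour
    start = datetime(2013, 1, 1, 0, 0)
    samples = [(start + timedelta(minutes=i), 100.0 + (50.0 if 30 <= i < 45 else 0.0))
               for i in range(60)]
    print(max_quarter_hour_power(samples))  # {(2013, 1): 150.0}
    ```

    A real reserved-power contract would be based on a predicted, not observed, maximum, which is where the statistical modelling in the paper comes in.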

  13. GEMSS: grid-infrastructure for medical service provision.

    Science.gov (United States)

    Benkner, S; Berti, G; Engelbrecht, G; Fingberg, J; Kohring, G; Middleton, S E; Schmidt, R

    2005-01-01

    The European GEMSS Project is concerned with the creation of medical Grid service prototypes and their evaluation in a secure service-oriented infrastructure for distributed on demand/supercomputing. Key aspects of the GEMSS Grid middleware include negotiable QoS support for time-critical service provision, flexible support for business models, and security at all levels in order to ensure privacy of patient data as well as compliance to EU law. The GEMSS Grid infrastructure is based on a service-oriented architecture and is being built on top of existing standard Grid and Web technologies. The GEMSS infrastructure offers a generic Grid service provision framework that hides the complexity of transforming existing applications into Grid services. For the development of client-side applications or portals, a pluggable component framework has been developed, providing developers with full control over business processes, service discovery, QoS negotiation, and workflow, while keeping their underlying implementation hidden from view. A first version of the GEMSS Grid infrastructure is operational and has been used for the set-up of a Grid test-bed deploying six medical Grid service prototypes including maxillo-facial surgery simulation, neuro-surgery support, radio-surgery planning, inhaled drug-delivery simulation, cardiovascular simulation and advanced image reconstruction. The GEMSS Grid infrastructure is based on standard Web Services technology with an anticipated future transition path towards the OGSA standard proposed by the Global Grid Forum. GEMSS demonstrates that the Grid can be used to provide medical practitioners and researchers with access to advanced simulation and image processing services for improved preoperative planning and near real-time surgical support.

  14. Essays on the Impacts of Geography and Institutions on Access to Energy and Public Infrastructure Services

    Science.gov (United States)

    Archibong, Belinda

    While previous literature has emphasized the importance of energy and public infrastructure services for economic development, questions surrounding the implications of unequal spatial distribution in access to these resources remain, particularly in the developing-country context. This dissertation provides evidence on the nature, origins and implications of this distribution, uniting three strands of research from the development and political economy, regional science and energy economics fields. It comprises three papers: on the spatial inequality of access to energy and infrastructure and its implications for conflict risk; on the historical institutional and biogeographical determinants of the current distribution of access to energy and public infrastructure services; and on the response of households to fuel price changes over time. Chapter 2 uses a novel survey dataset to provide evidence of spatial clustering of public infrastructure non-functionality at schools by geopolitical zone in Nigeria, with further implications for armed conflict risk in the region. Chapter 3 investigates the drivers of the results in Chapter 2, exploiting variation in the spatial distribution of precolonial institutions and geography in the region, to provide evidence of the long-term impacts of these factors on the current heterogeneity of access to public services. Chapter 4 addresses the policy implications of energy access, providing the first multi-year evidence on firewood demand elasticities in India, using the spatial variation in prices for estimation.

  15. GATECloud.net: a platform for large-scale, open-source text processing on the cloud.

    Science.gov (United States)

    Tablan, Valentin; Roberts, Ian; Cunningham, Hamish; Bontcheva, Kalina

    2013-01-28

    Cloud computing is increasingly being regarded as a key enabler of the 'democratization of science', because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research--GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security and fault tolerance. We also include a cost-benefit analysis and usage evaluation.

  16. The Study of Pallet Pooling Information Platform Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Jia-bin Li

    2018-01-01

    Full Text Available Effective implementation of a pallet pooling system needs a strong information platform to support it. Through an analysis of existing pallet pooling information platforms (PPIP), the paper points out that existing studies of PPIPs are mainly based on traditional IT infrastructures and technologies, which impose software, hardware, resource-utilization, and process restrictions. Because the advantages of cloud computing technology, such as strong computing power, high flexibility, and low cost, meet the requirements of the PPIP well, this paper proposes a PPIP architecture based on cloud computing with two parts: the user client and the cloud services. The cloud services comprise three layers: IaaS, PaaS, and SaaS. Finally, a method for deploying a PPIP based on cloud computing is proposed.

  17. Computational Platform on Amazon Web Services (AWS): Distributed Rendering

    Directory of Open Access Journals (Sweden)

    Gabriel Rojas-Albarracín

    2017-09-01

    Full Text Available Today, people demand ever higher image quality in different media formats (games, movies, animations). Higher definition usually requires processing larger images, which brings the need for increased computing power. This paper presents a case study of the implementation of a low-cost platform on the Amazon cloud for parallel processing of images and animations.

  18. Temperature profile data collected using XBT casts from multiple platforms in a world wide distribution from 07 November 2001 to 24 July 2002 (NODC Accession 0000762)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Temperature profile data were collected using XBT casts from OLEANDER, TAI HE, SEA-LAND ENTERPRISE, and other platforms in a world wide distribution. Data were...

  19. Managing a tier-2 computer centre with a private cloud infrastructure

    International Nuclear Information System (INIS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-01-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying private cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable for smaller sites as well. In this contribution we describe the private cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.

  20. Enhancing the Earth System Grid Authentication Infrastructure through Single Sign-On and Autoprovisioning

    Energy Technology Data Exchange (ETDEWEB)

    Siebenlist, Frank [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Williams, Dean N. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2009-01-01

    Climate scientists face an overarching need to efficiently access and manipulate climate model data. Increasingly, researchers must assemble and analyze large datasets that are archived in different formats on disparate platforms and must extract portions of datasets to compute statistical or diagnostic metrics in place. The need for a common virtual environment in which to access both climate model datasets and analysis tools is therefore keenly felt. The software infrastructure to support such an environment must not only provide ready access to climate data but must also facilitate the use of visualization software, diagnostic algorithms, and related resources. To this end, the Earth System Grid Center for Enabling Technologies (ESG-CET) was established in 2006 by the Scientific Discovery through Advanced Computing program of the U.S. Department of Energy through the Office of Advanced Scientific Computing Research and the Office of Biological and Environmental Research within the Office of Science. ESG-CET is working to advance climate science by developing computational resources for accessing and managing model data that are physically located in distributed multiplatform archives. In this paper, we discuss recent development and implementation efforts by the Earth System Grid (ESG) concerning its security infrastructure. ESG's requirements are to make user logon as easy as possible and to facilitate the integration of security services and Grid components for both developers and system administrators. To meet that goal, we leverage existing primary authentication mechanisms, deploy a 'lightweight' but secure OpenID WebSSO, deploy a 'lightweight' X.509-PKI, and use autoprovisioning to ease the burden of security configuration management. We are close to completing the associated development and deployment.

  1. Distributed Data Management on the Petascale using Heterogeneous Grid Infrastructures with DQ2

    CERN Document Server

    Branco, M; Salgado, P; Lassnig, M

    2008-01-01

    We describe Don Quijote 2 (DQ2), a new approach to the management of large scientific datasets by a dedicated middleware. This middleware is designed to handle data organisation and data movement on the petascale for the High-Energy Physics experiment ATLAS at CERN. DQ2 is able to maintain a well-defined quality of service in a scalable way, guarantees data consistency for the collaboration and bridges the gap between the EGEE, OSG and NorduGrid infrastructures to enable true interoperability. DQ2 is specifically designed to support the access and management of large scientific datasets produced by the ATLAS experiment using heterogeneous Grid infrastructures. The DQ2 middleware manages those datasets with global services, local site services and end-user interfaces. The global services, or central catalogues, are responsible for the mapping of individual files onto DQ2 datasets. The local site services are responsible for tracking files available on-site, managing data movement and guaranteeing consistency of...
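
    The two-tier bookkeeping described above (central catalogues mapping files onto datasets, site services tracking local replicas) can be illustrated with a toy sketch. Class and method names are invented for illustration and do not correspond to the actual DQ2 API.

    ```python
    class CentralCatalogue:
        """Toy model of DQ2-style bookkeeping: the central catalogue maps
        logical file names onto named datasets. Names are illustrative,
        not the real DQ2 interfaces."""
        def __init__(self):
            self.datasets = {}  # dataset name -> set of logical file names

        def register(self, dataset, files):
            self.datasets.setdefault(dataset, set()).update(files)

    class SiteService:
        """Per-site service tracking which files are locally available."""
        def __init__(self, site):
            self.site = site
            self.local_files = set()

        def missing(self, catalogue, dataset):
            # Files of a dataset not yet replicated to this site,
            # i.e. what a data-movement agent would need to transfer.
            return catalogue.datasets.get(dataset, set()) - self.local_files

    catalogue = CentralCatalogue()
    catalogue.register("mc08.evgen", {"f1.root", "f2.root", "f3.root"})

    site = SiteService("SITE-A")
    site.local_files = {"f1.root"}
    print(sorted(site.missing(catalogue, "mc08.evgen")))  # ['f2.root', 'f3.root']
    ```

    The real system adds subscriptions, transfer queues and consistency checks on top of this mapping, but the file-to-dataset indirection is the core idea.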

  2. Smart Circuit Breaker Communication Infrastructure

    Directory of Open Access Journals (Sweden)

    Octavian Mihai MACHIDON

    2017-11-01

    Full Text Available The expansion of the Internet of Things has fostered the development of smart technologies in fields such as power transmission and distribution systems (as in the Smart Grid) and home automation (the Smart Home concept). This paper addresses the network communication infrastructure for a Smart Circuit Breaker system, a novel application at the boundary of the two afore-mentioned systems (Smart Grid and Smart Home). Such a communication interface has high requirements from the functionality, performance and security points of view, given the large number of distributed connected elements and the real-time information transmission and system management. The paper describes the design and implementation of the data server, the Web interface and the embedded networking capabilities of the smart circuit breakers, underlining the protocols and communication technologies used.

  3. Experience using EPICS on PC platforms

    International Nuclear Information System (INIS)

    Hill, J.O.; Kasemire, K.U.

    1997-03-01

    The Experimental Physics and Industrial Control System (EPICS) has been widely adopted in the accelerator community. Although EPICS is available on many platforms, the majority of implementations have used UNIX workstations as clients and VME- or VXI-based processors for distributed input/output controllers. Recently, a significant portion of EPICS has been ported to personal computer (PC) hardware platforms running Microsoft's operating systems as well as Wind River Systems' real-time vxWorks operating system. This development should significantly reduce the cost of deploying EPICS systems, and the prospect of using EPICS together with the many high-quality commercial components available for PC platforms is also encouraging. A hybrid system using both PC and traditional platforms is currently being implemented at LANL for LEDA, the low-energy demonstration accelerator under construction as part of the Accelerator Production of Tritium (APT) project. To illustrate these developments, the authors compare their recent experience deploying a PC-based EPICS system with experience deploying similar systems based on traditional (UNIX-hosted) EPICS hardware and software platforms

  4. Temperature profile data collected using XBT casts from multiple platforms in a world wide distribution from 01 March 2002 to 26 August 2002 (NODC Accession 0000777)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Temperature profile data were collected using XBT casts from MELBOURNE STAR and other platforms in a world wide distribution. Data were collected from 01 March 2002...

  5. Securing Distributed Research

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Global science calls for global infrastructure. A typical large-scale research group will use a suite of international services and involve hundreds of collaborating institutes and users from around the world. How can these users access those services securely? How can their digital identities be established, verified and maintained? We will explore the motivation for distributed authentication and the ways in which research communities are addressing the challenges. We will discuss security incident response in distributed environments - a particular challenge for the operators of these infrastructures. Through this course you should gain an overview of federated identity technologies and protocols, including x509 certificates, SAML and OIDC.
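
    One of the federated-identity building blocks mentioned above, the OIDC ID token, is a JWT: three base64url-encoded segments joined by dots. The sketch below shows only that structure, using an invented, unsigned token; it deliberately skips signature verification, which real deployments must perform against the issuer's published keys.

    ```python
    import base64
    import json

    def decode_jwt_unverified(token):
        """Split a JWT (the format of OIDC ID tokens) into its header and
        claims. WARNING: no signature verification is done here; this is
        purely to illustrate the token layout."""
        def b64url_json(part):
            padded = part + "=" * (-len(part) % 4)  # restore stripped padding
            return json.loads(base64.urlsafe_b64decode(padded))
        header_b64, payload_b64, _signature = token.split(".")
        return b64url_json(header_b64), b64url_json(payload_b64)

    # Build a throwaway unsigned token with contrived, hypothetical values
    def enc(obj):
        return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

    token = ".".join([enc({"alg": "none"}),
                      enc({"iss": "https://idp.example.org", "sub": "user42"}),
                      ""])  # empty signature segment

    header, claims = decode_jwt_unverified(token)
    print(claims["sub"])  # user42
    ```

    SAML assertions and X.509 certificates carry comparable identity attributes in XML and ASN.1 respectively; OIDC's JSON/JWT encoding is what makes it attractive for lightweight web and mobile clients.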

  6. Is debt replacing equity in regulated privatized infrastructure in developing countries?

    OpenAIRE

    da Silva, Luis Correia; Estache, Antonio; Jarvela, Sakari

    2004-01-01

    The main purpose of this paper is to describe the evolution of the financing structure of regulated privatized utilities and transport companies. To do so, the authors rely on a sample of 121 utilities distributed over 16 countries, and 23 transport infrastructure operators and 23 transport services operators distributed over 23 countries. They show that leverage rates vary significantly a...

  7. On the Impact of using Public Network Communication Infrastructure for Voltage Control Coordination in Smart Grid Scenario

    DEFF Research Database (Denmark)

    Shahid, Kamal; Petersen, Lennart; Iov, Florin

    2017-01-01

    voltage controlled distribution system. A cost-effective way to connect the ReGen plants to the control center is to use the existing public network infrastructure. This paper, therefore, illustrates the impact of using the existing public network communication infrastructure for online voltage

  8. 3D spatial information infrastructure : The case of Port Rotterdam

    NARCIS (Netherlands)

    Zlatanova, S.; Beetz, J.

    2012-01-01

    The development and maintenance of the infrastructure, facilities, logistics and other assets of the Port of Rotterdam requires a broad spectrum of heterogeneous information. This information concerns features, which are spatially distributed above ground, underground, in the air and in the water.

  9. Armenia - Irrigation Infrastructure

    Data.gov (United States)

    Millennium Challenge Corporation — This study evaluates irrigation infrastructure rehabilitation in Armenia. The study separately examines the impacts of tertiary canals and other large infrastructure...

  10. Cyber Vulnerabilities Within Critical Infrastructure: The Flaws of Industrial Control Systems in the Oil and Gas Industry

    Science.gov (United States)

    Alpi, Danielle Marie

    The 16 sectors of critical infrastructure in the US are susceptible to cyber-attacks. Potential attacks come from internal and external threats and target the industrial control systems (ICS) of companies within critical infrastructure. Weaknesses in the energy sector's ICS, specifically in the oil and gas industry, can result in economic and ecological disaster. The purpose of this study was to establish means for oil companies to identify and stop cyber-attacks, specifically advanced persistent threats (APTs). This research reviewed current cyber vulnerabilities and ways in which a cyber-attack may be deterred. It found that there are insecure devices within ICS that are not regularly updated, so security issues have amassed; that safety procedures and the associated training are often neglected; and that jurisdiction over critical infrastructure is unclear. The recommendations this research offers are further examination of information-sharing methods, development of analytic platforms, and better methods for the implementation of defense-in-depth security measures.

  11. NCI's Transdisciplinary High Performance Scientific Data Platform

    Science.gov (United States)

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable across domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment for accessing this data, through the NCI supercomputer; a private cloud that supports both domain-focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as their future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), and agile enough to incorporate new technological advances and

  12. Nuclear Energy Infrastructure Database Fitness and Suitability Review

    Energy Technology Data Exchange (ETDEWEB)

    Heidrich, Brenden [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-03-01

    In 2014, the Deputy Assistant Secretary for Science and Technology Innovation (NE-4) initiated the Nuclear Energy-Infrastructure Management Project by tasking the Nuclear Science User Facilities (NSUF) to create a searchable and interactive database of all pertinent NE-supported or -related infrastructure. This database will be used for analyses to establish needs, redundancies, efficiencies, distributions, etc. in order to best understand the utility of NE's infrastructure and inform the content of the infrastructure calls. The NSUF developed the database by utilizing data and policy direction from a wide variety of reports from the Department of Energy, the National Research Council, the International Atomic Energy Agency and various other federal and civilian resources. The NEID contains data on 802 R&D instruments housed in 377 facilities at 84 institutions in the US and abroad. A Database Review Panel (DRP) was formed to review and provide advice on the development, implementation and utilization of the NEID. The panel comprises five members with expertise in nuclear energy-associated research, intended to represent the major constituencies associated with nuclear energy research: academia, industry, research reactors, national laboratories, and Department of Energy program management. The Nuclear Energy Infrastructure Database Review Panel concludes that the NSUF has succeeded in creating a capability and infrastructure database that identifies and documents the major nuclear energy research and development capabilities across the DOE complex. The effort to maintain and expand the database will be ongoing. Detailed information on many facilities must be gathered from associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements.

  13. Scalable Multi-Platform Distribution of Spatial 3d Contents

    Science.gov (United States)

    Klimke, J.; Hagedorn, B.; Döllner, J.

    2013-09-01

    Virtual 3D city models provide powerful user interfaces for the communication of 2D and 3D geoinformation. Providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner is still a challenging task, especially for mobile and web-based system environments, where the software and hardware configurations of target systems differ significantly. This makes it hard to provide fast, visually appealing renderings of 3D data across a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data, such as triangle meshes together with textures, to be delivered from server to client, which strongly limits the size and complexity of the models they can handle. In this paper, we introduce a new approach for the provisioning of massive virtual 3D city models on different platforms, namely web browsers, smartphones, and tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model by a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high-quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted away from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from the data transfer complexity; (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side; and (c) 3D city models can be easily deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.
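    The tile-addressing side of such a pre-rendered oblique-image map can be sketched as follows. This is a hypothetical minimal scheme, not the authors' implementation: the slippy-map-style tile grid, the zoom level, and the service URL are all assumptions.

    ```python
    import math

    def tile_index(lon: float, lat: float, zoom: int) -> tuple[int, int]:
        """Map a WGS84 coordinate to Web Mercator (slippy-map) tile indices."""
        n = 2 ** zoom
        x = int((lon + 180.0) / 360.0 * n)
        lat_rad = math.radians(lat)
        y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
        return x, y

    def tiles_for_bbox(west, south, east, north, zoom):
        """Enumerate the pre-rendered image tiles covering a bounding box."""
        x_min, y_min = tile_index(west, north, zoom)   # tile y grows southwards
        x_max, y_max = tile_index(east, south, zoom)
        return [(zoom, x, y)
                for x in range(x_min, x_max + 1)
                for y in range(y_min, y_max + 1)]

    # A thin client only requests finished image tiles, never raw 3D geometry
    # (the rendering-service URL below is a made-up placeholder):
    urls = [f"https://render.example.org/oblique/{z}/{x}/{y}.png"
            for z, x, y in tiles_for_bbox(13.30, 52.46, 13.50, 52.56, 14)]
    ```

    A client built this way only ever downloads finished images, so the device never needs to parse or render triangle meshes, which is what decouples model complexity from transfer complexity.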

  14. Optimizing Mexico’s Water Distribution Services

    Science.gov (United States)

    2011-10-28

    government pursued a decentralization policy in the water distribution infrastructure sector. This is evident in Article 115 of the Mexican Constitution ...infrastructure, monitoring water ... Apogee Research International, Ltd., Innovative Financing of Water and Wastewater Infrastructure in the NAFTA Partners: A Focus on

  15. A software platform to develop and execute kitting tasks on industrial cyber-physical systems

    DEFF Research Database (Denmark)

    Rovida, Francesco

    2017-01-01

    The current material handling infrastructure associated with manufacturing and assembly operations still relies heavily on human work for highly repetitive tasks. A major contributing factor to the low level of automation is that current manufacturing robots have little or no understanding of ... A platform where skills, similarly to computer or smartphone applications, can be installed on and removed from heterogeneous robots in a few elementary steps...

  16. Intelligent social infrastructure technology. Infrastructure technology support ultra-reliable society; Chiteki shakai kiban kogaku gijutsu. Choanshin shakai wo sasaeru kiban gijutsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-03-01

    This survey was conducted to construct the core of intelligent social infrastructure technology (ISIT) and to investigate its practical application in industry and society. Realizing an ultra-safe and ultra-reliable society requires ISIT that can integrate the various social infrastructures, such as architecture, cities, energy, and the lifeline systems required for living. For the systematization of cities, intelligent information must be processed and controlled by sharing integrated, transverse information on logistics, lifelines, communication, monitoring, and control. In communication engineering, centralized systems are rapidly being converted into distributed network systems. In mechanical engineering, intelligent control and robot technology are required. In architectural engineering, a concept that goes beyond conventional antiseismic structural design is being investigated. A new information technology must be developed that provides intelligent social infrastructures by seamlessly merging information networks and the physical world. ISIT is thus essential for constructing an intelligent and ultra-reliable society consisting of these integrated and organized networks. 84 refs., 68 figs., 6 tabs.

  17. Sustainable infrastructure system modeling under uncertainties and dynamics

    Science.gov (United States)

    Huang, Yongxi

    Infrastructure systems support human activities in transportation, communication, water use, and energy supply. This dissertation research focuses on critical transportation infrastructure and renewable energy infrastructure systems. The goal of the research is to improve the sustainability of these infrastructure systems, with an emphasis on economic viability, system reliability and robustness, and environmental impacts. The research on critical transportation infrastructure concerns the development of strategically robust resource allocation strategies in an uncertain decision-making environment, considering both uncertain service availability and accessibility. The study explores the performance of different modeling approaches (i.e., deterministic, stochastic programming, and robust optimization) to reflect various risk preferences. The models are evaluated in a case study of Singapore, and the results demonstrate that stochastic modeling methods in general offer more robust allocation strategies than deterministic approaches in achieving high coverage of critical infrastructures under risk. This general modeling framework can be applied to other emergency service applications, such as locating medical emergency services. The renewable energy infrastructure research aims to answer the following key questions: (1) Is renewable energy an economically viable solution? (2) What are the energy distribution and infrastructure system requirements to support such energy supply systems while hedging against potential risks? (3) How does the energy system adapt to the dynamics of evolving technology and societal needs in the transition to a renewable-energy-based society? The study of renewable energy system planning with risk management incorporates risk management into the strategic planning of the supply chains. The physical design and operational management are integrated as a whole in seeking mitigations against the

  18. THE IMPACT OF ECONOMIC INFRASTRUCTURE ON LONG TERM ECONOMIC GROWTH IN BOTSWANA

    Directory of Open Access Journals (Sweden)

    Strike Mbulawa

    2017-02-01

    Full Text Available The growth rate of the Botswana economy has slowed in recent years. This has been explained by weak global demand for minerals, subdued commodity prices, and persistent electricity supply problems. The government is making efforts to diversify the economy to tap other sources of growth, and has launched two initiatives to boost growth: increased expenditure on roads and improved generation of electricity. The literature has failed to agree on the causal linkage between growth and infrastructure development: previous studies employed different measures of infrastructure development and different models, resulting in conflicting findings. As a point of departure, this study uses a log-linear model and different measures of growth and infrastructure to examine the link between the two variables in the context of Botswana. Using a vector error correction model and ordinary least squares, the study finds that long-term economic growth is explained by both measures of infrastructure (electricity distribution and maintenance of roads), with the impact of the former more pronounced than that of the latter. The evidence supports the infrastructure-led growth hypothesis.
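    The estimation step behind such a log-linear specification can be illustrated on synthetic data. This is a hedged sketch: the variable names and coefficients below are invented, and the paper's VECM estimation is not reproduced, only plain OLS on logged variables.

    ```python
    import random

    # Hypothetical illustration of a log-linear growth specification
    # ln(GDP) = b0 + b1*ln(ELEC) + b2*ln(ROADS) + e   (synthetic data only)
    random.seed(0)
    n = 200
    rows, y = [], []
    for _ in range(n):
        ln_elec = random.gauss(5.0, 0.3)   # log electricity distributed (assumed)
        ln_road = random.gauss(4.0, 0.2)   # log road-maintenance spending (assumed)
        rows.append([1.0, ln_elec, ln_road])
        y.append(1.0 + 0.6 * ln_elec + 0.2 * ln_road + random.gauss(0, 0.05))

    def ols(X, yv):
        """Solve the normal equations (X'X) b = X'y by Gauss-Jordan elimination."""
        k = len(X[0])
        A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
        b = [sum(r[i] * yi for r, yi in zip(X, yv)) for i in range(k)]
        for i in range(k):
            p = A[i][i]                     # pivot (positive: X'X is pos. definite)
            A[i] = [v / p for v in A[i]]
            b[i] /= p
            for j in range(k):
                if j != i:
                    f = A[j][i]
                    A[j] = [vj - f * vi for vj, vi in zip(A[j], A[i])]
                    b[j] -= f * b[i]
        return b

    b0, b1, b2 = ols(rows, y)   # b1, b2: elasticities w.r.t. each infrastructure measure
    ```

    The point of logging both sides is that the slope coefficients read directly as elasticities, which is why studies of this kind compare their magnitudes.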

  19. Alternative energy and distributed generation: thinking generations ahead

    International Nuclear Information System (INIS)

    Hunt, P.D.

    2001-01-01

    Alternative energy will be discussed in the context of distributed generation, defined as a delivery platform for micro-power generation, close to the end-users, that can also supplement regional electricity grids. Many references in the paper pertain to Alberta, for two reasons: first, the author's familiarity with it, and more importantly, Alberta is the first region in Canada to have de-regulated its electricity sector. De-regulation allows independent and smaller power generators to enter the market. Focusing on Alberta, with some references to other Canadian provinces and the USA, electricity consumption trends will be reviewed and the pressures to decentralize electricity generation discussed. Re-structuring of the electricity sector, convergence of the power generation and natural gas industries, advances in technologies, and environmental concerns are collectively contributing to the creation of a new business called 'Distributed Generation'. The efficiency benefits of combined heat and power associated with the more prominent emerging distributed generation technologies, such as micro-turbines and fuel cells, will be highlighted. Areas of research, development and demonstration that will enable the successful deployment of distributed generation will be suggested with respect to generation technologies, systems controls, supporting infrastructure, and socio-political barriers. Estimates of investments in the various alternative energy technologies will be presented. Using current trends and emerging technologies, the paper will conclude with some predictions of future scenarios. (author)

  20. INFRASTRUCTURE

    CERN Multimedia

    Andrea Gaddi

    With all the technical services running, attention has moved toward the next shutdown, which will be spent performing the modifications needed to enhance the reliability of CMS infrastructures. To give an example, for the cooling circuit a set of re-circulating bypasses will be installed in the TS/CV area to limit the pressure surge when a circuit is partially shut off; this problem has especially affected the Endcap Muon cooling circuit in the past. The ventilation of the UXC55 also has to be revisited to allow automatic switching to full extraction in case of a magnet quench. (Normally 90% of the cavern air is re-circulated by the ventilation system.) Minor modifications will concern the gas distribution, while the DSS action-matrix has to be refined according to the experience gained from operating the detector for a while. On the powering side, some LV power lines have been doubled and the final schematics of the UPS coverage for the counting rooms have been released. The most relevant inte...

  1. INFRASTRUCTURE

    CERN Multimedia

    A. Gaddi and P. Tropea

    2013-01-01

  Most of the CMS infrastructures at P5 will go through a heavy consolidation-work period during LS1. All systems, from the cryogenic plant of the superconducting magnet to the rack powering in the USC55 counting rooms, from the cooling circuits to the gas distribution, will undergo consolidation work. As announced in the last issue of the CMS Bulletin, we present here one of the consolidation projects of LS1: the installation of a new dry-gas plant for inerting the inner detectors. So far, oxygen and humidity suppression inside the CMS Tracker and Pixel volumes has been assured by flushing dry nitrogen gas evaporated from a large liquid nitrogen tank. For technical reasons, the maximum flow is limited to less than 100 m3/h, and the cost of refilling the tank every two weeks with liquid nitrogen is quite substantial. The new dry-gas plant will supply up to 400 m3/h of dry nitrogen (or the same flow of dry air, during shut-downs) with a comparatively minimal operation cost. It has been evaluated that the...

  2. Life cycle analysis of energy supply infrastructure for conventional and electric vehicles

    International Nuclear Information System (INIS)

    Lucas, Alexandre; Alexandra Silva, Carla; Costa Neto, Rui

    2012-01-01

    Electric drive vehicle technologies are being considered as possible solutions to mitigate environmental problems and fossil fuel dependence. Several studies have used life cycle analysis to assess energy use and CO2 emissions, addressing the fuels' Well-to-Wheel life cycle or the vehicle materials' Cradle-to-Grave. However, none has considered the infrastructures required for fuel supply. This study presents a methodology to evaluate the energy use and CO2 emissions from construction, maintenance and decommissioning of support infrastructures for the electricity and fossil fuel supply of vehicles, applied to the Portuguese case study. Using Global Warming Potential and Cumulative Energy Demand, three light-duty vehicle technologies were considered: gasoline, diesel and electric. For fossil fuels, the extraction well, platform, refinery and refuelling stations were considered. For the electric vehicle, the Portuguese 2010 electric mix, the grid, and the foreseen charging-point network were studied. The obtained values were 0.6–1.5 gCO2eq/km and 0.03–0.07 MJeq/km for gasoline, 0.6–1.6 gCO2eq/km and 0.02–0.06 MJeq/km for diesel, and 3.7–8.5 gCO2eq/km and 0.06–0.17 MJeq/km for the EV. The Monte Carlo technique was used for uncertainty analysis. We conclude that EV supply infrastructures are more carbon- and energy-intensive; their contribution to the overall vehicle LCA does not exceed 8%. - Highlights: ► ISO 14040 was applied to evaluate fuel supply infrastructures of ICE and EV. ► CED and GWP are used to assess the impact on WTW and CTG stages. ► EV charger rate and ICE stations' lifetime influence uncertainty the most. ► EV facilities are more carbon- and energy-intensive than conventional fuels. ► Contribution of infrastructures to the overall vehicle LCA does not exceed 8%.
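    The way infrastructure burdens translate into per-kilometre figures can be sketched as a simple amortization: total construction, maintenance and decommissioning emissions are allocated over the infrastructure's lifetime and the vehicle-kilometres it serves. The function and all numbers below are illustrative assumptions, not the study's data.

    ```python
    # Hypothetical sketch of amortizing an infrastructure's life-cycle burden
    # to a per-vehicle-km figure (all figures below are made up).
    def per_km(total_gCO2eq: float, share_for_fleet: float,
               lifetime_years: float, fleet_km_per_year: float) -> float:
        """Allocate an infrastructure's life-cycle emissions to one vehicle-km."""
        return total_gCO2eq * share_for_fleet / (lifetime_years * fleet_km_per_year)

    # e.g. a charging-point network serving an EV fleet (assumed figures):
    g = per_km(total_gCO2eq=5.0e12, share_for_fleet=1.0,
               lifetime_years=20, fleet_km_per_year=5.0e10)
    # g is in gCO2eq per vehicle-km
    ```

    The same division, with energy instead of emissions in the numerator, yields the MJeq/km indicator; the Monte Carlo step in the study then varies inputs such as lifetime and utilization to obtain the reported ranges.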

  3. Evaluation of Urban Drainage Infrastructure: New York City Case Study

    Science.gov (United States)

    Hamidi, A.; Grossberg, M.; Khanbilvardi, R.

    2017-12-01

    Flood response in an urban area is the product of interactions between spatially and temporally varying rainfall and infrastructure. In urban areas, however, the complex sub-surface networks of tunnels and waste- and storm-water drainage systems are often inaccessible, which poses challenges for modeling and predicting drainage infrastructure performance. The increased availability of open data in cities is an emerging information asset for better understanding the dynamics of urban water drainage infrastructure. This includes crowd-sourced data and community reporting. A well-known source of this type of data is the non-emergency hotline "311", which is available in many US cities and may contain information pertaining to the performance of physical facilities, the condition of the environment, or residents' experience, comfort and well-being. In this study, seven years of New York City 311 (NYC311) calls during 2010–2016 are employed as an alternative approach for identifying the areas of the city most prone to sewer-backup flooding. These zones are compared with the hydrologic analysis of runoff flooding zones to provide a predictive model for the City. The proposed methodology is an example of urban system phenomenology using crowd-sourced, open data. A novel algorithm for calculating the spatial distribution of flooding complaints across NYC's five boroughs is presented, in which the features that represent reporting bias are separated from those that relate to actual infrastructure system performance. The sewer-backup results are assessed against the spatial distribution of runoff in NYC during 2010–2016. With advances in radar technologies, a high spatial-temporal resolution precipitation data set is available for most of the United States and can be used in hydrologic analysis of dense urban environments. High-resolution gridded Stage IV radar rainfall data along with high-resolution spatially distributed land cover data are
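    The gridding-and-normalization idea can be sketched as follows. This is a hypothetical simplification of the paper's algorithm: snapping complaints to a coarse grid and dividing sewer-backup counts by total 311 call counts as a crude reporting-bias control.

    ```python
    from collections import Counter

    def grid_cell(lat: float, lon: float, res: float = 0.01) -> tuple[int, int]:
        """Snap a coordinate to a grid cell roughly 1 km across (res in degrees)."""
        return int(lat / res), int(lon / res)

    def sewer_backup_density(complaints, all_311_calls, res=0.01):
        """Rate of sewer-backup complaints per total 311 calls in each cell,
        a crude way to separate reporting bias from system performance."""
        backups = Counter(grid_cell(la, lo, res) for la, lo in complaints)
        totals  = Counter(grid_cell(la, lo, res) for la, lo in all_311_calls)
        return {c: backups[c] / totals[c] for c in backups if totals[c]}

    # toy data (hypothetical coordinates, not NYC311 records):
    sbu  = [(40.71, -74.00), (40.71, -74.00), (40.75, -73.98)]
    allc = [(40.71, -74.00)] * 4 + [(40.75, -73.98)] * 10
    rates = sewer_backup_density(sbu, allc)
    ```

    Dividing by total call volume, rather than mapping raw backup counts, keeps a chatty neighborhood from looking flood-prone merely because its residents report everything.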

  4. Collaborative Multi-Scale 3d City and Infrastructure Modeling and Simulation

    Science.gov (United States)

    Breunig, M.; Borrmann, A.; Rank, E.; Hinz, S.; Kolbe, T.; Schilcher, M.; Mundani, R.-P.; Jubierre, J. R.; Flurl, M.; Thomsen, A.; Donaubauer, A.; Ji, Y.; Urban, S.; Laun, S.; Vilgertshofer, S.; Willenborg, B.; Menninghaus, M.; Steuer, H.; Wursthorn, S.; Leitloff, J.; Al-Doori, M.; Mazroobsemnani, N.

    2017-09-01

    Computer-aided collaborative and multi-scale 3D planning are challenges for complex railway and subway track infrastructure projects in the built environment. Many legal, economic, environmental, and structural requirements have to be taken into account. The stringent use of 3D models in the different phases of the planning process facilitates communication and collaboration between stakeholders such as civil engineers, geological engineers, and decision makers. This paper presents concepts, developments, and experiences gained by an interdisciplinary research group from civil engineering informatics and geo-informatics, combining the skills of both the Building Information Modeling and the 3D GIS worlds. New approaches, including the development of a collaborative platform and 3D multi-scale modelling, are proposed for collaborative planning and simulation to improve the digital 3D planning of subway tracks and other infrastructures. Experiences from this research and lessons learned are presented, as well as an outlook on future research focusing on Building Information Modeling and 3D GIS applications for cities of the future.

  5. COLLABORATIVE MULTI-SCALE 3D CITY AND INFRASTRUCTURE MODELING AND SIMULATION

    Directory of Open Access Journals (Sweden)

    M. Breunig

    2017-09-01

    Full Text Available Computer-aided collaborative and multi-scale 3D planning are challenges for complex railway and subway track infrastructure projects in the built environment. Many legal, economic, environmental, and structural requirements have to be taken into account. The stringent use of 3D models in the different phases of the planning process facilitates communication and collaboration between stakeholders such as civil engineers, geological engineers, and decision makers. This paper presents concepts, developments, and experiences gained by an interdisciplinary research group from civil engineering informatics and geo-informatics, combining the skills of both the Building Information Modeling and the 3D GIS worlds. New approaches, including the development of a collaborative platform and 3D multi-scale modelling, are proposed for collaborative planning and simulation to improve the digital 3D planning of subway tracks and other infrastructures. Experiences from this research and lessons learned are presented, as well as an outlook on future research focusing on Building Information Modeling and 3D GIS applications for cities of the future.

  6. On the need for system alignment in large water infrastructure. Understanding infrastructure dynamics in Nairobi, Kenya

    Directory of Open Access Journals (Sweden)

    Pär Blomkvist

    2017-06-01

    Full Text Available In this article we contribute to the discussion of infrastructural change in Africa, and explore how a new theoretical perspective may offer a different, more comprehensive and historically informed understanding of the trend towards large water infrastructure in Africa. We examine the socio-technical dynamics of large water infrastructures in Nairobi, Kenya, in a longer historical perspective, using two concepts that we call intra-systemic alignment and inter-level alignment. Our theoretical perspective is inspired by Large Technical Systems (LTS) and the Multi-Level Perspective (MLP). While inter-level alignment focuses on the process of aligning the technological system at the three levels of niche, regime and landscape, intra-systemic alignment deals with how components within the regime are harmonised and standardised to fit with each other. We pay special attention to intra-systemic alignment between the supply side and the demand side, or as we put it, the upstream and downstream components of a system. In narrating the history of water supply in Nairobi, we look at both the upstream (large-scale supply) and downstream activities (distribution and payment), and compare the Nairobi case with the European history of large infrastructures. We emphasise that regime actors in Nairobi have dealt with the issues of alignment mainly to facilitate and expand upstream activities, while downstream they have remained incapable of expanding service and thus of integrating the large segment of low-income consumers. We conclude that the present surge of large-scale water investment in Nairobi is the result of sector reforms that enabled the return to a long tradition – a 'Nairobi style' – of upstream investment mainly benefitting high-income earners. Our proposition is that much more attention needs to be directed at inter-level alignment at the downstream end of the system, to allow the creation of niches aligned to the regime.

  7. Access control infrastructure for on-demand provisioned virtualised infrastructure services

    NARCIS (Netherlands)

    Demchenko, Y.; Ngo, C.; de Laat, C.; Smari, W.W.; Fox, G.C.

    2011-01-01

    Cloud technologies are emerging as a new way of provisioning virtualised computing and infrastructure services on-demand for collaborative projects and groups. Security in provisioning virtual infrastructure services should address two general aspects: supporting secure operation of the provisioning

  8. Sustainable Water Infrastructure

    Science.gov (United States)

    Resources for state and local environmental and public health officials, and water, infrastructure and utility professionals to learn about sustainable water infrastructure, sustainable water and energy practices, and their role.

  9. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots and over 100 PB of storage space on disk or tape. Monitoring the status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed world-wide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing by HammerCloud, to automatic exclusion from production or analysis activities.
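    The automatic-exclusion step can be sketched as a threshold rule over recent functional test results. The thresholds and the policy below are assumptions for illustration, not the actual HammerCloud configuration.

    ```python
    # Minimal sketch of threshold-based automatic site exclusion, in the spirit
    # of test-driven grid operations (thresholds and policy here are assumptions).
    def evaluate_sites(test_results: dict[str, list[bool]],
                       min_success_rate: float = 0.8,
                       min_samples: int = 10) -> dict[str, str]:
        """Mark each grid site 'online' or 'excluded' from recent functional tests."""
        decisions = {}
        for site, results in test_results.items():
            recent = results[-min_samples:]
            if len(recent) < min_samples:
                decisions[site] = "online"          # too little data: no exclusion
                continue
            rate = sum(recent) / len(recent)
            decisions[site] = "online" if rate >= min_success_rate else "excluded"
        return decisions

    status = evaluate_sites({
        "SITE_A": [True] * 12,
        "SITE_B": [True] * 5 + [False] * 6,   # 4/10 recent successes
    })
    ```

    Encoding the decision as a pure function of recent results is what makes the exclusion reproducible and auditable, which is the point of replacing manual shifter judgment for this class of problems.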

  10. Using decommissioned offshore oil/gas platforms for nuclear/RO desalination: the ONDP (Offshore Nuclear Desalination Platform)

    International Nuclear Information System (INIS)

    Nagar, Ankesh

    2010-01-01

    shore by cables alongside the pipeline as an additional bounty. Nuclear submarines and ships have been sailing the oceans of the world with these reactors and have proven safe. Other non-conventional energy sources, such as windmills and wave energy generation, have also been tried on oil platforms, but their magnitude of energy generation and desalination is incomparable with nuclear energy. The KLT-40S reactors are compact, easy to transport and ship, built with excellent safety mechanisms, efficient, and made with a 'plug and play' philosophy. The use of non-weapon-grade uranium makes them ideal for installation offshore with existing security. They have a proven safety record and are cost effective in the long run. These to-be-decommissioned oil platforms are also ideal for DEMWAX (reverse osmosis) plants, which instead of floats can be anchored at the base of the platform, where they meet the required gravity, current and pressure conditions. The location of oil platforms minimizes biofouling and reduces power requirements. The brine plume is also taken care of by the wide ocean floor and strong currents. The challenge is to integrate the ready oil platform infrastructure with proven safe nuclear technology and water-management measures, and to put them into practice, with modification, in an Offshore Nuclear Desalination Platform (ONDP). (author)

  11. Integrating Urban Infrastructure and Health System Impact Modeling for Disasters and Mass-Casualty Events

    Science.gov (United States)

    Balbus, J. M.; Kirsch, T.; Mitrani-Reiser, J.

    2017-12-01

    Over recent decades, natural disasters and mass-casualty events in the United States have repeatedly revealed the serious consequences of health care facility vulnerability for the subsequent ability to deliver care to affected people. Advances in predictive modeling and vulnerability assessment for health care facility failure, integrated infrastructure, and extreme weather events have now enabled a more rigorous scientific approach to evaluating health care system vulnerability and assessing the impacts of natural and human disasters, as well as the value of specific interventions. Concurrent advances in computing capacity also allow, for the first time, full integration of these multiple individual models, along with modeling of population behaviors and mass-casualty responses during a disaster. A team of federal and academic investigators led by the National Center for Disaster Medicine and Public Health (NCDMPH) is developing a platform for integrating extreme event forecasts, health risk/impact assessment and population simulations, critical infrastructure (electrical, water, transportation, communication) impact and response models, health care facility-specific vulnerability and failure assessments, and health system/patient flow responses. The integration of these models is intended to develop a much greater understanding of critical tipping points in the vulnerability of health systems during natural and human disasters and to build an evidence base for specific interventions. Development of such a modeling platform will greatly facilitate the assessment of potential concurrent or sequential catastrophic events, such as a terrorism act following a severe heat wave or hurricane. This presentation will highlight the development of this modeling platform as well as its applications, not just for the US health system, but also for international science-based disaster risk reduction efforts, such as the Sendai Framework and the WHO SMART hospital project.

  12. Experimental Platform for Internet Contingencies

    OpenAIRE

    SOUPIONIS IOANNIS; BENOIST Thierry

    2016-01-01

    Decentralized Critical infrastructure management systems will play a key role in reducing costs and improving the quality of service of industrial processes, such as electricity production and transportation. The recent malwares (e.g. Stuxnet) revealed several vulnerabilities in today's Distributed Control systems (DCS), but most importantly they highlighted the lack of an efficient scientific approach to conduct experiments that measure the impact of cyber threats on both the physical an...

  13. Water and Carbon Footprints for Sustainability Analysis of Urban Infrastructure - abstract

    Science.gov (United States)

    Water and transportation infrastructures define the spatial distribution of urban population and economic activities. In this context, energy and water consumed per capita are tangible measures of how efficiently water and transportation systems are constructed and operated. At a hig...

  14. Growing the Blockchain information infrastructure

    DEFF Research Database (Denmark)

    Jabbar, Karim; Bjørn, Pernille

    2017-01-01

    In this paper, we present ethnographic data that unpacks the everyday work of some of the many infrastructuring agents who contribute to creating, sustaining and growing the Blockchain information infrastructure. We argue that this infrastructuring work takes the form of entrepreneurial actions......, which are self-initiated and primarily directed at sustaining or increasing the initiator’s stake in the emerging information infrastructure. These entrepreneurial actions wrestle against the affordances of the installed base of the Blockchain infrastructure, and take the shape of engaging...... or circumventing activities. These activities purposefully aim at either influencing or working around the enablers and constraints afforded by the Blockchain information infrastructure, as its installed base is gaining inertia. This study contributes to our understanding of the purpose of infrastructuring, seen...

  15. Temperature profiles from MBT casts from a World-Wide distribution from the ALASKA and other platforms from 1943-02-02 to 1964-10-10 (NODC Accession 9200027)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Temperature profile data were collected from MBT casts from a a World-Wide distribution. Data were collected from the ALASKA and other platforms from 02 February...

  16. Spatial data infrastructures at work analysing the spatial enablement of public sector processes

    CERN Document Server

    Dessers, Ezra

    2013-01-01

    In 'Spatial Data Infrastructures at Work', Ezra Dessers introduces spatial enablement as a key concept to describe the realisation of SDI objectives in the context of individual public sector processes. Drawing on four years of research, Dessers argues that it has become essential, even unavoidable, to manage and (re)design inter-organisational process chains in order to further advance the role of SDIs as an enabling platform for a spatially enabled society. Detailed case studies illustrate that the process he describes is the setting in which one can see the SDI at work.

  17. Development of Bioinformatics Infrastructure for Genomics Research.

    Science.gov (United States)

    Mulder, Nicola J; Adebiyi, Ezekiel; Adebiyi, Marion; Adeyemi, Seun; Ahmed, Azza; Ahmed, Rehab; Akanle, Bola; Alibi, Mohamed; Armstrong, Don L; Aron, Shaun; Ashano, Efejiro; Baichoo, Shakuntala; Benkahla, Alia; Brown, David K; Chimusa, Emile R; Fadlelmola, Faisal M; Falola, Dare; Fatumo, Segun; Ghedira, Kais; Ghouila, Amel; Hazelhurst, Scott; Isewon, Itunuoluwa; Jung, Segun; Kassim, Samar Kamal; Kayondo, Jonathan K; Mbiyavanga, Mamana; Meintjes, Ayton; Mohammed, Somia; Mosaku, Abayomi; Moussa, Ahmed; Muhammd, Mustafa; Mungloo-Dilmohamud, Zahra; Nashiru, Oyekanmi; Odia, Trust; Okafor, Adaobi; Oladipo, Olaleye; Osamor, Victor; Oyelade, Jellili; Sadki, Khalid; Salifu, Samson Pandam; Soyemi, Jumoke; Panji, Sumir; Radouani, Fouzia; Souiai, Oussama; Tastan Bishop, Özlem

    2017-06-01

    Although pockets of bioinformatics excellence have developed in Africa, generally, large-scale genomic data analysis has been limited by the availability of expertise and infrastructure. H3ABioNet, a pan-African bioinformatics network, was established to build capacity specifically to enable H3Africa (Human Heredity and Health in Africa) researchers to analyze their data in Africa. Since the inception of the H3Africa initiative, H3ABioNet's role has evolved in response to changing needs from the consortium and the African bioinformatics community. H3ABioNet set out to develop core bioinformatics infrastructure and capacity for genomics research in various aspects of data collection, transfer, storage, and analysis. Various resources have been developed to address genomic data management and analysis needs of H3Africa researchers and other scientific communities on the continent. NetMap was developed and used to build an accurate picture of network performance within Africa and between Africa and the rest of the world, and Globus Online has been rolled out to facilitate data transfer. A participant recruitment database was developed to monitor participant enrollment, and data is being harmonized through the use of ontologies and controlled vocabularies. The standardized metadata will be integrated to provide a search facility for H3Africa data and biospecimens. Because H3Africa projects are generating large-scale genomic data, facilities for analysis and interpretation are critical. H3ABioNet is implementing several data analysis platforms that provide a large range of bioinformatics tools or workflows, such as Galaxy, the Job Management System, and eBiokits. A set of reproducible, portable, and cloud-scalable pipelines to support the multiple H3Africa data types are also being developed and dockerized to enable execution on multiple computing infrastructures. In addition, new tools have been developed for analysis of the uniquely divergent African data and for

  18. Cross-Platform Mobile Application Development: A Pattern-Based Approach

    Science.gov (United States)

    2012-03-01

    Master's Thesis. Approved for public release; distribution is unlimited. ...occurring design problems. We then discuss common approaches to mobile development, including common aspects of mobile application development...

  19. Enabling content distribution in vehicular ad hoc networks

    CERN Document Server

    Luan, Tom H; Bai, Fan

    2014-01-01

    This SpringerBrief presents key enabling technologies and state-of-the-art research on delivering efficient content distribution services to fast moving vehicles. It describes recent research developments and proposals towards the efficient, resilient and scalable content distribution to vehicles through both infrastructure-based and infrastructure-less vehicular networks. The authors focus on the rich multimedia services provided by vehicular environment content distribution including vehicular communications and media playback, giving passengers many infotainment applications. Common problem

  20. Matching of Energy Provisions in Multihop Wireless Infra-Structures

    Directory of Open Access Journals (Sweden)

    Rui Teng

    2016-01-01

    Full Text Available Recently there have been large advances in energy technologies for battery-operated systems, including green energy resources and high-capacity batteries. The effective use of battery energy resources in wireless infrastructure networks to improve the versatility and reliability of wireless communications is an important issue. Emerging applications in smart cities, the Internet of Things (IoT), and emergency response rely heavily on the basic communication network infrastructures that enable ubiquitous network connections. However, the energy consumed by nodes in a wireless infrastructure network depends on the transmissions of other nodes in the network; considering this inter-dependence is necessary to achieve efficient provision of energy in wireless networks. This paper studies the issue of energy provision for wireless relay nodes in Wireless Multihop Infrastructures (WMI), assuming constraints on the total energy provision. We introduce an Energy Provision Matching (Matching-EP) scheme for WMI that optimizes energy provision by matching it to estimates of the differentiated, position-dependent energy consumption of the wireless nodes distributed in the network. The evaluation results show that Matching-EP, with a 4%–34% improvement in energy matching degree, enables a 10%–40% improvement in network lifetime and a 5%–40% improvement in packet delivery compared with conventional WMI networks.
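The matching idea in the abstract can be approximated by a proportional allocation: each relay node receives a share of a fixed energy budget in proportion to its estimated position-dependent consumption. The sketch below is illustrative only; the function name and data shapes are assumptions, not the paper's actual Matching-EP algorithm.

```python
def match_energy_provision(estimated_load, total_budget):
    """Split a fixed energy budget across relay nodes in proportion to
    their estimated position-dependent consumption (a simplified take
    on the Matching-EP idea; names and data shapes are hypothetical)."""
    total_load = sum(estimated_load.values())
    if total_load == 0:
        # No expected traffic: fall back to an even split.
        share = total_budget / len(estimated_load)
        return {node: share for node in estimated_load}
    return {node: total_budget * load / total_load
            for node, load in estimated_load.items()}

# Nodes near the gateway relay more traffic, so they receive more energy.
allocation = match_energy_provision({'near_gw': 30.0, 'edge': 10.0}, 100.0)
```

This mirrors the paper's premise that consumption is inter-dependent and position-dependent: the node with three times the estimated load receives three times the energy.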

  1. INFRASTRUCTURE

    CERN Document Server

    A.Gaddi

    2011-01-01

    Between the end of March and June 2011, there was no detector downtime during proton fills due to CMS Infrastructures failures. This exceptional performance is a clear sign of the high-quality work done by the CMS Infrastructures unit and its supporting teams. Powering infrastructure At the end of March, the EN/EL group observed a problem with the CMS 48 V system. The problem was a lack of isolation between the negative (return) terminal and earth. Although at that moment we were not seeing any loss of functionality, in the long term it would have led to severe disruption to the CMS power system. The 48 V system is critical to the operation of CMS: in addition to feeding the anti-panic lights, essential for the safety of the underground areas, it powers all the PLCs (Twidos) that control AC power to the racks and front-end electronics of CMS. A failure of the 48 V system would bring down the whole detector and lead to evacuation of the cavern. EN/EL technicians have conducted a thorough search for the fault, ...

  2. INFRASTRUCTURE

    CERN Multimedia

    A. Gaddi and P. Tropea

    2012-01-01

    The CMS Infrastructures teams are preparing for the LS1 activities. A long list of maintenance, consolidation and upgrade projects for CMS Infrastructures is on the table and is being discussed among Technical Coordination and sub-detector representatives. Apart from the activities concerning the cooling infrastructures (see below), two main projects have started: the refurbishment of the SX5 building, from storage area to RP storage and Muon stations laboratory; and the procurement of a new dry-gas (nitrogen and dry air) plant for inner detector flushing. We briefly present here the work done on the first item, leaving the second one for the next CMS Bulletin issue. The SX5 building is entering its third era, from main assembly building for CMS from 2000 to 2007, to storage building from 2008 to 2012, to RP storage and Muon laboratory during LS1 and beyond. A wall of concrete blocks has been erected to limit the RP zone, while the rest of the surface has been split between the ME1/1 and the CSC/DT laborat...

  3. Multi-ASIP Platform Synthesis for Event-Triggered Applications with Cost/Performance Trade-offs

    DEFF Research Database (Denmark)

    Gangadharan, Deepak; Micconi, Laura; Pop, Paul

    2013-01-01

    In this paper, we propose a technique to synthesize a cost-efficient distributed platform consisting of multiple Application Specific Instruction Set Processors (multi-ASIPs) running applications with strict timing constraints. Multi-ASIP platform synthesis is a non-trivial task for two reasons....... Firstly, we need to know the WCET of tasks in target applications to derive platforms (including synthesized ASIPs) in which the tasks are schedulable. However, the WCET of tasks can be known only after the ASIPs are synthesized. We break this circular dependency by using a probability distribution...

  4. An open-access platform for camera-trapping data

    Directory of Open Access Journals (Sweden)

    Mario César Lavariega

    2018-02-01

    Full Text Available In southern Mexico, local communities have been playing important roles in the design and collection of wildlife data through camera-trapping in community-based monitoring of biodiversity projects. However, the methods used to store the data have limited their use in decision-making and research. We therefore present the Platform for Community-based Monitoring of Biodiversity (PCMB), a repository that allows storage, visualization, and downloading of photographs captured by community-based monitoring of biodiversity projects in protected areas of southern Mexico. The platform was developed using agile software development with extensive interaction between computer scientists and biologists. System development included data gathering, design, building, database and attribute creation, and quality control. The PCMB currently contains 28,180 images of 6478 animals (69.4% mammals and 30.3% birds). Of the 32 species of mammals recorded in 18 protected areas since 2012, approximately a quarter of all photographs were of white-tailed deer (Odocoileus virginianus). Platforms permitting access to camera-trapping data are a valuable step in opening access to biodiversity data; the PCMB is a practical new tool for wildlife management and research with data generated through local participation. This work thus encourages research on the data generated through community-based monitoring of biodiversity projects in protected areas, providing an important information infrastructure for effective management and conservation of wildlife.

  5. Temperature profile data from MBT casts from NAUKA and other platforms in a World-wide distribution from 26 July 1966 to 09 September 1990 (NODC Accession 0000228)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Temperature profile data were collected using MBT casts in a World-wide distribution from the NAUKA, FIOLENT, LESNOYE, and other platforms from 26 July 1966 to 09...

  6. Structures and Infrastructures of International R&D Networks: A Capability Maturity Perspective

    DEFF Research Database (Denmark)

    Niang, Mohamed; Wæhrens, Brian Vejrum

    Purpose: This paper explores the process towards globally distributing R&D activities with an emphasis on organizational maturity. It discusses emerging configurations by asking how the structure and infrastructure of international R&D networks evolve along with the move from a strong R&D center...... to dispersed development. Design/Methodology/Approach: This is a qualitative study of the process of distributing R&D. By comparing selected firms, the researchers identify a pattern of dispersion of R&D activities in three Danish firms. Findings and Discussion: Drawing from the case studies, the researchers...... present a capability maturity model. Furthermore, understanding the interaction between new structures and infrastructures of the dispersed networks is viewed as a key requirement for developing organizational capabilities and formulating adequate strategies that leverage dispersed R&D. Organizational...

  7. Failure to adapt infrastructure: is legal liability lurking for infrastructure stakeholders

    International Nuclear Information System (INIS)

    Gherbaz, S.

    2009-01-01

    'Full text:' Very little attention has been paid to potential legal liability for failing to adapt infrastructure to climate change-related risk. Amendments to laws, building codes and standards to take into account the potential impact of climate change on infrastructure assets are still at least some time away. Notwithstanding that amendments are still some time away, there is a real risk to infrastructure stakeholders from failing to adapt. The legal framework in Canada currently permits a court, in the right circumstances, to find certain infrastructure stakeholders legally liable for personal injury and property damage suffered by third parties as a result of climate change effects. This presentation will focus on the legal liability of owners (governmental and private sector), engineers, architects and contractors for failing to adapt infrastructure assets to climate change risk. It will answer commonly asked questions such as: Can I avoid liability by complying with existing laws, codes and standards? Do engineers and architects have a duty to warn owners that existing laws, codes and standards do not, in certain circumstances, adequately take into account the impact of climate change-related risks on an infrastructure asset? And do professional liability insurance policies commonly maintained by architects, engineers and other design professionals provide coverage for a design professional's failure to take into account climate change-related risks? (author)

  8. Increasing the resilience and security of the United States' power infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Happenny, Sean F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-08-01

    The United States' power infrastructure is aging, underfunded, and vulnerable to cyber attack. Emerging smart grid technologies may take some of the burden off of existing systems and make the grid as a whole more efficient, reliable, and secure. The Pacific Northwest National Laboratory (PNNL) is funding research into several aspects of smart grid technology and grid security, creating a software simulation tool that will allow researchers to test power infrastructure control and distribution paradigms by utilizing different smart grid technologies to determine how the grid and these technologies react under different circumstances. Understanding how these systems behave in real-world conditions will lead to new ways to make our power infrastructure more resilient and secure. Demonstrating security in embedded systems is another research area PNNL is tackling. Many of the systems controlling the U.S. critical infrastructure, such as the power grid, lack integrated security and the aging networks protecting them are becoming easier to attack.

  9. 802.11 Wireless Infrastructure To Enhance Medical Response to Disasters

    Science.gov (United States)

    Arisoylu, Mustafa; Mishra, Rajesh; Rao, Ramesh; Lenert, Leslie A.

    2005-01-01

    802.11 (WiFi) is a well established network communications protocol that has wide applicability in civil infrastructure. This paper describes research that explores the design of 802.11 networks enhanced to support data communications in disaster environments. The focus of these efforts is to create network infrastructure to support operations by Metropolitan Medical Response System (MMRS) units and Federally-sponsored regional teams that respond to mass casualty events caused by a terrorist attack with chemical, biological, nuclear or radiological weapons or by a hazardous materials spill. In this paper, we describe an advanced WiFi-based network architecture designed to meet the needs of MMRS operations. This architecture combines a Wireless Distribution Systems for peer-to-peer multihop connectivity between access points with flexible and shared access to multiple cellular backhauls for robust connectivity to the Internet. The architecture offers a high bandwidth data communications infrastructure that can penetrate into buildings and structures while also supporting commercial off-the-shelf end-user equipment such as PDAs. It is self-configuring and is self-healing in the event of a loss of a portion of the infrastructure. Testing of prototype units is ongoing. PMID:16778990

  10. A Distributed Approach towards Improved Dissemination Protocol for Smooth Handover in MediaSense IoT Platform

    Directory of Open Access Journals (Sweden)

    Shabir Ahmad

    2018-05-01

    Full Text Available Recently, the Internet has been utilized by many applications to convey time-sensitive messages. The persistently expanding Internet coverage and its easy accessibility have offered to ascend to a problem which was once regarded as not essential to contemplate. Nowadays, the Internet has been utilized by many applications to convey time-sensitive messages. Wireless access points have widely been used but these access points have limitations regarding area coverage. So for covering a wider space, various access points need to be introduced. Therefore, when the user moves to some other place, the devices expected to switch between access points. Packet loss amid the handovers is a trivial issue. MediaSense is an Internet of Things distributed architecture enabling the development of the IoT application faster. It deals with this trivial handover issue by utilizing a protocol called Distributed Context eXchange Protocol. However, this protocol is centralized in nature and also suffers in a scenario when both sender and receiver address change simultaneously. This paper presents a mechanism to deal with this scenario and presents a distributed solution to deal with this issue within the MediaSense platform. The proposed protocol improves dissemination using retransmission mechanism to diminish packet loss. The proposed protocol has been delineated with a proof of concept chat application and the outcomes have indicated a significant improvement in terms of packet loss.
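The retransmission mechanism mentioned in the abstract can be approximated by a stop-and-wait loop that resends each message until it is acknowledged or a retry budget is exhausted. This is a hedged sketch, not the actual Distributed Context eXchange Protocol; `send` stands in for the platform's transport and is an assumption.

```python
def disseminate(messages, send, max_retries=3):
    """Deliver messages over a lossy link, retransmitting each one up to
    max_retries extra times; send(msg) returns True when an ACK arrives.
    Illustrative only -- not the MediaSense DCXP implementation."""
    delivered, lost = [], []
    for msg in messages:
        for _attempt in range(max_retries + 1):
            if send(msg):
                delivered.append(msg)
                break
        else:
            lost.append(msg)  # retry budget exhausted, e.g. mid-handover
    return delivered, lost
```

In a handover scenario, a few sends fail while the device switches access points; with a sufficient retry budget the message still gets through, which is the effect the paper reports as reduced packet loss.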

  11. Temperature profile data from MBT casts from NAUKA and other platforms in a World-wide distribution from 18 June 1970 to 05 May 1989 (NODC Accession 0000229)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Temperature profile data were collected using MBT casts in a World-wide distribution from the NAUKA, AELITA, LESNOYE, and other platforms from 18 June 1970 to 05 May...

  12. Autonomic Semantic-Based Context-Aware Platform for Mobile Applications in Pervasive Environments

    Directory of Open Access Journals (Sweden)

    Adel Alti

    2016-09-01

    Full Text Available Currently, the field of smart-* (home, city, health, tourism, etc.) is naturally heterogeneous and multimedia oriented. In such a domain, there is increasing usage of heterogeneous mobile devices, as well as sensors transmitting data (IoT). They are highly connected and can be used for many different services, such as monitoring, analyzing and displaying information to users. In this context, data management and adaptation in real time are becoming a challenging task. More precisely, it is necessary to handle, within a dynamic, intelligent and transparent framework, various data provided by multiple devices with several modalities. This paper presents Kali-Smart, an autonomic semantic-based context-aware platform. It is based on semantic web technologies and a middleware providing autonomy and reasoning facilities. Moreover, Kali-Smart is generic and, as a consequence, offers users a flexible infrastructure where they can easily control the various interaction modalities of their own situations. An experimental study has been made to evaluate the performance and feasibility of the proposed platform.

  13. Electric vehicle charging infrastructure assignment and power grid impacts assessment in Beijing

    International Nuclear Information System (INIS)

    Liu, Jian

    2012-01-01

    This paper estimates the charging demand of an early electric vehicle (EV) market in Beijing and proposes an assignment model to distribute charging infrastructure. It finds that each type of charging infrastructure has its limitations, and that integration is needed to offer a reliable charging service. It also reveals that the service radius of fast charging stations directly influences the final distribution pattern, and that an infrastructure deployment strategy with a short service radius for fast charging stations causes relatively fewer disturbances on the power grid. Additionally, although the adoption of electric vehicles will place an additional electrical load on Beijing's power grid, this additional load can be accommodated by the current grid's capacity via charging time management and a battery swap strategy. - Highlight: ► Charging posts, fast charging stations, and battery swap stations should be integrated. ► Charging posts at home parking places will take a major role in a charging network. ► A service radius of 2 km is proposed for fast charging station deployment. ► The additional charging load from EVs can be accommodated by charging time management.
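The role the service radius plays in the assignment can be illustrated with a minimal nearest-station rule: each demand point is served by the closest fast charging station within the radius, or goes unserved otherwise. The 2 km default follows the paper's highlight; the function, coordinates, and names are hypothetical, and this is far simpler than the paper's actual assignment model.

```python
import math

def assign_to_stations(demand_points, stations, service_radius_km=2.0):
    """Assign each demand point to its nearest fast-charging station
    within the service radius (the 2 km default follows the paper's
    highlight). A simplified sketch, not the paper's assignment model."""
    assignment = {}
    for pid, (px, py) in demand_points.items():
        best, best_dist = None, float('inf')
        for sid, (sx, sy) in stations.items():
            dist = math.hypot(px - sx, py - sy)
            if dist <= service_radius_km and dist < best_dist:
                best, best_dist = sid, dist
        assignment[pid] = best  # None: outside every station's radius
    return assignment
```

Shrinking the radius leaves more points unserved by fast stations (pushing them toward home charging posts), which is one way to read the paper's finding that a short radius disturbs the grid less.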

  14. Improving global data infrastructures for more effective and scalable analysis of Earth and environmental data: the Australian NCI NERDIP Approach

    Science.gov (United States)

    Evans, Ben; Wyborn, Lesley; Druken, Kelsey; Richards, Clare; Trenham, Claire; Wang, Jingbo; Rozas Larraondo, Pablo; Steer, Adam; Smillie, Jon

    2017-04-01

    different disciplines and research communities to invoke new forms of analysis and discovery in an increasingly complex data-rich environment. Driven by the heterogeneity of Earth and environmental datasets, NCI developed a Data Quality/Data Assurance Strategy to ensure consistency is maintained within and across all datasets, as well as functionality testing to ensure smooth interoperability between products, tools, and services. This is particularly so for collections that contain data generated from multiple data acquisition campaigns, often using instruments and models that have evolved over time. By implementing the NCI Data Quality Strategy we have seen progressive improvement in the integration and quality of the datasets across the different subject domains, and through this, the ease by which the users can access data from this major data infrastructure. By both adhering to international standards and also contributing to extensions of these standards, data from the NCI NERDIP platform can be federated with data from other globally distributed data repositories and infrastructures. The NCI approach builds on our experience working with the astronomy and climate science communities, which have been internationally coordinating such interoperability standards within their disciplines for some years. The results of our work so far demonstrate more could be done in the Earth science, solid earth and environmental communities, particularly through establishing better linkages between international/national community efforts such as EPOS, ENVRIplus, EarthCube, AuScope and the Research Data Alliance.

  15. Distributed Data Management Service for VPH Applications

    NARCIS (Netherlands)

    Koulouzis, S.; Belloum, A.; Bubak, M.; Lamata, P.; Nolte, D.; Vasyunin, D.; de Laat, C.

    2016-01-01

    For many medical applications, it's challenging to access large datasets, which are often hosted across different domains on heterogeneous infrastructures. Homogenizing the infrastructure to simplify data access is unrealistic; therefore, it's important to develop distributed storage that doesn't

  16. Middleware for the next generation Grid infrastructure

    CERN Document Server

    Laure, E; Prelz, F; Beco, S; Fisher, S; Livny, M; Guy, L; Barroso, M; Buncic, P; Kunszt, Peter Z; Di Meglio, A; Aimar, A; Edlund, A; Groep, D; Pacini, F; Sgaravatto, M; Mulmo, O

    2005-01-01

    The aim of the EGEE (Enabling Grids for E-Science in Europe) project is to create a reliable and dependable European Grid infrastructure for e-Science. The objective of the EGEE Middleware Re-engineering and Integration Research Activity is to provide robust middleware components, deployable on several platforms and operating systems, corresponding to the core Grid services for resource access, data management, information collection, authentication & authorization, resource matchmaking and brokering, and monitoring and accounting. For achieving this objective, we developed an architecture and design of the next generation Grid middleware leveraging experiences and existing components essentially from AliEn, EDG, and VDT. The architecture follows the service breakdown developed by the LCG ARDA group. Our strategy is to do as little original development as possible but rather re-engineer and harden existing Grid services. The evolution of these middleware components towards a Service Oriented Architecture ...

  17. MFC Communications Infrastructure Study

    Energy Technology Data Exchange (ETDEWEB)

    Michael Cannon; Terry Barney; Gary Cook; George Danklefsen, Jr.; Paul Fairbourn; Susan Gihring; Lisa Stearns

    2012-01-01

    Unprecedented growth in required telecommunications services and applications is changing the way the INL does business today. High-speed connectivity, coupled with high demand for telephony and network services, requires a robust communications infrastructure. The current state of the MFC communications infrastructure limits growth opportunities for current and future communication infrastructure services. This limitation is largely due to equipment capacity issues, an aging cabling infrastructure (external/internal fiber and copper cable), and inadequate space for telecommunications equipment. While some communication infrastructure improvements have been implemented over time, they were completed without a clear overall plan and technology standard. This document identifies critical deficiencies in the current state of the communication infrastructure in operation at the MFC facilities and provides an analysis of the needs and deficiencies to be addressed in order to achieve the target architectural standards defined in STD-170. The intent of STD-170 is to provide a robust, flexible, long-term solution that aligns communications capabilities with the INL mission and fits the various programmatic growth and expansion needs.

  18. A Multiagent Platform for Developments of Accounting Intelligent Applications

    Directory of Open Access Journals (Sweden)

    Adrian LUPAŞC

    2008-01-01

    Full Text Available AOP (Agent Oriented Programming) is a new software paradigm that brings many concepts from artificial intelligence. This paper provides a short overview of the JADE software platform and the principal components constituting its distributed architecture. Furthermore, it describes how to launch the platform with the command-line options and how to experiment with the main graphical tools of this platform.

  19. Design of RFID Mesh Network for Electric Vehicle Smart Charging Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Ching-Yen; Shepelev, Aleksey; Qiu, Charlie; Chu, Chi-Cheng; Gadh, Rajit

    2013-09-04

    With an increased number of Electric Vehicles (EVs) on the roads, charging infrastructure is gaining an ever-more important role in simultaneously meeting the needs of the local distribution grid and of EV users. This paper proposes a mesh network RFID system for user identification and charging authorization as part of a smart charging infrastructure providing charge monitoring and control. The Zigbee-based mesh network RFID provides a cost-efficient solution to identify and authorize vehicles for charging, and would allow EV charging to be conducted effectively while observing grid constraints and meeting the needs of EV drivers.
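The identify-then-authorize step described above can be sketched as a simple admission check: accept a tagged vehicle only if it is registered and one more charging session stays within the local grid limit. Everything here is an assumption for illustration, including the function name and the 3.3 kW per-vehicle draw; the paper's actual system decides this over the Zigbee mesh.

```python
def authorize_charging(tag_id, registered_tags, active_sessions,
                       grid_capacity_kw, per_vehicle_kw=3.3):
    """Admit a vehicle identified by its RFID tag only if it is registered
    and starting one more session keeps the total draw within the local
    grid cap. Hypothetical sketch; the 3.3 kW draw is an assumption."""
    if tag_id not in registered_tags:
        return False  # unknown tag: reject before any power is delivered
    projected_kw = (len(active_sessions) + 1) * per_vehicle_kw
    return projected_kw <= grid_capacity_kw
```

The same check could be extended to stagger admissions over time, which is how a charging controller can observe grid constraints while still serving all drivers eventually.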

  20. Infrastructure: concept, types and value

    Directory of Open Access Journals (Sweden)

    Alexander E. Lantsov

    2013-01-01

    Full Text Available Research on the influence of infrastructure on the economic growth and development of countries has gained currency. However, most authors leave out the problem of giving a precise definition of the studied object and its criteria. This article presents various approaches to defining the concept of «infrastructure», along with the criteria and characteristics distinguishing infrastructure from other capital assets. Types of infrastructure such as personal, institutional, material, production, and social are considered. The author's own definition of infrastructure is given.