WorldWideScience

Sample records for infrastructure rti software

  1. RTI Wiki

    Data.gov (United States)

    National Aeronautics and Space Administration — Under files is the link to the RTI wiki. Research Test and Integration (RTI) focuses on the development of long term plans for test and integration opportunities for...

  2. Software and hardware infrastructure for research in electrophysiology.

    Science.gov (United States)

    Mouček, Roman; Ježek, Petr; Vařeka, Lukáš; Rondík, Tomáš; Brůha, Petr; Papež, Václav; Mautner, Pavel; Novotný, Jiří; Prokop, Tomáš; Stěbeták, Jan

    2014-01-01

    As in other areas of experimental science, the operation of an electrophysiological laboratory, the design and performance of electrophysiological experiments, the collection, storage, and sharing of experimental data and metadata, the analysis and interpretation of these data, and the publication of results are time-consuming activities. If these activities are well organized and supported by a suitable infrastructure, the work efficiency of researchers increases significantly. This article deals with the main concepts, design, and development of a software and hardware infrastructure for research in electrophysiology. The described infrastructure has been developed primarily for the needs of the neuroinformatics laboratory at the University of West Bohemia, the Czech Republic. However, from the beginning it has also been designed and developed to be open and applicable in laboratories doing similar research. After introducing the laboratory and the overall architectural concept, the individual parts of the infrastructure are described. The central element of the software infrastructure is a web-based portal that enables community researchers to store, share, download, and search data and metadata from electrophysiological experiments. The data model, domain ontology, and usage of semantic web languages and technologies are described. The current data publication policy used in the portal is briefly introduced, and the registration of the portal within the Neuroscience Information Framework is described. The methods used for processing electrophysiological signals are then presented. The specific modifications of these methods introduced by laboratory researchers are summarized, and the methods are organized into a laboratory workflow. Other parts of the software infrastructure include mobile and offline solutions for data/metadata storage and a hardware stimulator communicating with an EEG amplifier and recording software.

  3. KTM Tokamak operation scenarios software infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Pavlov, V.; Baystrukov, K.; Golobkov, Yu.; Ovchinnikov, A.; Meaentsev, A.; Merkulov, S.; Lee, A. [National Research Tomsk Polytechnic University, Tomsk (Russian Federation); Tazhibayeva, I.; Shapovalov, G. [National Nuclear Center (NNC), Kurchatov (Kazakhstan)

    2014-10-15

    One of the largest problems for tokamak devices such as the Kazakhstan Tokamak for Material Testing (KTM) is the development and execution of operation scenarios. Operation scenarios may be varied often, so a convenient hardware and software solution is required for scenario management and execution. Dozens of diagnostic and control subsystems with numerous configuration settings may be used in an experiment, so the subsystem configuration process must be automated to coordinate changes of related settings and to prevent errors. Most of the diagnostic and control subsystem software at KTM was unified using an extra software layer describing the hardware abstraction interface. The experiment sequence was described using a command language. The whole infrastructure was brought together by a universal communication protocol supporting various media, including Ethernet and serial links. The operation sequence execution infrastructure was used at KTM to carry out plasma experiments.
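
    The hardware-abstraction layer and command language are only described, not shown, in this record. As a minimal Python sketch of the general pattern (a uniform subsystem interface plus a central scenario dispatcher), assuming hypothetical class and method names rather than the actual KTM API:

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of a hardware-abstraction interface of the kind the
# abstract describes; all names are illustrative, not the KTM software.
class Subsystem(ABC):
    @abstractmethod
    def configure(self, settings: dict) -> None:
        """Apply a coordinated set of configuration settings."""

    @abstractmethod
    def execute(self, command: str) -> str:
        """Run one command from the operation-scenario language."""

class Registry:
    def __init__(self):
        self._subsystems: dict[str, Subsystem] = {}

    def register(self, name: str, subsystem: Subsystem) -> None:
        self._subsystems[name] = subsystem

    def run_scenario(self, scenario: list[tuple[str, str]]) -> None:
        # A scenario is a sequence of (subsystem, command) pairs;
        # central dispatch keeps related settings coordinated.
        for name, command in scenario:
            self._subsystems[name].execute(command)

class StubSubsystem(Subsystem):
    def configure(self, settings):
        print("configured:", settings)

    def execute(self, command):
        print("executing:", command)
        return "ok"

registry = Registry()
registry.register("gas_injection", StubSubsystem())
registry.run_scenario([("gas_injection", "open_valve"),
                       ("gas_injection", "close_valve")])
```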

  4. Software and Hardware Infrastructure for Research in Electrophysiology

    Directory of Open Access Journals (Sweden)

    Roman Mouček

    2014-03-01

    As in other areas of experimental science, the operation of an electrophysiological laboratory, the design and performance of electrophysiological experiments, the collection, storage, and sharing of experimental data and metadata, the analysis and interpretation of these data, and the publication of results are time-consuming activities. If these activities are well organized and supported by a suitable infrastructure, the work efficiency of researchers increases significantly. This article deals with the main concepts, design, and development of a software and hardware infrastructure for research in electrophysiology. The described infrastructure has been developed primarily for the needs of the neuroinformatics laboratory at the University of West Bohemia, the Czech Republic. However, from the beginning it has also been designed and developed to be open and applicable in laboratories doing similar research. After introducing the laboratory and the overall architectural concept, the individual parts of the infrastructure are described. The central element of the software infrastructure is a web-based portal that enables community researchers to store, share, download, and search data and metadata from electrophysiological experiments. The data model, domain ontology, and usage of semantic web languages and technologies are described. The current data publication policy used in the portal is briefly introduced, and the registration of the portal within the Neuroscience Information Framework is described. The methods used for processing electrophysiological signals are then presented. The specific modifications of these methods introduced by laboratory researchers are summarized, and the methods are organized into a laboratory workflow. Other parts of the software infrastructure include mobile and offline solutions for data/metadata storage and a hardware stimulator communicating with an EEG amplifier and recording software.

  5. Modular Infrastructure for Rapid Flight Software Development

    Science.gov (United States)

    Pires, Craig

    2010-01-01

    This slide presentation reviews the use of a modular infrastructure to assist in the development of flight software. A feature of this program is the use of a model-based approach for application-unique software. Two programs on which this approach was used are reviewed: the development of software for the Hover Test Vehicle (HTV) and for the Lunar Atmosphere and Dust Environment Explorer (LADEE).

  6. National software infrastructure for lattice gauge theory

    International Nuclear Information System (INIS)

    Brower, Richard C

    2005-01-01

    The current status of the SciDAC software infrastructure project for lattice gauge theory is summarized. This includes the design of a QCD application programmers interface (API) that allows existing and future codes to be run efficiently on Terascale hardware facilities and to be rapidly ported to new dedicated or commercial platforms. The critical components of the API have been implemented and are in use on the US QCDOC hardware at BNL and on both the switched and mesh architecture Pentium 4 clusters at Fermi National Accelerator Laboratory (FNAL) and Thomas Jefferson National Accelerator Facility (JLab). Future software infrastructure requirements and research directions are also discussed.

  7. Modernising ATLAS Software Build Infrastructure

    CERN Document Server

    Ritsch, Elmar; The ATLAS collaboration

    2017-01-01

    In the last year ATLAS has radically updated its software development infrastructure, hugely reducing the complexity of building releases and greatly improving build speed, flexibility and code testing. The first step in this transition was the adoption of CMake as the software build system over the older CMT. This required the development of an automated translation from the old system to the new, followed by extensive testing and improvements. This resulted in a far more standard build process that was married to the method of building ATLAS software as a series of 12 separate projects from Subversion. We then proceeded with a migration of the code base from Subversion to Git. As the Subversion repository had been structured to manage each package more or less independently there was no simple mapping that could be used to manage the migration into Git. Instead a specialist set of scripts that captured the software changes across official software releases was developed. With some clean up of the repositor...
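
    The migration scripts themselves are not shown in this record. A minimal Python sketch of the approach described (replaying each official release snapshot as a Git commit, so history is preserved at release granularity) might look as follows; `export_release_tree` is a hypothetical stand-in for the Subversion export step:

```python
import pathlib
import subprocess

def export_release_tree(release: str, workdir: str) -> None:
    # Hypothetical stand-in for the real step (an `svn export` of the
    # tagged release into `workdir`); here it just writes a marker file.
    pathlib.Path(workdir, "VERSION").write_text(release + "\n")

def replay_releases(releases, workdir="."):
    """Commit each release snapshot in order: coarse, release-level history."""
    for release in releases:
        export_release_tree(release, workdir)
        subprocess.run(["git", "add", "-A"], cwd=workdir, check=True)
        subprocess.run(["git", "commit", "-m", f"Release {release}"],
                       cwd=workdir, check=True)

# Usage (inside an initialized Git repository), with illustrative versions:
# replay_releases(["20.1.0", "20.7.1", "21.0.0"])
```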

  8. An integrated infrastructure in support of software development

    International Nuclear Information System (INIS)

    Antonelli, S; Bencivenni, M; De Girolamo, D; Giacomini, F; Longo, S; Manzali, M; Veraldi, R; Zani, S

    2014-01-01

    This paper describes the design and the current state of implementation of an infrastructure made available to software developers within the Italian National Institute for Nuclear Physics (INFN) to support and facilitate their daily activity. The infrastructure integrates several tools, each providing a well-identified function: project management, version control system, continuous integration, dynamic provisioning of virtual machines, efficiency improvement, knowledge base. When applicable, access to the services is based on the INFN-wide Authentication and Authorization Infrastructure. The system is being installed and progressively made available to INFN users belonging to tens of sites and laboratories and will represent a solid foundation for the software development efforts of the many experiments and projects that see the involvement of the Institute. The infrastructure will be beneficial especially for small- and medium-size collaborations, which often cannot afford the resources, in particular in terms of know-how, needed to set up such services.

  9. Modernising ATLAS Software Build Infrastructure

    CERN Document Server

    Gaycken, Goetz; The ATLAS collaboration

    2017-01-01

    In the last year ATLAS has radically updated its software development infrastructure, hugely reducing the complexity of building releases and greatly improving build speed, flexibility and code testing. The first step in this transition was the adoption of CMake as the software build system over the older CMT. This required the development of an automated translation from the old system to the new, followed by extensive testing and improvements. This resulted in a far more standard build process that was married to the method of building ATLAS software as a series of 12 separate projects from SVN. We then proceeded with a migration of its code base from SVN to git. As the SVN repository had been structured to manage each package more or less independently there was no simple mapping that could be used to manage the migration into git. Instead a specialist set of scripts that captured the software changes across official software releases was developed. With some clean up of the repository and the policy of onl...

  10. Service software engineering for innovative infrastructure for global financial services

    OpenAIRE

    Maad, Soha; McCarthy, James B.; Garbaya, Samir; Beynon, Meurig; Nagarajan, Rajagopal

    2010-01-01

    The recent financial crisis motivates our re-thinking of the engineering principles for service software and infrastructures intended to create business value in vital sectors. Existing monolithic, inward-directed, cost-insensitive and highly regulated technical and organizational infrastructures for financial services make it difficult for the domain to benefit from opportunities offered by new computing models such as cloud computing, software as a service, hardware a...

  11. Software Engineering Infrastructure in a Large Virtual Campus

    Science.gov (United States)

    Cristobal, Jesus; Merino, Jorge; Navarro, Antonio; Peralta, Miguel; Roldan, Yolanda; Silveira, Rosa Maria

    2011-01-01

    Purpose: The design, construction and deployment of a large virtual campus are complex tasks. Present virtual campuses are made of several software applications that complement e-learning platforms. In order to develop and maintain such virtual campuses, a complex software engineering infrastructure is needed. This paper aims to analyse the…

  12. Elementary School Psychologists and Response to Intervention (RTI)

    Science.gov (United States)

    Little, Suzanne; Marrs, Heath; Bogue, Heidi

    2017-01-01

    The implementation of Response to Intervention (RTI) in elementary schools may have important implications for school psychologists. Therefore, it is important to better understand how elementary school psychologists perceive RTI and what barriers to successful RTI implementation they identify. Although previous research has investigated the…

  13. Understanding RTI in Mathematics: Proven Methods and Applications

    Science.gov (United States)

    Gersten, Russell, Ed.; Newman-Gonchar, Rebecca, Ed.

    2011-01-01

    Edited by National Math Panel veteran Russell Gersten with contributions by all of the country's leading researchers on RTI and math, this cutting-edge text blends the existing evidence base with practical guidelines for RTI implementation. Current and future RTI coordinators, curriculum developers, math specialists, and department heads will get…

  14. HLA RTI performance evaluation

    CSIR Research Space (South Africa)

    Malinga, L

    2009-07-01

    ...size of the UDP packet of the network, namely 64 KB, when using the best-effort mode. The performance analysis task of the different RTIs was undertaken for two reasons. The first was to re-establish a High Level Architecture (HLA) in our Research... exchange messages over the network with the RTI Gateway process, via TCP sockets or UDP, in order to realise the services associated with the RTI. The allocation of CPU resources to the federate and the RTIA process is exclusively managed...

  15. Featureous: infrastructure for feature-centric analysis of object-oriented software

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Jørgensen, Bo Nørregaard

    2010-01-01

    The decentralized nature of collaborations between objects in object-oriented software makes it difficult to understand how user-observable program features are implemented and how their implementations relate to each other. It is worthwhile to improve this situation, since feature-centric program understanding and modification are essential during software evolution and maintenance. In this paper, we present an infrastructure built on top of the NetBeans IDE called Featureous that allows for rapid construction of tools for feature-centric analysis of object-oriented software. Our infrastructure encompasses a lightweight feature location mechanism, a number of analytical views and an API allowing for addition of third-party extensions. To form a common conceptual framework for future feature-centric extensions, we propose to structure feature-centric analysis along three dimensions: perspective...

  16. The BaBar Software Architecture and Infrastructure

    International Nuclear Information System (INIS)

    Cosmo, Gabriele

    2003-01-01

    The BaBar experiment has had in place since 1995 a software release system (SRT, Software Release Tools) based on CVS (Concurrent Versions System), which is common to all the software developed for the experiment, online or offline, simulation or reconstruction. A software release is a snapshot of all BaBar code (online, offline, utilities, scripts, makefiles, etc.). This set of code is tested to work together and is indexed by a release number (e.g., 6.8.2), so a user can refer to a particular release and get reproducible results. A release will involve particular versions of packages. A package generally consists of a set of code for a particular task, together with a GNU makefile, scripts and documentation. All BaBar software is maintained in AFS (Andrew File System) directories, so the code is accessible worldwide within the Collaboration. The combination of SRT, CVS, and AFS has proved to be a valid, powerful and efficient way of organizing the software infrastructure of a modern HEP experiment with collaborating institutes distributed worldwide, in both the development and production phases.

  17. Questions and Answers about RTI: A Guide to Success

    Science.gov (United States)

    Moran, Heather; Petruzzelli, Anthony

    2011-01-01

    As Response-to-Intervention (RTI) models continue to attract a great deal of attention, school and district leaders need to understand the structures needed, the personnel required, the challenges faced, and rewards realized from RTI. "Questions and Answers About RTI: A Guide to Success" is designed to guide a school or district through the…

  18. LHCb - Automated Testing Infrastructure for the Software Framework Gaudi

    CERN Multimedia

    Clemencic, M

    2009-01-01

    An extensive test suite is the first step towards the delivery of robust software, but it is not always easy to implement, especially in projects with many developers. An easy-to-use and flexible infrastructure for writing and executing tests reduces the work each developer has to do to instrument their packages with tests. At the same time, the infrastructure gives the tests a uniform look and feel and allows automated execution of the test suite. For Gaudi, we decided to develop the testing infrastructure on top of the free tool QMTest, already used in the LCG Application Area for the routine tests run in the nightly build system. The high flexibility of QMTest allowed us to integrate it into the Gaudi package structure. A specialized test class and some utility functions have been developed to simplify the definition of a test for a Gaudi-based application. Thanks to the testing infrastructure described here, we managed to quickly extend the standard Gaudi test suite and add tests to the main LHCb appli...
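
    The specialized test class is not reproduced in this record. As a rough illustration of the idea (a base class so that each test only declares its job options and expected output), here is a sketch using Python's standard unittest module rather than QMTest; the class, attribute, and launcher names are hypothetical:

```python
import subprocess
import unittest

# Illustrative only: the real infrastructure builds on QMTest, not unittest,
# and these names are hypothetical stand-ins.
class GaudiAppTestCase(unittest.TestCase):
    """Base class: subclasses declare a job and an expected output fragment."""
    command = ["gaudirun.py"]      # hypothetical application launcher
    options = []                   # job option files for the application
    expected_fragment = ""         # string the stdout must contain

    def test_run(self):
        if not self.options:
            self.skipTest("base class defines no job")
        result = subprocess.run(self.command + self.options,
                                capture_output=True, text=True, check=False)
        self.assertEqual(result.returncode, 0)
        self.assertIn(self.expected_fragment, result.stdout)

class ExampleJobTest(GaudiAppTestCase):
    # Hypothetical subclass: declares the job, inherits all the run logic.
    options = ["MyAlgorithm/options.py"]
    expected_fragment = "Application Manager Finalized successfully"

if __name__ == "__main__":
    unittest.main()
```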

  19. Effects of the Monoamine Uptake Inhibitors RTI-112 and RTI-113 on Cocaine- and Food-Maintained Responding in Rhesus Monkeys

    Science.gov (United States)

    Negus, SS; Mello, NK; Kimmel, HL; Howell, LL; Carroll, FI

    2009-01-01

    Cocaine blocks uptake of the monoamines dopamine, serotonin and norepinephrine, and monoamine uptake inhibitors constitute one class of drugs under consideration as candidate “agonist” medications for the treatment of cocaine abuse and dependence. The pharmacological selectivity of monoamine uptake inhibitors to block uptake of dopamine, serotonin and norepinephrine is one factor that may influence the efficacy and/or safety of these compounds as drug abuse treatment medications. To address this issue, the present study compared the effects of 7-day treatment with a non-selective monoamine uptake inhibitor (RTI-112) and a dopamine-selective uptake inhibitor (RTI-113) on cocaine- and food-maintained responding in rhesus monkeys. Monkeys (N=3) were trained to respond for cocaine injections (0.01 mg/kg/inj) and food pellets under a second-order schedule [FR2(VR16:S)] during alternating daily components of cocaine and food availability. Both RTI-112 (0.0032–0.01 mg/kg/hr) and RTI-113 (0.01–0.056 mg/kg/hr) produced dose-dependent, sustained and nearly complete elimination of cocaine self-administration. However, for both drugs, the potency to reduce cocaine self-administration was similar to the potency to reduce food-maintained responding. These findings do not support the hypothesis that pharmacological selectivity to block dopamine uptake is associated with behavioral selectivity to decrease cocaine- vs. food-maintained responding in rhesus monkeys. PMID:18755212

  20. Enabling software defined networking experiments in networked critical infrastructures

    Directory of Open Access Journals (Sweden)

    Béla Genge

    2014-05-01

    It is well known that Networked Critical Infrastructures (NCI), e.g., power plants, water plants, oil and gas distribution infrastructures, and electricity grids, are targeted by significant cyber threats. Nevertheless, recent research has shown that specific characteristics of NCI can be exploited to enable more efficient mitigation techniques, while novel techniques from the field of IP networks can bring significant advantages. In this paper we explore the interconnection of NCI communication infrastructures with Software Defined Networking (SDN)-enabled network topologies. SDN provides the means to create virtual networking services and to implement global networking decisions. It relies on OpenFlow to enable communication with remote devices and has recently been categorized as the “Next Big Technology” that will revolutionize the way decisions are implemented in switches and routers. The paper therefore documents the first steps towards enabling an SDN-NCI and presents the impact of a denial-of-service experiment on traffic from an XBee sensor network routed across an emulated SDN network.
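
    As a toy, framework-free illustration of the SDN idea the paper builds on (a controller installs prioritized match/action rules that switches then apply to traffic), consider the following Python sketch; it is not the paper's testbed code and uses no real OpenFlow API:

```python
# Toy model of SDN flow rules: controller decisions, expressed as
# match/action entries, applied by a switch. Not real OpenFlow code;
# the hosts and threshold below are illustrative assumptions.
FLOW_TABLE = [
    # (match predicate, action) in priority order
    (lambda pkt: pkt["src"] == "xbee-gw" and pkt["rate"] > 1000, "drop"),
    (lambda pkt: pkt["dst"] == "scada-master", "forward:port2"),
    (lambda pkt: True, "forward:port1"),  # default rule
]

def apply_rules(packet: dict) -> str:
    """Return the action of the first matching rule (switch behavior)."""
    for match, action in FLOW_TABLE:
        if match(packet):
            return action
    return "drop"

# Example: the rate-limit rule mitigates DoS-like sensor traffic.
print(apply_rules({"src": "xbee-gw", "dst": "scada-master", "rate": 5000}))
```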

  1. Auscope: Australian Earth Science Information Infrastructure using Free and Open Source Software

    Science.gov (United States)

    Woodcock, R.; Cox, S. J.; Fraser, R.; Wyborn, L. A.

    2013-12-01

    Since 2005 the Australian Government has supported a series of initiatives providing researchers with access to major research facilities and information networks necessary for world-class research. Starting with the National Collaborative Research Infrastructure Strategy (NCRIS), the Australian earth science community established an integrated national geoscience infrastructure system called AuScope. AuScope is now in operation, providing a number of components to assist in understanding the structure and evolution of the Australian continent. These include the acquisition of subsurface imaging, earth composition and age analysis, a virtual drill core library, geological process simulation, and a high resolution geospatial reference framework. To draw together information from across the earth science community in academia, industry and government, AuScope includes a nationally distributed information infrastructure. Free and Open Source Software (FOSS) has been a significant enabler in building the AuScope community and providing a range of interoperable services for accessing data and scientific software. A number of FOSS components have been created, adopted or upgraded to create a coherent, OGC-compliant Spatial Information Services Stack (SISS). SISS is now deployed at all Australian Geological Surveys, many universities and the CSIRO. Comprising a set of OGC catalogue and data services, and augmented with new vocabulary and identifier services, the SISS provides a comprehensive package for organisations to contribute their data to the AuScope network. This packaging and a variety of software testing and documentation activities enabled greater trust and notably reduced barriers to adoption. FOSS selection was important, not only for technical capability and robustness, but also for appropriate licensing and community models to ensure sustainability of the infrastructure in the long term. Government agencies were sensitive to these issues and Au

  2. Why Replacing Legacy Systems Is So Hard in Global Software Development: An Information Infrastructure Perspective

    DEFF Research Database (Denmark)

    Matthiesen, Stina; Bjørn, Pernille

    2015-01-01

    We report on an ethnographic study of an outsourcing global software development (GSD) setup between a Danish IT company and an Indian IT vendor developing a system to replace a legacy system for social services administration in Denmark. Physical distance and GSD collaboration issues tend to be obvious explanations for why GSD tasks fail to reach completion; however, we account for the difficulties within the technical nature of the software system task. We use the framework of information infrastructure to show how replacing a legacy system in governmental information infrastructures includes the work of tracing back to knowledge concerning law, technical specifications, as well as how information infrastructures have dynamically evolved over time. Not easily carried out in a GSD setup is the work around technical tasks that requires careful examination of mundane technical aspects, standards...

  3. A Comparison of Satisfaction Ratings of School Psychologists in RTI versus Non-RTI School Districts

    Science.gov (United States)

    Bade-White, Priscilla A.

    2012-01-01

    Teachers' satisfaction with school psychological services has been studied for more than 30 years. Few to no studies, however, are available that provide data about the perceptions of school psychologists regarding their perceived value within different service delivery models, particularly those involving Response to Intervention (RTI) models.…

  4. A Survey of Software Infrastructures and Frameworks for Ubiquitous Computing

    Directory of Open Access Journals (Sweden)

    Christoph Endres

    2005-01-01

    In this survey, we discuss 29 software infrastructures and frameworks which support the construction of distributed interactive systems. They range from small projects with one implemented prototype to large-scale research efforts, and they come from the fields of Augmented Reality (AR), Intelligent Environments, and Distributed Mobile Systems. In their own way, they can all be used to implement various aspects of the ubiquitous computing vision as described by Mark Weiser [60]. This survey is meant as a starting point for new projects, in order to choose an existing infrastructure for reuse, or to get an overview before designing a new one. It tries to provide a systematic, relatively broad (and necessarily not very deep) overview, while pointing to relevant literature for in-depth study of the systems discussed.

  5. Perceptions of School Psychologists Regarding Barriers to Response to Intervention (RTI) Implementation

    Science.gov (United States)

    Marrs, Heath; Little, Suzanne

    2014-01-01

    As Response to Intervention (RTI) models continue to be implemented, an important research question is how school psychologists are experiencing the transition to RTI practice. In order to better understand the experiences of school psychologists, interviews with seven practicing school psychologists regarding their perceptions of barriers and…

  6. Software Infrastructure to Enable Modeling & Simulation as a Service (M&SaaS), Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — This SBIR Phase 2 project will produce a software service infrastructure that enables most modeling and simulation (M&S) activities from code development and...

  7. US NDC Modernization Iteration E2 Prototyping Report: OSD & PC Software Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Prescott, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Marger, Bernard L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Chiu, Ailsa [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-12-01

    During the second iteration of the US NDC Modernization Elaboration phase (E2), the SNL US NDC Modernization project team completed follow-on COTS surveys & exploratory prototyping related to the Object Storage & Distribution (OSD) mechanism, and the processing control software infrastructure. This report summarizes the E2 prototyping work.

  8. Investigating the Decision-Making of Response to Intervention (RtI) Teams within the School Setting

    Science.gov (United States)

    Thur, Scott M.

    2015-01-01

    The purpose of this study was to measure decision-making influences within RtI teams. The study examined the factors that influence school personnel involved in three areas of RtI: determining which RtI measures and tools teams select and implement (i.e. Measures and Tools), evaluating the data-driven decisions that are made based on the…

  9. IBEX: an open infrastructure software platform to facilitate collaborative work in radiomics.

    Science.gov (United States)

    Zhang, Lifei; Fried, David V; Fave, Xenia J; Hunter, Luke A; Yang, Jinzhong; Court, Laurence E

    2015-03-01

    Radiomics, which is the high-throughput extraction and analysis of quantitative image features, has been shown to have considerable potential to quantify the tumor phenotype. However, at present, a lack of software infrastructure has impeded the development of radiomics and its applications. Therefore, the authors developed the imaging biomarker explorer (IBEX), an open infrastructure software platform that flexibly supports common radiomics workflow tasks such as multimodality image data import and review, development of feature extraction algorithms, model validation, and consistent data sharing among multiple institutions. The IBEX software package was developed using the MATLAB and C/C++ programming languages. The software architecture deploys the modern model-view-controller, unit testing, and function handle programming concepts to isolate each quantitative imaging analysis task, to validate if their relevant data and algorithms are fit for use, and to plug in new modules. On one hand, IBEX is self-contained and ready to use: it has implemented common data importers, common image filters, and common feature extraction algorithms. On the other hand, IBEX provides an integrated development environment on top of MATLAB and C/C++, so users are not limited to its built-in functions. In the IBEX developer studio, users can plug in, debug, and test new algorithms, extending IBEX's functionality. IBEX also supports quality assurance for data and feature algorithms: image data, regions of interest, and feature algorithm-related data can be reviewed, validated, and/or modified. More importantly, two key elements in collaborative workflows, the consistency of data sharing and the reproducibility of calculation result, are embedded in the IBEX workflow: image data, feature algorithms, and model validation including newly developed ones from different users can be easily and consistently shared so that results can be more easily reproduced between institutions.
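
    IBEX itself is written in MATLAB and C/C++. As a language-neutral sketch of the function-handle plug-in concept described above (feature-extraction algorithms registered as callables so new modules can be plugged in), here is a Python analogy; the registry and feature names are hypothetical:

```python
import numpy as np

# Python analogy of IBEX's function-handle plug-in idea (IBEX itself is
# MATLAB/C++); the registry and feature names here are hypothetical.
FEATURE_REGISTRY = {}

def feature(name):
    """Decorator that plugs a feature-extraction callable into the registry."""
    def register(func):
        FEATURE_REGISTRY[name] = func
        return func
    return register

@feature("mean_intensity")
def mean_intensity(image, roi_mask):
    return float(np.mean(image[roi_mask]))

@feature("volume_voxels")
def volume_voxels(image, roi_mask):
    return int(np.count_nonzero(roi_mask))

def extract_all(image, roi_mask):
    # Running every registered algorithm through one entry point keeps
    # results consistent across users who share the same registry.
    return {name: f(image, roi_mask) for name, f in FEATURE_REGISTRY.items()}

img = np.arange(16.0).reshape(4, 4)
roi = img > 8                      # toy region of interest
print(extract_all(img, roi))
```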

  10. Integrating Response to Intervention (RTI) with Neuropsychology: A Scientific Approach to Reading

    Science.gov (United States)

    Feifer, Steven G.

    2008-01-01

    This article integrates the fundamental components of both "Response to Intervention" (RTI) and cognitive neuropsychology when identifying reading disorders in children. Both proponents of RTI and cognitive neuropsychology agree the "discrepancy model" is not a reliable or valid method to identify learning disorders in school. In addition, both…

  11. HiCAT Software Infrastructure: Safe hardware control with object oriented Python

    Science.gov (United States)

    Moriarty, Christopher; Brooks, Keira; Soummer, Remi

    2018-01-01

    High contrast imaging for Complex Aperture Telescopes (HiCAT) is a testbed designed to demonstrate coronagraphy and wavefront control for segmented on-axis space telescopes such as envisioned for LUVOIR. To limit the air movements in the testbed room, software interfaces for several different hardware components were developed to completely automate operations. When developing software interfaces for many different pieces of hardware, unhandled errors are commonplace and can prevent the software from properly closing a hardware resource. Some fragile components (e.g., deformable mirrors) can be permanently damaged because of this. We present an object-oriented Python-based infrastructure to safely automate hardware control and optical experiments. Specifically, it supports conducting high-contrast imaging experiments while monitoring humidity and power status, with graceful shutdown processes even for unexpected errors. Python contains a construct called a “context manager” that allows you to define code to run when a resource is opened or closed. Context managers ensure that a resource is properly closed, even when unhandled errors occur. Harnessing the context manager design, we also use Python’s multiprocessing library to monitor humidity and power status without interrupting the experiment. Upon detecting a safety problem, the master process sends an event to the child process that triggers the context managers to gracefully close any open resources. This infrastructure allows us to queue up several experiments and safely operate the testbed without a human in the loop.
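
    The abstract describes the pattern precisely enough to sketch it: experiments run inside context managers in a child process, while the master process monitors safety and signals shutdown through a multiprocessing event. A minimal Python sketch follows; the device and sensor functions are hypothetical stand-ins, not HiCAT code:

```python
import multiprocessing as mp
import time
from contextlib import contextmanager

@contextmanager
def open_device(name):
    # Hypothetical stand-in for a fragile resource (e.g., a deformable
    # mirror controller). The finally-block runs even on unhandled
    # errors, which is the guarantee context managers provide.
    print(f"opening {name}")
    try:
        yield name
    finally:
        print(f"safely closing {name}")

def experiment(stop_event):
    # Child process: run queued experiment steps inside context managers;
    # on a safety event, exit the with-block so devices close gracefully.
    with open_device("deformable_mirror"):
        for _ in range(100):
            if stop_event.is_set():
                break              # context manager still closes the device
            time.sleep(0.1)        # ... one imaging step here ...

def sensors_faulted() -> bool:
    return False                   # placeholder humidity/power check

if __name__ == "__main__":
    stop = mp.Event()
    child = mp.Process(target=experiment, args=(stop,))
    child.start()
    while child.is_alive():        # master process monitors safety
        if sensors_faulted():
            stop.set()             # signal the child to shut down
        time.sleep(0.5)
    child.join()
```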

  12. SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology.

    Science.gov (United States)

    Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E; Troein, Carl; Millar, Andrew J; Goryanin, Igor; Gilmore, Stephen

    2013-03-01

    Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI's use of standard data formats. All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials.

  13. IBEX: An open infrastructure software platform to facilitate collaborative work in radiomics

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lifei; Yang, Jinzhong [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Fried, David V.; Fave, Xenia J.; Hunter, Luke A.; Court, Laurence E., E-mail: LECourt@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 and The University of Texas Graduate School of Biomedical Sciences at Houston, Houston, Texas 77030 (United States)

    2015-03-15

    Purpose: Radiomics, which is the high-throughput extraction and analysis of quantitative image features, has been shown to have considerable potential to quantify the tumor phenotype. However, at present, a lack of software infrastructure has impeded the development of radiomics and its applications. Therefore, the authors developed the imaging biomarker explorer (IBEX), an open infrastructure software platform that flexibly supports common radiomics workflow tasks such as multimodality image data import and review, development of feature extraction algorithms, model validation, and consistent data sharing among multiple institutions. Methods: The IBEX software package was developed using the MATLAB and C/C++ programming languages. The software architecture deploys the modern model-view-controller, unit testing, and function handle programming concepts to isolate each quantitative imaging analysis task, to validate if their relevant data and algorithms are fit for use, and to plug in new modules. On one hand, IBEX is self-contained and ready to use: it has implemented common data importers, common image filters, and common feature extraction algorithms. On the other hand, IBEX provides an integrated development environment on top of MATLAB and C/C++, so users are not limited to its built-in functions. In the IBEX developer studio, users can plug in, debug, and test new algorithms, extending IBEX’s functionality. IBEX also supports quality assurance for data and feature algorithms: image data, regions of interest, and feature algorithm-related data can be reviewed, validated, and/or modified. More importantly, two key elements in collaborative workflows, the consistency of data sharing and the reproducibility of calculation result, are embedded in the IBEX workflow: image data, feature algorithms, and model validation including newly developed ones from different users can be easily and consistently shared so that results can be more easily reproduced between

  14. IBEX: An open infrastructure software platform to facilitate collaborative work in radiomics

    International Nuclear Information System (INIS)

    Zhang, Lifei; Yang, Jinzhong; Fried, David V.; Fave, Xenia J.; Hunter, Luke A.; Court, Laurence E.

    2015-01-01

    Purpose: Radiomics, which is the high-throughput extraction and analysis of quantitative image features, has been shown to have considerable potential to quantify the tumor phenotype. However, at present, a lack of software infrastructure has impeded the development of radiomics and its applications. Therefore, the authors developed the imaging biomarker explorer (IBEX), an open infrastructure software platform that flexibly supports common radiomics workflow tasks such as multimodality image data import and review, development of feature extraction algorithms, model validation, and consistent data sharing among multiple institutions. Methods: The IBEX software package was developed using the MATLAB and C/C++ programming languages. The software architecture deploys the modern model-view-controller, unit testing, and function handle programming concepts to isolate each quantitative imaging analysis task, to validate if their relevant data and algorithms are fit for use, and to plug in new modules. On one hand, IBEX is self-contained and ready to use: it has implemented common data importers, common image filters, and common feature extraction algorithms. On the other hand, IBEX provides an integrated development environment on top of MATLAB and C/C++, so users are not limited to its built-in functions. In the IBEX developer studio, users can plug in, debug, and test new algorithms, extending IBEX’s functionality. IBEX also supports quality assurance for data and feature algorithms: image data, regions of interest, and feature algorithm-related data can be reviewed, validated, and/or modified. More importantly, two key elements in collaborative workflows, the consistency of data sharing and the reproducibility of calculation result, are embedded in the IBEX workflow: image data, feature algorithms, and model validation including newly developed ones from different users can be easily and consistently shared so that results can be more easily reproduced between

  15. dSDiVN: a distributed Software-Defined Networking architecture for Infrastructure-less Vehicular Networks

    OpenAIRE

    Alioua, Ahmed; Senouci, Sidi-Mohammed; Moussaoui, Samira

    2017-01-01

    In the last few years, the emerging network architecture paradigm of Software-Defined Networking (SDN) has become one of the most important technologies for managing large-scale networks such as Vehicular Ad-hoc Networks (VANETs). Recently, several works have shown interest in the use of the SDN paradigm in VANETs. SDN brings flexibility, scalability and management facility to current VANETs. However, almost all proposed Software-Defined VANET (SDVN) architectures are infrastructure-based. This pa...

  16. The Program for Climate Model Diagnosis and Intercomparison (PCMDI) Software Development: Applications, Infrastructure, and Middleware/Networks

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-06-30

    The status of and future plans for the Program for Climate Model Diagnosis and Intercomparison (PCMDI) hinge on software that PCMDI is either currently distributing or plans to distribute to the climate community in the near future. These software products include standard conventions, national and international federated infrastructures, and community analysis and visualization tools. This report also mentions other secondary software not necessarily led by or developed at PCMDI, to provide a complete picture of the overarching applications, infrastructures, and middleware/networks. Much of the software described anticipates the use of future technologies envisioned over the next one to ten years. These technologies, together with the software, will be the catalyst required to address extreme-scale data warehousing, scalability issues, and service-level requirements for a diverse set of well-known projects essential for predicting climate change. These tools, unlike the static analysis tools of the past, will support the co-existence of many users in a productive, shared virtual environment. This advanced technological world driven by extreme-scale computing and the data it generates will increase scientists’ productivity, exploit national and international relationships, and push research to new levels of understanding.

  17. Indenopyride derivative RTI-4587-073(l): a candidate for male contraception in stallions.

    Science.gov (United States)

    Pozor, Malgorzata A; Macpherson, Margo L; McDonnell, Sue M; Nollin, Maggie; Roser, Janet F; Love, Charles; Runyon, Scott; Thomas, Brian F; Troedsson, Mats H

    2013-12-01

    The objective of this study was to determine whether the indenopyridine derivative RTI-4587-073(l) was a good candidate for male contraception in horses. We hypothesized that a single administration of RTI-4587-073(l) causes significant suppression of testicular function in stallions without affecting sexual behavior. Three Miniature horse stallions received a single dose of 12.5 mg/kg RTI-4587-073(l) orally (group "treated"), whereas three other Miniature horse stallions received placebo only (group "control"). Semen was collected and evaluated from all stallions twice a week for three baseline weeks and 13 post-treatment weeks. Sexual behavior was video-recorded and analyzed. Testicular dimensions were measured using ultrasonography, and blood samples were drawn for endocrine evaluation once before treatment and once a week during the post-treatment period. A single administration of RTI-4587-073(l) caused severe oligoasthenozoospermia (low sperm number and low motility), shedding of large numbers of immature germ cells in semen, and increased FSH concentrations in treated stallions. These effects were fully reversible within ∼71 days. However, libido and copulatory behavior remained unchanged throughout the entire experiment. We concluded that RTI-4587-073(l) is a promising candidate male contraceptive for domestic stallions. Further research should be performed to test this compound for fertility control in wildlife and humans. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Software Attribution for Geoscience Applications in the Computational Infrastructure for Geodynamics

    Science.gov (United States)

    Hwang, L.; Dumit, J.; Fish, A.; Soito, L.; Kellogg, L. H.; Smith, M.

    2015-12-01

    Scientific software is largely developed by individual scientists and represents a significant intellectual contribution to the field. As the scientific culture and funding agencies move towards an expectation that software be open-source, there is a corresponding need for mechanisms to cite software, both to provide credit and recognition to developers, and to aid in discoverability of software and scientific reproducibility. We assess the geodynamic modeling community's current citation practices by examining more than 300 predominantly self-reported publications utilizing scientific software in the past 5 years that is available through the Computational Infrastructure for Geodynamics (CIG). Preliminary results indicate that authors cite and attribute software by citing (in rank order) peer-reviewed scientific publications, a user's manual, and/or a paper describing the software code. Attributions may be found directly in the text, in acknowledgements, in figure captions, or in footnotes. What is considered citable varies widely. Citations predominantly lack software version numbers or persistent identifiers with which to find the software package. Versioning may be implied through reference to a versioned user manual. Authors sometimes report code features used and whether they have modified the code. As an open-source community, CIG requests that researchers contribute their modifications to the repository. However, such modifications may not be contributed back to a repository code branch, decreasing the chances of discoverability and reproducibility. Survey results through CIG's Software Attribution for Geoscience Applications (SAGA) project suggest that a lack of knowledge, tools, and workflows for citing codes is a barrier to effectively implementing the emerging citation norms. Generated on-demand attributions on software landing pages and a prototype extensible plug-in to automatically generate attributions in codes are the first steps towards reproducibility.
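
    As a small illustration of the elements the survey finds missing from current citations (a version number and a persistent identifier), here is a hedged Python sketch that formats a software citation; the field layout follows common practice rather than any CIG-specified schema, and the example values are hypothetical:

```python
# Hypothetical sketch: assembling a software citation that carries the
# elements the survey found missing (version, persistent identifier).
def software_citation(authors, title, version, year, doi):
    """Format an APA-style software citation including version and DOI."""
    return (f"{'; '.join(authors)} ({year}). {title} (Version {version}) "
            f"[Computer software]. https://doi.org/{doi}")

# Example with made-up values:
print(software_citation(["Doe, J.", "Roe, R."], "ExampleCode",
                        "2.1.0", 2015, "10.0000/example"))
```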

  19. Particle Size Characterization of Water-Elutriated Libby Amphibole 2000 and RTI International Amosite

    Science.gov (United States)

    Lowers, Heather; Bern, Amy M.

    2009-01-01

    This report presents data on particle characterization analyzed by scanning electron microscopy on Libby amphibole collected by the U.S. Geological Survey in 2000 (LA2000) and amosite material collected by RTI International (RTI amosite). The particle characterization data were generated to support a portion of the Libby Action Plan. Prior to analysis, the raw LA2000 and RTI amosite materials were subjected to a preparation step. Each sample was water-elutriated by the U.S. Environmental Protection Agency (USEPA) Office of Research and Development, Research Triangle Park, using the methods generally described in another published report, and then delivered to the U.S. Geological Survey, Denver Microbeam Laboratory for analysis. Data presented here represent analyses performed by the U.S. Geological Survey, Denver Microbeam Laboratory and the USEPA National Enforcement Investigations Center. This report consists of two Excel spreadsheet files developed by the USEPA Region 8 Superfund Technical Assistance Unit that describe the particle size characterization of the LA2000 and RTI amosite materials, respectively. Multiple tabs and data entry cells exist in each spreadsheet and are defined herein.

  20. School Psychologists' Willingness to Implement RtI: The Role of Philosophical and Practical Readiness

    Science.gov (United States)

    Fan, Chung-Hau; Denner, Peter R.; Bocanegra, Joel O.; Ding, Yi

    2016-01-01

    After the change in IDEIA, different models of response to intervention (RtI) have been practiced widely in American school systems. School psychologists are in an important position to facilitate RtI practice and provide professional development in order to help their school systems successfully undergo this transformation. However, there is a…

  1. James Wallbank (Redundant Technology Initiative) (RTI) / James Wallbank ; interv. Tilman Baumgärtel

    Index Scriptorium Estoniae

    Wallbank, James

    2006-01-01

    James Wallbank (b. 1966) is the founder of RTI (the Redundant Technology Initiative), which since 2000 has run the Internet café "Access Space" in Sheffield. In this interview, conducted on 6 October 2000, Wallbank discusses RTI, which uses old computers in its installations, sculptures and other low-tech works; the installation at the exhibition "net_condition" at the Karlsruhe Center for Art and Media Technology (1999); the work "Network Low Tech Video Wall" (2000); and more.

  2. RTI: Court and Case Law--Confusion by Design

    Science.gov (United States)

    Daves, David P.; Walker, David W.

    2012-01-01

    Professional confusion, as well as case law confusion, exists concerning the fidelity and integrity of response to intervention (RTI) as a defensible procedure for identifying children as having a specific learning disability (SLD) under the Individuals with Disabilities Education Act (IDEA). Division is generated because of conflicting mandates…

  3. RTI and the Adolescent Reader: Responsive Literacy Instruction in Secondary Schools (Middle and High School). Language & Literacy Series Practitioners Bookshelf

    Science.gov (United States)

    Brozo, William G.

    2011-01-01

    "RTI and the Adolescent Reader" focuses exclusively on Response to Intervention (RTI) for literacy at the secondary level. In this accessible guide, William Brozo defines RTI and explains why and how it is considered a viable intervention model for adolescent readers. He analyzes the authentic structural, political, cultural, and teacher…

  4. School Psychologists' Stages of Concern with RTI Implementation

    Science.gov (United States)

    Bogue, Heidi; Marrs, Heath; Little, Suzanne

    2017-01-01

    Responsiveness to intervention has been an important change in models of service delivery within school systems in the recent past. However, there are a significant number of challenges to implementing the paradigm shift that these changes entail (Reschly 2008). Therefore, implementation of RTI varies among states, districts, and schools and some…

  5. Optimization of traffic distribution control in software-configurable infrastructure of virtual data center based on a simulation model

    Directory of Open Access Journals (Sweden)

    I. P. Bolodurina

    2017-01-01

    Currently, the share of cloud computing technology in companies' business processes is growing steadily. Although cloud computing reduces the cost of owning and operating IT infrastructure, a number of problems remain in the management of data centers. One such problem is the efficient use of the compute and network resources available to companies. One direction for optimization is the control of the traffic of cloud applications and services in data centers. Given the multi-tier architecture of modern data centers, this problem is not trivial. The advantage of modern virtual infrastructure is the ability to use software-configurable networks and software-configurable data storage. However, existing solutions with algorithmic optimization do not take into account a number of features of the network traffic formed by multiple classes of applications. This study addresses the problem of optimizing the distribution of the traffic of cloud applications and services in a software-controlled virtual data center infrastructure. A simulation model was developed that describes the traffic in the data center and in the software-configurable network segments involved in processing user requests for applications and services, in a network environment that includes a heterogeneous cloud platform and software-configurable data storage. The model made it possible to implement a traffic-management algorithm for cloud applications and to optimize access to the storage system through effective use of the data-transmission channel. Experimental studies showed that applying the developed algorithm reduces the response time of cloud applications and services and, as a result, improves the performance of user-request processing and reduces the number of failures.
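
    The optimization algorithm itself is not given in the abstract. As a toy sketch of the general idea (route each request to the storage channel with the lowest estimated response time), consider the following Python model, where the channel names, service rates, and the M/M/1-style delay estimate are illustrative assumptions only:

```python
# Toy sketch of channel selection for storage traffic: send each request
# stream to the channel with the lowest estimated response time. Rates
# and the M/M/1-style delay formula are illustrative assumptions.
channels = [
    {"name": "san-a", "mu": 120.0, "load": 80.0},   # requests/sec
    {"name": "san-b", "mu": 100.0, "load": 30.0},
]

def est_response_time(ch):
    """M/M/1 estimate: T = 1 / (mu - lambda); infinite if saturated."""
    spare = ch["mu"] - ch["load"]
    return float("inf") if spare <= 0 else 1.0 / spare

def route(request_rate):
    best = min(channels, key=est_response_time)
    best["load"] += request_rate      # account for the newly routed flow
    return best["name"]

print(route(10.0))  # picks san-b, the channel with more spare capacity
```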

  6. Quality assessment of observational studies in a drug-safety systematic review, comparison of two tools: the Newcastle–Ottawa Scale and the RTI item bank

    Directory of Open Access Journals (Sweden)

    Margulis AV

    2014-10-01

    Andrea V Margulis [1], Manel Pladevall [1], Nuria Riera-Guardia [1], Cristina Varas-Lorenzo [1], Lorna Hazell [2,3], Nancy D Berkman [4], Meera Viswanathan [4], Susana Perez-Gutthann [1]. Affiliations: [1] RTI Health Solutions, Barcelona, Spain; [2] Drug Safety Research Unit, Southampton, UK; [3] Associate Department of the School of Pharmacy and Biomedical Sciences, University of Portsmouth, Portsmouth, UK; [4] RTI International, Research Triangle Park, NC, USA. Background: The study objective was to compare the Newcastle–Ottawa Scale (NOS) and the RTI item bank (RTI-IB) and estimate interrater agreement using the RTI-IB within a systematic review on the cardiovascular safety of glucose-lowering drugs. Methods: We tailored both tools and added four questions to the RTI-IB. Two reviewers assessed the quality of the 44 included studies with both tools (independently for the RTI-IB) and agreed on which responses conveyed low, unclear, or high risk of bias. For each question in the RTI-IB (n=31), the observed interrater agreement was calculated as the percentage of studies given the same bias assessment by both reviewers; chance-adjusted interrater agreement was estimated with the first-order agreement coefficient (AC1) statistic. Results: The NOS required less tailoring and was easier to use than the RTI-IB, but the RTI-IB produced a more thorough assessment. The RTI-IB includes most of the domains measured in the NOS. Median observed interrater agreement for the RTI-IB was 75% (25th percentile [p25] = 61%; p75 = 89%); the median AC1 statistic was 0.64 (p25 = 0.51; p75 = 0.86). Conclusion: The RTI-IB facilitates a more complete quality assessment than the NOS but is more burdensome. The observed agreement and AC1 statistic in this study were higher than those reported by the RTI-IB's developers. Keywords: systematic review, meta-analysis, quality assessment, AC1
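
    The AC1 statistic used here is Gwet's first-order agreement coefficient; for two raters it combines observed agreement with a chance-agreement term built from averaged category marginals. A short Python sketch of the standard formula (not code from the study) follows:

```python
from collections import Counter

def gwet_ac1(ratings_a, ratings_b):
    """Gwet's first-order agreement coefficient (AC1) for two raters."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    categories = sorted(set(ratings_a) | set(ratings_b))
    q = len(categories)
    if q < 2:
        return 1.0  # only one category used: agreement is trivially perfect
    # Observed agreement: share of items rated identically by both raters.
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: (1 / (q - 1)) * sum over categories of pi * (1 - pi),
    # where pi is the mean of the two raters' marginal proportions.
    ca, cb = Counter(ratings_a), Counter(ratings_b)
    pe = sum(
        (p := (ca[c] + cb[c]) / (2 * n)) * (1 - p) for c in categories
    ) / (q - 1)
    return (pa - pe) / (1 - pe)

# Example: three-level bias ratings (low/unclear/high) from two reviewers.
a = ["low", "low", "high", "unclear", "low"]
b = ["low", "high", "high", "unclear", "low"]
print(round(gwet_ac1(a, b), 3))  # ~0.712 for this toy data
```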

  7. RTI Strategies That Work in the K-2 Classroom

    Science.gov (United States)

    Johnson, Eli; Karns, Michelle

    2011-01-01

    Targeted specifically to K-2 classrooms, the 25 Response-to-Intervention (RTI) strategies in this book are research-based and perfect for teachers who want to expand their toolbox of classroom interventions that work! Contents include: (1) Listening Strategies--Help students focus and understand; (2) Reading Strategies--Help students comprehend…

  8. RTI Confusion in the Case Law and the Legal Commentary

    Science.gov (United States)

    Zirkel, Perry A.

    2011-01-01

    This article expresses the position that the current legal commentary and cases do not sufficiently differentiate response to intervention (RTI) from the various forms of general education interventions that preceded it, thus compounding confusion in professional practice as to legally defensible procedures for identifying children as having a…

  9. Critical Practice Analysis of Special Education Policy: An RTI Example

    Science.gov (United States)

    Thorius, Kathleen A. King; Maxcy, Brendan D.

    2015-01-01

    Since 1997, revisions to the Individuals With Disabilities Education Act (IDEA) have shown promise for addressing special education equity concerns: For example, states have the option to use response to intervention (RTI) for determining and thus reducing inappropriate disability determination, and states and districts are required to assess and…

  10. LCG/AA build infrastructure

    International Nuclear Information System (INIS)

    Hodgkins, Alex Liam; Diez, Victor; Hegner, Benedikt

    2012-01-01

    The Software Process and Infrastructure (SPI) project provides a build infrastructure for regular integration testing and release of the LCG Applications Area software stack. In the past, regular builds have been provided using a system which has been constantly growing to include more features like server-client communication, long-term build history and a summary web interface using present-day web technologies. However, the ad-hoc style of software development resulted in a setup that is hard to monitor, inflexible and difficult to expand. The new version of the infrastructure is based on the Django Python framework, which allows for a structured and modular design, facilitating later additions. Transparency in the workflows and ease of monitoring has been one of the priorities in the design. Formerly missing functionality like on-demand builds or release triggering will support the transition to a more agile development process.

  11. Software Infrastructure for Computer-aided Drug Discovery and Development, a Practical Example with Guidelines.

    Science.gov (United States)

    Moretti, Loris; Sartori, Luca

    2016-09-01

    In the field of Computer-Aided Drug Discovery and Development (CADDD), the proper software infrastructure is essential for everyday investigations. The creation of such an environment should be carefully planned and implemented with certain features in order to be productive and efficient. Here we describe a solution to integrate standard computational services into a functional unit that empowers modelling applications for drug discovery. This system allows users with various levels of expertise to run in silico experiments automatically and without the burden of file formatting for different software, managing the actual computation, keeping track of the activities and graphical rendering of the structural outcomes. To showcase the potential of this approach, the performances of five different docking programs on an HIV-1 protease test set are presented. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
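    The article does not publish its integration code, but the central idea (hiding each docking program's input formats and command syntax behind one uniform call) can be sketched as below; the engine names and command templates are invented for illustration.

        import subprocess
        from pathlib import Path

        # Hypothetical command templates: real docking tools differ in flags and
        # formats, which is exactly the burden such an infrastructure hides.
        DOCKERS = {
            "engine_a": "dock_a --receptor {rec} --ligand {lig} --out {out}",
            "engine_b": "dock_b -r {rec} -l {lig} -o {out}",
        }

        def run_docking(engine: str, receptor: Path, ligand: Path, workdir: Path) -> Path:
            """Launch one docking engine and return the path to its pose file."""
            out = workdir / f"{ligand.stem}_{engine}.poses"
            cmd = DOCKERS[engine].format(rec=receptor, lig=ligand, out=out).split()
            subprocess.run(cmd, check=True, cwd=workdir)  # raise on failure
            return out

        # A user needs no knowledge of each tool's syntax to compare engines:
        # for engine in DOCKERS:
        #     run_docking(engine, Path("protease.pdbqt"), Path("lig.pdbqt"), Path("run1"))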

  12. Relations between the CCSS and RTI in Literacy and Language

    Science.gov (United States)

    Wixson, Karen K.; Lipson, Marjorie Y.

    2012-01-01

    Initiatives such as Response to Intervention (RTI) and the Common Core State Standards for English Language Arts (CCSS-ELA) have the potential to positively impact progress toward the goal of literacy for all. Because the CCSS-ELA will guide the content of the curriculum, instruction and assessment in the large number of adopting states, they will…

  13. The Computational Infrastructure for Geodynamics: An Example of Software Curation and Citation in the Geodynamics Community

    Science.gov (United States)

    Hwang, L.; Kellogg, L. H.

    2017-12-01

    Curation of software promotes discoverability and accessibility, and works hand in hand with scholarly citation to ascribe value to, and provide recognition for, software development. To meet this challenge, the Computational Infrastructure for Geodynamics (CIG) maintains a community repository built on custom and open tools to promote discovery, access, identification, credit, and provenance of research software for the geodynamics community. CIG (geodynamics.org) originated from recognition of the tremendous effort required to develop sound software and the need to reduce duplication of effort and to sustain community codes. CIG curates software across 6 domains and has developed and follows software best practices that include establishing test cases, documentation, and a citable publication for each software package. CIG software landing web pages provide access to current and past releases; many are also accessible through the CIG community repository on github. CIG has now developed abc (attribution builder for citation) to enable software users to give credit to software developers. abc uses zenodo as an archive and as the mechanism to obtain a unique identifier (DOI) for scientific software. To assemble the metadata, we searched the software's documentation and research publications and then asked the primary developers to verify it. In this process, we have learned that each development community approaches software attribution differently. The metadata gathered are based on guidelines established by groups such as FORCE11 and OntoSoft. The rollout of abc is gradual, as developers are forward-looking and rarely willing to go back and archive prior releases in zenodo. Going forward, all actively developed packages will utilize the zenodo and github integration to automate the archival process when a new release is issued. How to handle legacy software, multi-authored libraries, and assigning roles to software remain open issues.
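    The zenodo REST API that underpins this workflow can be exercised directly; the sketch below, which is not abc's actual code, creates a deposition and attaches minimal attribution metadata (the token, title, and creator are placeholders).

        import requests

        ZENODO = "https://zenodo.org/api/deposit/depositions"
        TOKEN = "..."  # personal access token (placeholder)

        # Step 1: create an empty deposition.
        dep = requests.post(ZENODO, params={"access_token": TOKEN}, json={}).json()

        # Step 2: attach attribution metadata gathered from docs and publications.
        metadata = {
            "metadata": {
                "title": "ExamplePackage v1.2.0",        # hypothetical software release
                "upload_type": "software",
                "description": "Release archived for citation.",
                "creators": [{"name": "Developer, Primary"}],
            }
        }
        requests.put(f"{ZENODO}/{dep['id']}",
                     params={"access_token": TOKEN}, json=metadata).raise_for_status()
        # After the release files are uploaded and published, zenodo mints the DOI.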

  14. Documentation of Infrastructure

    DEFF Research Database (Denmark)

    Workspace

    2003-01-01

    This report describes the software infrastructure developed within the WorkSPACE project, both from a software architectural point of view and from a user point of view. We first give an overview of the system architecture, then go on to present the more prominent features of the 3D graphical...

  15. Rise of the build infrastructure

    International Nuclear Information System (INIS)

    Eulisse, Giulio; Muzaffar, Shahzad; Abdurachmanov, David; Mendez, David

    2014-01-01

    CMS Offline Software, CMSSW, is an extremely large software project, with roughly 3 million lines of code, two hundred active developers, and two to three active development branches. Given the scale of the problem, both from a technical and a human point of view, keeping such a large project on track and bug free, and delivering builds for different architectures, is a challenge in itself. Moreover, the challenges posed by the future migration of CMSSW to multithreading also require adapting and improving our QA tools. We present the work done in the last two years on our build and integration infrastructure, particularly in the form of improvements to our build tools, the simplification and extensibility of our build infrastructure, and the new features added to our QA and profiling tools. Finally, we present our plans for future directions in code management and how this reflects on our workflows and the underlying software infrastructure.

  16. MENSTRUAL HYGIENE PRACTICES AND RTI AMONG EVER-MARRIED WOMEN IN RURAL SLUM

    Directory of Open Access Journals (Sweden)

    Sadhana Singh

    2011-06-01

    Full Text Available Background: Considering the huge burden of RTI across community-based study settings (either iatrogenic or endogenous, and not necessarily sexually transmitted), menstrual hygiene practices of reproductive-age women are documented as a key determinant/predictor of RTI and bear a causal association with key socio-demographic attributes. This is more so in view of the vulnerability to health risk, access to treatment and reduced economic choice of a marginal and disadvantaged population like ‘in-migrants’/itinerants. Objectives: 1. To study the menstrual hygiene practices of ever-married ‘in-migrant’ women from Dehradun as a key determinant of reproductive health needs. 2. To establish a causal association between menstrual hygiene practices and (i) key socio-demographic attributes and (ii) RTI. Methodology: An observational (cross-sectional) study was designed with a probability sample from 5033 ever-married women from 06 ‘make-shift settlements’/slums along the immediate precincts, i.e. 50 meters into the mainland from the banks of the rivers ‘Chandrabhaga’, ‘Ganga’, ‘Song’ and ‘Rispana’, all in the district of Dehradun. Results & Conclusion: The present study findings revealed that, as a key determinant of reproductive health needs, the menstrual hygiene practices of the study population bore a statistically significant association with their (i) literacy status or education, (ii) religion, (iii) key reproductive tract infection symptoms and (iv) socio-economic status. The findings reinforced the felt need to address the knowledge, attitudes and practices of the disadvantaged study population through appropriate behaviour change communication, to build community and provider capacity, and to devise strategies to deliver services in such resource-poor settings, keeping in view the four A's of primary health care.

  17. Contextual-Analysis for Infrastructure Awareness Systems

    DEFF Research Database (Denmark)

    Ramos, Juan David Hincapie; Tabard, Aurelien; Alt, Florian

    Infrastructures are persistent socio-technical systems used to deliver different kinds of services. Researchers have looked into how awareness of infrastructures in the areas of sustainability [6, 10] and software appropriation [11] can be provided. However, designing infrastructure-aware systems has specific requirements, which are often ignored. In this paper we explore the challenges when developing infrastructure awareness systems based on contextual analysis, and propose guidelines for enhancing the design process.

  18. Evolving a lingua franca and associated software infrastructure for computational systems biology: the Systems Biology Markup Language (SBML) project.

    Science.gov (United States)

    Hucka, M; Finney, A; Bornstein, B J; Keating, S M; Shapiro, B E; Matthews, J; Kovitz, B L; Schilstra, M J; Funahashi, A; Doyle, J C; Kitano, H

    2004-06-01

    Biologists are increasingly recognising that computational modelling is crucial for making sense of the vast quantities of complex experimental data that are now being collected. The systems biology field needs agreed-upon information standards if models are to be shared, evaluated and developed cooperatively. Over the last four years, our team has been developing the Systems Biology Markup Language (SBML) in collaboration with an international community of modellers and software developers. SBML has become a de facto standard format for representing formal, quantitative and qualitative models at the level of biochemical reactions and regulatory networks. In this article, we summarise the current and upcoming versions of SBML and our efforts at developing software infrastructure for supporting and broadening its use. We also provide a brief overview of the many SBML-compatible software tools available today.
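    As a small, hedged illustration of the format (a toy model of ours, not one from the article), the python-libsbml bindings can assemble and serialize a minimal SBML Level 3 document in a few calls.

        import libsbml

        # A minimal SBML Level 3 Version 1 model: one compartment, one species.
        doc = libsbml.SBMLDocument(3, 1)
        model = doc.createModel()
        model.setId("toy_model")

        comp = model.createCompartment()
        comp.setId("cell")
        comp.setSize(1.0)
        comp.setConstant(True)

        sp = model.createSpecies()
        sp.setId("X")
        sp.setCompartment("cell")
        sp.setInitialAmount(10.0)
        sp.setHasOnlySubstanceUnits(False)  # required species flags in Level 3
        sp.setBoundaryCondition(False)
        sp.setConstant(False)

        print(libsbml.writeSBMLToString(doc))  # the exchangeable XML representation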

  19. A cyber infrastructure for the SKA Telescope Manager

    Science.gov (United States)

    Barbosa, Domingos; Barraca, João. P.; Carvalho, Bruno; Maia, Dalmiro; Gupta, Yashwant; Natarajan, Swaminathan; Le Roux, Gerhard; Swart, Paul

    2016-07-01

    The Square Kilometre Array Telescope Manager (SKA TM) will be responsible for assisting the SKA Operations and Observation Management, carrying out system diagnosis and collecting monitoring and control (M&C) data from the SKA subsystems and components. To provide adequate compute resources, scalability, operation continuity and high availability, as well as strict Quality of Service, the TM cyber-infrastructure (embodied in the Local Infrastructure, LINFRA) consists of COTS hardware and infrastructural software (for example: server monitoring software, host operating system, virtualization software, device firmware), providing a specially tailored Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) solution. The TM infrastructure provides services in the form of computational power, software-defined networking, power, storage abstractions, and high-level, state-of-the-art IaaS and PaaS management interfaces. This cyber platform will be tailored to each of the two SKA Phase 1 telescope instances (SKA_MID in South Africa and SKA_LOW in Australia), each presenting different computational and storage infrastructures and conditioned by location. This cyber platform will provide a compute model enabling TM to manage the deployment and execution of its multiple components (observation scheduler, proposal submission tools, M&C components, forensic tools, several databases, etc.). In this sense, the TM LINFRA is primarily focused on the provision of isolated instances, mostly resorting to virtualization technologies, while defaulting to bare hardware if specifically required by performance, security, availability, or other requirements.
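    The abstract specifies an IaaS layer built on virtualization but names no particular product; purely as an assumption for illustration, provisioning one isolated TM component on an OpenStack-style cloud via the openstacksdk library might look like this (the cloud entry and all identifiers are placeholders).

        import openstack

        conn = openstack.connect(cloud="tm_linfra")  # hypothetical clouds.yaml entry

        server = conn.compute.create_server(
            name="observation-scheduler-01",         # one isolated TM component
            image_id="IMAGE_UUID",                   # placeholder
            flavor_id="FLAVOR_UUID",                 # placeholder
            networks=[{"uuid": "NETWORK_UUID"}],     # placeholder
        )
        conn.compute.wait_for_server(server)         # block until the instance is ACTIVE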

  20. Computational Infrastructure for Nuclear Astrophysics

    International Nuclear Information System (INIS)

    Smith, Michael S.; Hix, W. Raphael; Bardayan, Daniel W.; Blackmon, Jeffery C.; Lingerfelt, Eric J.; Scott, Jason P.; Nesaraja, Caroline D.; Chae, Kyungyuk; Guidry, Michael W.; Koura, Hiroyuki; Meyer, Richard A.

    2006-01-01

    A Computational Infrastructure for Nuclear Astrophysics has been developed to streamline the inclusion of the latest nuclear physics data in astrophysics simulations. The infrastructure consists of a platform-independent suite of computer codes that is freely available online at nucastrodata.org. Features of, and future plans for, this software suite are given

  1. Conceptualizing RTI in 21st-Century Secondary Science Classrooms: Video Games' Potential to Provide Tiered Support and Progress Monitoring for Students with Learning Disabilities

    Science.gov (United States)

    Marino, Matthew T.; Beecher, Constance C.

    2010-01-01

    Secondary schools across the United States are adopting response to intervention (RTI) as a means to identify students with learning disabilities (LD) and provide tiered instructional interventions that benefit all students. The majority of current RTI research focuses on students with reading difficulties in elementary school classrooms.…

  2. The ATLAS Simulation Infrastructure

    CERN Document Server

    Aad, G.; Abdallah, J.; Abdelalim, A.A.; Abdesselam, A.; Abdinov, O.; Abi, B.; Abolins, M.; Abramowicz, H.; Abreu, H.; Acharya, B.S.; Adams, D.L.; Addy, T.N.; Adelman, J.; Adorisio, C.; Adragna, P.; Adye, T.; Aefsky, S.; Aguilar-Saavedra, J.A.; Aharrouche, M.; Ahlen, S.P.; Ahles, F.; Ahmad, A.; Ahmed, H.; Ahsan, M.; Aielli, G.; Akdogan, T.; Akesson, T.P.A.; Akimoto, G.; Akimov, A.V.; Aktas, A.; Alam, M.S.; Alam, M.A.; Albrand, S.; Aleksa, M.; Aleksandrov, I.N.; Alexa, C.; Alexander, G.; Alexandre, G.; Alexopoulos, T.; Alhroob, M.; Aliev, M.; Alimonti, G.; Alison, J.; Aliyev, M.; Allport, P.P.; Allwood-Spiers, S.E.; Almond, J.; Aloisio, A.; Alon, R.; Alonso, A.; Alviggi, M.G.; Amako, K.; Amelung, C.; Amorim, A.; Amoros, G.; Amram, N.; Anastopoulos, C.; Andeen, T.; Anders, C.F.; Anderson, K.J.; Andreazza, A.; Andrei, V.; Anduaga, X.S.; Angerami, A.; Anghinolfi, F.; Anjos, N.; Annovi, A.; Antonaki, A.; Antonelli, M.; Antonelli, S.; Antos, J.; Antunovic, B.; Anulli, F.; Aoun, S.; Arabidze, G.; Aracena, I.; Arai, Y.; Arce, A.T.H.; Archambault, J.P.; Arfaoui, S.; Arguin, J-F.; Argyropoulos, T.; Arik, M.; Armbruster, A.J.; Arnaez, O.; Arnault, C.; Artamonov, A.; Arutinov, D.; Asai, M.; Asai, S.; Asfandiyarov, R.; Ask, S.; Asman, B.; Asner, D.; Asquith, L.; Assamagan, K.; Astbury, A.; Astvatsatourov, A.; Atoian, G.; Auerbach, B.; Augsten, K.; Aurousseau, M.; Austin, N.; Avolio, G.; Avramidou, R.; Axen, D.; Ay, C.; Azuelos, G.; Azuma, Y.; Baak, M.A.; Bach, A.M.; Bachacou, H.; Bachas, K.; Backes, M.; Badescu, E.; Bagnaia, P.; Bai, Y.; Bain, T.; Baines, J.T.; Baker, O.K.; Baker, M.D.; Baker, S; Baltasar Dos Santos Pedrosa, F.; Banas, E.; Banerjee, P.; Banerjee, S.; Banfi, D.; Bangert, A.; Bansal, V.; Baranov, S.P.; Baranov, S.; Barashkou, A.; Barber, T.; Barberio, E.L.; Barberis, D.; Barbero, M.; Bardin, D.Y.; Barillari, T.; Barisonzi, M.; Barklow, T.; Barlow, N.; Barnett, B.M.; Barnett, R.M.; Baroncelli, A.; Barr, A.J.; Barreiro, F.; Barreiro Guimaraes da Costa, J.; Barrillon, P.; Bartoldus, R.; Bartsch, D.; Bates, R.L.; Batkova, L.; Batley, J.R.; Battaglia, A.; Battistin, M.; Bauer, F.; Bawa, H.S.; Bazalova, M.; Beare, B.; Beau, T.; Beauchemin, P.H.; Beccherle, R.; Becerici, N.; Bechtle, P.; Beck, G.A.; Beck, H.P.; Beckingham, M.; Becks, K.H.; Beddall, A.J.; Beddall, A.; Bednyakov, V.A.; Bee, C.; Begel, M.; Behar Harpaz, S.; Behera, P.K.; Beimforde, M.; Belanger-Champagne, C.; Bell, P.J.; Bell, W.H.; Bella, G.; Bellagamba, L.; Bellina, F.; Bellomo, M.; Belloni, A.; Belotskiy, K.; Beltramello, O.; Ben Ami, S.; Benary, O.; Benchekroun, D.; Bendel, M.; Benedict, B.H.; Benekos, N.; Benhammou, Y.; Benincasa, G.P.; Benjamin, D.P.; Benoit, M.; Bensinger, J.R.; Benslama, K.; Bentvelsen, S.; Beretta, M.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Berghaus, F.; Berglund, E.; Beringer, J.; Bernat, P.; Bernhard, R.; Bernius, C.; Berry, T.; Bertin, A.; Besana, M.I.; Besson, N.; Bethke, S.; Bianchi, R.M.; Bianco, M.; Biebel, O.; Biesiada, J.; Biglietti, M.; Bilokon, H.; Bindi, M.; Binet, S.; Bingul, A.; Bini, C.; Biscarat, C.; Bitenc, U.; Black, K.M.; Blair, R.E.; Blanchard, J-B; Blanchot, G.; Blocker, C.; Blondel, A.; Blum, W.; Blumenschein, U.; Bobbink, G.J.; Bocci, A.; Boehler, M.; Boek, J.; Boelaert, N.; Boser, S.; Bogaerts, J.A.; Bogouch, A.; Bohm, C.; Bohm, J.; Boisvert, V.; Bold, T.; Boldea, V.; Bondarenko, V.G.; Bondioli, M.; Boonekamp, M.; Bordoni, S.; Borer, C.; Borisov, A.; Borissov, G.; Borjanovic, I.; Borroni, S.; Bos, K.; Boscherini, D.; Bosman, M.; Boterenbrood, H.; Bouchami, J.; Boudreau, 
J.; Bouhova-Thacker, E.V.; Boulahouache, C.; Bourdarios, C.; Boveia, A.; Boyd, J.; Boyko, I.R.; Bozovic-Jelisavcic, I.; Bracinik, J.; Braem, A.; Branchini, P.; Brandenburg, G.W.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J.E.; Braun, H.M.; Brelier, B.; Bremer, J.; Brenner, R.; Bressler, S.; Britton, D.; Brochu, F.M.; Brock, I.; Brock, R.; Brodet, E.; Bromberg, C.; Brooijmans, G.; Brooks, W.K.; Brown, G.; Bruckman de Renstrom, P.A.; Bruncko, D.; Bruneliere, R.; Brunet, S.; Bruni, A.; Bruni, G.; Bruschi, M.; Bucci, F.; Buchanan, J.; Buchholz, P.; Buckley, A.G.; Budagov, I.A.; Budick, B.; Buscher, V.; Bugge, L.; Bulekov, O.; Bunse, M.; Buran, T.; Burckhart, H.; Burdin, S.; Burgess, T.; Burke, S.; Busato, E.; Bussey, P.; Buszello, C.P.; Butin, F.; Butler, B.; Butler, J.M.; Buttar, C.M.; Butterworth, J.M.; Byatt, T.; Caballero, J.; Cabrera Urban, S.; Caforio, D.; Cakir, O.; Calafiura, P.; Calderini, G.; Calfayan, P.; Calkins, R.; Caloba, L.P.; Calvet, D.; Camarri, P.; Cameron, D.; Campana, S.; Campanelli, M.; Canale, V.; Canelli, F.; Canepa, A.; Cantero, J.; Capasso, L.; Capeans Garrido, M.D.M.; Caprini, I.; Caprini, M.; Capua, M.; Caputo, R.; Caramarcu, C.; Cardarelli, R.; Carli, T.; Carlino, G.; Carminati, L.; Caron, B.; Caron, S.; Carrillo Montoya, G.D.; Carron Montero, S.; Carter, A.A.; Carter, J.R.; Carvalho, J.; Casadei, D.; Casado, M.P.; Cascella, M.; Castaneda Hernandez, A.M.; Castaneda-Miranda, E.; Castillo Gimenez, V.; Castro, N.F.; Cataldi, G.; Catinaccio, A.; Catmore, J.R.; Cattai, A.; Cattani, G.; Caughron, S.; Cauz, D.; Cavalleri, P.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Ceradini, F.; Cerqueira, A.S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cetin, S.A.; Chafaq, A.; Chakraborty, D.; Chan, K.; Chapman, J.D.; Chapman, J.W.; Chareyre, E.; Charlton, D.G.; Chavda, V.; Cheatham, S.; Chekanov, S.; Chekulaev, S.V.; Chelkov, G.A.; Chen, H.; Chen, S.; Chen, X.; Cheplakov, A.; Chepurnov, V.F.; Cherkaoui El Moursli, R.; Tcherniatine, V.; Chesneanu, D.; Cheu, E.; Cheung, S.L.; Chevalier, L.; Chevallier, F.; Chiarella, V.; Chiefari, G.; Chikovani, L.; Childers, J.T.; Chilingarov, A.; Chiodini, G.; Chizhov, V.; Choudalakis, G.; Chouridou, S.; Christidi, I.A.; Christov, A.; Chromek-Burckhart, D.; Chu, M.L.; Chudoba, J.; Ciapetti, G.; Ciftci, A.K.; Ciftci, R.; Cinca, D.; Cindro, V.; Ciobotaru, M.D.; Ciocca, C.; Ciocio, A.; Cirilli, M.; Citterio, M.; Clark, A.; Clark, P.J.; Cleland, W.; Clemens, J.C.; Clement, B.; Clement, C.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Coggeshall, J.; Cogneras, E.; Colijn, A.P.; Collard, C.; Collins, N.J.; Collins-Tooth, C.; Collot, J.; Colon, G.; Conde Muino, P.; Coniavitis, E.; Consonni, M.; Constantinescu, S.; Conta, C.; Conventi, F.; Cooke, M.; Cooper, B.D.; Cooper-Sarkar, A.M.; Cooper-Smith, N.J.; Copic, K.; Cornelissen, T.; Corradi, M.; Corriveau, F.; Corso-Radu, A.; Cortes-Gonzalez, A.; Cortiana, G.; Costa, G.; Costa, M.J.; Costanzo, D.; Costin, T.; Cote, D.; Coura Torres, R.; Courneyea, L.; Cowan, G.; Cowden, C.; Cox, B.E.; Cranmer, K.; Cranshaw, J.; Cristinziani, M.; Crosetti, G.; Crupi, R.; Crepe-Renaudin, S.; Cuenca Almenar, C.; Cuhadar Donszelmann, T.; Curatolo, M.; Curtis, C.J.; Cwetanski, P.; Czyczula, Z.; D'Auria, S.; D'Onofrio, M.; D'Orazio, A.; Da Via, C; Dabrowski, W.; Dai, T.; Dallapiccola, C.; Dallison, S.J.; Daly, C.H.; Dam, M.; Danielsson, H.O.; Dannheim, D.; Dao, V.; Darbo, G.; Darlea, G.L.; Davey, W.; Davidek, T.; Davidson, N.; Davidson, R.; Davies, M.; Davison, A.R.; Dawson, I.; Daya, R.K.; De, K.; de 
Asmundis, R.; De Castro, S.; De Castro Faria Salgado, P.E.; De Cecco, S.; de Graat, J.; De Groot, N.; de Jong, P.; De Mora, L.; De Oliveira Branco, M.; De Pedis, D.; De Salvo, A.; De Sanctis, U.; De Santo, A.; De Vivie De Regie, J.B.; De Zorzi, G.; Dean, S.; Dedovich, D.V.; Degenhardt, J.; Dehchar, M.; Del Papa, C.; Del Peso, J.; Del Prete, T.; Dell'Acqua, A.; Dell'Asta, L.; Della Pietra, M.; della Volpe, D.; Delmastro, M.; Delsart, P.A.; Deluca, C.; Demers, S.; Demichev, M.; Demirkoz, B.; Deng, J.; Deng, W.; Denisov, S.P.; Derkaoui, J.E.; Derue, F.; Dervan, P.; Desch, K.; Deviveiros, P.O.; Dewhurst, A.; DeWilde, B.; Dhaliwal, S.; Dhullipudi, R.; Di Ciaccio, A.; Di Ciaccio, L.; Di Domenico, A.; Di Girolamo, A.; Di Girolamo, B.; Di Luise, S.; Di Mattia, A.; Di Nardo, R.; Di Simone, A.; Di Sipio, R.; Diaz, M.A.; Diblen, F.; Diehl, E.B.; Dietrich, J.; Dietzsch, T.A.; Diglio, S.; Dindar Yagci, K.; Dingfelder, J.; Dionisi, C.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djilkibaev, R.; Djobava, T.; do Vale, M.A.B.; Do Valle Wemans, A.; Doan, T.K.O.; Dobos, D.; Dobson, E.; Dobson, M.; Doglioni, C.; Doherty, T.; Dolejsi, J.; Dolenc, I.; Dolezal, Z.; Dolgoshein, B.A.; Dohmae, T.; Donega, M.; Donini, J.; Dopke, J.; Doria, A.; Dos Anjos, A.; Dotti, A.; Dova, M.T.; Doxiadis, A.; Doyle, A.T.; Drasal, Z.; Dris, M.; Dubbert, J.; Duchovni, E.; Duckeck, G.; Dudarev, A.; Dudziak, F.; Duhrssen, M.; Duflot, L.; Dufour, M-A.; Dunford, M.; Duran Yildiz, H.; Dushkin, A.; Duxfield, R.; Dwuznik, M.; Duren, M.; Ebenstein, W.L.; Ebke, J.; Eckweiler, S.; Edmonds, K.; Edwards, C.A.; Egorov, K.; Ehrenfeld, W.; Ehrich, T.; Eifert, T.; Eigen, G.; Einsweiler, K.; Eisenhandler, E.; Ekelof, T.; El Kacimi, M.; Ellert, M.; Elles, S.; Ellinghaus, F.; Ellis, K.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Emeliyanov, D.; Engelmann, R.; Engl, A.; Epp, B.; Eppig, A.; Erdmann, J.; Ereditato, A.; Eriksson, D.; Ermoline, I.; Ernst, J.; Ernst, M.; Ernwein, J.; Errede, D.; Errede, S.; Ertel, E.; Escalier, M.; Escobar, C.; Espinal Curull, X.; Esposito, B.; Etienvre, A.I.; Etzion, E.; Evans, H.; Fabbri, L.; Fabre, C.; Facius, K.; Fakhrutdinov, R.M.; Falciano, S.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farley, J.; Farooque, T.; Farrington, S.M.; Farthouat, P.; Fassnacht, P.; Fassouliotis, D.; Fatholahzadeh, B.; Fayard, L.; Fayette, F.; Febbraro, R.; Federic, P.; Fedin, O.L.; Fedorko, W.; Feligioni, L.; Felzmann, C.U.; Feng, C.; Feng, E.J.; Fenyuk, A.B.; Ferencei, J.; Ferland, J.; Fernandes, B.; Fernando, W.; Ferrag, S.; Ferrando, J.; Ferrara, V.; Ferrari, A.; Ferrari, P.; Ferrari, R.; Ferrer, A.; Ferrer, M.L.; Ferrere, D.; Ferretti, C.; Fiascaris, M.; Fiedler, F.; Filipcic, A.; Filippas, A.; Filthaut, F.; Fincke-Keeler, M.; Fiolhais, M.C.N.; Fiorini, L.; Firan, A.; Fischer, G.; Fisher, M.J.; Flechl, M.; Fleck, I.; Fleckner, J.; Fleischmann, P.; Fleischmann, S.; Flick, T.; Flores Castillo, L.R.; Flowerdew, M.J.; Fonseca Martin, T.; Formica, A.; Forti, A.; Fortin, D.; Fournier, D.; Fowler, A.J.; Fowler, K.; Fox, H.; Francavilla, P.; Franchino, S.; Francis, D.; Franklin, M.; Franz, S.; Fraternali, M.; Fratina, S.; Freestone, J.; French, S.T.; Froeschl, R.; Froidevaux, D.; Frost, J.A.; Fukunaga, C.; Fullana Torregrosa, E.; Fuster, J.; Gabaldon, C.; Gabizon, O.; Gadfort, T.; Gadomski, S.; Gagliardi, G.; Gagnon, P.; Galea, C.; Gallas, E.J.; Gallo, V.; Gallop, B.J.; Gallus, P.; Galyaev, E.; Gan, K.K.; Gao, Y.S.; Gaponenko, A.; Garcia-Sciveres, M.; Garcia, C.; Garcia Navarro, J.E.; Gardner, R.W.; Garelli, N.; Garitaonandia, H.; Garonne, V.; 
Gatti, C.; Gaudio, G.; Gautard, V.; Gauzzi, P.; Gavrilenko, I.L.; Gay, C.; Gaycken, G.; Gazis, E.N.; Ge, P.; Gee, C.N.P.; Geich-Gimbel, Ch.; Gellerstedt, K.; Gemme, C.; Genest, M.H.; Gentile, S.; Georgatos, F.; George, S.; Gershon, A.; Ghazlane, H.; Ghodbane, N.; Giacobbe, B.; Giagu, S.; Giakoumopoulou, V.; Giangiobbe, V.; Gianotti, F.; Gibbard, B.; Gibson, A.; Gibson, S.M.; Gilbert, L.M.; Gilchriese, M.; Gilewsky, V.; Gingrich, D.M.; Ginzburg, J.; Giokaris, N.; Giordani, M.P.; Giordano, R.; Giorgi, F.M.; Giovannini, P.; Giraud, P.F.; Girtler, P.; Giugni, D.; Giusti, P.; Gjelsten, B.K.; Gladilin, L.K.; Glasman, C.; Glazov, A.; Glitza, K.W.; Glonti, G.L.; Godfrey, J.; Godlewski, J.; Goebel, M.; Gopfert, T.; Goeringer, C.; Gossling, C.; Gottfert, T.; Goggi, V.; Goldfarb, S.; Goldin, D.; Golling, T.; Gomes, A.; Gomez Fajardo, L.S.; Goncalo, R.; Gonella, L.; Gong, C.; Gonzalez de la Hoz, S.; Gonzalez Silva, M.L.; Gonzalez-Sevilla, S.; Goodson, J.J.; Goossens, L.; Gordon, H.A.; Gorelov, I.; Gorfine, G.; Gorini, B.; Gorini, E.; Gorisek, A.; Gornicki, E.; Gosdzik, B.; Gosselink, M.; Gostkin, M.I.; Gough Eschrich, I.; Gouighri, M.; Goujdami, D.; Goulette, M.P.; Goussiou, A.G.; Goy, C.; Grabowska-Bold, I.; Grafstrom, P.; Grahn, K-J.; Grancagnolo, S.; Grassi, V.; Gratchev, V.; Grau, N.; Gray, H.M.; Gray, J.A.; Graziani, E.; Green, B.; Greenshaw, T.; Greenwood, Z.D.; Gregor, I.M.; Grenier, P.; Griesmayer, E.; Griffiths, J.; Grigalashvili, N.; Grillo, A.A.; Grimm, K.; Grinstein, S.; Grishkevich, Y.V.; Groh, M.; Groll, M.; Gross, E.; Grosse-Knetter, J.; Groth-Jensen, J.; Grybel, K.; Guicheney, C.; Guida, A.; Guillemin, T.; Guler, H.; Gunther, J.; Guo, B.; Gupta, A.; Gusakov, Y.; Gutierrez, A.; Gutierrez, P.; Guttman, N.; Gutzwiller, O.; Guyot, C.; Gwenlan, C.; Gwilliam, C.B.; Haas, A.; Haas, S.; Haber, C.; Hadavand, H.K.; Hadley, D.R.; Haefner, P.; Hartel, R.; Hajduk, Z.; Hakobyan, H.; Haller, J.; Hamacher, K.; Hamilton, A.; Hamilton, S.; Han, L.; Hanagaki, K.; Hance, M.; Handel, C.; Hanke, P.; Hansen, J.R.; Hansen, J.B.; Hansen, J.D.; Hansen, P.H.; Hansl-Kozanecka, T.; Hansson, P.; Hara, K.; Hare, G.A.; Harenberg, T.; Harrington, R.D.; Harris, O.M.; Harrison, K; Hartert, J.; Hartjes, F.; Harvey, A.; Hasegawa, S.; Hasegawa, Y.; Hashemi, K.; Hassani, S.; Haug, S.; Hauschild, M.; Hauser, R.; Havranek, M.; Hawkes, C.M.; Hawkings, R.J.; Hayakawa, T.; Hayward, H.S.; Haywood, S.J.; Head, S.J.; Hedberg, V.; Heelan, L.; Heim, S.; Heinemann, B.; Heisterkamp, S.; Helary, L.; Heller, M.; Hellman, S.; Helsens, C.; Hemperek, T.; Henderson, R.C.W.; Henke, M.; Henrichs, A.; Henriques Correia, A.M.; Henrot-Versille, S.; Hensel, C.; Henss, T.; Hernandez Jimenez, Y.; Hershenhorn, A.D.; Herten, G.; Hertenberger, R.; Hervas, L.; Hessey, N.P.; Higon-Rodriguez, E.; Hill, J.C.; Hiller, K.H.; Hillert, S.; Hillier, S.J.; Hinchliffe, I.; Hines, E.; Hirose, M.; Hirsch, F.; Hirschbuehl, D.; Hobbs, J.; Hod, N.; Hodgkinson, M.C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M.R.; Hoffman, J.; Hoffmann, D.; Hohlfeld, M.; Holy, T.; Holzbauer, J.L.; Homma, Y.; Horazdovsky, T.; Hori, T.; Horn, C.; Horner, S.; Horvat, S.; Hostachy, J-Y.; Hou, S.; Hoummada, A.; Howe, T.; Hrivnac, J.; Hryn'ova, T.; Hsu, P.J.; Hsu, S.C.; Huang, G.S.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Hughes, E.W.; Hughes, G.; Hurwitz, M.; Husemann, U.; Huseynov, N.; Huston, J.; Huth, J.; Iacobucci, G.; Iakovidis, G.; Ibragimov, I.; Iconomidou-Fayard, L.; Idarraga, J.; Iengo, P.; Igonkina, O.; Ikegami, Y.; Ikeno, M.; Ilchenko, Y.; Iliadis, D.; Ince, T.; Ioannou, P.; Iodice, 
M.; Irles Quiles, A.; Ishikawa, A.; Ishino, M.; Ishmukhametov, R.; Isobe, T.; Issakov, V.; Issever, C.; Istin, S.; Itoh, Y.; Ivashin, A.V.; Iwanski, W.; Iwasaki, H.; Izen, J.M.; Izzo, V.; Jackson, B.; Jackson, J.N.; Jackson, P.; Jaekel, M.R.; Jain, V.; Jakobs, K.; Jakobsen, S.; Jakubek, J.; Jana, D.K.; Jansen, E.; Jantsch, A.; Janus, M.; Jared, R.C.; Jarlskog, G.; Jeanty, L.; Jen-La Plante, I.; Jenni, P.; Jez, P.; Jezequel, S.; Ji, W.; Jia, J.; Jiang, Y.; Jimenez Belenguer, M.; Jin, S.; Jinnouchi, O.; Joffe, D.; Johansen, M.; Johansson, K.E.; Johansson, P.; Johnert, S; Johns, K.A.; Jon-And, K.; Jones, G.; Jones, R.W.L.; Jones, T.J.; Jorge, P.M.; Joseph, J.; Juranek, V.; Jussel, P.; Kabachenko, V.V.; Kaci, M.; Kaczmarska, A.; Kado, M.; Kagan, H.; Kagan, M.; Kaiser, S.; Kajomovitz, E.; Kalinin, S.; Kalinovskaya, L.V.; Kalinowski, A.; Kama, S.; Kanaya, N.; Kaneda, M.; Kantserov, V.A.; Kanzaki, J.; Kaplan, B.; Kapliy, A.; Kaplon, J.; Kar, D.; Karagounis, M.; Karagoz Unel, M.; Kartvelishvili, V.; Karyukhin, A.N.; Kashif, L.; Kasmi, A.; Kass, R.D.; Kastanas, A.; Kastoryano, M.; Kataoka, M.; Kataoka, Y.; Katsoufis, E.; Katzy, J.; Kaushik, V.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kayl, M.S.; Kayumov, F.; Kazanin, V.A.; Kazarinov, M.Y.; Keates, J.R.; Keeler, R.; Keener, P.T.; Kehoe, R.; Keil, M.; Kekelidze, G.D.; Kelly, M.; Kenyon, M.; Kepka, O.; Kerschen, N.; Kersevan, B.P.; Kersten, S.; Kessoku, K.; Khakzad, M.; Khalil-zada, F.; Khandanyan, H.; Khanov, A.; Kharchenko, D.; Khodinov, A.; Khomich, A.; Khoriauli, G.; Khovanskiy, N.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kim, H.; Kim, M.S.; Kim, P.C.; Kim, S.H.; Kind, O.; Kind, P.; King, B.T.; Kirk, J.; Kirsch, G.P.; Kirsch, L.E.; Kiryunin, A.E.; Kisielewska, D.; Kittelmann, T.; Kiyamura, H.; Kladiva, E.; Klein, M.; Klein, U.; Kleinknecht, K.; Klemetti, M.; Klier, A.; Klimentov, A.; Klingenberg, R.; Klinkby, E.B.; Klioutchnikova, T.; Klok, P.F.; Klous, S.; Kluge, E.E.; Kluge, T.; Kluit, P.; Klute, M.; Kluth, S.; Knecht, N.S.; Kneringer, E.; Ko, B.R.; Kobayashi, T.; Kobel, M.; Koblitz, B.; Kocian, M.; Kocnar, A.; Kodys, P.; Koneke, K.; Konig, A.C.; Koenig, S.; Kopke, L.; Koetsveld, F.; Koevesarki, P.; Koffas, T.; Koffeman, E.; Kohn, F.; Kohout, Z.; Kohriki, T.; Kolanoski, H.; Kolesnikov, V.; Koletsou, I.; Koll, J.; Kollar, D.; Kolos, S.; Kolya, S.D.; Komar, A.A.; Komaragiri, J.R.; Kondo, T.; Kono, T.; Konoplich, R.; Konovalov, S.P.; Konstantinidis, N.; Koperny, S.; Korcyl, K.; Kordas, K.; Korn, A.; Korolkov, I.; Korolkova, E.V.; Korotkov, V.A.; Kortner, O.; Kostka, P.; Kostyukhin, V.V.; Kotov, S.; Kotov, V.M.; Kotov, K.Y.; Kourkoumelis, C.; Koutsman, A.; Kowalewski, R.; Kowalski, H.; Kowalski, T.Z.; Kozanecki, W.; Kozhin, A.S.; Kral, V.; Kramarenko, V.A.; Kramberger, G.; Krasny, M.W.; Krasznahorkay, A.; Kreisel, A.; Krejci, F.; Kretzschmar, J.; Krieger, N.; Krieger, P.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Kruger, H.; Krumshteyn, Z.V.; Kubota, T.; Kuehn, S.; Kugel, A.; Kuhl, T.; Kuhn, D.; Kukhtin, V.; Kulchitsky, Y.; Kuleshov, S.; Kummer, C.; Kuna, M.; Kunkle, J.; Kupco, A.; Kurashige, H.; Kurata, M.; Kurchaninov, L.L.; Kurochkin, Y.A.; Kus, V.; Kwee, R.; La Rotonda, L.; Labbe, J.; Lacasta, C.; Lacava, F.; Lacker, H.; Lacour, D.; Lacuesta, V.R.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Lamanna, M.; Lampen, C.L.; Lampl, W.; Lancon, E.; Landgraf, U.; Landon, M.P.J.; Lane, J.L.; Lankford, A.J.; Lanni, F.; Lantzsch, K.; Lanza, A.; Laplace, S.; Lapoire, C.; Laporte, J.F.; Lari, T.; Larner, 
A.; Lassnig, M.; Laurelli, P.; Lavrijsen, W.; Laycock, P.; Lazarev, A.B.; Lazzaro, A.; Le Dortz, O.; Le Guirriec, E.; Le Menedeu, E.; Le Vine, M.; Lebedev, A.; Lebel, C.; LeCompte, T.; Ledroit-Guillon, F.; Lee, H.; Lee, J.S.H.; Lee, S.C.; Lefebvre, M.; Legendre, M.; LeGeyt, B.C.; Legger, F.; Leggett, C.; Lehmacher, M.; Lehmann Miotto, G.; Lei, X.; Leitner, R.; Lellouch, D.; Lellouch, J.; Lendermann, V.; Leney, K.J.C.; Lenz, T.; Lenzen, G.; Lenzi, B.; Leonhardt, K.; Leroy, C.; Lessard, J-R.; Lester, C.G.; Leung Fook Cheong, A.; Leveque, J.; Levin, D.; Levinson, L.J.; Leyton, M.; Li, H.; Li, S.; Li, X.; Liang, Z.; Liang, Z.; Liberti, B.; Lichard, P.; Lichtnecker, M.; Lie, K.; Liebig, W.; Lilley, J.N.; Lim, H.; Limosani, A.; Limper, M.; Lin, S.C.; Linnemann, J.T.; Lipeles, E.; Lipinsky, L.; Lipniacka, A.; Liss, T.M.; Lissauer, D.; Lister, A.; Litke, A.M.; Liu, C.; Liu, D.; Liu, H.; Liu, J.B.; Liu, M.; Liu, T.; Liu, Y.; Livan, M.; Lleres, A.; Lloyd, S.L.; Lobodzinska, E.; Loch, P.; Lockman, W.S.; Lockwitz, S.; Loddenkoetter, T.; Loebinger, F.K.; Loginov, A.; Loh, C.W.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Long, R.E.; Lopes, L.; Lopez Mateos, D.; Losada, M.; Loscutoff, P.; Lou, X.; Lounis, A.; Loureiro, K.F.; Lovas, L.; Love, J.; Love, P.A.; Lowe, A.J.; Lu, F.; Lubatti, H.J.; Luci, C.; Lucotte, A.; Ludwig, A.; Ludwig, D.; Ludwig, I.; Luehring, F.; Luisa, L.; Lumb, D.; Luminari, L.; Lund, E.; Lund-Jensen, B.; Lundberg, B.; Lundberg, J.; Lundquist, J.; Lynn, D.; Lys, J.; Lytken, E.; Ma, H.; Ma, L.L.; Macana Goia, J.A.; Maccarrone, G.; Macchiolo, A.; Macek, B.; Machado Miguens, J.; Mackeprang, R.; Madaras, R.J.; Mader, W.F.; Maenner, R.; Maeno, T.; Mattig, P.; Mattig, S.; Magalhaes Martins, P.J.; Magradze, E.; Mahalalel, Y.; Mahboubi, K.; Mahmood, A.; Maiani, C.; Maidantchik, C.; Maio, A.; Majewski, S.; Makida, Y.; Makouski, M.; Makovec, N.; Malecki, Pa.; Malecki, P.; Maleev, V.P.; Malek, F.; Mallik, U.; Malon, D.; Maltezos, S.; Malyshev, V.; Malyukov, S.; Mambelli, M.; Mameghani, R.; Mamuzic, J.; Mandelli, L.; Mandic, I.; Mandrysch, R.; Maneira, J.; Mangeard, P.S.; Manjavidze, I.D.; Manning, P.M.; Manousakis-Katsikakis, A.; Mansoulie, B.; Mapelli, A.; Mapelli, L.; March, L.; Marchand, J.F.; Marchese, F.; Marchiori, G.; Marcisovsky, M.; Marino, C.P.; Marroquim, F.; Marshall, Z.; Marti-Garcia, S.; Martin, A.J.; Martin, A.J.; Martin, B.; Martin, B.; Martin, F.F.; Martin, J.P.; Martin, T.A.; Martin dit Latour, B.; Martinez, M.; Martinez Outschoorn, V.; Martini, A.; Martyniuk, A.C.; Marzano, F.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A.L.; Massa, I.; Massol, N.; Mastroberardino, A.; Masubuchi, T.; Matricon, P.; Matsunaga, H.; Matsushita, T.; Mattravers, C.; Maxfield, S.J.; Mayne, A.; Mazini, R.; Mazur, M.; Mazzanti, M.; Mc Donald, J.; Mc Kee, S.P.; McCarn, A.; McCarthy, R.L.; McCubbin, N.A.; McFarlane, K.W.; McGlone, H.; Mchedlidze, G.; McMahon, S.J.; McPherson, R.A.; Meade, A.; Mechnich, J.; Mechtel, M.; Medinnis, M.; Meera-Lebbai, R.; Meguro, T.M.; Mehlhase, S.; Mehta, A.; Meier, K.; Meirose, B.; Melachrinos, C.; Mellado Garcia, B.R.; Mendoza Navas, L.; Meng, Z.; Menke, S.; Meoni, E.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F.S.; Messina, A.M.; Metcalfe, J.; Mete, A.S.; Meyer, J-P.; Meyer, J.; Meyer, J.; Meyer, T.C.; Meyer, W.T.; Miao, J.; Michal, S.; Micu, L.; Middleton, R.P.; Migas, S.; Mijovic, L.; Mikenberg, G.; Mikestikova, M.; Mikuz, M.; Miller, D.W.; Mills, W.J.; Mills, C.M.; Milov, A.; Milstead, D.A.; Milstein, D.; Minaenko, A.A.; Minano, M.; 
Minashvili, I.A.; Mincer, A.I.; Mindur, B.; Mineev, M.; Ming, Y.; Mir, L.M.; Mirabelli, G.; Misawa, S.; Miscetti, S.; Misiejuk, A.; Mitrevski, J.; Mitsou, V.A.; Miyagawa, P.S.; Mjornmark, J.U.; Mladenov, D.; Moa, T.; Moed, S.; Moeller, V.; Monig, K.; Moser, N.; Mohr, W.; Mohrdieck-Mock, S.; Moles-Valls, R.; Molina-Perez, J.; Monk, J.; Monnier, E.; Montesano, S.; Monticelli, F.; Moore, R.W.; Mora Herrera, C.; Moraes, A.; Morais, A.; Morel, J.; Morello, G.; Moreno, D.; Moreno Llacer, M.; Morettini, P.; Morii, M.; Morley, A.K.; Mornacchi, G.; Morozov, S.V.; Morris, J.D.; Moser, H.G.; Mosidze, M.; Moss, J.; Mount, R.; Mountricha, E.; Mouraviev, S.V.; Moyse, E.J.W.; Mudrinic, M.; Mueller, F.; Mueller, J.; Mueller, K.; Muller, T.A.; Muenstermann, D.; Muir, A.; Munwes, Y.; Murillo Garcia, R.; Murray, W.J.; Mussche, I.; Musto, E.; Myagkov, A.G.; Myska, M.; Nadal, J.; Nagai, K.; Nagano, K.; Nagasaka, Y.; Nairz, A.M.; Nakamura, K.; Nakano, I.; Nakatsuka, H.; Nanava, G.; Napier, A.; Nash, M.; Nation, N.R.; Nattermann, T.; Naumann, T.; Navarro, G.; Nderitu, S.K.; Neal, H.A.; Nebot, E.; Nechaeva, P.; Negri, A.; Negri, G.; Nelson, A.; Nelson, T.K.; Nemecek, S.; Nemethy, P.; Nepomuceno, A.A.; Nessi, M.; Neubauer, M.S.; Neusiedl, A.; Neves, R.N.; Nevski, P.; Newcomer, F.M.; Nickerson, R.B.; Nicolaidou, R.; Nicolas, L.; Nicoletti, G.; Nicquevert, B.; Niedercorn, F.; Nielsen, J.; Nikiforov, A.; Nikolaev, K.; Nikolic-Audit, I.; Nikolopoulos, K.; Nilsen, H.; Nilsson, P.; Nisati, A.; Nishiyama, T.; Nisius, R.; Nodulman, L.; Nomachi, M.; Nomidis, I.; Nordberg, M.; Nordkvist, B.; Notz, D.; Novakova, J.; Nozaki, M.; Nozicka, M.; Nugent, I.M.; Nuncio-Quiroz, A.E.; Nunes Hanninger, G.; Nunnemann, T.; Nurse, E.; O'Neil, D.C.; O'Shea, V.; Oakham, F.G.; Oberlack, H.; Ochi, A.; Oda, S.; Odaka, S.; Odier, J.; Ogren, H.; Oh, A.; Oh, S.H.; Ohm, C.C.; Ohshima, T.; Ohshita, H.; Ohsugi, T.; Okada, S.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olchevski, A.G.; Oliveira, M.; Oliveira Damazio, D.; Oliver, J.; Oliver Garcia, E.; Olivito, D.; Olszewski, A.; Olszowska, J.; Omachi, C.; Onofre, A.; Onyisi, P.U.E.; Oram, C.J.; Oreglia, M.J.; Oren, Y.; Orestano, D.; Orlov, I.; Oropeza Barrera, C.; Orr, R.S.; Ortega, E.O.; Osculati, B.; Ospanov, R.; Osuna, C.; Ottersbach, J.P; Ould-Saada, F.; Ouraou, A.; Ouyang, Q.; Owen, M.; Owen, S.; Oyarzun, A; Ozcan, V.E.; Ozone, K.; Ozturk, N.; Pacheco Pages, A.; Padilla Aranda, C.; Paganis, E.; Pahl, C.; Paige, F.; Pajchel, K.; Palestini, S.; Pallin, D.; Palma, A.; Palmer, J.D.; Pan, Y.B.; Panagiotopoulou, E.; Panes, B.; Panikashvili, N.; Panitkin, S.; Pantea, D.; Panuskova, M.; Paolone, V.; Papadopoulou, Th.D.; Park, S.J.; Park, W.; Parker, M.A.; Parker, S.I.; Parodi, F.; Parsons, J.A.; Parzefall, U.; Pasqualucci, E.; Passeri, A.; Pastore, F.; Pastore, Fr.; Pasztor, G.; Pataraia, S.; Pater, J.R.; Patricelli, S.; Patwa, A.; Pauly, T.; Peak, L.S.; Pecsy, M.; Pedraza Morales, M.I.; Peleganchuk, S.V.; Peng, H.; Penson, A.; Penwell, J.; Perantoni, M.; Perez, K.; Perez Codina, E.; Perez Garcia-Estan, M.T.; Perez Reale, V.; Perini, L.; Pernegger, H.; Perrino, R.; Persembe, S.; Perus, P.; Peshekhonov, V.D.; Petersen, B.A.; Petersen, T.C.; Petit, E.; Petridou, C.; Petrolo, E.; Petrucci, F.; Petschull, D; Petteni, M.; Pezoa, R.; Phan, A.; Phillips, A.W.; Piacquadio, G.; Piccinini, M.; Piegaia, R.; Pilcher, J.E.; Pilkington, A.D.; Pina, J.; Pinamonti, M.; Pinfold, J.L.; Pinto, B.; Pizio, C.; Placakyte, R.; Plamondon, M.; Pleier, M.A.; Poblaguev, A.; Poddar, S.; Podlyski, F.; Poffenberger, P.; Poggioli, L.; 
Pohl, M.; Polci, F.; Polesello, G.; Policicchio, A.; Polini, A.; Poll, J.; Polychronakos, V.; Pomeroy, D.; Pommes, K.; Ponsot, P.; Pontecorvo, L.; Pope, B.G.; Popeneciu, G.A.; Popovic, D.S.; Poppleton, A.; Popule, J.; Portell Bueso, X.; Porter, R.; Pospelov, G.E.; Pospisil, S.; Potekhin, M.; Potrap, I.N.; Potter, C.J.; Potter, C.T.; Potter, K.P.; Poulard, G.; Poveda, J.; Prabhu, R.; Pralavorio, P.; Prasad, S.; Pravahan, R.; Pribyl, L.; Price, D.; Price, L.E.; Prichard, P.M.; Prieur, D.; Primavera, M.; Prokofiev, K.; Prokoshin, F.; Protopopescu, S.; Proudfoot, J.; Prudent, X.; Przysiezniak, H.; Psoroulas, S.; Ptacek, E.; Puigdengoles, C.; Purdham, J.; Purohit, M.; Puzo, P.; Pylypchenko, Y.; Qi, M.; Qian, J.; Qian, W.; Qin, Z.; Quadt, A.; Quarrie, D.R.; Quayle, W.B.; Quinonez, F.; Raas, M.; Radeka, V.; Radescu, V.; Radics, B.; Rador, T.; Ragusa, F.; Rahal, G.; Rahimi, A.M.; Rajagopalan, S.; Rammensee, M.; Rammes, M.; Rauscher, F.; Rauter, E.; Raymond, M.; Read, A.L.; Rebuzzi, D.M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reeves, K.; Reinherz-Aronis, E.; Reinsch, A; Reisinger, I.; Reljic, D.; Rembser, C.; Ren, Z.L.; Renkel, P.; Rescia, S.; Rescigno, M.; Resconi, S.; Resende, B.; Reznicek, P.; Rezvani, R.; Richards, A.; Richards, R.A.; Richter, R.; Richter-Was, E.; Ridel, M.; Rijpstra, M.; Rijssenbeek, M.; Rimoldi, A.; Rinaldi, L.; Rios, R.R.; Riu, I.; Rizatdinova, F.; Rizvi, E.; Roa Romero, D.A.; Robertson, S.H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, JEM; Robinson, M.; Robson, A.; Rocha de Lima, J.G.; Roda, C.; Roda Dos Santos, D.; Rodriguez, D.; Rodriguez Garcia, Y.; Roe, S.; Rohne, O.; Rojo, V.; Rolli, S.; Romaniouk, A.; Romanov, V.M.; Romeo, G.; Romero Maltrana, D.; Roos, L.; Ros, E.; Rosati, S.; Rosenbaum, G.A.; Rosselet, L.; Rossetti, V.; Rossi, L.P.; Rotaru, M.; Rothberg, J.; Rousseau, D.; Royon, C.R.; Rozanov, A.; Rozen, Y.; Ruan, X.; Ruckert, B.; Ruckstuhl, N.; Rud, V.I.; Rudolph, G.; Ruhr, F.; Ruggieri, F.; Ruiz-Martinez, A.; Rumyantsev, L.; Rurikova, Z.; Rusakovich, N.A.; Rutherfoord, J.P.; Ruwiedel, C.; Ruzicka, P.; Ryabov, Y.F.; Ryan, P.; Rybkin, G.; Rzaeva, S.; Saavedra, A.F.; Sadrozinski, H.F-W.; Sadykov, R.; Sakamoto, H.; Salamanna, G.; Salamon, A.; Saleem, M.S.; Salihagic, D.; Salnikov, A.; Salt, J.; Salvachua Ferrando, B.M.; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sampsonidis, D.; Samset, B.H.; Sandaker, H.; Sander, H.G.; Sanders, M.P.; Sandhoff, M.; Sandhu, P.; Sandstroem, R.; Sandvoss, S.; Sankey, D.P.C.; Sanny, B.; Sansoni, A.; Santamarina Rios, C.; Santoni, C.; Santonico, R.; Saraiva, J.G.; Sarangi, T.; Sarkisyan-Grinbaum, E.; Sarri, F.; Sasaki, O.; Sasao, N.; Satsounkevitch, I.; Sauvage, G.; Savard, P.; Savine, A.Y.; Savinov, V.; Sawyer, L.; Saxon, D.H.; Says, L.P.; Sbarra, C.; Sbrizzi, A.; Scannicchio, D.A.; Schaarschmidt, J.; Schacht, P.; Schafer, U.; Schaetzel, S.; Schaffer, A.C.; Schaile, D.; Schamberger, R.D.; Schamov, A.G.; Schegelsky, V.A.; Scheirich, D.; Schernau, M.; Scherzer, M.I.; Schiavi, C.; Schieck, J.; Schioppa, M.; Schlenker, S.; Schmidt, E.; Schmieden, K.; Schmitt, C.; Schmitz, M.; Schott, M.; Schouten, D.; Schovancova, J.; Schram, M.; Schreiner, A.; Schroeder, C.; Schroer, N.; Schroers, M.; Schultes, J.; Schultz-Coulon, H.C.; Schumacher, J.W.; Schumacher, M.; Schumm, B.A.; Schune, Ph.; Schwanenberger, C.; Schwartzman, A.; Schwemling, Ph.; Schwienhorst, R.; Schwierz, R.; Schwindling, J.; Scott, W.G.; Searcy, J.; Sedykh, E.; Segura, E.; Seidel, S.C.; Seiden, A.; Seifert, F.; Seixas, J.M.; Sekhniaidze, G.; Seliverstov, D.M.; 
Sellden, B.; Semprini-Cesari, N.; Serfon, C.; Serin, L.; Seuster, R.; Severini, H.; Sevior, M.E.; Sfyrla, A.; Shabalina, E.; Shamim, M.; Shan, L.Y.; Shank, J.T.; Shao, Q.T.; Shapiro, M.; Shatalov, P.B.; Shaw, K.; Sherman, D.; Sherwood, P.; Shibata, A.; Shimojima, M.; Shin, T.; Shmeleva, A.; Shochet, M.J.; Shupe, M.A.; Sicho, P.; Sidoti, A.; Siegert, F; Siegrist, J.; Sijacki, Dj.; Silbert, O.; Silva, J.; Silver, Y.; Silverstein, D.; Silverstein, S.B.; Simak, V.; Simic, Lj.; Simion, S.; Simmons, B.; Simonyan, M.; Sinervo, P.; Sinev, N.B.; Sipica, V.; Siragusa, G.; Sisakyan, A.N.; Sivoklokov, S.Yu.; Sjoelin, J.; Sjursen, T.B.; Skovpen, K.; Skubic, P.; Slater, M.; Slavicek, T.; Sliwa, K.; Sloper, J.; Sluka, T.; Smakhtin, V.; Smirnov, S.Yu.; Smirnov, Y.; Smirnova, L.N.; Smirnova, O.; Smith, B.C.; Smith, D.; Smith, K.M.; Smizanska, M.; Smolek, K.; Snesarev, A.A.; Snow, S.W.; Snow, J.; Snuverink, J.; Snyder, S.; Soares, M.; Sobie, R.; Sodomka, J.; Soffer, A.; Solans, C.A.; Solar, M.; Solc, J.; Solfaroli Camillocci, E.; Solodkov, A.A.; Solovyanov, O.V.; Soluk, R.; Sondericker, J.; Sopko, V.; Sopko, B.; Sosebee, M.; Soukharev, A.; Spagnolo, S.; Spano, F.; Spencer, E.; Spighi, R.; Spigo, G.; Spila, F.; Spiwoks, R.; Spousta, M.; Spreitzer, T.; Spurlock, B.; St. Denis, R.D.; Stahl, T.; Stahlman, J.; Stamen, R.; Stancu, S.N.; Stanecka, E.; Stanek, R.W.; Stanescu, C.; Stapnes, S.; Starchenko, E.A.; Stark, J.; Staroba, P.; Starovoitov, P.; Stastny, J.; Stavina, P.; Steele, G.; Steinbach, P.; Steinberg, P.; Stekl, I.; Stelzer, B.; Stelzer, H.J.; Stelzer-Chilton, O.; Stenzel, H.; Stevenson, K.; Stewart, G.A.; Stockton, M.C.; Stoerig, K.; Stoicea, G.; Stonjek, S.; Strachota, P.; Stradling, A.R.; Straessner, A.; Strandberg, J.; Strandberg, S.; Strandlie, A.; Strauss, M.; Strizenec, P.; Strohmer, R.; Strom, D.M.; Stroynowski, R.; Strube, J.; Stugu, B.; Soh, D.A.; Su, D.; Sugaya, Y.; Sugimoto, T.; Suhr, C.; Suk, M.; Sulin, V.V.; Sultansoy, S.; Sumida, T.; Sun, X.H.; Sundermann, J.E.; Suruliz, K.; Sushkov, S.; Susinno, G.; Sutton, M.R.; Suzuki, T.; Suzuki, Y.; Sykora, I.; Sykora, T.; Szymocha, T.; Sanchez, J.; Ta, D.; Tackmann, K.; Taffard, A.; Tafirout, R.; Taga, A.; Takahashi, Y.; Takai, H.; Takashima, R.; Takeda, H.; Takeshita, T.; Talby, M.; Talyshev, A.; Tamsett, M.C.; Tanaka, J.; Tanaka, R.; Tanaka, S.; Tanaka, S.; Tapprogge, S.; Tardif, D.; Tarem, S.; Tarrade, F.; Tartarelli, G.F.; Tas, P.; Tasevsky, M.; Tassi, E.; Tatarkhanov, M.; Taylor, C.; Taylor, F.E.; Taylor, G.N.; Taylor, R.P.; Taylor, W.; Teixeira-Dias, P.; Ten Kate, H.; Teng, P.K.; Tennenbaum-Katan, Y.D.; Terada, S.; Terashi, K.; Terron, J.; Terwort, M.; Testa, M.; Teuscher, R.J.; Thioye, M.; Thoma, S.; Thomas, J.P.; Thompson, E.N.; Thompson, P.D.; Thompson, P.D.; Thompson, R.J.; Thompson, A.S.; Thomson, E.; Thun, R.P.; Tic, T.; Tikhomirov, V.O.; Tikhonov, Y.A.; Tipton, P.; Tique Aires Viegas, F.J.; Tisserant, S.; Toczek, B.; Todorov, T.; Todorova-Nova, S.; Toggerson, B.; Tojo, J.; Tokar, S.; Tokushuku, K.; Tollefson, K.; Tomasek, L.; Tomasek, M.; Tomoto, M.; Tompkins, L.; Toms, K.; Tonoyan, A.; Topfel, C.; Topilin, N.D.; Torrence, E.; Torro Pastor, E.; Toth, J.; Touchard, F.; Tovey, D.R.; Trefzger, T.; Tremblet, L.; Tricoli, A.; Trigger, I.M.; Trincaz-Duvoid, S.; Trinh, T.N.; Tripiana, M.F.; Triplett, N.; Trischuk, W.; Trivedi, A.; Trocme, B.; Troncon, C.; Trzupek, A.; Tsarouchas, C.; Tseng, J.C-L.; Tsiakiris, M.; Tsiareshka, P.V.; Tsionou, D.; Tsipolitis, G.; Tsiskaridze, V.; Tskhadadze, E.G.; Tsukerman, I.I.; Tsulaia, V.; Tsung, J.W.; Tsuno, 
S.; Tsybychev, D.; Tuggle, J.M.; Turecek, D.; Turk Cakir, I.; Turlay, E.; Tuts, P.M.; Twomey, M.S.; Tylmad, M.; Tyndel, M.; Uchida, K.; Ueda, I.; Ugland, M.; Uhlenbrock, M.; Uhrmacher, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Unno, Y.; Urbaniec, D.; Urkovsky, E.; Urquijo, P.; Urrejola, P.; Usai, G.; Uslenghi, M.; Vacavant, L.; Vacek, V.; Vachon, B.; Vahsen, S.; Valente, P.; Valentinetti, S.; Valkar, S.; Valladolid Gallego, E.; Vallecorsa, S.; Valls Ferrer, J.A.; Van Berg, R.; van der Graaf, H.; van der Kraaij, E.; van der Poel, E.; van der Ster, D.; van Eldik, N.; van Gemmeren, P.; van Kesteren, Z.; van Vulpen, I.; Vandelli, W.; Vaniachine, A.; Vankov, P.; Vannucci, F.; Vari, R.; Varnes, E.W.; Varouchas, D.; Vartapetian, A.; Varvell, K.E.; Vasilyeva, L.; Vassilakopoulos, V.I.; Vazeille, F.; Vellidis, C.; Veloso, F.; Veneziano, S.; Ventura, A.; Ventura, D.; Venturi, M.; Venturi, N.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J.C.; Vetterli, M.C.; Vichou, I.; Vickey, T.; Viehhauser, G.H.A.; Villa, M.; Villani, E.G.; Villaplana Perez, M.; Vilucchi, E.; Vincter, M.G.; Vinek, E.; Vinogradov, V.B.; Viret, S.; Virzi, J.; Vitale, A.; Vitells, O.; Vivarelli, I.; Vives Vaque, F.; Vlachos, S.; Vlasak, M.; Vlasov, N.; Vogel, A.; Vokac, P.; Volpi, M.; von der Schmitt, H.; von Loeben, J.; von Radziewski, H.; von Toerne, E.; Vorobel, V.; Vorwerk, V.; Vos, M.; Voss, R.; Voss, T.T.; Vossebeld, J.H.; Vranjes, N.; Vranjes Milosavljevic, M.; Vrba, V.; Vreeswijk, M.; Vu Anh, T.; Vudragovic, D.; Vuillermet, R.; Vukotic, I.; Wagner, P.; Walbersloh, J.; Walder, J.; Walker, R.; Walkowiak, W.; Wall, R.; Wang, C.; Wang, H.; Wang, J.; Wang, S.M.; Warburton, A.; Ward, C.P.; Warsinsky, M.; Wastie, R.; Watkins, P.M.; Watson, A.T.; Watson, M.F.; Watts, G.; Watts, S.; Waugh, A.T.; Waugh, B.M.; Weber, M.D.; Weber, M.; Weber, M.S.; Weber, P.; Weidberg, A.R.; Weingarten, J.; Weiser, C.; Wellenstein, H.; Wells, P.S.; Wen, M.; Wenaus, T.; Wendler, S.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M.; Werner, P.; Werth, M.; Werthenbach, U.; Wessels, M.; Whalen, K.; White, A.; White, M.J.; White, S.; Whitehead, S.R.; Whiteson, D.; Whittington, D.; Wicek, F.; Wicke, D.; Wickens, F.J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiglesworth, C.; Wiik, L.A.M.; Wildauer, A.; Wildt, M.A.; Wilkens, H.G.; Williams, E.; Williams, H.H.; Willocq, S.; Wilson, J.A.; Wilson, M.G.; Wilson, A.; Wingerter-Seez, I.; Winklmeier, F.; Wittgen, M.; Wolter, M.W.; Wolters, H.; Wosiek, B.K.; Wotschack, J.; Woudstra, M.J.; Wraight, K.; Wright, C.; Wright, D.; Wrona, B.; Wu, S.L.; Wu, X.; Wulf, E.; Wynne, B.M.; Xaplanteris, L.; Xella, S.; Xie, S.; Xu, D.; Xu, N.; Yamada, M.; Yamamoto, A.; Yamamoto, K.; Yamamoto, S.; Yamamura, T.; Yamaoka, J.; Yamazaki, T.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, U.K.; Yang, Z.; Yao, W-M.; Yao, Y.; Yasu, Y.; Ye, J.; Ye, S.; Yilmaz, M.; Yoosoofmiya, R.; Yorita, K.; Yoshida, R.; Young, C.; Youssef, S.P.; Yu, D.; Yu, J.; Yuan, L.; Yurkewicz, A.; Zaidan, R.; Zaitsev, A.M.; Zajacova, Z.; Zambrano, V.; Zanello, L.; Zaytsev, A.; Zeitnitz, C.; Zeller, M.; Zemla, A.; Zendler, C.; Zenin, O.; Zenis, T.; Zenonos, Z.; Zenz, S.; Zerwas, D.; Zevi della Porta, G.; Zhan, Z.; Zhang, H.; Zhang, J.; Zhang, Q.; Zhang, X.; Zhao, L.; Zhao, T.; Zhao, Z.; Zhemchugov, A.; Zhong, J.; Zhou, B.; Zhou, N.; Zhou, Y.; Zhu, C.G.; Zhu, H.; Zhu, Y.; Zhuang, X.; Zhuravlov, V.; Zimmermann, R.; Zimmermann, S.; Zimmermann, S.; Ziolkowski, M.; Zivkovic, L.; Zobernig, G.; Zoccoli, A.; zur Nedden, M.; Zutshi, V.

    2010-01-01

    The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid. This simulation requires many components, from the generators that simulate particle collisions, through packages simulating the response of the various detectors and triggers. All of these components come together under the ATLAS simulation infrastructure. In this paper, that infrastructure is discussed, including that supporting the detector description, interfacing the event generation, and combining the GEANT4 simulation of the response of the individual detectors. Also described are the tools allowing the software validation, performance testing, and the validation of the simulated output against known physics processes.

  3. Sustainable evolution of product line infrastructure code

    OpenAIRE

    Patzke, T.

    2011-01-01

    A major goal in many software development organizations today is to reduce development effort and cost, while improving their products' quality and diversity by developing reusable software. An organization takes advantage of its products' similarities, exploits what they have in common and manages what varies among them by building a product line infrastructure. A product line infrastructure is a reuse repository that contains exactly those common and variable artifacts, such as requirements...

  4. The Legal Dimension of RTI--Confusion Confirmed: A Response to Walker and Daves

    Science.gov (United States)

    Zirkel, Perry A.

    2012-01-01

    In this issue of "Learning Disability Quarterly" (LDQ), Professors Daves and Walker reply to my earlier LDQ article on confusion in the cases and commentary about the legal dimension of RTI. In this brief rejoinder, I show that their reply confirms rather than resolves the confusion in their original commentary in 2010. This persistent…

  5. Continuing progress on a lattice QCD software infrastructure

    International Nuclear Information System (INIS)

    Joo, B

    2008-01-01

    We report on the progress of the software effort in the QCD application area of SciDAC. In particular, we discuss how the software developed under SciDAC enabled the aggressive exploitation of leadership computers, and we report on progress in the area of QCD software for multi-core architectures

  6. Educators' Year Long Reactions to the Implementation of a Response to Intervention (RTI) Model

    Science.gov (United States)

    Sanger, Dixie; Friedli, Corey; Brunken, Cindy; Snow, Pamela; Ritzman, Mitzi

    2012-01-01

    Mixed methods were used to explore the reactions of educators before and after implementing the Response to Intervention (RTI) model in secondary settings during a school year. Eighteen participants from six middle schools and four high schools collaborated on interdisciplinary teams that involved classroom teachers, speech-language pathologists…

  7. Cloud Infrastructure & Applications - CloudIA

    Science.gov (United States)

    Sulistio, Anthony; Reich, Christoph; Doelitzscher, Frank

    The idea behind Cloud Computing is to deliver Infrastructure-as-a-Service and Software-as-a-Service over the Internet on an easy pay-per-use business model. To harness the potential of Cloud Computing for e-Learning and research purposes, and for small- and medium-sized enterprises, the Hochschule Furtwangen University established a new project, called Cloud Infrastructure & Applications (CloudIA). The CloudIA project is a market-oriented cloud infrastructure that leverages different virtualization technologies, by supporting Service-Level Agreements for various service offerings. This paper describes the CloudIA project in detail and mentions our early experiences in building a private cloud using an existing infrastructure.
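    As a toy illustration of the pay-per-use model with an SLA (all rates and thresholds invented, none taken from the CloudIA project), a bill can be computed from metered usage with a service credit applied when measured availability misses the agreed target.

        # Hypothetical unit prices and SLA target, for illustration only.
        def invoice(cpu_hours: float, gb_month: float, availability: float) -> float:
            cost = cpu_hours * 0.05 + gb_month * 0.02  # metered pay-per-use charges
            if availability < 0.999:                   # assumed SLA availability target
                cost *= 0.90                           # 10% service credit when missed
            return round(cost, 2)

        print(invoice(cpu_hours=720, gb_month=50, availability=0.9985))  # 33.3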

  8. Electricity Infrastructure Operations Center (EIOC)

    Data.gov (United States)

    Federal Laboratory Consortium — The Electricity Infrastructure Operations Center (EIOC) at PNNL brings together industry-leading software, real-time grid data, and advanced computation into a fully...

  9. Requirements Engineering in Building Climate Science Software

    Science.gov (United States)

    Batcheller, Archer L.

    Software has an important role in supporting scientific work. This dissertation studies teams that build scientific software, focusing on the way that they determine what the software should do. These requirements engineering processes are investigated through three case studies of climate science software projects. The Earth System Modeling Framework assists modeling applications, the Earth System Grid distributes data via a web portal, and the NCAR (National Center for Atmospheric Research) Command Language is used to convert, analyze and visualize data. Document analysis, observation, and interviews were used to investigate the requirements-related work. The first research question is about how and why stakeholders engage in a project, and what they do for the project. Two key findings arise. First, user counts are a vital measure of project success, which makes adoption important and makes counting tricky and political. Second, despite the importance of quantities of users, a few particular "power users" develop a relationship with the software developers and play a special role in providing feedback to the software team and integrating the system into user practice. The second research question focuses on how project objectives are articulated and how they are put into practice. The team seeks to both build a software system according to product requirements but also to conduct their work according to process requirements such as user support. Support provides essential communication between users and developers that assists with refining and identifying requirements for the software. It also helps users to learn and apply the software to their real needs. User support is a vital activity for scientific software teams aspiring to create infrastructure. The third research question is about how change in scientific practice and knowledge leads to changes in the software, and vice versa. The "thickness" of a layer of software infrastructure impacts whether the

  10. Computational Infrastructure for Geodynamics (CIG)

    Science.gov (United States)

    Gurnis, M.; Kellogg, L. H.; Bloxham, J.; Hager, B. H.; Spiegelman, M.; Willett, S.; Wysession, M. E.; Aivazis, M.

    2004-12-01

    Solid earth geophysicists have a long tradition of writing scientific software to address a wide range of problems. In particular, computer simulations came into wide use in geophysics during the decade after the plate tectonic revolution. Solution schemes and numerical algorithms that developed in other areas of science, most notably engineering, fluid mechanics, and physics, were adapted with considerable success to geophysics. This software has largely been the product of individual efforts and although this approach has proven successful, its strength for solving problems of interest is now starting to show its limitations as we try to share codes and algorithms or when we want to recombine codes in novel ways to produce new science. With funding from the NSF, the US community has embarked on a Computational Infrastructure for Geodynamics (CIG) that will develop, support, and disseminate community-accessible software for the greater geodynamics community from model developers to end-users. The software is being developed for problems involving mantle and core dynamics, crustal and earthquake dynamics, magma migration, seismology, and other related topics. With a high level of community participation, CIG is leveraging state-of-the-art scientific computing into a suite of open-source tools and codes. The infrastructure that we are now starting to develop will consist of: (a) a coordinated effort to develop reusable, well-documented and open-source geodynamics software; (b) the basic building blocks - an infrastructure layer - of software by which state-of-the-art modeling codes can be quickly assembled; (c) extension of existing software frameworks to interlink multiple codes and data through a superstructure layer; (d) strategic partnerships with the larger world of computational science and geoinformatics; and (e) specialized training and workshops for both the geodynamics and broader Earth science communities. The CIG initiative has already started to

  11. Co-infection of herpes simplex virus (HSV) with human immunodeficiency virus (HIV) in women with reproductive tract infections (RTI).

    Science.gov (United States)

    Devi, Ksh Mamta; Devi, Kh Sulochana; Singh, Ng Brajachand; Singh, N Nabakishore; Singh, I Dorendra

    2008-09-01

    In India, data on HSV seroprevalence and its coinfection with HIV among female patients with reproductive tract infections (RTI) are sparse. We aimed to ascertain the seroprevalence of HSV and its coinfection with HIV and common sexually transmitted infections among patients attending the Obstetrics and Gynaecology outpatient department, RIMS. The study included 92 female patients with RTI. Diagnostic serology was done for HSV-1 and HSV-2 by group-specific IgM indirect immunoassay (ELISA), and for HIV by 3 ELISA/Rapid/Simple (E/R/S) tests of different biological antigens. Diagnosis of RTI was made on clinical grounds with appropriate laboratory investigations (microscopy, Gram stain smear, etc.). Bacterial vaginosis was diagnosed using Nugent's criteria, syphilis by the rapid plasma reagin (RPR) card test, and Chlamydia trachomatis by IgG ELISA. Of the 92 sera tested for HSV, 18 (19.6%) were IgM HSV positive and 9 (9.8%) were HIV positive. The coinfection rate of HSV among HIV positives was 16.7%. None of the patients had clinical herpes genitalis; all were subclinical cases. Of the HSV positives, 55.5% belonged to the age group 21 to 30 years. Of the HSV-1 and HSV-2 IgM positives, 3 (15%) had HIV, 4 (22.2%) had bacterial vaginosis, 2 (11.1%) were RPR positive, 4 (22.2%) had Chlamydia trachomatis, and 3 (15%) were pregnant; 16 (88.8%) were unemployed and 14 (77.7%) had an education level below 10th standard. Our study suggests that every case of RTI, be it ulcerative or nonulcerative, must be thoroughly evaluated by laboratory testing for primary subclinical genital HSV coinfection, as this has profound implications for judicious management and the aversion of complications. Early diagnosis and treatment of HSV infection, together with prophylaxis for recurrent HSV disease, will prevent progression and spread of HIV disease.

  12. Supporting Valid Decision Making: Uses and Misuses of Assessment Data within the Context of RtI

    Science.gov (United States)

    Ball, Carrie R.; Christ, Theodore J.

    2012-01-01

    Within an RtI problem-solving context, assessment and decision making generally center around the tasks of problem identification, problem analysis, progress monitoring, and program evaluation. We use this framework to discuss the current state of the literature regarding curriculum based measurement, its technical properties, and its utility for…

  13. Telling the story of ancient coins by means of interactive RTI images visualization

    OpenAIRE

    Palma, Gianpaolo; Siotto, Eliana; Proesmans, Marc; Baldassarri, Monica; Baracchini, Clara; Batino, Sabrina; Scopigno, Roberto

    2012-01-01

    Methodologies for the virtual examination of Cultural Heritage artifacts through Reflectance Transformation Imaging (RTI) are gaining interest. Although these techniques were initially designed to aid Cultural Heritage specialists in the inspection and interpretation process, recent advances in 3D web visualization platforms are increasing our capability to open this type of visual inspection to the ordinary public. We present the design and implementation of a system that provides the a...

  14. THE STUDY OF THE PROCESS OF FORECASTING INFRASTRUCTURAL SUPPORT FOR BUSINESS

    Directory of Open Access Journals (Sweden)

    E. V. Sibirskaia

    2014-01-01

    Full Text Available Summary. When forecasting the necessary infrastructural support for entrepreneurship, one predicts the rational distribution of potential and expected results on the basis of the development capacity of each component of infrastructural support, the efficient use of resources, the expertise and development of regional economies, the rationalization of administrative decisions, etc. According to the authors, the process of forecasting the infrastructural support of business includes the following steps: analysis of the infrastructural support of business existing at the start of the forecast period, including the structure of resources, identifying disparities and their causes, and identifying positive trends from the analysis and the results of research; study of each component of the infrastructural support of entrepreneurship, assessing the complex system of social relations, institutions, structures and objects, and drawing the findings and conclusions of the study; identification of areas of strategic change and of possibilities for eliminating weaknesses and imbalances, and identification of prospects for the development of entrepreneurship; identification of the set of factors and conditions affecting each component of infrastructural support, calculating the degree of influence of each of them and the total effect of all factors; and adjustment of the infrastructure forecast indicators. A review of the literature on strategic planning and forecasting shows that methods of strategic planning are considered separately from forecasting methods; a combination of strategic planning and forecasting methods, as applied to the infrastructural support of business activity, is not given in the literature. Nevertheless, the authors consider that this category should be defined in order to characterize the intrinsic and substantive nature of strategic planning and forecasting of the infrastructural support of business activity.

  15. Optimizing infrastructure for software testing using virtualization

    International Nuclear Information System (INIS)

    Khalid, O.; Shaikh, A.; Copy, B.

    2012-01-01

    Virtualization technology and cloud computing have brought a paradigm shift in the way we utilize, deploy and manage computer resources. They allow fast deployment of multiple operating systems as containers on physical machines, which can be either discarded after use or check-pointed for later re-deployment. At the European Organization for Nuclear Research (CERN), we have been using virtualization technology to quickly set up virtual machines for our developers with pre-configured software to enable them to quickly test/deploy a new version of a software patch for a given application. This paper reports both on the techniques that have been used to set up a private cloud on commodity hardware and on the optimization techniques we used to remove deployment-specific performance bottlenecks. (authors)
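
    To make the check-point-or-discard workflow concrete, a minimal sketch using the libvirt Python bindings might look like the following; the domain XML, image path and names are hypothetical illustrations, not CERN's actual setup:

```python
# Minimal sketch (not from the paper): boot a pre-configured test VM with
# the libvirt Python bindings, then either checkpoint or discard it.
# The domain XML, image path and names are hypothetical.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>patch-test-01</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/preconfigured-dev.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open('qemu:///system')   # connect to the local hypervisor
dom = conn.defineXML(DOMAIN_XML)        # register the VM definition
dom.create()                            # boot it for the test run

# ... run the software patch test inside the guest ...

# Option A: checkpoint the machine for later re-deployment.
dom.snapshotCreateXML(
    "<domainsnapshot><name>after-test</name></domainsnapshot>", 0)

# Option B: discard the disposable VM after use.
dom.destroy()
dom.undefine()
conn.close()
```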

  16. Strengthening Software Authentication with the ROSE Software Suite

    International Nuclear Information System (INIS)

    White, G

    2006-01-01

    Many recent nonproliferation and arms control software projects include a software authentication regime. These include U.S. Government-sponsored projects both in the United States and in the Russian Federation (RF). This trend toward requiring software authentication is only accelerating. Demonstrating assurance that software performs as expected without hidden "backdoors" is crucial to a project's success. In this context, "authentication" is defined as determining that a software package performs only its intended purpose and performs said purpose correctly and reliably over the planned duration of an agreement. In addition to visual inspections by knowledgeable computer scientists, automated tools are needed to highlight suspicious code constructs, both to aid visual inspection and to guide program development. While many commercial tools are available for portions of the authentication task, they are proprietary and not extensible. An open-source, extensible tool can be customized to the unique needs of each project (projects can have both common and custom rules to detect flaws and security holes). Any such extensible tool has to be based on a complete language compiler. ROSE is precisely such a compiler infrastructure developed within the Department of Energy (DOE) and targeted at the optimization of scientific applications and user-defined libraries within large-scale applications (typically applications of a million lines of code). ROSE is a robust, source-to-source analysis and optimization infrastructure currently addressing large, million-line DOE applications in C and C++ (handling the full C, C99, C++ languages and with current collaborations to support Fortran90). We propose to extend ROSE to address a number of security-specific requirements, and apply it to software authentication for nonproliferation and arms control projects.
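
    ROSE itself operates on C, C++ and Fortran sources, but the rule-based scanning for suspicious constructs that the report describes can be illustrated in miniature with Python's own ast module; the two "rules" below (dynamic-execution calls and hard-coded network addresses) are hypothetical examples, not rules from the ROSE project:

```python
# Illustrative sketch only: ROSE analyzes C/C++ sources, but the same
# rule-based AST-scanning idea can be shown with Python's ast module.
# The rules here are invented examples of backdoor-like constructs.
import ast

SUSPICIOUS_CALLS = {"eval", "exec", "system"}

def scan(source: str):
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Rule 1: calls to dynamic-execution primitives
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in SUSPICIOUS_CALLS:
                findings.append((node.lineno, f"call to {node.func.id}"))
        # Rule 2: string constants that look like hard-coded IP addresses
        if isinstance(node, ast.Constant) and isinstance(node.value, str):
            parts = node.value.split(".")
            if len(parts) == 4 and all(p.isdigit() for p in parts):
                findings.append((node.lineno, f"hard-coded address {node.value!r}"))
    return findings

if __name__ == "__main__":
    sample = "data = eval(user_input)\nhost = '10.0.0.13'\n"
    for line, message in scan(sample):
        print(f"line {line}: {message}")
```

    An extensible checker of this shape lets each project ship common rules plus its own custom ones, which is the property the report highlights.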

  17. Infrastructure for Multiphysics Software Integration in High Performance Computing-Aided Science and Engineering

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, Michael T. [Illinois Rocstar LLC, Champaign, IL (United States); Safdari, Masoud [Illinois Rocstar LLC, Champaign, IL (United States); Kress, Jessica E. [Illinois Rocstar LLC, Champaign, IL (United States); Anderson, Michael J. [Illinois Rocstar LLC, Champaign, IL (United States); Horvath, Samantha [Illinois Rocstar LLC, Champaign, IL (United States); Brandyberry, Mark D. [Illinois Rocstar LLC, Champaign, IL (United States); Kim, Woohyun [Illinois Rocstar LLC, Champaign, IL (United States); Sarwal, Neil [Illinois Rocstar LLC, Champaign, IL (United States); Weisberg, Brian [Illinois Rocstar LLC, Champaign, IL (United States)

    2016-10-15

    There are over 100 unit tests provided that run through the Illinois Rocstar Application Development (IRAD) lightweight testing infrastructure that is also supplied along with IMPACT. The package as a whole provides an excellent base for developing high-quality multiphysics applications using modern software development practices. To facilitate understanding how to utilize IMPACT effectively, two multiphysics systems have been developed and are available open source through GitHub. The simpler of the two systems, named ElmerFoamFSI in the repository, is a multiphysics, fluid-structure-interaction (FSI) coupling of the solid mechanics package Elmer with a fluid dynamics module from OpenFOAM. This coupling illustrates how to combine software packages that are unrelated by either author or architecture and combine them into a robust, parallel multiphysics system. A more complex multiphysics tool is the Illinois Rocstar Rocstar Multiphysics code that was rebuilt during the project around IMPACT. Rocstar Multiphysics was already an HPC multiphysics tool, but now that it has been rearchitected around IMPACT, it can be readily expanded to capture new and different physics in the future. In fact, during this project, the Elmer and OpenFOAM tools were also coupled into Rocstar Multiphysics and demonstrated. The full Rocstar Multiphysics codebase is also available on GitHub, and licensed for any organization to use as they wish. Finally, the new IMPACT product is already being used in several multiphysics code coupling projects for the Air Force, NASA and the Missile Defense Agency, and initial work on expansion of the IMPACT-enabled Rocstar Multiphysics has begun in support of a commercial company. These initiatives promise to expand the interest and reach of IMPACT and Rocstar Multiphysics, ultimately leading to the envisioned standardization and consortium of users that was one of the goals of this project.

  18. Response to Intervention (RtI) in the Social, Emotional, and Behavioral Domains: Current Challenges and Emerging Possibilities

    Science.gov (United States)

    Saeki, Elina; Jimerson, Shane R.; Earhart, James; Hart, Shelley R.; Renshaw, Tyler; Singh, Renee D.; Stewart, Kaitlyn

    2011-01-01

    As many schools move toward a three-tier model that incorporates a Response to Intervention (RtI) service delivery model in the social, emotional, and behavioral domains, school psychologists may provide leadership. The decision-making process for filtering students through multiple tiers of support and intervention and examining change is an area…

  19. Software Development Infrastructure for the FAIR Experiments

    International Nuclear Information System (INIS)

    Uhlig, F; Al-Turany, M; Bertini, D; Karabowicz, R

    2011-01-01

    The proposed project FAIR (Facility for Anti-proton and Ion Research) is an international accelerator facility of the next generation. It builds on top of the experience and technological developments already made at the existing GSI facility, and incorporates new technological concepts. The four scientific pillars of FAIR are NUSTAR (nuclear structure and astrophysics), PANDA (QCD studies with cooled beams of anti-protons), CBM (physics of hadronic matter at highest baryon densities), and APPA (atomic physics, plasma physics, and applications). The FairRoot framework, used by all of the big FAIR experiments as a base for their own specific developments, provides basic functionality like IO, geometry handling, etc. The challenge is to support all the different experiments with their heterogeneous requirements. Due to the limited manpower, one of the first design decisions was to (re)use as much already available and tested software as possible and to focus on the development of the framework. Besides the framework itself, the FairRoot core team also provides some software development tools. We will describe the complete set of tools in this article. The Makefiles for all projects are generated using CMake. For software testing and the corresponding quality assurance, we use CTest to generate the results and CDash as the web front end. The tools are completed by Subversion as the source code repository and Trac as the tool for complete source code management. This set of tools allows us to offer the full functionality we have for FairRoot also to the experiments based on FairRoot.

  20. New Features in the Computational Infrastructure for Nuclear Astrophysics

    International Nuclear Information System (INIS)

    Smith, Michael Scott; Lingerfelt, Eric; Scott, J. P.; Nesaraja, Caroline D; Chae, Kyung YuK.; Koura, Hiroyuki; Roberts, Luke F.; Hix, William Raphael; Bardayan, Daniel W.; Blackmon, Jeff C.

    2006-01-01

    A Computational Infrastructure for Nuclear Astrophysics has been developed to streamline the inclusion of the latest nuclear physics data in astrophysics simulations. The infrastructure consists of a platform-independent suite of computer codes that are freely available online at http://nucastrodata.org. The newest features of, and future plans for, this software suite are given.

  1. Infrastructure for Detector Research and Development towards the International Collider

    CERN Document Server

    Aguilar, J.; Fiutowski, T.; Idzik, M.; Kulis, Sz.; Przyborowski, D.; Swientek, K.; Bamberger, A.; Kohli, M.; Lupberger, M.; Renz, U.; Schumacher, M.; Zwerger, Andreas; Calderone, A.; Cussans, D.G.; Heath, H.F.; Mandry, S.; Page, R.F.; Velthuis, J.J.; Attie, D.; Calvet, D.; Colas, P.; Coppolani, X.; Degerli, Y.; Delagnes, E.; Gelin, M.; Giomataris, I.; Lutz, P.; Orsini, F.; Rialot, M.; Senee, F.; Wang, W.; Alozy, J.; Apostolakis, J.; Aspell, P.; Bergsma, F.; Campbell, M.; Formenti, F.; Santos, H.Franca; Garcia, E.Garcia; de Gaspari, M.; Giudice, P.A.; Grefe, Ch.; Grichine, V.; Hauschild, M.; Ivantchenko, V.; Kehrli, A.; Kloukinas, K.; Linssen, L.; Cudie, X.Llopart; Marchioro, A.; Musa, L.; Ribon, A.; Trampitsch, G.; Uzhinskiy, V.; Anduze, M.; Beyer, E.; Bonnemaison, A.; Boudry, V.; Brient, J.C.; Cauchois, A.; Clerc, C.; Cornat, R.; Frotin, M.; Gastaldi, F.; Jauffret, C.; Jeans, D.; Karar, A.; Mathieu, A.; de Freitas, P.Mora; Musat, G.; Rouge, A.; Ruan, M.; Vanel, J.C.; Videau, H.; Besson, A.; de Masi, G.Claus.R.; Doziere, G.; Dulinski, W.; Goffe, M.; Himmi, A.; Hu-Guo, Ch.; Morel, F.; Valin, I.; Winter, M.; Bonis, J.; Callier, S.; Cornebise, P.; Dulucq, F.; Giannelli, M.Faucci; Fleury, J.; Guilhem, G.; Martin-Chassard, G.; de la Taille, Ch.; Poschl, R.; Raux, L.; Seguin-Moreau, N.; Wicek, F.; Benyamna, M.; Bonnard, J.; Carloganu, C.; Fehr, F.; Gay, P.; Mannen, S.; Royer, L.; Charpy, A.; Da Silva, W.; David, J.; Dhellot, M.; Imbault, D.; Ghislain, P.; Kapusta, F.; Pham, T.Hung; Savoy-Navarro, A.; Sefri, R.; Dzahini, D.; Giraud, J.; Grondin, D.; Hostachy, J.Y.; Morin, L.; Bassignana, D.; Pellegrini, G.; Lozano, M.; Quirion, D.; Fernandez, M.; Jaramillo, R.; Munoz, F.J.; Vila, I.; Dolezal, Z.; Drasal, Z.; Kodys, P.; Kvasnicka, P.; Aplin, S.; Bachynska, O.; Behnke, T.; Behr, J.; Dehmelt, K.; Engels, J.; Gadow, K.; Gaede, F.; Garutti, E.; Gottlicher, P.; Gregor, I.M.; Haas, T.; Henschel, H.; Koetz, U.; Lange, W.; Libov, V.; Lohmann, W.; Lutz, B.; Mnich, J.; Muhl, C.; Ohlerich, M.; Potylitsina-Kube, N.; Prahl, V.; Reinecke, M.; Roloff, P.; Rosemann, Ch.; Rubinski, Igor; Schade, P.; Schuwalov, S.; Sefkow, F.; Terwort, M.; Volkenborn, R.; Kalliopuska, J.; Mehtaelae, P.; Orava, R.; van Remortel, N.; Cvach, J.; Janata, M.; Kvasnicka, J.; Marcisovsky, M.; Polak, I.; Sicho, P.; Smolik, J.; Vrba, V.; Zalesak, J.; Bergauer, T.; Dragicevic, M.; Friedl, M.; Haensel, S.; Irmler, C.; Kiesenhofer, W.; Krammer, M.; Valentan, M.; Piemontese, L.; Cotta-Ramusino, A.; Bulgheroni, A.; Jastrzab, M.; Caccia, M.; Re, V.; Ratti, L.; Traversi, G.; Dewulf, J.P.; Janssen, X.; De Lentdecker, G.; Yang, Y.; Bryngemark, L.; Christiansen, P.; Gross, P.; Jonsson, L.; Ljunggren, M.; Lundberg, B.; Mjornmark, U.; Oskarsson, A.; Richert, T.; Stenlund, E.; Osterman, L.; Rummel, S.; Richter, R.; Andricek, L.; Ninkovich, J.; Koffmane, Ch.; Moser, H.G.; Boisvert, V.; Green, B.; Green, M.G.; Misiejuk, A.; Wu, T.; Bilevych, Y.; Carballo, V.M.Blanco; Chefdeville, M.; de Nooij, L.; Fransen, M.; Hartjes, F.; van der Graaf, H.; Timmermans, J.; Abramowicz, H.; Ben-Hamu, Y.; Jikhleb, I.; Kananov, S.; Levy, A.; Levy, I.; Sadeh, I.; Schwartz, R.; Stern, A.; Goodrick, M.J.; Hommels, L.B.A.; Ward, R.Shaw.D.R.; Daniluk, W.; Kielar, E.; Kotula, J.; Moszczynski, A.; Oliwa, K.; Pawlik, B.; Wierba, W.; Zawiejski, L.; Bailey, D.S.; Kelly, M.; Eigen, G.; Brezina, Ch.; Desch, K.; Furletova, J.; Kaminski, J.; Killenberg, M.; Kockner, F.; Krautscheid, T.; Kruger, H.; Reuen, L.; Wienemann, P.; Zimmermann, R.; Zimmermann, S.; Bartsch, V.; Postranecky, M.; 
Warren, M.; Wing, M.; Corrin, E.; Haas, D.; Pohl, M.; Diener, R.; Fischer, P.; Peric, I.; Kaukher, A.; Schafer, O.; Schroder, H.; Wurth, R.; Zarnecki, A.F.

    2012-01-01

    The EUDET-project was launched to create an infrastructure for developing and testing new and advanced detector technologies to be used at a future linear collider. The aim was to make experimentation and data analysis possible for institutes that could otherwise not realize them due to lack of resources. The infrastructure comprised an analysis and software network, and instrumentation infrastructures for tracking detectors as well as for calorimetry.

  2. Software and the future of programming languages.

    Science.gov (United States)

    Aho, Alfred V

    2004-02-27

    Although software is the key enabler of the global information infrastructure, the amount and extent of software in use in the world today are not widely understood, nor are the programming languages and paradigms that have been used to create the software. The vast size of the embedded base of existing software and the increasing costs of software maintenance, poor security, and limited functionality are posing significant challenges for the software R&D community.

  3. The EPOS e-Infrastructure

    Science.gov (United States)

    Jeffery, Keith; Bailo, Daniele

    2014-05-01

    The European Plate Observing System (EPOS) is integrating geoscientific information concerning earth movements in Europe. We are approaching the end of the PP (Preparatory Project) phase and in October 2014 expect to continue with the full project within ESFRI (European Strategic Framework for Research Infrastructures). The key aspects of EPOS concern providing services to allow homogeneous access by end-users over heterogeneous data, software, facilities, equipment and services. The e-infrastructure of EPOS is the heart of the project since it integrates the work on organisational, legal, economic and scientific aspects. Following the creation of an inventory of relevant organisations, persons, facilities, equipment, services, datasets and software (RIDE), the scale of integration required became apparent. The EPOS e-infrastructure architecture has been developed systematically based on recorded primary (user) requirements and secondary (interoperation with other systems) requirements through Strawman, Woodman and Ironman phases with the specification - and developed confirmatory prototypes - becoming more precise and progressively moving from paper to implemented system. The EPOS architecture is based on global core services (Integrated Core Services - ICS) which access thematic nodes (domain-specific European-wide collections, called Thematic Core Services - TCS), national nodes and specific institutional nodes. The key aspect is the metadata catalog. In one dimension this is described in 3 levels: (1) discovery metadata using well-known and commonly used standards such as DC (Dublin Core) to enable users (via an intelligent user interface) to search for objects within the EPOS environment relevant to their needs; (2) contextual metadata providing the context of the object described in the catalog to enable a user or the system to determine the relevance of the discovered object(s) to their requirement - the context includes projects, funding, organisations
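
    As a rough illustration of the level-1 discovery metadata described above, a Dublin Core-style record could be sketched as follows; the dataset, field values and the naive search helper are invented for illustration and are not part of EPOS:

```python
# Hypothetical example of a level-1 discovery record using Dublin Core
# element names, as the abstract describes; the dataset itself is invented.
discovery_record = {
    "dc:title": "Broadband seismic waveforms, Apennines network",
    "dc:creator": "Example Observatory",                  # hypothetical provider
    "dc:subject": ["seismology", "waveforms"],
    "dc:description": "Continuous broadband recordings, 2012-2014.",
    "dc:type": "Dataset",
    "dc:format": "miniSEED",
    "dc:identifier": "doi:10.0000/example-epos-dataset",  # placeholder DOI
    "dc:coverage": "41.5N-44.0N, 12.0E-15.0E",
    "dc:rights": "CC-BY-4.0",
}

def matches(record: dict, term: str) -> bool:
    """Naive discovery-level search across all DC fields."""
    blob = " ".join(str(v) for v in record.values()).lower()
    return term.lower() in blob

print(matches(discovery_record, "seismology"))  # True
```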

  4. Road traffic injuries in developing countries: research and action agenda

    OpenAIRE

    Huang, Cheng-Min; International Injury Research Unit, Department of International Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA. Physician, Master in Health Sciences.; Lunnen, Jeffrey C.; International Injury Research Unit, Department of International Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA. Master's candidate in Women's Studies.; Miranda, J. Jaime; Road Traffic Accident Research Program, Salud Sin Límites Perú, Lima, Peru. Facultad de Medicina, Universidad Peruana Cayetano Heredia, Lima, Peru. Physician, Master and Doctor in Epidemiology.; Hyder, Adnan A.; International Injury Research Unit, Department of International Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA. Physician, Master and Doctor in Public Health.

    2010-01-01

    Road traffic injury (RTI) is the leading cause of death in persons aged 10-24 worldwide and accounts for about 15% of all male deaths. The burden of RTI is unevenly distributed amongst countries with over eighty-fold differences between the highest and lowest death rates. Thus the unequal risk of RTI occurring in the developing world, due to many reasons, including but not limited to rapid motorization and poor infrastructure, is a major global challenge. This editorial highlights a numbe...

  5. Upgrade Software and Computing

    CERN Document Server

    The LHCb Collaboration, CERN

    2018-01-01

    This document reports the Research and Development activities that are carried out in the software and computing domains in view of the upgrade of the LHCb experiment. The implementation of a full software trigger implies major changes in the core software framework, in the event data model, and in the reconstruction algorithms. The increase of the data volumes for both real and simulated datasets requires a corresponding scaling of the distributed computing infrastructure. An implementation plan in both domains is presented, together with a risk assessment analysis.

  6. Sustainable access to data, products, services and software from the European seismological Research Infrastructures: the EPOS TCS Seismology

    Science.gov (United States)

    Haslinger, Florian; Dupont, Aurelien; Michelini, Alberto; Rietbrock, Andreas; Sleeman, Reinoud; Wiemer, Stefan; Basili, Roberto; Bossu, Rémy; Cakti, Eser; Cotton, Fabrice; Crawford, Wayne; Diaz, Jordi; Garth, Tom; Locati, Mario; Luzi, Lucia; Pinho, Rui; Pitilakis, Kyriazis; Strollo, Angelo

    2016-04-01

    Easy, efficient and comprehensive access to data, data products, scientific services and scientific software is a key ingredient in enabling research at the frontiers of science. Organizing this access across the European Research Infrastructures in the field of seismology, so that it best serves user needs, takes advantage of state-of-the-art ICT solutions, provides cross-domain interoperability, and is organizationally and financially sustainable in the long term, is the core challenge of the implementation phase of the Thematic Core Service (TCS) Seismology within the EPOS-IP project. Building upon the existing European-level infrastructures ORFEUS for seismological waveforms, EMSC for seismological products, and EFEHR for seismological hazard and risk information, and implementing a pilot Computational Earth Science service starting from the results of the VERCE project, the work within the EPOS-IP project focuses on improving and extending the existing services and aligning them with global developments, to produce in the end a well-coordinated framework that is technically, organizationally, and financially integrated with the EPOS architecture. This framework needs to respect the roles and responsibilities of the underlying national research infrastructures that are the data owners and main providers of data and products, and allow for active input and feedback from the (scientific) user community. At the same time, it needs to remain flexible enough to cope with unavoidable challenges in the availability of resources and dynamics of contributors. The technical work during the next years is organized in four areas: - constructing the next generation software architecture for the European Integrated (waveform) Data Archive EIDA, developing advanced metadata and station information services, fully integrating strong motion waveforms and derived parametric engineering-domain data, and advancing the integration of mobile (temporary) networks and OBS deployments in

  7. Development of a head and neck companion module for the quality of life - radiation therapy instrument (QOL-RTI)

    International Nuclear Information System (INIS)

    Trotti, Andy; Johnson, Darlene J.; Gwede, Clement; Casey, Linda; Cantor, Alan

    1997-01-01

    Purpose/Objective: The purpose of this study is to develop a Likert version, disease-specific module for the Quality of Life - Radiation Therapy Instrument (QOL-RTI) for head and neck patients. This module, combined with the QOL-RTI, may help us to determine what changes, if any, occur in the quality of life of patients receiving primary radiation therapy for head and neck cancer. A compelling reason to study head and neck patients is the extensive difficulties these patients experience while undergoing radiotherapy. Their psychological and emotional well-being may also have an adverse effect on their quality of life as they strive to cope both with a life-threatening disease and with the prospect of disfigurement or dysfunction that cannot be hidden from view. Many tools have been developed to measure quality of life; however, previously published literature has focused on quantifying functional status and toxicity assessment rather than the alteration of quality of life as perceived by the patient. This study, limited to patients receiving primary radiation, is expected to yield a tool that is applicable to the patient receiving multi-modality treatment for head and neck cancer and will be useful to the physician/nurse in determining interventions for these patients. Materials and Methods: This study is an uncontrolled, non-randomized exploratory study, including fifty (50) consecutive eligible patients undergoing definitive primary radiotherapy. The questionnaires were given to all eligible patients being treated with radiotherapy for head and neck cancer who were not already registered in another research study assessing Quality of Life (i.e. RTOG). During the 5-8 week treatment period, the QOL-RTI/H and N instrument is administered as follows: at baseline evaluation (immediately prior to the beginning of radiotherapy), at week four (4) two days in a row (for test-retest) and at the end of the treatment period. For validation purposes the QOL-RTI/H and

  8. Development of a head and neck companion module for the quality of life-radiation therapy instrument (QOL-RTI)

    International Nuclear Information System (INIS)

    Trotti, Andy; Johnson, Darlene J.; Gwede, Clement; Casey, Linda; Sauder, Bonnie; Cantor, Alan; Pearlman, James

    1998-01-01

    Purpose/Objective: A review of available head and neck quality of life (QOL) instruments reveals that they inadequately address important radiation-related side effects or are too cumbersome for routine use. The purpose of this study was to develop a head and neck disease-specific module as a companion to the previously developed quality of life - radiation therapy instrument (QOL-RTI). The goal was to create a more complete, yet concise, head and neck site-specific module geared toward patients receiving radiation therapy for head and neck cancer. Methods and Materials: This exploratory study included 34 consecutive patients undergoing definitive radiotherapy over a 6-7 week course (60-79.8 Gy). We developed and administered a 14-item questionnaire to all eligible patients treated with radiotherapy for head and neck cancer who were not already registered in another research study assessing quality of life (e.g., RTOG). During the treatment period, the QOL-RTI general tool and the head and neck (H and N) module were administered as follows: at baseline, at week four (for test-retest), and at the end of the treatment period. For validation purposes the QOL-RTI/H and N was compared to the functional assessment cancer tool head and neck (FACT-H and N) questionnaire. The FACT-H and N was administered one time at week 4, on the same day as the QOL-RTI/H and N. This report includes the treatment phase of the study (during the course of radiation). Results: Mean age was 62 years (range 40-75). Internal consistency of the module was satisfactory (Cronbach's α = 0.85). Test-retest yielded a correlation coefficient of 0.90 (p < 0.001). Concurrent validity, established by comparing the module to the FACT-H and N, yielded a correlation coefficient of 0.85. Significant changes in quality of life scores during a course of radiation were noted for both the general quality of life tool and the site-specific module. For the head and neck module, the difference in the mean baseline
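
    For readers unfamiliar with the internal-consistency statistic reported above, the following minimal sketch shows how a Cronbach's alpha such as the reported 0.85 is computed; the response matrix here is randomly generated, not the study's data:

```python
# Sketch of how an internal-consistency figure like Cronbach's alpha is
# computed: alpha = k/(k-1) * (1 - sum(item variances)/var(total score)).
# The response matrix is invented (rows = patients, columns = items).
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """responses: 2-D array of shape (n_patients, n_items)."""
    k = responses.shape[1]
    item_variances = responses.var(axis=0, ddof=1)       # per-item variance
    total_variance = responses.sum(axis=1).var(ddof=1)   # variance of total score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
fake_scores = rng.integers(0, 11, size=(34, 14))  # 34 patients, 14 items (as in the study design)
print(round(cronbach_alpha(fake_scores), 2))      # random data yields alpha near 0
```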

  9. Structured Cloud Federation for Carrier and ISP Infrastructure

    OpenAIRE

    Xhagjika, Vamis; Vlassov, Vladimir; Molin, Magnus; Toma, Simona

    2014-01-01

    Cloud Computing in recent years has seen enhanced growth and extensive support by the research community and industry. The advent of cloud computing realized the concept of commodity computing, in which infrastructure (resources) can be allocated on demand, giving the illusion of infinite resource availability. The state-of-the-art Carrier and ISP infrastructure technology is composed of software services tightly coupled to the underlying customized hardware architecture. The fast growth of clou...

  10. Managing a tier-2 computer centre with a private cloud infrastructure

    International Nuclear Information System (INIS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-01-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, that hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI
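
    Because the infrastructure exposes EC2-compatible APIs, a standard client can target the private cloud simply by overriding the service endpoint; the sketch below assumes the boto3 package, and the endpoint URL, credentials and image ID are invented:

```python
# Sketch: a private cloud exposing an EC2-compatible API can be driven by
# a stock client such as boto3 by overriding the endpoint. The endpoint
# URL, credentials and image ID below are hypothetical placeholders.
import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="https://cloud.example-t2.example.org:4567",  # site EC2 gateway (made up)
    aws_access_key_id="USER_KEY",
    aws_secret_access_key="USER_SECRET",
    region_name="site-local",
)

# Start one worker-node VM from a pre-built, contextualized virtual image.
response = ec2.run_instances(
    ImageId="ami-gridwn001",   # hypothetical image identifier
    MinCount=1,
    MaxCount=1,
    InstanceType="m1.large",
)
print(response["Instances"][0]["InstanceId"])
```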

  11. Importance for Municipalities of Infrastructure Information Systems in Turkey

    Directory of Open Access Journals (Sweden)

    Kamil KARATAS

    2017-08-01

    Full Text Available Technical infrastructures are important development-level parameters of countries; they are difficult to maintain and require high investment costs. Information systems should be taken advantage of for the better administration of technical infrastructure facilities, for planning, and for taking effective decisions. Hence, infrastructure information systems oriented to technical infrastructure (TI) must be built. In this study, Kunduracilar Street in Trabzon was selected as the pilot area for urban TI studies. Graphic and attribute information for the pilot area were collected. Each TI facility was registered to the same coordinate system in a separate layer. Maps showing the TI facilities in the pilot area and a 3D view of the site were prepared with ArcGIS software.

  12. First use of LHC Run 3 Conditions Database infrastructure for auxiliary data files in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081940; The ATLAS collaboration; Barberis, Dario; Gallas, Elizabeth; Rybkin, Grigori; Rinaldi, Lorenzo; Aperio Bella, Ludovica; Buttinger, William

    2017-01-01

    Processing of the large amount of data produced by the ATLAS experiment requires fast and reliable access to what we call Auxiliary Data Files (ADF). These files, produced by Combined Performance, Trigger and Physics groups, contain conditions, calibrations, and other derived data used by the ATLAS software. In ATLAS this data has, thus far for historical reasons, been collected and accessed outside the ATLAS Conditions Database infrastructure and related software. For this reason, along with the fact that ADF are effectively read by the software as binary objects, this class of data appears ideal for testing the proposed Run 3 conditions data infrastructure now in development. This paper describes this implementation as well as the lessons learned in exploring and refining the new infrastructure with the potential for deployment during Run 2.

  13. First Use of LHC Run 3 Conditions Database Infrastructure for Auxiliary Data Files in ATLAS

    CERN Document Server

    Aperio Bella, Ludovica; The ATLAS collaboration

    2016-01-01

    Processing of the large amount of data produced by the ATLAS experiment requires fast and reliable access to what we call Auxiliary Data Files (ADF). These files, produced by Combined Performance, Trigger and Physics groups, contain conditions, calibrations, and other derived data used by the ATLAS software. In ATLAS this data has, thus far for historical reasons, been collected and accessed outside the ATLAS Conditions Database infrastructure and related software. This, along with the fact that ADF data is effectively read by the software as binary objects, makes this class of data ideal for testing the proposed Run 3 Conditions data infrastructure now in development. This paper will describe this implementation as well as the lessons learned in exploring and refining the new infrastructure with the potential for deployment during Run 2.

  14. Using neural networks in software repositories

    Science.gov (United States)

    Eichmann, David (Editor); Srinivas, Kankanahalli; Boetticher, G.

    1992-01-01

    The first topic is an exploration of the use of neural network techniques to improve the effectiveness of retrieval in software repositories. The second topic relates to a series of experiments conducted to evaluate the feasibility of using adaptive neural networks as a means of deriving (or more specifically, learning) measures on software. Taken together, these two efforts illuminate a very promising mechanism supporting software infrastructures - one based upon a flexible and responsive technology.

  15. Crowdsourcing cloud-based software development

    CERN Document Server

    Li, Wei; Tsai, Wei-Tek; Wu, Wenjun

    2015-01-01

    This book presents the latest research on the software crowdsourcing approach to develop large and complex software in a cloud-based platform. It develops the fundamental principles, management organization and processes, and a cloud-based infrastructure to support this new software development approach. The book examines a variety of issues in software crowdsourcing processes, including software quality, costs, diversity of solutions, and the competitive nature of crowdsourcing processes. Furthermore, the book outlines a research roadmap of this emerging field, including all the key technology and management issues for the foreseeable future. Crowdsourcing, as demonstrated by Wikipedia and Facebook for online web applications, has shown promising results for a variety of applications, including healthcare, business, gold mining exploration, education, and software development. Software crowdsourcing is emerging as a promising solution to designing, developing and maintaining software. Preliminary software cr...

  16. National Computational Infrastructure for Lattice Gauge Theory

    Energy Technology Data Exchange (ETDEWEB)

    Brower, Richard C.

    2014-04-15

    SciDAC-2 Project The Secret Life of Quarks: National Computational Infrastructure for Lattice Gauge Theory, from March 15, 2011 through March 14, 2012. The objective of this project is to construct the software needed to study quantum chromodynamics (QCD), the theory of the strong interactions of sub-atomic physics, and other strongly coupled gauge field theories anticipated to be of importance in the energy regime made accessible by the Large Hadron Collider (LHC). It builds upon the successful efforts of the SciDAC-1 project National Computational Infrastructure for Lattice Gauge Theory, in which a QCD Applications Programming Interface (QCD API) was developed that enables lattice gauge theorists to make effective use of a wide variety of massively parallel computers. This project serves the entire USQCD Collaboration, which consists of nearly all the high energy and nuclear physicists in the United States engaged in the numerical study of QCD and related strongly interacting quantum field theories. All software developed in it is publicly available, and can be downloaded from a link on the USQCD Collaboration web site, or directly from the GitHub repositories via the link http://usqcd-software.github.io

  17. Genetic Algorithms for Agent-Based Infrastructure Interdependency Modeling and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    May Permann

    2007-03-01

    Today’s society relies greatly upon an array of complex national and international infrastructure networks such as transportation, electric power, telecommunication, and financial networks. This paper describes initial research combining agent-based infrastructure modeling software and genetic algorithms (GAs) to help optimize infrastructure protection and restoration decisions. This research proposes to apply GAs to the problem of infrastructure modeling and analysis in order to determine the optimum assets to restore or protect from attack or other disaster. This research is just commencing and therefore the focus of this paper is the integration of a GA optimization method with a simulation through the simulation’s agents.
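
    As a toy illustration of the approach described above, the sketch below evolves a protection plan over a set of assets under a fixed budget; the asset costs, values and fitness model are invented for illustration, whereas in the paper the fitness of a plan would come from the agent-based infrastructure simulation:

```python
# Toy GA sketch: choose which infrastructure assets to protect, under a
# fixed budget, to maximize a fitness score. All numbers are invented;
# in the paper, fitness would be evaluated by the agent-based simulation.
import random

ASSETS = [  # (protection cost, value of keeping the asset up) - hypothetical
    (4, 10), (3, 6), (5, 12), (2, 3), (6, 14), (1, 2), (4, 8), (3, 7),
]
BUDGET = 12

def fitness(genome):
    cost = sum(c for g, (c, _) in zip(genome, ASSETS) if g)
    value = sum(v for g, (_, v) in zip(genome, ASSETS) if g)
    return value if cost <= BUDGET else 0   # infeasible plans score zero

def evolve(pop_size=40, generations=60, mutation=0.05):
    pop = [[random.randint(0, 1) for _ in ASSETS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(ASSETS))     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation else g
                     for g in child]                   # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("protect assets:", best, "fitness:", fitness(best))
```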

  18. National Computational Infrastructure for Lattice Gauge Theory: Final Report

    International Nuclear Information System (INIS)

    Richard Brower; Norman Christ; Michael Creutz; Paul Mackenzie; John Negele; Claudio Rebbi; David Richards; Stephen Sharpe; Robert Sugar

    2006-01-01

    This is the final report of the Department of Energy SciDAC Grant "National Computational Infrastructure for Lattice Gauge Theory". It describes the software developed under this grant, which enables the effective use of a wide variety of supercomputers for the study of lattice quantum chromodynamics (lattice QCD). It also describes the research on and development of commodity clusters optimized for the study of QCD. Finally, it provides some highlights of research enabled by the infrastructure created under this grant, as well as a full list of the papers resulting from research that made use of this infrastructure.

  19. International Liability Issues for Software Quality

    National Research Council Canada - National Science Library

    Mead, Nancy

    2003-01-01

    This report focuses on international law related to cybercrime, international information security standards, and software liability issues as they relate to information security for critical infrastructure applications...

  20. THE EFFECT OF ACCESS TO HEALTH SERVICES, PERFORMED TREATMENT INDEX (PTI) AND REQUIREMENT TREATMENT INDEX (RTI) ON ORAL HYGIENE BEHAVIOR

    Directory of Open Access Journals (Sweden)

    Niniek L. Pratiwi

    2012-11-01

    Full Text Available Background: Based on the national health survey conducted by the Department of Health of Indonesia in 2001, about 70 percent of the Indonesian population aged 10 years and over have experienced tooth decay. At age 12 years the prevalence of tooth decay reaches 43.9%, at age 15 years 37.4%, at age 18 years 51.1%, at ages 35-44 80.1%, and at age 65 years and over 96.7%. Methods: Data were drawn from the Riskesdas survey and analyzed. Oral hygiene behavior, a nominal variable, served as the dependent variable; the independent variables were access to health services, PTI, and RTI, which are measured on an ordinal scale. The analysis design is ordinal regression. Result: Several variables significantly affected oral hygiene behavior at α 0.05 (p value = 0.000, p < 0.05): travel time and distance to the health center, age, occupation of the head of household, level of per capita household expenditure, PTI, and RTI. The shorter the travel time to a health center, the greater the percentage of toothbrushing behavior; conversely, the longer the travel time, the lower it is. The accessibility of health service facilities significantly affects preventive efforts and community dental health promotion. Recommendation: Improvements are needed in the accessibility of health care facilities, especially dental health services for remote areas, islands and borders, in terms of facilities and equipment as well as dental health personnel. A short distance and travel time to the health service center is an enabling or supporting factor that, as a form of ease in obtaining access to knowledge about dental health, acts together with the predisposing factors influencing toothbrushing behavior. Predisposing factors embodied in knowledge, together with reinforcing factors, increase one's motivation for toothbrushing behavior. For toothpaste
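
    The ordinal regression design mentioned above can be sketched with the OrderedModel class from statsmodels; the data below are synthetic stand-ins for the Riskesdas variables, not the survey microdata:

```python
# Hedged sketch of an ordinal-regression analysis, assuming statsmodels'
# OrderedModel; all data are invented stand-ins for the survey variables.
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 500
travel_time = rng.integers(5, 120, n)   # minutes to the health center (invented)
pti = rng.integers(0, 5, n)             # ordinal PTI score (invented)
rti = rng.integers(0, 5, n)             # ordinal RTI score (invented)

# Invented 3-level outcome: 0 = poor, 1 = fair, 2 = good oral hygiene
latent = -0.02 * travel_time + 0.4 * pti - 0.3 * rti + rng.normal(0, 1, n)
y = np.digitize(latent, np.quantile(latent, [0.33, 0.66]))

X = np.column_stack([travel_time, pti, rti])
result = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
print(result.params)   # fitted coefficients and thresholds
```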

  1. The Evolution of Software in High Energy Physics

    International Nuclear Information System (INIS)

    Brun, René

    2012-01-01

    The paper reviews the evolution of the software in High Energy Physics from the time of expensive mainframes to grids and clouds systems using thousands of multi-core processors. It focuses on the key parameters or events that have shaped the current software infrastructure.

  2. A Review of Predictive Software for the Design of Community Microgrids

    Directory of Open Access Journals (Sweden)

    Mina Rahimian

    2018-01-01

    Full Text Available This paper discusses adding a spatial dimension to the design of community microgrid projects in the interest of expanding the existing discourse related to energy performance optimization measures. A multidimensional vision for designing community microgrids with higher energy performance is considered, leveraging urban form (superstructure) to understand how it impacts the performance of the system’s distributed energy resources and loads (infrastructure). This vision engages the design sector in the technical conversation of developing community microgrids, leading to energy efficient designs of microgrid-connected communities well before their construction. A new generation of computational modeling and simulation tools that address this interaction is required. In order to position the research, this paper presents a survey of existing software packages belonging to two distinct categories of modeling, simulation, and evaluation of community microgrids: energy infrastructure modeling and urban superstructure energy modeling. Results of this software survey identify a lack of software tools and simulation packages that simultaneously address the necessary interaction between the superstructure and infrastructure of community microgrids, despite the importance of studying it. The conclusions describe how a proposed experimental software prototype may fill an existing gap in current related software packages.

  3. Controlling Infrastructure Costs: Right-Sizing the Mission Control Facility

    Science.gov (United States)

    Martin, Keith; Sen-Roy, Michael; Heiman, Jennifer

    2009-01-01

    Johnson Space Center's Mission Control Center is a space vehicle, space program agnostic facility. The current operational design is essentially identical to the original facility architecture that was developed and deployed in the mid-90's. In an effort to streamline the support costs of the mission critical facility, the Mission Operations Division (MOD) of Johnson Space Center (JSC) has sponsored an exploratory project to evaluate and inject current state-of-the-practice Information Technology (IT) tools, processes and technology into legacy operations. The general push in the IT industry has been trending towards a data-centric computer infrastructure for the past several years. Organizations facing challenges with facility operations costs are turning to creative solutions combining hardware consolidation, virtualization and remote access to meet and exceed performance, security, and availability requirements. The Operations Technology Facility (OTF) organization at the Johnson Space Center has been chartered to build and evaluate a parallel Mission Control infrastructure, replacing the existing, thick-client distributed computing model and network architecture with a data center model utilizing virtualization to provide the MCC Infrastructure as a Service. The OTF will design a replacement architecture for the Mission Control Facility, leveraging hardware consolidation through the use of blade servers, increasing utilization rates for compute platforms through virtualization while expanding connectivity options through the deployment of secure remote access. The architecture demonstrates the maturity of the technologies generally available in industry today and the ability to successfully abstract the tightly coupled relationship between thick-client software and legacy hardware into a hardware agnostic "Infrastructure as a Service" capability that can scale to meet future requirements of new space programs and spacecraft. This paper discusses the benefits

  4. Utilizing Response to Intervention (RtI) as a Means of Studying Capacity Building and Motivation of Staff by School Leadership Teams

    Science.gov (United States)

    Mahoney, Brian J.

    2013-01-01

    This research study explored the concept of capacity building and motivation of staff by school leadership teams in the successful development and implementation of educational initiatives, specifically Response to Intervention (RtI). A great deal of scholarship has addressed leadership and its effect on motivation, but few studies have…

  5. A multi-infrastructure gateway for virtual drug screening

    NARCIS (Netherlands)

    Jaghoori, Mohammad Mahdi; van Altena, Allard J.; Bleijlevens, Boris; Ramezani, Sara; Font, Juan Luis; Olabarriaga, Silvia D.

    2015-01-01

    In computer-aided drug design, software tools are used to narrow down possible drug candidates, thereby reducing the amount of expensive in vitro research, by a process called virtual screening. This process includes large computations that require advanced computing infrastructure; however, using

  6. DLVM: A modern compiler infrastructure for deep learning systems

    OpenAIRE

    Wei, Richard; Schwartz, Lane; Adve, Vikram

    2017-01-01

    Deep learning software demands reliability and performance. However, many of the existing deep learning frameworks are software libraries that act as an unsafe DSL in Python and a computation graph interpreter. We present DLVM, a design and implementation of a compiler infrastructure with a linear algebra intermediate representation, algorithmic differentiation by adjoint code generation, domain-specific optimizations and a code generator targeting GPU via LLVM. Designed as a modern compiler ...

  7. The RTI Daily Planning Book, K-6: Tools and Strategies for Collecting and Assessing Reading Data & Targeted Follow-Up Instruction

    Science.gov (United States)

    Owocki, Gretchen

    2010-01-01

    Children's needs differ so vastly that a single program designed to support numerous students can only do so much. More than anything else, students need teachers to use professional expertise to unravel their needs and to plan instruction that is directly responsive. This book makes exemplary RTI possible in every reading classroom. The author gives you…

  8. National Computational Infrastructure for Lattice Gauge Theory: Final report

    International Nuclear Information System (INIS)

    Reed, Daniel A.

    2008-01-01

    In this document we describe work done under the SciDAC-1 Project National Computational Infrastructure for Lattice Gauge Theory. The objective of this project was to construct the computational infrastructure needed to study quantum chromodynamics (QCD). Nearly all high energy and nuclear physicists in the United States working on the numerical study of QCD are involved in the project, as are Brookhaven National Laboratory (BNL), Fermi National Accelerator Laboratory (FNAL), and Thomas Jefferson National Accelerator Facility (JLab). A list of the senior participants is given in Appendix A.2. The project includes the development of community software for the effective use of the terascale computers, and the research and development of commodity clusters optimized for the study of QCD. The software developed as part of this effort is publicly available, and is being widely used by physicists in the United States and abroad. The prototype clusters built with SciDAC-1 funds have been used to test the software, and are available to lattice gauge theorists in the United States on a peer-reviewed basis.

  9. Dynamic Collaboration Infrastructure for Hydrologic Science

    Science.gov (United States)

    Tarboton, D. G.; Idaszak, R.; Castillo, C.; Yi, H.; Jiang, F.; Jones, N.; Goodall, J. L.

    2016-12-01

    Data and modeling infrastructure is becoming increasingly accessible to water scientists. HydroShare is a collaborative environment that currently offers water scientists the ability to access modeling and data infrastructure in support of data intensive modeling and analysis. It supports the sharing of and collaboration around "resources", which are social objects defined to include both data and models in a structured, standardized format. Users collaborate around these objects via comments, ratings, and groups. HydroShare also supports web services and cloud based computation for the execution of hydrologic models and analysis and visualization of hydrologic data. However, the quantity and variety of data and modeling infrastructure that can be accessed from environments like HydroShare are increasing. Storage infrastructure can range from one's local PC to campus or organizational storage to storage in the cloud. Modeling or computing infrastructure can range from one's desktop to departmental clusters to national HPC resources to grid and cloud computing resources. How does one orchestrate this vast array of data and computing infrastructure without needing to correspondingly learn each new system? A common limitation across these systems is the lack of efficient integration between data transport mechanisms and the corresponding high-level services to support large distributed data and compute operations. A scientist running a hydrology model from their desktop may require processing a large collection of files across the aforementioned storage and compute resources and various national databases. To address these community challenges a proof-of-concept prototype was created integrating HydroShare with RADII (Resource Aware Data-centric collaboration Infrastructure) to provide software infrastructure to enable the comprehensive and rapid dynamic deployment of what we refer to as "collaborative infrastructure." In this presentation we discuss the
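
    As a hedged illustration of programmatic access to HydroShare "resources" as described above, the sketch below assumes the hs_restclient package; the credentials, resource title and file name are placeholders:

```python
# Hedged sketch of accessing HydroShare resources via its REST API,
# assuming the hs_restclient package; credentials and names are placeholders.
from hs_restclient import HydroShare, HydroShareAuthBasic

auth = HydroShareAuthBasic(username="someuser", password="secret")
hs = HydroShare(auth=auth)

# Share a model-output file as a new resource that collaborators can
# comment on, rate and organize into groups, as described above.
resource_id = hs.createResource(
    "GenericResource",
    "Example watershed model run",        # hypothetical title
    resource_file="streamflow_results.csv",
)
print("created resource:", resource_id)

# Discover a few resources already shared by the community.
for i, resource in enumerate(hs.resources()):
    print(resource["resource_title"])
    if i == 4:
        break
```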

  10. Next generation ATCA control infrastructure for the CMS Phase-2 upgrades

    CERN Document Server

    Smith, Wesley; Svetek, Aleš; Tikalsky, Jes; Fobes, Robert; Dasu, Sridhara; Smith, Wesley; Vicente, Marcelo

    2017-01-01

    A next generation control infrastructure to be used in Advanced TCA (ATCA) blades at the CMS experiment is being designed and tested. Several ATCA systems are being prepared for the High-Luminosity LHC (HL-LHC) and will be installed at CMS during technical stops. The next generation control infrastructure will provide all the necessary hardware, firmware and software required in these systems, decreasing development time and increasing flexibility. The complete infrastructure includes an Intelligent Platform Management Controller (IPMC), a Module Management Controller (MMC) and an Embedded Linux Mezzanine (ELM) processing card.

  11. Toward a digital library strategy for a National Information Infrastructure

    Science.gov (United States)

    Coyne, Robert A.; Hulen, Harry

    1993-01-01

    Bills currently before the House and Senate would give support to the development of a National Information Infrastructure, in which digital libraries and storage systems would be an important part. A simple model is offered to show the relationship of storage systems, software, and standards to the overall information infrastructure. Some elements of a national strategy for digital libraries are proposed, based on the mission of the nonprofit National Storage System Foundation.

  12. A New Quantitative Method for the Non-Invasive Documentation of Morphological Damage in Paintings Using RTI Surface Normals

    Directory of Open Access Journals (Sweden)

    Marcello Manfredi

    2014-07-01

    Full Text Available In this paper we propose a reliable surface imaging method for the non-invasive detection of morphological changes in paintings. Usually, the evaluation and quantification of changes and defects results mostly from an optical and subjective assessment, through the comparison of the previous and subsequent state of conservation and by means of condition reports. Using quantitative Reflectance Transformation Imaging (RTI), we obtain detailed information on the geometry and morphology of the painting surface with a fast, precise and non-invasive method. Accurate and quantitative measurements of deterioration were acquired after the painting experienced artificial damage. Morphological changes were documented using normal vector images while the intensity map succeeded in highlighting, quantifying and describing the physical changes. We estimate that the technique can detect morphological damage slightly smaller than 0.3 mm, which would be difficult to detect with the eye, considering the painting size. This non-invasive tool could be very useful, for example, to examine paintings and artwork before they travel on loan or during a restoration. The method lends itself to automated analysis of large images and datasets. Quantitative RTI thus eases the transition of extending human vision into the realm of measuring change over time.
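
    The surface-normal recovery at the core of quantitative RTI can be illustrated as a Lambertian, photometric-stereo-style least-squares fit; the sketch below uses synthetic light directions and images, and illustrates the general technique rather than the authors' exact pipeline:

```python
# Sketch of per-pixel surface-normal estimation: with each pixel
# photographed under k known light directions, a Lambertian least-squares
# fit recovers an albedo-scaled normal per pixel. Data here are synthetic.
import numpy as np

def estimate_normals(intensities: np.ndarray, lights: np.ndarray) -> np.ndarray:
    """
    intensities: (k, h, w) stack of images, one per light position
    lights:      (k, 3) unit light-direction vectors
    returns:     (h, w, 3) unit surface normals
    """
    k, h, w = intensities.shape
    I = intensities.reshape(k, -1)                  # (k, h*w)
    g, *_ = np.linalg.lstsq(lights, I, rcond=None)  # solve lights @ g = I
    g = g.T.reshape(h, w, 3)                        # albedo-scaled normals
    norm = np.linalg.norm(g, axis=2, keepdims=True)
    return g / np.clip(norm, 1e-8, None)

# Tiny synthetic check: a flat surface facing the camera.
lights = np.array([[0.5, 0.0, 0.87], [-0.5, 0.0, 0.87], [0.0, 0.5, 0.87]])
true_n = np.array([0.0, 0.0, 1.0])
imgs = (lights @ true_n).reshape(3, 1, 1) * np.ones((3, 4, 4))
print(estimate_normals(imgs, lights)[0, 0])         # approximately [0, 0, 1]
```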

  13. Effects of hypothetical improvised nuclear detonation on the electrical infrastructure

    International Nuclear Information System (INIS)

    Barrett, Christopher L.; Eubank, Stephen; Evrenosoglu, C. Yaman; Marathe, Achla; Marathe, Madhav V.; Phadke, Arun; Thorp, James; Vullikanti, Anil

    2013-01-01

    We study the impacts of a hypothetical improvised nuclear detonation (IND) on the electrical infrastructure and its cascading effects on other urban inter-dependent infrastructures of a major metropolitan area in the US. We synthesize open source information, expert knowledge, commercial software and Google Earth data to derive a realistic electrical transmission and distribution network spanning the region. A dynamic analysis of the geo-located grid is carried out to determine the cause of malfunction of components, and their short-term and long-term effect on the stability of the grid. Finally a detailed estimate of the cost of damage to the major components of the infrastructure is provided.

  14. Effects of hypothetical improvised nuclear detonation on the electrical infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, Christopher L.; Eubank, Stephen; Evrenosoglu, C. Yaman; Marathe, Achla; Marathe, Madhav V.; Phadke, Arun; Thorp, James; Vullikanti, Anil [Virginia Tech, Blacksburg, VA (United States). Network Dynamics and Simulation Science Lab.

    2013-07-01

    We study the impacts of a hypothetical improvised nuclear detonation (IND) on the electrical infrastructure and its cascading effects on other urban inter-dependent infrastructures of a major metropolitan area in the US. We synthesize open source information, expert knowledge, commercial software and Google Earth data to derive a realistic electrical transmission and distribution network spanning the region. A dynamic analysis of the geo-located grid is carried out to determine the cause of malfunction of components, and their short-term and long-term effect on the stability of the grid. Finally a detailed estimate of the cost of damage to the major components of the infrastructure is provided.

  15. Research Data Management - Building Service Infrastructure and Capacity

    KAUST Repository

    Baessa, Mohamed A.

    2018-03-07

    Research libraries support the missions of their institutions by facilitating the flow of scholarly information to and from the institutions’ researchers. As research in many disciplines becomes more data and software intensive, libraries are finding that services and infrastructure developed to preserve and provide access to textual documents are insufficient to meet their institutions’ needs. In response, libraries around the world have begun assessing the data management needs of their researchers and expanding their capacity to meet the needs that they find. This panel will discuss approaches to building research data management services and infrastructure in academic libraries. Panelists will discuss international efforts to support research data management, while highlighting the different models that universities have adopted to provide a mix of services and infrastructure tailored to their local needs.

  16. CMS software deployment on OSG

    International Nuclear Information System (INIS)

    Kim, B; Avery, P; Thomas, M; Wuerthwein, F

    2008-01-01

    A set of software deployment tools has been developed for the installation, verification, and removal of a CMS software release. The tools, which mainly target deployment on the OSG, feature instant release deployment, corrective resubmission of the initial installation job, and an independent web-based deployment portal with a Grid Security Infrastructure login mechanism. We have performed over 500 installations and found the tools reliable and adaptable in coping with changes in the Grid computing environment and the software releases. We present the design of the tools, statistics gathered during their operation, and our experience with CMS software deployment on the OSG Grid computing environment.

  17. CMS software deployment on OSG

    Energy Technology Data Exchange (ETDEWEB)

    Kim, B; Avery, P [University of Florida, Gainesville, FL 32611 (United States); Thomas, M [California Institute of Technology, Pasadena, CA 91125 (United States); Wuerthwein, F [University of California at San Diego, La Jolla, CA 92093 (United States)], E-mail: bockjoo@phys.ufl.edu, E-mail: thomas@hep.caltech.edu, E-mail: avery@phys.ufl.edu, E-mail: fkw@fnal.gov

    2008-07-15

    A set of software deployment tools has been developed for the installation, verification, and removal of a CMS software release. The tools, which mainly target deployment on the OSG, feature instant release deployment, corrective resubmission of the initial installation job, and an independent web-based deployment portal with a Grid Security Infrastructure login mechanism. We have performed over 500 installations and found the tools reliable and adaptable in coping with changes in the Grid computing environment and the software releases. We present the design of the tools, statistics gathered during their operation, and our experience with CMS software deployment on the OSG Grid computing environment.

  18. Information infrastructure development in NRU «MPEI»

    Directory of Open Access Journals (Sweden)

    E. G. Gridina

    2016-01-01

    Full Text Available The article describes the work on support and development of the information infrastructure of NRU «MPEI». There are different approaches to defining information infrastructure. The authors define the information infrastructure as a set of basic information services, computing, storage and data transmission systems that provide user access to information resources. New conditions dictate new approaches to building the education system in general and the educational process in each educational institution. NRU «MPEI» is working to create a modern information infrastructure, including automated control systems, information resources and services, and modular systems of disciplines. This article describes the requirements for a modern information infrastructure of NRU «MPEI» that provides students and teachers with the necessary services. The information infrastructure includes a set of software and hardware to ensure interaction between the participants of the educational process. All services and systems of NRU «MPEI» are included in the unified information educational environment (UIEE). The architecture of the UIEE of NRU «MPEI» is presented in the article. The UIEE of NRU «MPEI» is deployed on the basis of the information network of NRU «MPEI» and enables a comprehensive optimization of university management in various areas. The Information and Computing Center, supporting the information and computer network of NRU «MPEI», has purchased more than 4800 licenses for 43 different licensed versions of manufacturers' software. The server segment of the information network of NRU «MPEI» contains complex infrastructure and application servers for processing and storing information. The segment comprises 20 high-performance servers and a storage system with a capacity of over 30 TB. In the server segment, complex systems are deployed to meet the needs of various fields of activity of NRU «MPEI», including the educational system and support for the economic, scientific and human resources complex. Currently, ICC also pays great

  19. Continuous software quality analysis for the ATLAS experiment

    CERN Document Server

    Washbrook, Andrew; The ATLAS collaboration

    2017-01-01

    The software for the ATLAS experiment on the Large Hadron Collider at CERN has evolved over many years to meet the demands of Monte Carlo simulation, particle detector reconstruction and data analysis. At present over 3.8 million lines of C++ code (and close to 6 million total lines of code) are maintained by an active worldwide developer community. In order to run the experiment software efficiently at hundreds of computing centres it is essential to maintain a high level of software quality standards. Methods are proposed to improve software quality practices by incorporating checks into the new ATLAS software build infrastructure.

  20. The Czech National Grid Infrastructure

    Science.gov (United States)

    Chudoba, J.; Křenková, I.; Mulač, M.; Ruda, M.; Sitera, J.

    2017-10-01

    The Czech National Grid Infrastructure is operated by MetaCentrum, a CESNET department responsible for coordinating and managing activities related to distributed computing. CESNET, as the Czech National Research and Education Network (NREN), provides many e-infrastructure services, which are used by 94% of the scientific and research community in the Czech Republic. Computing and storage resources owned by different organizations are connected by a sufficiently fast network to provide transparent access to all resources. We describe in more detail the computing infrastructure, which is based on several different technologies and covers grid, cloud and map-reduce environments. While the largest share of CPUs is still accessible via distributed TORQUE servers, providing an environment for long batch jobs, part of the infrastructure is available via standard EGI tools, a subset of NGI resources is provided to the EGI FedCloud environment with a cloud interface, and there is also a Hadoop cluster provided by the same e-infrastructure. A broad spectrum of computing servers is offered; users can choose from standard 2-CPU servers to large SMP machines with up to 6 TB of RAM or servers with GPU cards. Different groups have different priorities on various resources, and resource owners can even have exclusive access. The software is distributed via AFS. Storage servers offering up to tens of terabytes of disk space to individual users are connected via NFS4 on top of GPFS, and access to long-term HSM storage with petabyte capacity is also provided. An overview of available resources and recent usage statistics will be given.

  1. A COMBINATION OF GEOSPATIAL AND CLINICAL ANALYSIS IN PREDICTING DISABILITY OUTCOME AFTER ROAD TRAFFIC INJURY (RTI) IN A DISTRICT IN MALAYSIA

    Directory of Open Access Journals (Sweden)

    R. Nik Hisamuddin

    2016-09-01

    Full Text Available This was a prospective cohort study conducted from July 2011 to June 2013 involving all motor vehicle crash (MVC) related injuries presenting to the Emergency Departments (ED) of two tertiary centers in a district in Malaysia. Selected attributes were geospatially analyzed using ArcGIS (ESRI) software version 10.1, licensed to the institution, and the free Google Maps software, and multiple logistic regression was performed using SPSS version 22.0. A total of 439 cases were recruited. The mean age of the MVC victims was 26.04 years (SD 15.26). Males comprised 302 (71.7%) of the cases. Motorcyclists were the commonest type of victim involved [351 (80.0%)]. Hotspot MVC locations occurred at certain intersections and on roads within the boroughs of Kenali and Binjai. Severely injured and polytrauma cases occurred mostly on the road network with a speed limit of 60 km/hour. A person with an increase in ISS of one score had 37% higher odds of disability at hospital discharge (95% CI: 1.253, 1.499; p-value < 0.001). The pediatric age group (less than 19 years of age) had 52.1% lower odds of disability at discharge from hospital (95% CI: 0.258, 0.889; p-value < 0.001), and patients who underwent operation for definitive management had 4.14 times the odds of disability at discharge from hospital (95% CI: 1.681, 10.218; p-value = 0.002). Overall this study has shown that GIS combined with traditional statistical analysis is a powerful tool in road traffic injury (RTI) related research.
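
    The reported effect sizes are odds ratios from the fitted logistic model, i.e. OR = exp(beta) for each coefficient, so a one-point ISS increase multiplying the odds by exp(0.31) ≈ 1.37 corresponds to the quoted 37% higher odds. A minimal sketch of this computation on synthetic stand-in data (not the study's dataset), using statsmodels:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 439
# Synthetic stand-ins for the study's predictors (not the real data).
iss = rng.integers(1, 41, n).astype(float)   # injury severity score
pediatric = rng.integers(0, 2, n)            # age < 19 years
operated = rng.integers(0, 2, n)             # definitive operative management

# Simulate a disability outcome with known effects, then refit the model.
logit = -7.0 + 0.31 * iss - 0.73 * pediatric + 1.42 * operated
disabled = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([iss, pediatric, operated]))
fit = sm.Logit(disabled, X).fit(disp=0)
print(np.exp(fit.params))      # odds ratios; exp(0.31) ~ 1.37 = "37% higher odds"
print(np.exp(fit.conf_int()))  # 95% CIs on the odds-ratio scale
```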

  2. PRINCIPLES OF MODERN UNIVERSITY "ACADEMIC CLOUD" FORMATION BASED ON OPEN SOFTWARE PLATFORM

    Directory of Open Access Journals (Sweden)

    Olena H. Hlazunova

    2014-09-01

    Full Text Available In the article, approaches to the use of cloud technologies in teaching higher education students are analyzed. The essence of the concept of "academic cloud" and its structural elements are justified. A model of the academic cloud of a modern university, operating on the basis of open software platforms, is proposed. Examples of functional software and platforms that meet students' needs for e-learning resources are given. Models of deploying a cloud-oriented environment in higher education are analyzed: private cloud, infrastructure as a service, and platform as a service. A comparison of the cost of deploying an "academic cloud" on the institution's own infrastructure versus leasing a vendor's infrastructure is substantiated.

  3. Cloud infrastructure for providing tools as a service: quality attributes and potential solutions

    DEFF Research Database (Denmark)

    Chauhan, Muhammad Aufeef; Ali Babar, Muhammad

    2012-01-01

    Cloud computing is being increasingly adopted in various domains for providing on-demand infrastructure and Software as a Service (SaaS) by leveraging the utility computing model and virtualization technologies. One of the domains where cloud computing is expected to gain huge traction is Global Software Development (GSD), which has emerged as a popular software development model. Despite several promised benefits, GSD is characterized by not only technical issues but also the complexities associated with its processes. One of the key challenges of GSD is to provide appropriate tools more efficiently and cost-effectively. Moreover, variations in tools available/used by different GSD team members can also pose challenges. We assert that providing Tools as a Service (TaaS) to GSD teams through a cloud-based infrastructure can be a promising solution to address the tools-related challenges in GSD...

  4. The Use of Piecewise Growth Models to Estimate Learning Trajectories and RtI Instructional Effects in a Comparative Interrupted Time-Series Design

    Science.gov (United States)

    Zvoch, Keith

    2016-01-01

    Piecewise growth models (PGMs) were used to estimate and model changes in the preliteracy skill development of kindergartners in a moderately sized school district in the Pacific Northwest. PGMs were applied to interrupted time-series (ITS) data that arose within the context of a response-to-intervention (RtI) instructional framework. During the…

  5. Software as a service approach to sensor simulation software deployment

    Science.gov (United States)

    Webster, Steven; Miller, Gordon; Mayott, Gregory

    2012-05-01

    Traditionally, military simulation has been problem-domain specific. Executing an exercise currently requires multiple simulation software providers to specialize, deploy, and configure their respective implementations, integrate the collection of software to achieve a specific system behavior, and then execute it for the purpose at hand. This approach leads to rigid system integrations which require simulation expertise for each deployment due to changes in location, hardware, and software. Our alternative is Software as a Service (SaaS), predicated on the virtualization of Night Vision Electronic Sensors (NVESD) sensor simulations as an exemplary case. Management middleware elements layer self-provisioning, configuration, and integration services onto the virtualized sensors to present a system of services at run time. Given an Infrastructure as a Service (IaaS) environment, the enabled and managed system of simulations yields durable SaaS delivery without requiring user simulation expertise. Persistent SaaS simulations would provide on-demand availability to connected users, decrease integration costs and timelines, and give the domain community the benefit of immediate deployment of lessons learned.

  6. Infrastructure Joint Venture Projects in Malaysia: A Preliminary Study

    Science.gov (United States)

    Romeli, Norsyakilah; Muhamad Halil, Faridah; Ismail, Faridah; Sufian Hasim, Muhammad

    2018-03-01

    As in many developed countries, the function of infrastructure is to connect each region of Malaysia holistically, and infrastructure is an investment in network projects such as transportation, water and sewerage, power, communication and irrigation systems. Hence, billions in allocations of government income are reserved for infrastructure development. Towards successful infrastructure development, a joint venture approach has been promoted since 2016 under one of the government thrusts in the Construction Industry Transformation Plan, which encourages internationalisation among contractors. However, there is a lack of information on the actual practice of infrastructure joint venture projects in Malaysia. Therefore, this study attempts to explore the real application of joint ventures in Malaysian infrastructure projects. Using a questionnaire survey, a set of survey questions was distributed to the targeted respondents. The survey contained three sections: respondent details, organization background, and project capital in infrastructure joint venture projects. The results were recorded and analysed using SPSS software. The contractors stated that they have implemented the joint venture practice, mostly with the client, with the usual construction period of the infrastructure projects being more than 5 years. The study also indicates that there are problems in joint venture projects from the perspective of project capital, and that railway infrastructure should be highlighted in future studies due to its high significance in terms of cost and technical issues.

  7. Infrastructure Joint Venture Projects in Malaysia: A Preliminary Study

    Directory of Open Access Journals (Sweden)

    Romeli Norsyakilah

    2018-01-01

    Full Text Available As in many developed countries, the function of infrastructure is to connect each region of Malaysia holistically, and infrastructure is an investment in network projects such as transportation, water and sewerage, power, communication and irrigation systems. Hence, billions in allocations of government income are reserved for infrastructure development. Towards successful infrastructure development, a joint venture approach has been promoted since 2016 under one of the government thrusts in the Construction Industry Transformation Plan, which encourages internationalisation among contractors. However, there is a lack of information on the actual practice of infrastructure joint venture projects in Malaysia. Therefore, this study attempts to explore the real application of joint ventures in Malaysian infrastructure projects. Using a questionnaire survey, a set of survey questions was distributed to the targeted respondents. The survey contained three sections: respondent details, organization background, and project capital in infrastructure joint venture projects. The results were recorded and analysed using SPSS software. The contractors stated that they have implemented the joint venture practice, mostly with the client, with the usual construction period of the infrastructure projects being more than 5 years. The study also indicates that there are problems in joint venture projects from the perspective of project capital, and that railway infrastructure should be highlighted in future studies due to its high significance in terms of cost and technical issues.

  8. Validation of the quality of life-radiation therapy instrument (QOL-RTI) in patients receiving definitive radiation therapy for locally advanced prostate cancer

    International Nuclear Information System (INIS)

    Gwede, Clement; Friedland, Jay L.; Johnson, Darlene J.; Casey, Linda; Cantor, Alan; Sauder, Bonnie; Beres, Kathleen L.

    1996-01-01

    Purpose/Objective: The incidence of prostate cancer has tripled over the last 10 years, doubled over the last four years and continues to increase. A common method of treating prostate cancer is with external beam radiotherapy with or without hormones. Accurate and comprehensive documentation through prospective studies with long-term follow-up is necessary to reduce the negative impact of treatment on a patient's quality of life. While it is increasingly recognized that radiation therapy treatment for prostate cancer may result in permanent alteration of the patient's quality of life, the extent and timing of this change in quality of life have not been adequately investigated in a comprehensive and prospective manner. Furthermore, there are few instruments developed for use with patients undergoing definitive radiotherapy. The purpose of this paper is to report on the validation of the Quality of Life Radiation Therapy Instrument (QOL-RTI), a 24-item visual analogue general quality of life tool developed for use with patients receiving radiotherapy. Materials and Methods: Health-related quality of life was assessed in a prospective study of 62 patients treated with either combined hormonal therapy (HT) plus external beam radiotherapy (EBRT) or EBRT alone for locally advanced prostate cancer. Quality of life was measured prospectively before, during, and after radiation therapy. Results: The estimated reliability of the subscales was assessed with coefficient alpha, which ranged from 0.57 to 0.68. Internal consistency was calculated using initial questionnaires for the entire sample, yielding a Cronbach's alpha of 0.82. Test-retest produced a correlation coefficient of 0.75 (p<0.0001) [n=60]. Construct validity was assessed by a repeated measures design to look for a time effect, a group effect, and a group-by-time interaction effect. We examined quality of life total scores, subscale total scores and performance status scores for patients who were treated with HT+ EBRT and

  9. The costs of avoiding environmental impacts from shale-gas surface infrastructure.

    Science.gov (United States)

    Milt, Austin W; Gagnolet, Tamara D; Armsworth, Paul R

    2016-12-01

    Growing energy demand has increased the need to manage conflicts between energy production and the environment. As an example, shale-gas extraction requires substantial surface infrastructure, which fragments habitats, erodes soils, degrades freshwater systems, and displaces rare species. Strategic planning of shale-gas infrastructure can reduce trade-offs between economic and environmental objectives, but the specific nature of these trade-offs is not known. We estimated the cost of avoiding impacts from land-use change on forests, wetlands, rare species, and streams from shale-energy development within leaseholds. We created software for optimally siting shale-gas surface infrastructure to minimize its environmental impacts at reasonable construction cost. We visually assessed sites before infrastructure optimization to test whether such inspection could be used to predict whether impacts could be avoided at the site. On average, up to 38% of aggregate environmental impacts of infrastructure could be avoided for 20% greater development costs by spatially optimizing infrastructure. However, we found trade-offs between environmental impacts and costs among sites. In visual inspections, we often distinguished between sites that could be developed to avoid impacts at relatively low cost (29%) and those that could not (20%). Reductions in a metric of aggregate environmental impact could be largely attributed to potential displacement of rare species, sedimentation, and forest fragmentation. Planners and regulators can estimate and use heterogeneous trade-offs among development sites to create industry-wide improvements in environmental performance and do so at reasonable costs by, for example, leveraging low-cost avoidance of impacts at some sites to offset others. This could require substantial effort, but the results and software we provide can facilitate the process. © 2016 Society for Conservation Biology.
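
    The core trade-off the authors quantify (roughly 38% of aggregate impact avoided for about 20% extra cost) can be framed as impact minimization under a cost cap. A toy sketch of that framing on hypothetical candidate layouts, not the authors' siting software, which optimizes the full infrastructure geometry:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical candidate infrastructure layouts for one lease: construction
# cost (million USD) and an aggregate environmental-impact score per layout.
cost = rng.uniform(2.0, 5.0, 200)
impact = rng.uniform(0.2, 1.0, 200)

cheapest = np.argmin(cost)               # the cost-only baseline layout
budget = 1.2 * cost[cheapest]            # allow 20% above the cheapest layout
feasible = np.flatnonzero(cost <= budget)
best = feasible[np.argmin(impact[feasible])]  # least impact within budget

print(f"impact avoided: {1 - impact[best] / impact[cheapest]:.0%} "
      f"for {cost[best] / cost[cheapest] - 1:+.0%} cost")
```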

  10. Simplifying the Development, Use and Sustainability of HPC Software

    Directory of Open Access Journals (Sweden)

    Jeremy Cohen

    2014-07-01

    Full Text Available Developing software to undertake complex, compute-intensive scientific processes requires a challenging combination of both specialist domain knowledge and software development skills to convert this knowledge into efficient code. As computational platforms become increasingly heterogeneous and newer types of platform such as Infrastructure-as-a-Service (IaaS) cloud computing become more widely accepted for high-performance computing (HPC), scientists require more support from computer scientists and resource providers to develop efficient code that offers long-term sustainability and makes optimal use of the resources available to them. As part of the libhpc stage 1 and 2 projects we are developing a framework to provide a richer means of job specification and efficient execution of complex scientific software on heterogeneous infrastructure. In this updated version of our submission to the WSSSPE13 workshop at SuperComputing 2013 we set out our approach to simplifying access to HPC applications and resources for end-users through the use of flexible and interchangeable software components and associated high-level functional-style operations. We believe this approach can support sustainability of scientific software and help to widen access to it.

  11. INFORMATION INFRASTRUCTURE OF THE EDUCATIONAL ENVIRONMENT WITH VIRTUAL MACHINE TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    Artem D. Beresnev

    2014-09-01

    Full Text Available Subject of research. An information infrastructure for the training environment, applying virtual machine technology to small pedagogical systems (separate classes, author's courses), is created and investigated. Research technique. A life cycle model of the information infrastructure for small pedagogical systems using virtual machines is constructed in the ARIS methodology. A technique for forming the information infrastructure with virtual machines on the basis of a process approach is offered. A model of an event chain in combination with an environment chart is used as the basic model. For each function of the event chain the necessary set of information and software support tools is defined. Application of the technique is illustrated by the example of designing an information infrastructure for the educational environment, taking into account the specific character of small pedagogical systems. Advantages of the designed information infrastructure are: the maximum use of open or free components; the use of standard protocols (mainly HTTP and HTTPS); maximum portability (application servers can be started on any widespread operating system); a uniform interface for managing various virtualization platforms; the possibility of inventorying the contents of a virtual machine without starting it; and flexible inventory management of the virtual machine by means of adjustable chains of rules. Approbation. Approbation of the obtained results was carried out at the training center «Institute of Informatics and Computer Facilities» (Tallinn, Estonia). Application of the technique within the course «Computer and Software Usage» halved both the number of failures of information infrastructure components requiring the intervention of a technical specialist and the time needed to eliminate such malfunctions. Besides, the pupils, having gained broader experience with computers and software, showed better results

  12. Intelligent monitoring, control, and security of critical infrastructure systems

    CERN Document Server

    Polycarpou, Marios

    2015-01-01

    This book describes the challenges that critical infrastructure systems face, and presents state of the art solutions to address them. How can we design intelligent systems or intelligent agents that can make appropriate real-time decisions in the management of such large-scale, complex systems? What are the primary challenges for critical infrastructure systems? The book also provides readers with the relevant information to recognize how important infrastructures are, and their role in connection with a society’s economy, security and prosperity. It goes on to describe state-of-the-art solutions to address these points, including new methodologies and instrumentation tools (e.g. embedded software and intelligent algorithms) for transforming and optimizing target infrastructures. The book is the most comprehensive resource to date for professionals in both the private and public sectors, while also offering an essential guide for students and researchers in the areas of modeling and analysis of critical in...

  13. An Introduction to Flight Software Development: FSW Today, FSW 2010

    Science.gov (United States)

    Gouvela, John

    2004-01-01

    Experience and knowledge gained from ongoing maintenance of Space Shuttle Flight Software and new development projects, including the Cockpit Avionics Upgrade, are applied to the projected needs of the National Space Exploration Vision through Spiral 2. Lessons learned from these current activities are applied to create a sustainable, reliable model for development of critical software to support Project Constellation. This presentation introduces the technologies, methodologies, and infrastructure needed to produce and sustain high quality software. It proposes what is needed to support a Vision for Space Exploration that places demands on the innovation and productivity needed to support future space exploration. The technologies in use today within FSW development include tools that provide requirements tracking, integrated change management, and modeling and simulation software. Specific challenges that have been met include the introduction and integration of a Commercial Off The Shelf (COTS) Real Time Operating System for critical functions. Though technology prediction has proved to be imprecise, Project Constellation requirements will need continued integration of new technology with evolving methodologies and changing project infrastructure. Targets for continued technology investment are integrated health monitoring and management, self-healing software, standard payload interfaces, autonomous operation, and improvements in training. Emulation of the target hardware will also allow significant streamlining of development and testing. The methodologies in use today for FSW development are object-oriented UML design and iterative development using independent components, as well as rapid prototyping. In addition, Lean Six Sigma and CMMI play a critical role in the quality and efficiency of the workforce processes. Over the next six years, we expect these methodologies to merge with other improvements into a consolidated office culture with all processes being guided by

  14. Zoneminder as ‘Software as a Service’ and Load Balancing of Video Surveillance Requests

    DEFF Research Database (Denmark)

    Deshmukh, Aaradhana A.; Mihovska, Albena D.; Prasad, Ramjee

    2012-01-01

    Cloud computing is evolving as a key computing platform for sharing resources that include infrastructures, software, applications, and business processes. Virtualization is a core technology for enabling cloud resource sharing. Software as a Service (SaaS) on the cloud platform provides software application vendors a Web-based delivery model to serve a large number of clients with a multi-tenancy-based infrastructure and application sharing architecture, so as to benefit greatly from the economy of scale. The emergence of the Software-as-a-Service (SaaS) business model has attracted great attention from both researchers and practitioners. SaaS vendors deliver on-demand information processing services to users, and thus offer computing utility rather than the standalone software itself. This paper proposes a deployment of an open source video surveillance application named Zoneminder...

  15. Worldwide collaborative efforts in plasma control software development

    International Nuclear Information System (INIS)

    Penaflor, B.G.; Ferron, J.R.; Walker, M.L.; Humphreys, D.A.; Leuer, J.A.; Piglowski, D.A.; Johnson, R.D.; Xiao, B.J.; Hahn, S.H.; Gates, D.A.

    2008-01-01

    This presentation will describe the DIII-D collaborations with various tokamak experiments throughout the world which have adapted custom versions of the DIII-D plasma control system (PCS) software for their own use. Originally developed by General Atomics for use on the DIII-D tokamak, the PCS has been successfully installed and used for the NSTX experiment in Princeton, the MAST experiment in Culham, UK, the EAST experiment in China, and the Pegasus experiment at the University of Wisconsin. In addition to these sites, a version of the PCS is currently being developed for use by the KSTAR tokamak in Korea. A well-defined and robust PCS software infrastructure has been developed to provide a common foundation for implementing the real-time data acquisition and feedback control codes. The PCS infrastructure provides a flexible framework that has allowed the PCS to be easily adapted to fulfill the unique needs of each site. The software has also demonstrated great flexibility in allowing different computing, data acquisition and real-time networking hardware to be used. A description of the current PCS software architecture will be given, along with experiences in developing and supporting the various PCS installations throughout the world.

  16. A General Purpose High Performance Linux Installation Infrastructure

    International Nuclear Information System (INIS)

    Wachsmann, Alf

    2002-01-01

    With more and larger Linux clusters, the question arises how to install them. This paper addresses this question by proposing a solution using only standard software components. The installation infrastructure scales well to a large number of nodes. It is also usable for installing desktop machines or diskless Linux clients; thus it is not designed for cluster installations in particular but is, nevertheless, highly performant. The proposed infrastructure uses PXE as the network boot component on the nodes. It uses DHCP and TFTP servers to deliver IP addresses and a bootloader to all nodes. It then uses kickstart to install Red Hat Linux over NFS. We have implemented this installation infrastructure at SLAC with our given server hardware and installed a 256-node cluster in 30 minutes. This paper presents the measurements from this installation and discusses the bottlenecks in our installation.
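
    The per-node step that such an infrastructure automates can be sketched as generating per-node PXELINUX configuration files keyed by MAC address: PXELINUX looks up a file named 01-<mac-with-dashes> under pxelinux.cfg/ on the TFTP server, and the appended ks= option points the installer at a kickstart file. A minimal sketch; the node inventory, paths and kickstart host below are hypothetical:

```python
from pathlib import Path

# Hypothetical node inventory: MAC address -> kickstart profile.
NODES = {
    "00:16:3e:10:00:01": "worker.ks",
    "00:16:3e:10:00:02": "worker.ks",
    "00:16:3e:10:00:03": "nfs-server.ks",
}

TFTP_ROOT = Path("/var/lib/tftpboot")              # assumed TFTP root
KS_SERVER = "http://installserver.example.org/ks"  # hypothetical kickstart host

ENTRY = """default linux
label linux
  kernel vmlinuz
  append initrd=initrd.img ks={ks_url}
"""

def write_pxe_configs():
    cfg_dir = TFTP_ROOT / "pxelinux.cfg"
    cfg_dir.mkdir(parents=True, exist_ok=True)
    for mac, profile in NODES.items():
        # PXELINUX requests a file named 01-<mac with dashes>.
        name = "01-" + mac.replace(":", "-")
        (cfg_dir / name).write_text(ENTRY.format(ks_url=f"{KS_SERVER}/{profile}"))

if __name__ == "__main__":
    write_pxe_configs()
```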

  17. Safety-Critical Partitioned Software Architecture: A Partitioned Software Architecture for Robotic

    Science.gov (United States)

    Horvath, Greg; Chung, Seung H.; Cilloniz-Bicchi, Ferner

    2011-01-01

    The flight software on virtually every mission currently managed by JPL has several major flaws that make it vulnerable to potentially fatal software defects. Many of these problems can be addressed by recently developed partitioned operating systems (OS). JPL has avoided adopting a partitioned operating system on its flight missions, primarily because doing so would require significant changes in flight software design, and the risks associated with changes of that magnitude cannot be accepted by an active flight project. The choice of a partitioned OS can have a dramatic effect on the overall system and software architecture, allowing for realization of benefits far beyond the concerns typically associated with the choice of OS. Specifically, we believe that a partitioned operating system, when coupled with an appropriate architecture, can provide a strong infrastructure for developing systems for which reusability, modifiability, testability, and reliability are essential qualities. By adopting a partitioned OS, projects can gain benefits throughout the entire development lifecycle, from requirements and design, all the way to implementation, testing, and operations.

  18. A Cloud-based Infrastructure and Architecture for Environmental System Research

    Science.gov (United States)

    Wang, D.; Wei, Y.; Shankar, M.; Quigley, J.; Wilson, B. E.

    2016-12-01

    The present availability of high-capacity networks, low-cost computers and storage devices, and the widespread adoption of hardware virtualization and service-oriented architecture provide a great opportunity to enable data and computing infrastructure sharing between closely related research activities. By taking advantage of these approaches, along with the world-class high-performance computing and data infrastructure located at Oak Ridge National Laboratory, a cloud-based infrastructure and architecture has been developed to efficiently deliver essential data and informatics services and utilities to the environmental system research community, and to provide unique capabilities that allow terrestrial ecosystem research projects to share their software utilities (tools), data and even data submission workflows in a straightforward fashion. The infrastructure will minimize large disruptions to current project-based data submission workflows, for better acceptance by existing projects, since many ecosystem research projects already have their own requirements or preferences for data submission and collection. The infrastructure will eliminate scalability problems with current project silos by providing unified data services and infrastructure. The infrastructure consists of two key components: (1) a collection of configurable virtual computing environments and user management systems that expedite data submission and collection from the environmental system research community, and (2) scalable data management services and systems, originated and developed by ORNL data centers.

  19. Seeding the cloud: Financial bootstrapping in the computer software sector

    OpenAIRE

    Mac An Bhaird, Ciarán; Lynn, Theo

    2015-01-01

    This study investigates resourcing of computer software companies that have adopted cloud computing for the development and delivery of application software. Use of this innovative technology potentially impacts firm financing because the initial infrastructure investment requirement is much lower than for packaged software, lead time to market is shorter, and cloud computing supports instant scalability. We test these predictions by conducting in-depth interviews with founders of 18 independ...

  20. Open Source Software The Challenge Ahead

    CERN Multimedia

    CERN. Geneva

    2007-01-01

    The open source community has done amazingly well in terms of challenging the historical epicenter of computing - the supercomputer and data center - and driving change there. Linux now represents a healthy and growing share of infrastructure in large organisations globally. Apache and other infrastructural components have established the new de facto standard for software in the back office: freedom. It would be easy to declare victory. But the real challenge lies ahead - taking free software to the mass market, to your grandparents, to your nieces and nephews, to your friends. This is the next wave, and if we are to be successful we need to articulate the audacious goals clearly and loudly - because that's how the community process works best. Speaker Bio: Mark Shuttleworth founded the Ubuntu Project in early 2004. Ubuntu is an enterprise Linux distribution that is freely available worldwide and has both desktop and enterprise server editions. Mark studied finance and information technology at the Universit...

  1. Software architecture standard for simulation virtual machine, version 2.0

    Science.gov (United States)

    Sturtevant, Robert; Wessale, William

    1994-01-01

    The Simulation Virtual Machine (SVM) is an Ada architecture which eases the effort involved in real-time software maintenance and sustaining engineering. The Software Architecture Standard defines the infrastructure from which all the simulation models are built. SVM was developed for and used in the Space Station Verification and Training Facility.

  2. Workshop on Software Development Tools for Petascale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Vetter, Jeffrey [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Georgia Inst. of Technology, Atlanta, GA (United States)

    2007-08-01

    Petascale computing systems will soon be available to the DOE science community. Recent studies in the productivity of HPC platforms point to better software environments as a key enabler to science on these systems. To prepare for the deployment and productive use of these petascale platforms, the DOE science and general HPC community must have the software development tools, such as performance analyzers and debuggers that meet application requirements for scalability, functionality, reliability, and ease of use. In this report, we identify and prioritize the research opportunities in the area of software development tools for high performance computing. To facilitate this effort, DOE hosted a group of 55 leading international experts in this area at the Software Development Tools for PetaScale Computing (SDTPC) Workshop, which was held in Washington, D.C. on August 1 and 2, 2007. Software development tools serve as an important interface between the application teams and the target HPC architectures. Broadly speaking, these roles can be decomposed into three categories: performance tools, correctness tools, and development environments. Accordingly, this SDTPC report has four technical thrusts: performance tools, correctness tools, development environment infrastructures, and scalable tool infrastructures. The last thrust primarily targets tool developers per se, rather than end users. Finally, this report identifies non-technical strategic challenges that impact most tool development. The organizing committee emphasizes that many critical areas are outside the scope of this charter; these important areas include system software, compilers, and I/O.

  3. Technical infrastructure monitoring from the CCC

    CERN Document Server

    Stowisek, J; Suwalska, A; CERN. Geneva. TS Department

    2005-01-01

    In the summer of 2005, the Technical Infrastructure Monitoring (TIM) system will replace the Technical Data Server (TDS) as the monitoring system of CERN’s technical services. Whereas the TDS was designed for the LEP, TIM will have to cope with the much more extensive monitoring needs of the LHC era. To cater for this, the new system has been built on industry-standard hardware and software components, using Java 2 Enterprise Edition (J2EE) technology to create a highly available, reliable, scalable and flexible control system. A first version of TIM providing the essential functionality will be deployed in the MCR in June 2005. Additional functionality and more sophisticated tools for system maintenance will be ready before the start-up of the LHC in 2007, when CERN’s technical infrastructure will be monitored from the future CERN Control Centre.

  4. Macro Photography for Reflectance Transformation Imaging: A Practical Guide to the Highlights Method

    Directory of Open Access Journals (Sweden)

    Antonino Cosentino

    2014-11-01

    Full Text Available Reflectance Transformation Imaging (RTI) is increasingly being used for art documentation and analysis, and it can also be applied successfully to the examination of features on the order of hundreds of microns. This paper evaluates some macro-scale photography methods specifically for RTI, employing the Highlights method for documenting sub-millimeter details. This RTI technique consists in including one reflective sphere in the photographed scene so that the processing software can calculate, for each photo, the direction of the light source from its reflection on the sphere. RTI documentation can also be performed with an RTI dome, but the Highlights method is preferred because it is more mobile and more affordable. The technique is demonstrated in the documentation of some prints ranging from the XV to the XX century belonging to the Ingels collection in Sweden. The images are examined and discussed here, showing the application of macro RTI for identifying features of prints.
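
    The geometry behind the Highlights method is compact enough to sketch: the highlight pixel fixes the sphere's surface normal at that point, and the incoming light direction is the mirror reflection of the viewing direction about that normal. A minimal sketch, assuming an orthographic camera looking along the z-axis (the usual assumption in RTI builder tools):

```python
import numpy as np

def light_direction(hx, hy, cx, cy, radius):
    """Recover the light direction from a highlight on a reflective sphere.

    (hx, hy): highlight pixel; (cx, cy): sphere centre pixel; radius in pixels.
    Assumes an orthographic view along +z.
    """
    x = (hx - cx) / radius
    y = (hy - cy) / radius
    z = np.sqrt(max(0.0, 1.0 - x * x - y * y))
    n = np.array([x, y, z])            # sphere normal at the highlight
    v = np.array([0.0, 0.0, 1.0])      # viewing direction
    return 2.0 * np.dot(n, v) * n - v  # mirror reflection of v about n

# Example: highlight 20 px right and 20 px up from the centre of a 50 px sphere.
print(light_direction(120, 80, 100, 100, 50))
```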

  5. Securing the United States' power infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Happenny, Sean F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-08-01

    The United States’ power infrastructure is aging, underfunded, and vulnerable to cyber attack. Emerging smart grid technologies may take some of the burden off of existing systems and make the grid as a whole more efficient, reliable, and secure. The Pacific Northwest National Laboratory (PNNL) is funding research into several aspects of smart grid technology and grid security, creating a software simulation tool that will allow researchers to test power distribution networks utilizing different smart grid technologies to determine how the grid and these technologies react under different circumstances. Demonstrating security in embedded systems is another research area PNNL is tackling. Many of the systems controlling the U.S. critical infrastructure, such as the power grid, lack integrated security and the networks protecting them are becoming easier to breach. Providing a virtual power substation network to each student team at the National Collegiate Cyber Defense Competition, thereby supporting the education of future cyber security professionals, is another way PNNL is helping to strengthen the security of the nation’s power infrastructure.

  6. Designing Cloud Infrastructure for Big Data in E-government

    Directory of Open Access Journals (Sweden)

    Jelena Šuh

    2015-03-01

    Full Text Available The development of new information services and technologies, especially in the domains of mobile communications, the Internet of Things, and social media, has led to the appearance of large quantities of unstructured data. Pervasive computing also affects e-government systems, where big data emerges and cannot be processed and analyzed in a traditional manner due to its complexity, heterogeneity and size. The subject of this paper is the design of a cloud infrastructure for big data storage and processing in e-government. The goal is to analyze the potential of cloud computing for big data infrastructure and propose a model for effectively storing, processing and analyzing big data in e-government. The paper provides an overview of current relevant concepts related to cloud infrastructure design that should provide support for big data. The second part of the paper gives a model of the cloud infrastructure based on the concepts of software-defined networks and multi-tenancy. The final goal is to support projects in the field of big data in e-government.

  7. BIM cost analysis of transport infrastructure projects

    Science.gov (United States)

    Volkov, Andrey; Chelyshkov, Pavel; Grossman, Y.; Khromenkova, A.

    2017-10-01

    The article describes a method for analyzing the energy costs of transport infrastructure objects using BIM software. The paper considers several options for the orientation of a building using the SketchUp and IES VE software programs. These options allow choosing the best orientation of the building facades. Particular attention is given to the distribution of the temperature field in a cross-section of the wall, according to the calculation made in the ELCUT software. Issues related to the calculation of solar radiation penetration into a building and the selection of translucent structures are also considered. The article presents data on the building codes relating to the transport sector, on the basis of which the calculations were made. The author emphasizes that BIM programs should be implemented and used in order to optimize the thermal behavior of a building and increase its energy efficiency using climatic data.
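
    In the steady one-dimensional case underlying such a cross-section calculation, the temperature drops across the wall in proportion to each layer's thermal resistance R = d/lambda. A minimal sketch with illustrative layer data (not the paper's ELCUT model):

```python
# Steady-state 1-D temperature profile through a layered wall.
layers = [            # (thickness d [m], conductivity lambda [W/(m*K)])
    (0.02, 0.70),     # interior plaster
    (0.25, 0.12),     # aerated-concrete block
    (0.10, 0.04),     # mineral-wool insulation
]
t_in, t_out = 20.0, -20.0    # indoor / outdoor air temperature, degC
r_si, r_se = 0.13, 0.04      # standard surface film resistances, m^2*K/W

resistances = [r_si] + [d / lam for d, lam in layers] + [r_se]
q = (t_in - t_out) / sum(resistances)    # heat flux density, W/m^2
temps = [t_in]
for r in resistances:
    temps.append(temps[-1] - q * r)      # temperature after each resistance
print(f"q = {q:.1f} W/m^2, interface temperatures: "
      + ", ".join(f"{t:.1f}" for t in temps))
```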

  8. Methodologies and applications for critical infrastructure protection: State-of-the-art

    International Nuclear Information System (INIS)

    Yusta, Jose M.; Correa, Gabriel J.; Lacal-Arantegui, Roberto

    2011-01-01

    This work provides an update of the state-of-the-art on energy security relating to critical infrastructure protection. For this purpose, this survey is based upon the conceptual view of OECD countries, and specifically in accordance with EU Directive 114/08/EC on the identification and designation of European critical infrastructures, and on the 2009 US National Infrastructure Protection Plan. The review discusses the different definitions of energy security, critical infrastructure and key resources, and shows some of the experiences in countries considered as international references on the subject, including some information-sharing issues. In addition, the paper carries out a complete review of current methodologies, software applications and modelling techniques around critical infrastructure protection in accordance with their functionality in a risk management framework. The study of threats and vulnerabilities in critical infrastructure systems shows two important trends in methodologies and modelling. A first trend relates to the identification of methods, techniques, tools and diagrams to describe the current state of infrastructure. The other trend captures the dynamic behaviour of the infrastructure systems by means of simulation techniques including system dynamics, Monte Carlo simulation, multi-agent systems, etc. - Highlights: → We examine critical infrastructure protection experiences, systems and applications. → Some international experiences are reviewed, including the EU EPCIP Plan and the US NIPP programme. → We discuss current methodologies and applications on critical infrastructure protection, with emphasis on electric networks.

  9. Electronic Business Transaction Infrastructure Analysis Using Petri Nets and Simulation

    Science.gov (United States)

    Feller, Andrew Lee

    2010-01-01

    Rapid growth in eBusiness has made industry and commerce increasingly dependent on the hardware and software infrastructure that enables high-volume transaction processing across the Internet. Large transaction volumes at major industrial-firm data centers rely on robust transaction protocols and adequately provisioned hardware capacity to ensure…
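
    As a minimal illustration of the modelling approach named in the title, a place/transition Petri net can be simulated in a few lines: a transition is enabled when all its input places hold enough tokens, and firing it moves tokens from inputs to outputs. The net below is a toy single-queue transaction server, not the study's model; it runs until no transition is enabled:

```python
import random

# Marking: tokens per place. Five transactions queued, two servers free.
marking = {"queued": 5, "server_free": 2, "in_service": 0, "done": 0}
# Each transition: (input places with token counts, output places with counts).
transitions = {
    "start":  ({"queued": 1, "server_free": 1}, {"in_service": 1}),
    "finish": ({"in_service": 1}, {"done": 1, "server_free": 1}),
}

def enabled(t):
    pre, _ = transitions[t]
    return all(marking[p] >= n for p, n in pre.items())

def fire(t):
    pre, post = transitions[t]
    for p, n in pre.items():
        marking[p] -= n
    for p, n in post.items():
        marking[p] += n

while any(enabled(t) for t in transitions):
    fire(random.choice([t for t in transitions if enabled(t)]))
print(marking)  # all five transactions end up in "done"
```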

  10. Medical image informatics infrastructure design and applications.

    Science.gov (United States)

    Huang, H K; Wong, S T; Pietka, E

    1997-01-01

    Picture archiving and communication systems (PACS) is a system integration of multimodality images and health information systems designed for improving the operation of a radiology department. As it evolves, PACS becomes a hospital image document management system with a voluminous image and related data file repository. A medical image informatics infrastructure can be designed to take advantage of existing data, providing PACS with add-on value for health care service, research, and education. A medical image informatics infrastructure (MIII) consists of the following components: medical images and associated data (including PACS database), image processing, data/knowledge base management, visualization, graphic user interface, communication networking, and application oriented software. This paper describes these components and their logical connection, and illustrates some applications based on the concept of the MIII.

  11. Automation of the software production process for multiple cryogenic control applications

    OpenAIRE

    Fluder, Czeslaw; Lefebvre, Victor; Pezzetti, Marco; Plutecki, Przemyslaw; Tovar-González, Antonio; Wolak, Tomasz

    2018-01-01

    The development of process control systems for the cryogenic infrastructure at CERN is based on an automatic software generation approach. The overall complexity of the systems, their frequent evolution as well as the extensive use of databases, repositories, commercial engineering software and CERN frameworks have led to further efforts towards improving the existing automation based software production methodology. A large number of control system upgrades were successfully performed for th...

  12. SITEGI Project: Applying Geotechnologies to Road Inspection. Sensor Integration and software processing

    Directory of Open Access Journals (Sweden)

    J. Martínez-Sánchez

    2013-10-01

    Full Text Available Infrastructure management represents a critical economic milestone. The current decision-making process in infrastructure rehabilitation is essentially based on qualitative parameters obtained from visual inspections and subject to the ability of technicians. In order to increase both efficiency and productivity in infrastructure management, this work addresses the integration of different instrumentation and sensors in a mobile mapping vehicle. This vehicle allows the continuous recording of quantitative data suitable for roadside inspection. The geometric integration and synchronization of these sensors is achieved through hardware and/or software strategies that permit the georeferencing of the data obtained with each sensor. In addition, visualization software for simpler data management was implemented using the Qt framework, the PCL library and C++. As a result, the developed system supports decision-making in road inspection, providing quantitative information suitable for sophisticated analysis systems.
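
    Once the sensors share a synchronized clock, the georeferencing step the abstract describes reduces to interpolating the vehicle trajectory at each sensor timestamp. A minimal sketch with a hypothetical GNSS trajectory, assuming a common time base and a planar coordinate system:

```python
import numpy as np

# Hypothetical GNSS trajectory: time [s], easting and northing [m].
traj_t = np.array([0.0, 1.0, 2.0, 3.0])
traj_e = np.array([100.0, 104.9, 110.1, 115.0])
traj_n = np.array([200.0, 200.5, 201.1, 201.4])

def georeference(sensor_t):
    """Assign a position to each sensor record by linear interpolation."""
    e = np.interp(sensor_t, traj_t, traj_e)
    n = np.interp(sensor_t, traj_t, traj_n)
    return np.column_stack([e, n])

# Camera frames captured at 2.5 Hz on the same (assumed) synchronized clock.
print(georeference(np.arange(0.0, 3.0, 0.4)))
```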

  13. An Institutional Approach to Developing Research Data Management Infrastructure

    Directory of Open Access Journals (Sweden)

    James A. J. Wilson

    2011-10-01

    Full Text Available This article outlines the work that the University of Oxford is undertaking to implement a coordinated data management infrastructure. The rationale for the approach being taken by Oxford is presented, with particular attention paid to the role of each service division. This is followed by a consideration of the relative advantages and disadvantages of institutional data repositories, as opposed to national or international data centres. The article then focuses on two ongoing JISC-funded projects, ‘Embedding Institutional Data Curation Services in Research’ (Eidcsr) and ‘Supporting Data Management Infrastructure for the Humanities’ (Sudamih). Both projects are intra-institutional collaborations and involve working with researchers to develop particular aspects of infrastructure, including: University policy, systems for the preservation and documentation of research data, training and support, software tools for the visualisation of large images, and creating and sharing databases via the Web (Database as a Service).

  14. System testing software deployments using Docker and Kubernetes in gitlab CI: EOS + CTA use case

    CERN Document Server

    CERN. Geneva

    2017-01-01

    It needs to be seamlessly integrated with `EOS`, which has become the de facto disk storage system at CERN. `CTA` and `EOS` integration requires parallel development of features in both software packages that need to be **synchronized and systematically tested** on a specific distributed development infrastructure for each commit in the code base. This presentation describes the full gitlab continuous integration workflow that builds, tests, deploys and runs system tests of the full software stack in docker containers on our specific kubernetes infrastructure.

  15. Cost Optimization Through Open Source Software

    Directory of Open Access Journals (Sweden)

    Mark VonFange

    2010-12-01

    Full Text Available The cost of information technology (IT as a percentage of overall operating and capital expenditures is growing as companies modernize their operations and as IT becomes an increasingly indispensable part of company resources. The price tag associated with IT infrastructure is a heavy one, and, in today's economy, companies need to look for ways to reduce overhead while maintaining quality operations and staying current with technology. With its advancements in availability, usability, functionality, choice, and power, free/libre open source software (F/LOSS provides a cost-effective means for the modern enterprise to streamline its operations. iXsystems wanted to quantify the benefits associated with the use of open source software at their company headquarters. This article is the outgrowth of our internal analysis of using open source software instead of commercial software in all aspects of company operations.

  16. Understanding the infrastructure of European Research Infrastructures

    DEFF Research Database (Denmark)

    Lindstrøm, Maria Duclos; Kropp, Kristoffer

    2017-01-01

    European Research Infrastructure Consortia (ERIC) are a new form of legal and financial framework for the establishment and operation of research infrastructures in Europe. Despite their scope, ambition, and novelty, the topic has received limited scholarly attention. This article analyses how one ERIC became an ERIC, using Bowker and Star’s sociology of infrastructures. We conclude that focusing on ERICs as a European standard for organising and funding research collaboration gives new insights into the problems of membership, durability, and standardisation faced by research infrastructures. It is also a promising theoretical framework for addressing the relationship between the ERIC construct and the large diversity of European Research Infrastructures.

  17. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    International Nuclear Information System (INIS)

    Bagnasco, S; Berzano, D; Brunetti, R; Lusso, S; Vallero, S

    2014-01-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It makes it possible to dynamically and efficiently allocate resources to any application and to tailor the virtual machines to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and with minimal downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site, a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC, and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.
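
    As a hedged illustration of the federation hook mentioned above (an EC2-compatible API plus cloud-init contextualization), the sketch below uses boto3 to start a virtual machine against a private EC2-style endpoint and pass it a cloud-init user-data script. The endpoint URL, image ID and credentials are placeholders, and OpenNebula's EC2 front end implements only a subset of the full EC2 API.

```python
# Sketch under assumptions: a private cloud exposing an EC2-compatible API
# (as OpenNebula can) and VMs contextualized via cloud-init user data.
import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="https://cloud.example.org:4567",  # placeholder private endpoint
    aws_access_key_id="USER",
    aws_secret_access_key="SECRET",
    region_name="site-local",
)

# cloud-init contextualization: install and start the software distribution client
user_data = """#cloud-config
packages:
  - cvmfs
runcmd:
  - [systemctl, start, autofs]
"""

ec2.run_instances(
    ImageId="ami-worker01",   # placeholder virtual image with the experiment stack
    InstanceType="m1.large",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)
```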

  18. Cost Optimization of Water Resources in Pernambuco, Brazil: Valuing Future Infrastructure and Climate Forecasts

    Science.gov (United States)

    Kumar, Ipsita; Josset, Laureline; Lall, Upmanu; Cavalcanti e Silva, Erik; Cordeiro Possas, José Marcelo; Cauás Asfora, Marcelo

    2017-04-01

    Optimal management of water resources is paramount in semi-arid regions to limit strains on society and the economy due to limited water availability. This problem is likely to become even more recurrent as droughts are projected to intensify in the coming years, causing increasing stress on the water supply in the affected areas. The state of Pernambuco in Northeast Brazil is one such case, where one of the largest reservoirs, Jucazinho, was at approximately 1% of capacity throughout 2016, making infrastructural challenges in the region very real. To ease some of the infrastructural stresses and reduce vulnerabilities of the water system, a new source of water from the Rio São Francisco is currently under development. Until it comes online, water trucks have regularly been mandated to cover water deficits, but at a much higher cost, thus endangering the financial sustainability of the region. In this paper, we propose to evaluate the sustainability of the considered water system by formulating an optimization problem and determining the optimal operations to be conducted. We start with a comparative study of the current and future infrastructures' capabilities to cope with various climate conditions. We show that while the Rio São Francisco project mitigates the problems, neither implementation prevents failure, and both require reliance on water trucks during prolonged droughts. We also study the cost associated with the provision of water to the municipalities under several streamflow forecasts. In particular, we investigate the value of climate predictions for adapting operational decisions by comparing the results with a fixed policy derived from historical data. We show that the use of climate information reduces both the water deficit and overall operational costs. We conclude with a discussion of the potential of the approach for evaluating future infrastructure developments. This study is funded by the Inter-American Development Bank (IADB), and in
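
    The abstract does not state its optimization model; as a hedged, much-simplified sketch of the kind of dispatch problem it describes, the toy linear program below meets municipal demand over two months from reservoir releases (cheap) or water trucks (expensive), subject to a storage balance. All numbers are invented, not Jucazinho data.

```python
# Toy two-month dispatch: minimize release + trucking cost subject to demand
# and reservoir-storage constraints. Illustrative numbers only.
import numpy as np
from scipy.optimize import linprog

demand   = np.array([10.0, 10.0])  # hm3 required per month
inflow   = np.array([4.0, 3.0])    # forecast streamflow into the reservoir
storage0 = 12.0                    # initial storage
cost = np.array([1.0, 1.0, 20.0, 20.0])  # [release1, release2, truck1, truck2]

# x = [r1, r2, k1, k2]; inequalities written as A_ub @ x <= b_ub
A_ub = np.array([
    [-1,  0, -1,  0],  # r1 + k1 >= d1
    [ 0, -1,  0, -1],  # r2 + k2 >= d2
    [ 1,  0,  0,  0],  # r1      <= s0 + q1  (cannot release more than stored)
    [ 1,  1,  0,  0],  # r1 + r2 <= s0 + q1 + q2
])
b_ub = np.array([-demand[0], -demand[1],
                 storage0 + inflow[0],
                 storage0 + inflow[0] + inflow[1]])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
r1, r2, k1, k2 = res.x
print(f"releases: {r1:.1f}, {r2:.1f}  trucked: {k1:.1f}, {k2:.1f}")
# Total water available (19 hm3) falls short of demand (20 hm3), so the
# solution trucks in the 1 hm3 deficit despite the 20x unit cost.
```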

  19. Software-Defined Radio Demonstrators: An Example and Future Trends

    Directory of Open Access Journals (Sweden)

    Ronan Farrell

    2009-01-01

    Full Text Available Software-defined radio requires the combination of software-based signal processing and the enabling hardware components. In this paper, we present an overview of the criteria for such platforms and the current state of development and future trends in this area. This paper also provides details of a high-performance flexible radio platform called the Maynooth Adaptable Radio System (MARS), which was developed to explore the use of software-defined radio concepts in the provision of infrastructure elements in telecommunications applications, such as mobile phone base stations or multimedia broadcasters.

  20. Restrictions on Software for Personal and Professional Use

    CERN Multimedia

    2004-01-01

    A growing number of computer security incidents detected at CERN are due to additional software installed for personal and professional use. As a consequence, the smooth operation of CERN is put at risk and many hours are often lost solving the resulting problems. To reduce this security risk, the installation and/or use of software on CERN's computing and network infrastructure needs to be restricted. Therefore: do NOT install software for personal use, and do NOT install 'free' or other software unless you have the expertise to configure and maintain it securely. Please comply with these rules to keep our computer systems safe. Further explanation of these restrictions is at http://cern.ch/security/software-restrictions Restricted software, known to cause security and/or network problems (e.g. KaZaA and other P2P/peer-to-peer file-sharing applications, Skype P2P telephony software, ICQ, VNC, ...), is listed at: http://cern.ch/security/software-restrictions/list

  1. Cloud computing can simplify HIT infrastructure management.

    Science.gov (United States)

    Glaser, John

    2011-08-01

    Software as a Service (SaaS), built on cloud computing technology, is emerging as the forerunner in IT infrastructure because it helps healthcare providers reduce capital investments. Cloud computing leads to predictable, monthly, fixed operating expenses for hospital IT staff. Outsourced cloud computing facilities are state-of-the-art data centers boasting some of the most sophisticated networking equipment on the market. The SaaS model helps hospitals safeguard against technology obsolescence, minimizes maintenance requirements, and simplifies management.

  2. Gridification: Porting New Communities onto the WLCG/EGEE Infrastructure

    CERN Document Server

    Méndez-Lorenzo, P; Lamanna, M; Muraru, A

    2007-01-01

    The computational and storage capabilities of the Grid are attracting several research communities, and we discuss the general patterns observed in supporting new applications and porting them to the EGEE environment. In this talk we present the general infrastructure we have developed inside the application and support team at CERN (PSS and GD groups) to bring applications such as Geant4, HARP, Garfield, UNOSAT or ITU onto the Grid quickly and practically. These communities have different goals and requirements, and the main challenge is the creation of a standard, general software infrastructure for their immersion onto the Grid. This general infrastructure effectively ‘shields’ the applications from the details of the Grid (the emphasis here is on running applications developed independently of the Grid middleware). It is stable enough to require little oversight and support from the members of the Grid team or of the user communities. Finally...

  3. Web accessibility and open source software.

    Science.gov (United States)

    Obrenović, Zeljko

    2009-07-01

    A Web browser provides a uniform user interface to different types of information. Making this interface universally accessible and more interactive is a long-term goal still far from being achieved. Universally accessible browsers require novel interaction modalities and additional functionalities, for which existing browsers tend to provide only partial solutions. Although functionality for Web accessibility can be found as open source and free software components, their reuse and integration are complex because they were developed in diverse implementation environments, following standards and conventions incompatible with the Web. To address these problems, we have started several activities that aim at exploiting the potential of open-source software for Web accessibility. The first of these activities is the development of Adaptable Multi-Interface COmmunicator (AMICO):WEB, an infrastructure that facilitates efficient reuse and integration of open source software components into the Web environment. The main contribution of AMICO:WEB is in enabling syntactic and semantic interoperability between Web extension mechanisms and the variety of integration mechanisms used by open source and free software components. Its design is based on our experiences in solving practical problems where we have used open source components to improve the accessibility of rich media Web applications. The second of our activities involves improving education, where we have used our platform to teach students how to build advanced accessibility solutions from diverse open-source software. We are also partially involved in the recently started Eclipse project called Accessibility Tools Framework (ACTF), the aim of which is the development of an extensible infrastructure upon which developers can build a variety of utilities that help to evaluate and enhance the accessibility of applications and content for people with disabilities. In this article we briefly report on these activities.

  4. The computing and data infrastructure to interconnect EEE stations

    Science.gov (United States)

    Noferini, F.; EEE Collaboration

    2016-07-01

    The Extreme Energy Event (EEE) experiment is devoted to the search for high-energy cosmic rays through a network of telescopes installed in about 50 high schools distributed throughout the Italian territory. This project requires a peculiar data management infrastructure to collect data recorded in stations very far from each other and to allow a coordinated analysis. Such an infrastructure is realized at INFN-CNAF, which operates a Cloud facility based on the OpenStack open-source Cloud framework and provides Infrastructure as a Service (IaaS) to its users. In 2014 EEE started to use it for collecting, monitoring and reconstructing the data acquired in all the EEE stations. For synchronization between the stations and the INFN-CNAF infrastructure we used BitTorrent Sync, free peer-to-peer software designed to optimize data synchronization between distributed nodes. All data folders are synchronized with the central repository in real time to allow an immediate reconstruction of the data and their publication on a monitoring webpage. We present the architecture and the functionalities of this data management system, which provides a flexible environment for the specific needs of the EEE project.

  5. The computing and data infrastructure to interconnect EEE stations

    Energy Technology Data Exchange (ETDEWEB)

    Noferini, F., E-mail: noferini@bo.infn.it [Museo Storico della Fisica e Centro Studi e Ricerche “Enrico Fermi”, Rome (Italy); INFN CNAF, Bologna (Italy)

    2016-07-11

    The Extreme Energy Event (EEE) experiment is devoted to the search for high-energy cosmic rays through a network of telescopes installed in about 50 high schools distributed throughout the Italian territory. This project requires a peculiar data management infrastructure to collect data recorded in stations very far from each other and to allow a coordinated analysis. Such an infrastructure is realized at INFN-CNAF, which operates a Cloud facility based on the OpenStack open-source Cloud framework and provides Infrastructure as a Service (IaaS) to its users. In 2014 EEE started to use it for collecting, monitoring and reconstructing the data acquired in all the EEE stations. For synchronization between the stations and the INFN-CNAF infrastructure we used BitTorrent Sync, free peer-to-peer software designed to optimize data synchronization between distributed nodes. All data folders are synchronized with the central repository in real time to allow an immediate reconstruction of the data and their publication on a monitoring webpage. We present the architecture and the functionalities of this data management system, which provides a flexible environment for the specific needs of the EEE project.

  6. The computing and data infrastructure to interconnect EEE stations

    International Nuclear Information System (INIS)

    Noferini, F.

    2016-01-01

    The Extreme Energy Event (EEE) experiment is devoted to the search for high-energy cosmic rays through a network of telescopes installed in about 50 high schools distributed throughout the Italian territory. This project requires a peculiar data management infrastructure to collect data recorded in stations very far from each other and to allow a coordinated analysis. Such an infrastructure is realized at INFN-CNAF, which operates a Cloud facility based on the OpenStack open-source Cloud framework and provides Infrastructure as a Service (IaaS) to its users. In 2014 EEE started to use it for collecting, monitoring and reconstructing the data acquired in all the EEE stations. For synchronization between the stations and the INFN-CNAF infrastructure we used BitTorrent Sync, free peer-to-peer software designed to optimize data synchronization between distributed nodes. All data folders are synchronized with the central repository in real time to allow an immediate reconstruction of the data and their publication on a monitoring webpage. We present the architecture and the functionalities of this data management system, which provides a flexible environment for the specific needs of the EEE project.

  7. FOSS Tools for Research Infrastructures - A Success Story?

    Science.gov (United States)

    Stender, V.; Schroeder, M.; Wächter, J.

    2015-12-01

    Established initiatives and mandated organizations, e.g. the Initiative for Scientific Cyberinfrastructures (NSF, 2007) or the European Strategy Forum on Research Infrastructures (ESFRI, 2008), promote and foster the development of sustainable research infrastructures. The basic idea behind these infrastructures is the provision of services that support scientists in searching, visualizing and accessing data, collaborating and exchanging information, and publishing data and other results. The management of research data in particular is gaining more and more importance. In the geosciences, these developments have to be merged with the enhanced data management approaches of Spatial Data Infrastructures (SDI). The Centre for GeoInformationTechnology (CeGIT) at the GFZ German Research Centre for Geosciences aims to establish the concepts and standards of SDIs as an integral part of research infrastructure architectures. In different projects, solutions to manage research data for land and water management or environmental monitoring have been developed, based on a framework consisting of Free and Open Source Software (FOSS) components. The framework provides basic components supporting the import and storage of data, discovery and visualization, as well as data documentation (metadata). In our contribution, we present the data management solutions developed in three projects, Central Asian Water (CAWa), Sustainable Management of River Oases (SuMaRiO) and Terrestrial Environmental Observatories (TERENO), where FOSS components form the backbone of the data management platform. The repeated use and validation of the tools helped to establish a standardized architectural blueprint serving as a contribution to research infrastructures. We examine the question of whether FOSS tools are really a sustainable choice and whether the increased maintenance effort is justified. Finally, it should help to answer the question of whether the use of FOSS for Research Infrastructures is a

  8. Self-service for software development projects and HPC activities

    International Nuclear Information System (INIS)

    Husejko, M; Høimyr, N; Gonzalez, A; Koloventzos, G; Asbury, D; Trzcinska, A; Agtzidis, I; Botrel, G; Otto, J

    2014-01-01

    This contribution describes how CERN has implemented several essential tools for agile software development processes, ranging from version control (Git) to issue tracking (Jira) and documentation (Wikis). Running such services in a large organisation like CERN requires many administrative actions both by users and service providers, such as creating software projects, managing access rights, users and groups, and performing tool-specific customisation. Dealing with these requests manually would be a time-consuming task. Another area of our CERN computing services that has required dedicated manual support has been clusters for specific user communities with special needs. Our aim is to move all our services to a layered approach, with server infrastructure running on the internal cloud computing infrastructure at CERN. This contribution illustrates how we plan to optimise the management of our services by means of an end-user facing platform acting as a portal into all the related services for software projects, inspired by popular portals for open-source development such as SourceForge, GitHub and others. Furthermore, the contribution discusses recent activities with tests and evaluations of High Performance Computing (HPC) applications on different hardware and software stacks, and plans to offer a dynamically scalable HPC service at CERN, based on affordable hardware.

  9. The Chandra Source Catalog: Processing and Infrastructure

    Science.gov (United States)

    Evans, Janet; Evans, Ian N.; Glotfelty, Kenny J.; Hain, Roger; Hall, Diane M.; Miller, Joseph B.; Plummer, David A.; Zografou, Panagoula; Primini, Francis A.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Refsdal, Brian L.; Rots, Arnold H.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.

    2009-09-01

    Chandra Source Catalog processing recalibrates each observation using the latest available calibration data, and employs a wavelet-based source detection algorithm to identify all the X-ray sources in the field of view. Source properties are then extracted from each detected source that is a candidate for inclusion in the catalog. Catalog processing is completed by matching sources across multiple observations, merging common detections, and applying quality assurance checks. The Chandra Source Catalog processing system shares a common processing infrastructure and utilizes much of the functionality that is built into the Standard Data Processing (SDP) pipeline system that provides calibrated Chandra data to end-users. Other key components of the catalog processing system have been assembled from the portable CIAO data analysis package. Minimal new software tool development has been required to support the science algorithms needed for catalog production. Since processing pipelines must be instantiated for each detected source, the number of pipelines that are run during catalog construction is a factor of order 100 times larger than for SDP. The increased computational load, and inherent parallel nature of the processing, is handled by distributing the workload across a multi-node Beowulf cluster. Modifications to the SDP automated processing application to support catalog processing, and extensions to Chandra Data Archive software to ingest and retrieve catalog products, complete the upgrades to the infrastructure to support catalog processing.

  10. AWARE-P: a collaborative, system-based IAM planning software

    OpenAIRE

    Coelho, S. T.; Vitorino, D.

    2011-01-01

    The AWARE-P project aims to promote the application of integrated and risk-based approaches to the rehabilitation of urban water supply and wastewater drainage systems. Central to the project is the development of a software platform based on a set of computational components, which assist in the analyses and decision support involved in the planning process for sustainable infrastructural asset management. The AWARE-P software system brings together onto a common platform the inf...

  11. Software, component, and service deployment in computational Grids

    International Nuclear Information System (INIS)

    von Laszewski, G.; Blau, E.; Bletzinger, M.; Gawor, J.; Lane, P.; Martin, S.; Russell, M.

    2002-01-01

    Grids comprise an infrastructure that enables scientists to use a diverse set of distributed remote services and resources as part of complex scientific problem-solving processes. We analyze some of the challenges involved in deploying software and components transparently in Grids. We report on three practical solutions used by the Globus Project. Lessons learned from this experience lead us to believe that it is necessary to support a variety of software and component deployment strategies. These strategies are based on the hosting environment

  12. INFRASTRUCTURE

    CERN Document Server

    A. Gaddi

    2011-01-01

    During the last winter technical stop, a number of corrective maintenance activities and infrastructure consolidation work-packages were completed. On the surface, the site cooling facility has passed the annual maintenance process that includes the cleaning of the two evaporative cooling towers, the maintenance of the chiller units and the safety checks on the software controls. In parallel, CMS teams, reinforced by PH-DT group personnel, have worked to shield the cooling gauges for TOTEM and CASTOR against the magnetic stray field in the CMS Forward region, to add labels to almost all the valves underground and to clean all the filters in UXC55, USC55 and SCX5. Following the insertion of TOTEM T1 detector, the cooling circuit has been branched off and commissioned. The demineraliser cartridges have been replaced as well, as they were shown to be almost saturated. New instrumentation has been installed in the SCX5 PC farm cooling and ventilation network, in order to monitor the performance of the HVAC system...

  13. School Area Road Safety Assessment and Improvements (SARSAI) programme reduces road traffic injuries among children in Tanzania.

    Science.gov (United States)

    Poswayo, Ayikai; Kalolo, Simon; Rabonovitz, Katheryn; Witte, Jeffrey; Guerrero, Alejandro

    2018-05-19

    To determine the impact of a paediatric road traffic injury (RTI) prevention programme in urban Sub-Saharan Africa, household surveys were conducted in catchment areas around 18 primary schools in Dar es Salaam, Republic of Tanzania; the catchment areas were divided into control and intervention groups. Collected data included basic demographic information on all school-aged household members, whether or not they had been involved in an RTI in the previous 12 months, and, if so, the characteristics of that RTI. Based on these findings, a separate road-safety engineering site analysis, and consultation with the communities and other stakeholders, an injury-prevention programme consisting of infrastructure enhancements and a site-specific educational programme was developed and implemented. The programme was initially implemented at the intervention schools. After 1 year, data were collected in the same manner. The control group received the same intervention after follow-up data were collected. Data were collected on 12 957 school-aged children in the baseline period and 13 555 school-aged children in the post-intervention period, across the control and intervention communities. There was a statistically significant reduction in RTIs in the intervention group and a non-significant increase in RTIs in the control group. The greatest reductions were in motorcycle-pedestrian RTIs, private vehicle-pedestrian RTIs and morning RTIs. The programme thus demonstrated a significant reduction in paediatric RTIs after its implementation, concentrated in specific injury mechanisms and times of day. This study demonstrates that, for a reasonable investment, scientifically driven injury-prevention programmes are feasible in resource-limited settings with high paediatric RTI rates.

  14. Pricing Digital Goods: Discontinuous Costs and Shared Infrastructure

    OpenAIRE

    Ke-Wei Huang; Arun Sundararajan

    2006-01-01

    We develop and analyze a model of pricing for digital products with discontinuous supply functions. This characterizes a number of information technology-based products and services for which variable increases in demand are fulfilled by the addition of "blocks" of computing or network infrastructure. Examples include internet service, telephony, online trading, on-demand software, digital music, streamed video-on-demand and grid computing. These goods are often modeled as information goods w...
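
    The "blocks" idea above lends itself to a compact cost function. The fragment below is our own illustrative notation, not taken from the paper: capacity comes in blocks of size κ at fixed cost F, so total cost is a step function of aggregate demand q.

```latex
% Illustrative block-cost function (notation ours, not the paper's):
% serving aggregate demand q requires \lceil q/\kappa \rceil infrastructure
% blocks, each of capacity \kappa and cost F.
C(q) = F \,\Big\lceil \tfrac{q}{\kappa} \Big\rceil ,
\qquad
C(q+\delta) - C(q) =
\begin{cases}
0, & \text{if the current block has spare capacity,}\\
F, & \text{if a new block must be added.}
\end{cases}
```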

  15. A Roadmap to Continuous Integration for ATLAS Software Development

    Science.gov (United States)

    Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration

    2017-10-01

    The ATLAS software infrastructure facilitates the efforts of more than 1000 developers working on a code base of 2200 packages with 4 million lines of C++ and 1.4 million lines of Python code. The ATLAS offline code management system is a powerful, flexible framework for processing requests for new package versions, probing code changes in the Nightly Build System, migrating to new platforms and compilers, deploying production releases for worldwide access, and supporting physicists with tools and interfaces for efficient software use. It maintains a multi-stream, parallel development environment with about 70 multi-platform branches of nightly releases and provides vast opportunities for testing new packages, verifying patches to existing software, and migrating to new platforms and compilers. The system's evolution is currently aimed at the adoption of modern continuous integration (CI) practices focused on building nightly releases early and often, with rigorous unit and integration testing. This paper describes the CI incorporation program for the ATLAS software infrastructure. It brings modern open source tools such as Jenkins and GitLab into the ATLAS Nightly System, rationalizes hardware resource allocation and administrative operations, and provides improved feedback and means for developers to fix broken builds promptly. Once adopted, ATLAS CI practices will improve and accelerate innovation cycles and result in increased confidence in new software deployments. The paper reports the status of Jenkins integration with the ATLAS Nightly System as well as short- and long-term plans for the incorporation of CI practices.

  16. Increasing the resilience and security of the United States' power infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Happenny, Sean F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-08-01

    The United States' power infrastructure is aging, underfunded, and vulnerable to cyber attack. Emerging smart grid technologies may take some of the burden off of existing systems and make the grid as a whole more efficient, reliable, and secure. The Pacific Northwest National Laboratory (PNNL) is funding research into several aspects of smart grid technology and grid security, creating a software simulation tool that will allow researchers to test power infrastructure control and distribution paradigms by utilizing different smart grid technologies to determine how the grid and these technologies react under different circumstances. Understanding how these systems behave in real-world conditions will lead to new ways to make our power infrastructure more resilient and secure. Demonstrating security in embedded systems is another research area PNNL is tackling. Many of the systems controlling the U.S. critical infrastructure, such as the power grid, lack integrated security and the aging networks protecting them are becoming easier to attack.

  17. Long-term preservation of analysis software environment

    International Nuclear Information System (INIS)

    Toppe Larsen, Dag; Blomer, Jakob; Buncic, Predrag; Charalampidis, Ioannis; Haratyunyan, Artem

    2012-01-01

    Long-term preservation of scientific data represents a challenge to experiments, especially regarding the analysis software. Preserving data is not enough; the full software and hardware environment is needed. Virtual machines (VMs) make it possible to preserve hardware “in software”. A complete infrastructure package has been developed for easy deployment and management of VMs, based on the CERN virtual machine (CernVM). Further, an HTTP-based file system, the CernVM file system (CVMFS), is used for the distribution of the software. It is possible to process data with any given software version, and a matching, regenerated VM version. A point-and-click web user interface is being developed for setting up the complete processing chain, including VM and software versions, number and type of processing nodes, and the particular type of analysis and data. This paradigm also allows for distributed cloud computing on private and public clouds, for both legacy and contemporary experiments.

  18. Integrating and Managing Bim in GIS, Software Review

    Science.gov (United States)

    El Meouche, R.; Rezoug, M.; Hijazi, I.

    2013-08-01

    Since the advent of Computer-Aided Design (CAD) and Geographical Information System (GIS) tools, project participants have increasingly leveraged these tools throughout the different phases of a civil infrastructure project. In recent years the number of GIS packages that provide tools to enable the integration of building information in a geo context has risen sharply. More and more GIS packages have added tools for this purpose, and other software projects are regularly extending them. However, each package has its own strengths, weaknesses and intended use. This paper provides a thorough review to investigate the software capabilities and clarify their purpose. For this study, Autodesk Revit 2012, a BIM editor, was used to create the BIMs. In the first step, three building models were created; the resulting models were converted to BIM format and the software was then used to integrate them. For the evaluation of the software, general characteristics were studied, such as the user interface, which formats are supported (import/export), and the way building information is imported.

  19. Software Configurable Multichannel Transceiver

    Science.gov (United States)

    Freudinger, Lawrence C.; Cornelius, Harold; Hickling, Ron; Brooks, Walter

    2009-01-01

    Emerging test instrumentation and test scenarios increasingly require network communication to manage complexity. Adapting wireless communication infrastructure to accommodate challenging testing needs can benefit from reconfigurable radio technology. A fundamental requirement for a software-definable radio system is independence from carrier frequencies, one of the radio components that to date has seen only limited progress toward programmability. This paper overviews an ongoing project to validate the viability of a promising chipset that performs conversion of radio frequency (RF) signals directly into digital data for the wireless receiver and, for the transmitter, converts digital data into RF signals. The Software Configurable Multichannel Transceiver (SCMT) enables four transmitters and four receivers in a single unit the size of a commodity disk drive, programmable for any frequency band between 1 MHz and 6 GHz.

  20. Information technologies in optimization process of monitoring of software and hardware status

    Science.gov (United States)

    Nikitin, P. V.; Savinov, A. N.; Bazhenov, R. I.; Ryabov, I. V.

    2018-05-01

    The article describes a model of a hardware and software monitoring system for a large company that provides customers with software as a service (a SaaS solution) using information technology. The main functions of the monitoring system are the provision of up-to-date data for analyzing the state of the IT infrastructure, and the rapid detection of faults and their effective elimination. The main risks associated with the provision of these services are described, comparative characteristics of the software are given, and the authors' methods for monitoring the status of software and hardware are proposed.

  1. Continuous Software Quality analysis for the ATLAS experiment

    CERN Document Server

    Washbrook, Andrew; The ATLAS collaboration

    2017-01-01

    The regular application of software quality tools in large collaborative projects is required to reduce code defects to an acceptable level. If left unchecked, the accumulation of defects invariably results in performance degradation at scale and problems with the long-term maintainability of the code. Although software quality tools are effective for identification, there remains a non-trivial sociological challenge to resolve defects in a timely manner. This is an ongoing concern for the ATLAS software, which has evolved over many years to meet the demands of Monte Carlo simulation, detector reconstruction and data analysis. At present over 3.8 million lines of C++ code (and close to 6 million total lines of code) are maintained by a community of hundreds of developers worldwide. It is therefore preferable to address code defects before they are introduced into a widely used software release. Recent wholesale changes to the ATLAS software infrastructure have provided an ideal opportunity to apply software quali...

  2. GPS Software Packages Deliver Positioning Solutions

    Science.gov (United States)

    2010-01-01

    "To determine a spacecraft s position, the Jet Propulsion Laboratory (JPL) developed an innovative software program called the GPS (global positioning system)-Inferred Positioning System and Orbit Analysis Simulation Software, abbreviated as GIPSY-OASIS, and also developed Real-Time GIPSY (RTG) for certain time-critical applications. First featured in Spinoff 1999, JPL has released hundreds of licenses for GIPSY and RTG, including to Longmont, Colorado-based DigitalGlobe. Using the technology, DigitalGlobe produces satellite imagery with highly precise latitude and longitude coordinates and then supplies it for uses within defense and intelligence, civil agencies, mapping and analysis, environmental monitoring, oil and gas exploration, infrastructure management, Internet portals, and navigation technology."

  3. Multi-Level Data-Security and Data-Protection in a Distributed Search Infrastructure for Digital Medical Samples.

    Science.gov (United States)

    Witt, Michael; Krefting, Dagmar

    2016-01-01

    Human sample data are stored in biobanks, with software managing the derived digital sample data. When these stand-alone components are connected and a search infrastructure is employed, users can collect the required research data from different data sources. Data protection, patient rights, data heterogeneity and access control are major challenges for such an infrastructure. This dissertation investigates concepts for a multi-level security architecture to comply with these requirements.

  4. Executable Behavioral Modeling of System and Software Architecture Specifications to Inform Resourcing Decisions

    Science.gov (United States)

    2016-09-01


  5. Assessing the Vulnerability of Large Critical Infrastructure Using Fully-Coupled Blast Effects Modeling

    Energy Technology Data Exchange (ETDEWEB)

    McMichael, L D; Noble, C R; Margraf, J D; Glascoe, L G

    2009-03-26

    Structural failures, such as the MacArthur Maze I-880 overpass collapse in Oakland, California and the I-35W bridge collapse in Minneapolis, Minnesota, are recent examples of our national infrastructure's fragility and serve as an important reminder of the role such infrastructure plays in our everyday lives. These two failures, as well as the World Trade Center's collapse and the levee failures in New Orleans, highlight the national importance of protecting our infrastructure as much as possible against acts of terrorism and natural hazards. This paper describes a process for evaluating the vulnerability of critical infrastructure to large blast loads using a fully-coupled finite element approach. A description of the finite element software and modeling technique is discussed, along with the experimental validation of the numerical tools. We discuss how such an approach can be used for specific problems, such as modeling the progressive collapse of a building.

  6. Security infrastructure for dynamically provisioned cloud infrastructure services

    NARCIS (Netherlands)

    Demchenko, Y.; Ngo, C.; de Laat, C.; Lopez, D.R.; Morales, A.; García-Espín, J.A.; Pearson, S.; Yee, G.

    2013-01-01

    This chapter discusses conceptual issues, basic requirements and practical suggestions for designing dynamically configured security infrastructure provisioned on demand as part of a cloud-based infrastructure. It also describes general use cases for provisioning cloud infrastructure services.

  7. Optimization of Advanced ACTPol Transition Edge Sensor Bolometer Operation Using R(T,I) Transition Measurements

    Science.gov (United States)

    Salatino, Maria

    2017-06-01

    In current submillimeter and millimeter cosmology experiments, focal planes are populated by kilopixel arrays of transition edge sensors (TESes). A varying incoming power load requires frequent rebiasing of the TESes through standard current-voltage (IV) acquisition. The time required to perform IVs on such large arrays, and the resulting transient heating of the bath, reduces sky observation time. We explore a bias step method that significantly reduces the time required for the rebiasing process. It exploits the detectors' responses to the injection of a small square wave signal on top of the dc bias current, together with knowledge of the shape of the detector transition R(T,I). This method has been tested on two detector arrays of the Atacama Cosmology Telescope (ACT). In this paper, we focus on the first step of the method: the estimate of the TES operating resistance as a percentage of its normal resistance (%Rn).
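
    The abstract gives no formulas; as a hedged sketch of the basic idea (not the ACT pipeline), the resistance of a dc-biased TES read out through a shunt resistor can be estimated from its response to a small bias step, in the simplest limit where inductance and electrothermal feedback are neglected. All numbers below are placeholders.

```python
# Hedged illustration of a bias-step %Rn estimate, not the ACT analysis code.
# Model: TES of resistance R in parallel with shunt Rsh, driven by bias
# current I_b; the SQUID measures the TES branch current I. For a small
# step dI_b, the current divider gives dI = dI_b * Rsh / (Rsh + R), hence
#   R = Rsh * (dI_b - dI) / dI
# (valid only if inductance and electrothermal feedback are negligible).
R_SH = 180e-6   # shunt resistance [ohm], placeholder
R_N  = 8e-3     # TES normal resistance [ohm], placeholder

def tes_percent_rn(dI_bias, dI_tes, r_sh=R_SH, r_n=R_N):
    """Return %Rn from the measured square-wave response dI_tes
    to a bias-current step dI_bias."""
    r_tes = r_sh * (dI_bias - dI_tes) / dI_tes
    return 100.0 * r_tes / r_n

# Example: a 1 uA bias step producing a 0.043 uA response puts the TES
# near the middle of its transition (~50 %Rn).
print(f"{tes_percent_rn(1e-6, 0.043e-6):.0f} %Rn")
```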

  8. Probabilistic modelling of security of supply in gas networks and evaluation of new infrastructure

    International Nuclear Information System (INIS)

    Praks, Pavel; Kopustinskas, Vytis; Masera, Marcelo

    2015-01-01

    The paper presents a probabilistic model to study security of supply in a gas network. The model is based on Monte-Carlo simulations combined with graph theory, and is implemented in the software tool ProGasNet. The software allows gas networks to be studied in various respects, including identification of the weakest links and nodes, vulnerability analysis, bottleneck analysis, and evaluation of new infrastructure. In this paper ProGasNet is applied to a benchmark network based on a real EU gas transmission network spanning several countries, with the purpose of evaluating the security-of-supply effects of new infrastructure, whether under construction, recently completed or still being planned. The probabilistic model enables quantitative evaluations by comparing the reliability of gas supply at each consuming node of the network. - Highlights: • A Monte-Carlo algorithm for stochastic flow networks is presented. • Network elements can fail according to a given probabilistic model. • A priority supply pattern of gas transmission networks is assumed. • A real-world EU gas transmission network is presented and analyzed. • A risk ratio is used for security-of-supply quantification of new infrastructure.
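
    ProGasNet itself is not reproduced here; as a hedged sketch of the generic technique the highlights describe (Monte-Carlo sampling of component failures on a capacitated flow network), the toy below uses networkx with an invented topology and invented failure probabilities to estimate the probability that a consuming node's demand is fully served.

```python
# Sketch of the generic technique, not ProGasNet code: sample random pipe
# failures, recompute the maximum flow reaching a consuming node, and
# estimate the probability its demand is fully met. Numbers are invented.
import random
import networkx as nx

EDGES = [  # (from, to, capacity, failure probability)
    ("import",  "hub",  100.0, 0.01),
    ("storage", "hub",   40.0, 0.02),
    ("hub",     "city",  80.0, 0.01),
    ("storage", "city",  30.0, 0.02),
]
SUPPLY = 1e6  # effectively unlimited source-side capacity
DEMAND, TRIALS = 70.0, 20_000

served = 0
for _ in range(TRIALS):
    H = nx.DiGraph()
    H.add_edge("source", "import",  capacity=SUPPLY)
    H.add_edge("source", "storage", capacity=SUPPLY)
    for u, v, cap, p in EDGES:
        if random.random() > p:  # this pipe survives the trial
            H.add_edge(u, v, capacity=cap)
    flow = nx.maximum_flow_value(H, "source", "city") if "city" in H else 0.0
    served += flow >= DEMAND

print(f"P(demand fully served) ~ {served / TRIALS:.3f}")
```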

  9. Bottom-up, top-down? Connecting software architecture design with use

    DEFF Research Database (Denmark)

    Büscher, Monika; Christensen, Michael; Hansen, Klaus Marius

    2009-01-01

    Participatory design has traditionally focused on the design of technology applications or the co-realisation of a more holistic socio-technical bricolage of new and existing technologies and practices. 'Infrastructural' design issues like software architectures, programming languages, communication...

  10. Building a Community Infrastructure for Scalable On-Line Performance Analysis Tools around Open|SpeedShop

    Energy Technology Data Exchange (ETDEWEB)

    Galarowicz, James E. [Krell Institute, Ames, IA (United States); Miller, Barton P. [Univ. of Wisconsin, Madison, WI (United States). Computer Sciences Dept.; Hollingsworth, Jeffrey K. [Univ. of Maryland, College Park, MD (United States). Computer Sciences Dept.; Roth, Philip [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Future Technologies Group, Computer Science and Math Division; Schulz, Martin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing (CASC)

    2013-12-19

    In this project we created a community tool infrastructure for program development tools targeting petascale-class machines and beyond. This includes tools for performance analysis, debugging, and correctness, as well as tuning and optimization frameworks. The developed infrastructure provides a comprehensive and extensible set of individual tool building components. We started with the basic elements necessary across all tools in such an infrastructure, followed by a set of generic core modules that allow comprehensive performance analysis at scale. Further, we developed a methodology and workflow that allows others to add or replace modules, to integrate parts into their own tools, or to customize existing solutions. To form the core modules, we built on the existing Open|SpeedShop infrastructure and decomposed it into individual modules that match the necessary tool components. At the same time, we addressed the challenges found in performance tools for petascale systems in each module. When assembled, this instantiation of the community tool infrastructure provides an enhanced version of Open|SpeedShop which, while completely different in its architecture, provides scalable performance analysis for petascale applications through a familiar interface. This project also built upon and enhanced the capabilities and reusability of project partner components as specified in the original project proposal. The project team's work over the funding cycle focused on several areas of research, which are described in the following sections. The remainder of this report also highlights related work as well as preliminary work that supported the project. In addition to the project partners funded by the Office of Science under this grant, the project team included several collaborators who contributed to the overall design of the envisioned tool infrastructure. In particular, the project team worked closely with the other two DOE NNSA

  11. My private cloud overview : a trust, privacy and security infrastructure for the cloud

    NARCIS (Netherlands)

    Chadwick, D.W.; Lievens, S.F.; Hartog, den J.I.; Pashalidis, A.; Alhadeff, J.

    2011-01-01

    Based on the assumption that cloud providers can be trusted (to a certain extent), we define a trust, security and privacy preserving infrastructure that relies on trusted cloud providers to operate properly. Working in tandem with legal agreements, our open source software supports: trust and

  12. WILDFIRE IGNITION RESISTANCE ESTIMATOR WIZARD SOFTWARE DEVELOPMENT REPORT

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, M.; Robinson, C.; Gupta, N.; Werth, D.

    2012-10-10

    This report describes the development of a software tool, entitled “WildFire Ignition Resistance Estimator Wizard” (WildFIRE Wizard, Version 2.10). This software was developed within the Wildfire Ignition Resistant Home Design (WIRHD) program, sponsored by the U. S. Department of Homeland Security, Science and Technology Directorate, Infrastructure Protection & Disaster Management Division. WildFIRE Wizard is a tool that enables homeowners to take preventive actions that will reduce their home’s vulnerability to wildfire ignition sources (i.e., embers, radiant heat, and direct flame impingement) well in advance of a wildfire event. This report describes the development of the software, its operation, its technical basis and calculations, and steps taken to verify its performance.

  13. The case for open-source software in drug discovery.

    Science.gov (United States)

    DeLano, Warren L

    2005-02-01

    Widespread adoption of open-source software for network infrastructure, web servers, code development, and operating systems leads one to ask how far it can go. Will "open source" spread broadly, or will it be restricted to niches frequented by hopeful hobbyists and midnight hackers? Here we identify reasons for the success of open-source software and predict how consumers in drug discovery will benefit from new open-source products that address their needs with increased flexibility and in ways complementary to proprietary options.

  14. The Information Technology Infrastructure for the Translational Genomics Core and the Partners Biobank at Partners Personalized Medicine

    Directory of Open Access Journals (Sweden)

    Natalie Boutin

    2016-01-01

    Full Text Available The Biobank and Translational Genomics core at Partners Personalized Medicine requires robust software and hardware. This Information Technology (IT) infrastructure enables the storage and transfer of large amounts of data, drives efficiencies in the laboratory, maintains data integrity from the time of consent to the time that genomic data are distributed for research, and enables the management of complex genetic data. Here, we describe the functional components of the research IT infrastructure at Partners Personalized Medicine and how they integrate with existing clinical and research systems, review some of the ways in which this IT infrastructure maintains data integrity and security, and discuss some of the challenges inherent in building and maintaining such infrastructure.

  15. Final report for the Integrated and Robust Security Infrastructure (IRSI) laboratory directed research and development project

    Energy Technology Data Exchange (ETDEWEB)

    Hutchinson, R.L.; Hamilton, V.A.; Istrail, G.G.; Espinoza, J.; Murphy, M.D.

    1997-11-01

    This report describes the results of a Sandia-funded laboratory-directed research and development project titled “Integrated and Robust Security Infrastructure” (IRSI). IRSI was to provide a broad range of commercial-grade security services to any software application. IRSI has two primary goals: application transparency and a manageable public key infrastructure. IRSI must provide its security services to any application without the need to modify the application to invoke the security services. Public key mechanisms are well suited for a network with many end users and systems. There are many issues that make it difficult to deploy and manage a public key infrastructure. IRSI addressed some of these issues to create a more manageable public key infrastructure.

  16. Transactional Infrastructure of the Economy: the Evolution of Concepts and Synthesis of Definitions

    Directory of Open Access Journals (Sweden)

    Maruschak Irina Valeryevna

    2017-03-01

    Full Text Available The paper surveys the evolution of market infrastructure concepts and identifies the first concepts of institutional infrastructure. It shows that the understanding of infrastructure has gradually narrowed because analysis has given priority to its physical (material and technological) components. This view ignores the fact that transactional resources, being drivers of the evolution of economic systems, themselves evolve: they grow more complex, combining increases in the efficiency of their elements with a strengthening heterogeneity and inconsistency of their structure. The transactional evolution of the economy in general, and the evolution of particular transactional resources of production, are promising directions for dedicated analysis. Transactional infrastructure is considered as an integrated complex of institutional, organizational (relational) and information infrastructures. The problems of the first concepts of transactional infrastructure, connected with the difficulty of differentiating subsystems that always operate jointly, are identified. The paper argues for moving from the isolated analysis of individual transactional resources (institutions, organizations, information, social capital, trust, etc.) to the study of the corresponding supporting infrastructures, and on to a systemic analysis of the integrated transactional infrastructure of the economy. The transactional sector, understood as the set of specialized industries and the corresponding collective and individual actors that supply the resources for market transactions, is proposed as the transactional structure of the economy. Transactional infrastructure is treated as a critically significant factor of economic evolution which, under a post-industrial type of economy, gradually acquires a transactional nature of its own.

  17. Helix Nebula: Enabling federation of existing data infrastructures and data services to an overarching cross-domain e-infrastructure

    Science.gov (United States)

    Lengert, Wolfgang; Farres, Jordi; Lanari, Riccardo; Casu, Francesco; Manunta, Michele; Lassalle-Balier, Gerard

    2014-05-01

    so called "Supersite Exploitation Platform" (SSEP) provides scientists an overarching federated e-infrastructure with a very fast access to (i) large volume of data (EO/non-space data), (ii) computing resources (e.g. hybrid cloud/grid), (iii) processing software (e.g. toolboxes, RTMs, retrieval baselines, visualization routines), and (iv) general platform capabilities (e.g. user management and access control, accounting, information portal, collaborative tools, social networks etc.). In this federation each data provider remains in full control of the implementation of its data policy. This presentation outlines the Architecture (technical and services) supporting very heterogeneous science domains as well as the procedures for new-comers to join the Helix Nebula Market Place. Ref.1 http://cds.cern.ch/record/1374172/files/CERN-OPEN-2011-036.pdf

  18. Modernization of the USGS Hawaiian Volcano Observatory Seismic Processing Infrastructure

    Science.gov (United States)

    Antolik, L.; Shiro, B.; Friberg, P. A.

    2016-12-01

    The USGS Hawaiian Volcano Observatory (HVO) operates a Tier 1 Advanced National Seismic System (ANSS) seismic network to monitor, characterize, and report on volcanic and earthquake activity in the State of Hawaii. Upgrades at the observatory since 2009 have improved the digital telemetry network, computing resources, and seismic data processing with the adoption of the ANSS Quake Management System (AQMS). HVO aims to build on these efforts by further modernizing its seismic processing infrastructure and strengthening its ability to meet ANSS performance standards. Most notably, this will also allow HVO to support redundant systems, both onsite and offsite, in order to provide better continuity of operation during intermittent power and network outages. We are in the process of implementing a number of upgrades and improvements to HVO's seismic processing infrastructure, including: 1) virtualization of AQMS physical servers; 2) migration of server operating systems from Solaris to Linux; 3) consolidation of AQMS real-time and post-processing services onto a single server; 4) upgrading the database from Oracle 10 to Oracle 12; and 5) upgrading to the latest Earthworm and AQMS software. These improvements will make server administration more efficient, minimize the hardware resources required by AQMS, simplify the Oracle replication setup, and provide better integration with HVO's existing state-of-health monitoring tools and backup systems. Ultimately, it will provide HVO with the latest and most secure software available while making the software easier to deploy and support.

  19. NASA: A generic infrastructure for system-level MP-SoC design space exploration

    NARCIS (Netherlands)

    Jia, Z.J.; Pimentel, A.D.; Thompson, M.; Bautista, T.; Núñez, A.

    2010-01-01

    System-level simulation and design space exploration (DSE) are key ingredients for the design of multiprocessor system-on-chip (MP-SoC) based embedded systems. The efforts in this area, however, typically use ad-hoc software infrastructures to facilitate and support the system-level DSE experiments.

  20. Remote software upload techniques in future vehicles and their performance analysis

    Science.gov (United States)

    Hossain, Irina

    Updating software in vehicle Electronic Control Units (ECUs) will become a mandatory requirement for a variety of reasons: for example, to update or fix the functionality of an existing system, to add new functionality, to remove software bugs, and to keep pace with ITS infrastructure. Software modules of advanced vehicles can be updated using the Remote Software Upload (RSU) technique. RSU employs an infrastructure-based wireless communication technique in which the software supplier sends the software to the targeted vehicle via a roadside Base Station (BS). However, security is critically important in RSU to avoid disasters due to malfunctions of the vehicle and to protect proprietary algorithms from hackers, competitors or people with malicious intent. In this thesis, a mechanism for secure software upload in advanced vehicles is presented which employs mutual authentication of the software provider and the vehicle using a pre-shared authentication key before sending the software. The software packets are sent encrypted with a secret key along with a Message Digest (MD). In order to increase the security level, it is proposed that the vehicle receive more than one copy of the software, each carrying the MD. The vehicle installs the new software only when it receives more than one identical copy of the software. In order to validate the proposition, analytical expressions for the average number of packet transmissions required for a successful software update are derived. Different cases are investigated depending on the vehicle's buffer size and verification methods. The analytical and simulation results show that it is sufficient to send two copies of the software to the vehicle to thwart any security attack while uploading the software. The above-mentioned unicast method for RSU is suitable when software needs to be uploaded to a single vehicle. Since multicasting is the most efficient method of group communication, updating software in an ECU of a large number of vehicles
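
    The thesis text is not reproduced here; as a hedged sketch of the integrity check it describes (digests over the software image plus acceptance only after multiple identical copies), the following uses a keyed HMAC for illustration. Key handling, encryption and packet framing are out of scope, and all names are ours, not the thesis's.

```python
# Illustrative sketch only: verify that independently received copies of a
# software image carry matching keyed digests before accepting the update.
import hmac
import hashlib

PRESHARED_KEY = b"vehicle-specific-secret"  # placeholder, provisioned off-line

def digest(image: bytes, key: bytes = PRESHARED_KEY) -> bytes:
    """Keyed message digest over the (already decrypted) software image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def accept_update(copies: list[tuple[bytes, bytes]]) -> bool:
    """copies: (image, received_digest) pairs from separate transmissions.
    Install only if every digest verifies and all images are identical."""
    if len(copies) < 2:  # the thesis finds two copies suffice in practice
        return False
    reference = copies[0][0]
    for image, received in copies:
        if image != reference or not hmac.compare_digest(digest(image), received):
            return False
    return True

img = b"\x7fELF...firmware..."
ok = accept_update([(img, digest(img)), (img, digest(img))])
print("install" if ok else "reject")
```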

  1. Building a Science Software Institute: Synthesizing the Lessons Learned from the ISEES and WSSI Software Institute Conceptualization Efforts

    Science.gov (United States)

    Idaszak, R.; Lenhardt, W. C.; Jones, M. B.; Ahalt, S.; Schildhauer, M.; Hampton, S. E.

    2014-12-01

    The NSF, in an effort to support the creation of sustainable science software, funded 16 science software institute conceptualization efforts. The goal of these conceptualization efforts is to explore approaches to creating the institutional, sociological, and physical infrastructures needed to support sustainable science software. This paper presents the lessons learned from two of these conceptualization efforts: the Institute for Sustainable Earth and Environmental Software (ISEES - http://isees.nceas.ucsb.edu) and the Water Science Software Institute (WSSI - http://waters2i2.org). ISEES is a multi-partner effort led by the National Center for Ecological Analysis and Synthesis (NCEAS). WSSI, also a multi-partner effort, is led by the Renaissance Computing Institute (RENCI). The two conceptualization efforts have been collaborating because of the complementarity of their approaches and the potential synergies of their science focus. ISEES and WSSI have engaged in a number of activities to address the challenges of science software, such as workshops, hackathons, and coding efforts. More recently, the two institutes have also collaborated on joint activities including training, proposals, and papers. In addition to presenting lessons learned, this paper synthesizes across the two efforts to project a unified vision for a science software institute.

  2. High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)

    Energy Technology Data Exchange (ETDEWEB)

    Habib, Salman [Argonne National Lab. (ANL), Argonne, IL (United States); Roser, Robert [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); LeCompte, Tom [Argonne National Lab. (ANL), Argonne, IL (United States); Marshall, Zach [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Borgland, Anders [SLAC National Accelerator Lab., Menlo Park, CA (United States); Viren, Brett [Brookhaven National Lab. (BNL), Upton, NY (United States); Nugent, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Asai, Makato [SLAC National Accelerator Lab., Menlo Park, CA (United States); Bauerdick, Lothar [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Finkel, Hal [Argonne National Lab. (ANL), Argonne, IL (United States); Gottlieb, Steve [Indiana Univ., Bloomington, IN (United States); Hoeche, Stefan [SLAC National Accelerator Lab., Menlo Park, CA (United States); Sheldon, Paul [Vanderbilt Univ., Nashville, TN (United States); Vay, Jean-Luc [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Elmer, Peter [Princeton Univ., NJ (United States); Kirby, Michael [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Patton, Simon [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Potekhin, Maxim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yanny, Brian [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Calafiura, Paolo [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dart, Eli [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Gutsche, Oliver [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Izubuchi, Taku [Brookhaven National Lab. (BNL), Upton, NY (United States); Lyon, Adam [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Petravick, Don [Univ. of Illinois, Urbana-Champaign, IL (United States). National Center for Supercomputing Applications (NCSA)

    2015-10-29

    Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend-line, it is important for the HEP community to develop an effective response to a series of expected challenges. In order to help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers -- 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. The final versions of the reports are combined in this document, and are presented along with introductory material.

  3. High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)

    Energy Technology Data Exchange (ETDEWEB)

    Habib, Salman [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Roser, Robert [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)

    2015-10-28

    Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend-line, it is important for the HEP community to develop an effective response to a series of expected challenges. In order to help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers -- 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. The final versions of the reports are combined in this document, and are presented along with introductory material.

  4. Software for roof defects recognition on aerial photographs

    Science.gov (United States)

    Yudin, D.; Naumov, A.; Dolzhenko, A.; Patrakova, E.

    2018-05-01

    The article presents software for recognizing roof defects on aerial photographs taken by air drones. An aerial image segmentation mechanism is described. It allows detecting roof defects – unsmoothness that causes water stagnation after rain. It is shown that the HSV-transformation approach allows quick detection of stagnation areas, their sizes and perimeters, but is sensitive to shadows and changes of roofing type. A Deep Fully Convolutional Network (FCN) software solution eliminates this drawback. The test data set consists of roofing photos with defects and corresponding binary masks. The FCN approach gave acceptable image segmentation results in terms of average Dice metric. This software can be used to automate the inspection of roof conditions in the production sector and in housing and utilities infrastructure.
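
    A rough sketch of the HSV-thresholding stage described above, using OpenCV; the HSV bounds are illustrative assumptions, not values from the article:

```python
import cv2
import numpy as np

def detect_stagnation(image_path: str,
                      lower=(90, 40, 40), upper=(130, 255, 255)):
    # Convert the aerial photo to HSV and threshold for dark, wet patches;
    # the bounds above are placeholder values for demonstration only.
    img = cv2.imread(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    # Each contour is a candidate stagnation area; report size and perimeter,
    # the two quantities the abstract says the HSV stage yields.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [(cv2.contourArea(c), cv2.arcLength(c, True)) for c in contours]
```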

  5. Software Quality Control at Belle II

    Science.gov (United States)

    Ritter, M.; Kuhr, T.; Hauth, T.; Gebard, T.; Kristof, M.; Pulvermacher, C.; Belle II Software Group

    2017-10-01

    Over the last seven years the software stack of the next generation B factory experiment Belle II has grown to over one million lines of C++ and Python code, counting only the part included in offline software releases. There are several thousand commits to the central repository by about 100 individual developers per year. Keeping the software stack coherent and of high quality, so that it can be sustained and used efficiently for data acquisition, simulation, reconstruction, and analysis over the lifetime of the Belle II experiment, is a challenge. A set of tools is employed to monitor the quality of the software and provide fast feedback to the developers. They are integrated into a machinery that is controlled by a buildbot master and automates the quality checks. The tools include different compilers, cppcheck, the clang static analyzer, valgrind memcheck, doxygen, a geometry overlap checker, a check for missing or extra library links, unit tests, steering file level tests, a sophisticated high-level validation suite, and an issue tracker. The technological development infrastructure is complemented by organizational means to coordinate the development.
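
    An illustrative stand-in for the buildbot-driven machinery described above, invoking two of the named tools via subprocess; the paths and command-line options are assumptions, not Belle II's actual configuration:

```python
import subprocess

# Two of the quality-check tools named in the abstract; options and the
# source path are illustrative placeholders.
CHECKS = [
    ["cppcheck", "--enable=warning,style", "src/"],
    ["doxygen", "Doxyfile"],
]

def run_quality_checks():
    failures = []
    for cmd in CHECKS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            # In the real system this feedback reaches developers quickly,
            # e.g. via the issue tracker; here we only collect tool names.
            failures.append(cmd[0])
    return failures
```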

  6. Abstracting application deployment on Cloud infrastructures

    Science.gov (United States)

    Aiftimiei, D. C.; Fattibene, E.; Gargana, R.; Panella, M.; Salomoni, D.

    2017-10-01

    Deploying a complex application on a Cloud-based infrastructure can be a challenging task. In this contribution we present an approach for Cloud-based deployment of applications and its present or future implementation in the framework of several projects, such as “!CHAOS: a cloud of controls” [1], a project funded by MIUR (Italian Ministry of Research and Education) to create a Cloud-based deployment of a control system and data acquisition framework, “INDIGO-DataCloud” [2], an EC H2020 project targeting among other things high-level deployment of applications on hybrid Clouds, and “Open City Platform” [3], an Italian project aiming to provide open Cloud solutions for Italian Public Administrations. We considered using an orchestration service to hide the complex deployment of the application components, and to build an abstraction layer on top of the orchestration one. Through the Heat [4] orchestration service, we prototyped a dynamic, on-demand, scalable platform of software components, based on OpenStack infrastructures. On top of the orchestration service we developed a prototype of a web interface exploiting the Heat APIs. The user can start an instance of the application without having knowledge of the underlying Cloud infrastructure and services. Moreover, the platform instance can be customized by choosing parameters related to the application, such as the size of a File System or the number of instances of a NoSQL DB cluster. As soon as the desired platform is running, the web interface offers the possibility to scale some infrastructure components. In this contribution we describe the solution design and implementation, based on the application requirements, the details of the development of both the Heat templates and the web interface, together with possible exploitation strategies of this work in Cloud data centers.
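
    A minimal sketch of launching such a platform instance through the Heat API with python-heatclient, assuming a pre-built authenticated keystoneauth session; the template and the db_cluster_size parameter are placeholders echoing the NoSQL-cluster customization mentioned above:

```python
from heatclient.client import Client  # python-heatclient

# Trivial stand-in for the project's platform templates; real templates
# would declare compute, storage, and database resources.
TEMPLATE = """
heat_template_version: 2016-04-08
parameters:
  db_cluster_size: {type: number, default: 3}
resources: {}
"""

def launch_platform(session, name="demo-platform", db_nodes=3):
    # The web interface described above would call something like this
    # on the user's behalf, hiding the underlying Cloud details.
    heat = Client("1", session=session)
    return heat.stacks.create(stack_name=name,
                              template=TEMPLATE,
                              parameters={"db_cluster_size": db_nodes})
```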

  7. Security infrastructure for on-demand provisioned Cloud infrastructure services

    NARCIS (Netherlands)

    Demchenko, Y.; Ngo, C.; de Laat, C.; Wlodarczyk, T.W.; Rong, C.; Ziegler, W.

    2011-01-01

    Providing consistent security services in on-demand provisioned Cloud infrastructure services is of primary importance due to the multi-tenant and potentially multi-provider nature of the Cloud Infrastructure as a Service (IaaS) environment. Cloud security infrastructure should address two aspects of the

  8. LLVM Infrastructure and Tools Project Summary

    Energy Technology Data Exchange (ETDEWEB)

    McCormick, Patrick Sean [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-11-06

    This project works with the open source LLVM Compiler Infrastructure (http://llvm.org) to provide tools and capabilities that address needs and challenges faced by the ECP community (applications, libraries, and other components of the software stack). Our focus is on providing a more productive development environment that enables (i) improved compilation times and code generation for parallelism, (ii) additional features/capabilities within the design and implementations of LLVM components for improved platform/performance portability, and (iii) improved aspects related to the composition of the underlying implementation details of the programming environment, capturing resource utilization, overheads, etc. -- including runtime systems that are often not easily addressed by application and library developers.

  9. The CMS software performance at the start of data taking

    CERN Document Server

    Benelli, Gabriele

    2009-01-01

    The CMS software framework (CMSSW) is a complex project evolving very rapidly as the first LHC colliding beams approach. The computing requirements constrain performance in terms of CPU time, memory footprint, and event size on disk, to allow for planning and managing the computing infrastructure necessary to handle the needs of the experiment. A performance suite of tools has been developed to track all aspects of code performance through the software release cycles, allowing for regression testing and guiding code development for optimization. In this talk, we describe the CMSSW performance suite tools and present some sample performance results from the release integration process for the CMS software.

  10. BASTILLE - Better Analysis Software to Treat ILL Experiments - a unified, unifying approach to data reduction and analysis

    International Nuclear Information System (INIS)

    Johnson, M.

    2011-01-01

    Data reduction and analysis is a key component in the production of scientific results. If this component, like any other in the chain, is weak, the final output is compromised. The current situation for data reduction and analysis may be regarded as adequate, but it is variable, depending on the instrument, and should be improved. In particular the delivery of new and upgraded instruments in Millennium Phase I and those proposed for Phase II will bring new demands and challenges for software development. Failure to meet these challenges will hamper the exploitation of higher data rates and the delivery of new science. The proposed project is to provide a single, underpinning software infrastructure for data analysis, which would ensure: 1) a clear vision of software provision at ILL; 2) a clear role for the 'Computing for Science' Group (CS) in maintaining and developing the infrastructure and the codes; 3) a well-defined framework for recruiting and training CS staff; 4) ease and efficiency of development within a common, well-defined software environment; 5) safeguarding of key, existing software; and 6) ease of communication with other software like instrument control software to allow real-time data analysis and experiment control, or software from other institutes or sources

  11. Impact of Infrastructure and Production Processes on Rioja Wine Supply Chain Performance

    Directory of Open Access Journals (Sweden)

    José Roberto Díaz-Reza

    2018-01-01

    Full Text Available This paper presents a structural equation model for analyzing the relationships between four latent variables: infrastructure, production processes, transport benefits, and economic benefits within the supply chain for wine from La Rioja, Spain, incorporating 12 observed variables. The model proposes six hypotheses, which were tested using information gathered from 64 surveys completed by managers of several wineries in the region. The WarpPLS v.5® software (Version 5.0, Script Warp Systems, Laredo, TX, USA) was used to execute the model and analyze the direct, indirect, and total effects among the latent variables. The results show that control of production processes is a direct source of economic and transport benefits, owing to its high explanatory power for those variables. Similarly, infrastructure is a direct source of transport and production benefits, and some of these benefits also accrue indirectly. Infrastructure does not have a direct effect on economic benefits; however, it has indirect effects through production processes and transport benefits. Infrastructure is a very important variable because of its influence on final performance, but also because of its high environmental impact. Finally, 43.8% of the variance in economic benefits was explained: 19.1% from production processes, 21.1% from transport benefits, and 3.7% from infrastructure.
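
    A small illustration of how direct and indirect effects combine into total effects in such a path model; the coefficients below are hypothetical, not the paper's estimates:

```python
# Hypothetical standardized path coefficients (not the study's values).
beta = {
    ("infrastructure", "production"): 0.50,
    ("infrastructure", "transport"):  0.40,
    ("production", "economic"):       0.30,
    ("transport", "economic"):        0.35,
    ("infrastructure", "economic"):   0.00,  # no direct effect, per the paper
}

# Total effect = direct effect + sum of products along each indirect path.
indirect = (beta[("infrastructure", "production")] * beta[("production", "economic")]
            + beta[("infrastructure", "transport")] * beta[("transport", "economic")])
total = beta[("infrastructure", "economic")] + indirect
print(f"Total effect of infrastructure on economic benefits: {total:.2f}")  # 0.29
```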

  12. Reliable Software Development for Machine Protection Systems

    CERN Document Server

    Anderson, D; Dragu, M; Fuchsberger, K; Garnier, JC; Gorzawski, AA; Koza, M; Krol, K; Misiowiec, K; Stamos, K; Zerlauth, M

    2014-01-01

    The controls software for the Large Hadron Collider (LHC) at CERN, with more than 150 million lines of code, is among the largest known code bases in the world [1]. Industry has been applying Agile software engineering techniques for more than two decades now, and the advantages of these techniques can no longer be ignored when managing the code base for large projects within the accelerator community. Furthermore, CERN is a particular environment due to high personnel turnover and manpower limitations, where applying Agile processes can improve both codebase management and code quality. This paper presents the successful application of the Agile software development process Scrum for machine protection systems at CERN, the quality standards and infrastructure introduced together with the Agile process, as well as the challenges encountered in adapting it to the CERN environment.

  13. Cyber Security Threats to Safety-Critical, Space-Based Infrastructures

    Science.gov (United States)

    Johnson, C. W.; Atencia Yepez, A.

    2012-01-01

    Space-based systems play an important role within national critical infrastructures. They are being integrated into advanced air-traffic management applications, rail signalling systems, energy distribution software, etc. Unfortunately, the end users of communications, location sensing, and timing applications often fail to understand that these infrastructures are vulnerable to a wide range of security threats. The following pages focus on concerns associated with potential cyber-attacks. These are important because future attacks may invalidate many of the safety assumptions that support the provision of critical space-based services. These safety assumptions are based on standard forms of hazard analysis that ignore cyber-security considerations. This is a significant limitation when, for instance, security attacks can simultaneously exploit multiple vulnerabilities in a manner that would never occur without a deliberate enemy seeking to damage space-based systems and ground infrastructures. We address this concern through the development of a combined safety and security risk assessment methodology. The aim is to identify attack scenarios that justify the allocation of additional design resources so that safety barriers can be strengthened to increase our resilience against security threats.

  14. The Five 'R's' for Developing Trusted Software Frameworks to increase confidence in, and maximise reuse of, Open Source Software.

    Science.gov (United States)

    Fraser, Ryan; Gross, Lutz; Wyborn, Lesley; Evans, Ben; Klump, Jens

    2015-04-01

    Recent investments in HPC, cloud, and petascale data stores have dramatically increased the scale and resolution at which earth science challenges can now be tackled. These new infrastructures are highly parallelised, and to fully utilise them and access the large volumes of earth science data now available, a new approach to software stack engineering needs to be developed. The size, complexity, and cost of the new infrastructures mean any software deployed has to be reliable, trusted, and reusable. Increasingly, software is available via open source repositories, but these usually only enable code to be discovered and downloaded. It is hard for a scientist to judge the suitability and quality of individual codes: rarely is there information on how and where codes can be run, what the critical dependencies are, and in particular, on the version requirements and licensing of the underlying software stack. A trusted software framework is proposed to enable reliable software to be discovered, accessed, and then deployed on multiple hardware environments. More specifically, this framework will enable those who generate the software, and those who fund its development, to gain credit for the effort, IP, time, and dollars spent, and facilitate quantification of the impact of individual codes. For scientific users, the framework delivers reviewed and benchmarked scientific software with mechanisms to reproduce results. The trusted framework will have five separate but connected components: Register, Review, Reference, Run, and Repeat. 1) The Register component will facilitate discovery of relevant software from multiple open source code repositories. Registration of a code should include licensing information and the hardware environments it can run on, define appropriate validation (testing) procedures, and list the critical dependencies; a sketch of such a registry entry follows below. 2) The Review component targets the verification of the software, typically against a set of
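
    A hypothetical sketch of the metadata a 'Register' entry could capture, following the fields the abstract lists (license, hardware environments, validation procedures, critical dependencies); all names and values are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class RegisteredCode:
    # Hypothetical registry entry for the 'Register' component; the fields
    # mirror the information the abstract says registration should capture.
    name: str
    repository_url: str
    license: str
    hardware_environments: list = field(default_factory=list)
    validation_procedures: list = field(default_factory=list)
    critical_dependencies: dict = field(default_factory=dict)  # package -> version constraint

entry = RegisteredCode(
    name="example-solver",                                    # hypothetical code
    repository_url="https://example.org/example-solver.git",  # placeholder URL
    license="Apache-2.0",
    hardware_environments=["x86_64 Linux cluster", "GPU node"],
    validation_procedures=["unit tests", "benchmark suite"],
    critical_dependencies={"numpy": ">=1.20"},
)
```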

  15. The Role of Free/Libre and Open Source Software in Learning Health Systems.

    Science.gov (United States)

    Paton, C; Karopka, T

    2017-08-01

    Objective: To give an overview of the role of Free/Libre and Open Source Software (FLOSS) in the context of secondary use of patient data to enable Learning Health Systems (LHSs). Methods: We conducted an environmental scan of the academic and grey literature utilising the MedFLOSS database of open source systems in healthcare to inform a discussion of the role of open source in developing LHSs that reuse patient data for research and quality improvement. Results: A wide range of FLOSS is identified that contributes to the information technology (IT) infrastructure of LHSs including operating systems, databases, frameworks, interoperability software, and mobile and web apps. The recent literature around the development and use of key clinical data management tools is also reviewed. Conclusions: FLOSS already plays a critical role in modern health IT infrastructure for the collection, storage, and analysis of patient data. The nature of FLOSS systems to be collaborative, modular, and modifiable may make open source approaches appropriate for building the digital infrastructure for a LHS.

  16. Inside a VAMDC data node—putting standards into practical software

    Science.gov (United States)

    Regandell, Samuel; Marquart, Thomas; Piskunov, Nikolai

    2018-03-01

    Access to molecular and atomic data is critical for many forms of remote sensing analysis across different fields. Many atomic and molecular databases are, however, highly specialised for their intended application, complicating querying and the combination of data between sources. The Virtual Atomic and Molecular Data Centre, VAMDC, is an electronic infrastructure that allows each database to register as a ‘node’. Through services such as VAMDC’s portal website, users can then access and query all nodes in a homogenised way. Today all major atomic and molecular databases are attached to VAMDC. This article describes the software tools we developed to help data providers create and manage a VAMDC node. It gives an overview of the VAMDC infrastructure and of the various standards it uses. The article then discusses the development choices made and how the standards are implemented in practice. It concludes with a full example of implementing a VAMDC node using a real-life case, as well as future plans for the node software.

  17. Development and Operation of the D-Grid Infrastructure

    Science.gov (United States)

    Fieseler, Thomas; Gürich, Wolfgang

    D-Grid is the German national grid initiative, funded by the German Federal Ministry of Education and Research. In this paper we present the Core D-Grid, which acts as a condensation nucleus to build a production grid, and the latest developments of the infrastructure. The main difference compared to other international grid initiatives is the support of three middleware systems, namely LCG/gLite, Globus, and UNICORE, for compute resources. Storage resources are connected via SRM/dCache and OGSA-DAI. In contrast to homogeneous communities, the partners in Core D-Grid have different missions and backgrounds (computing centres, universities, research centres), providing heterogeneous hardware from single processors to high performance supercomputing systems with different operating systems. We present methods to integrate these resources and services for the D-Grid infrastructure, such as a point of information, centralized user and virtual organization management, resource registration, software provision, and policies for the implementation (firewalls, certificates, user mapping).

  18. AWARE: Adaptive Software Monitoring and Dynamic Reconfiguration for Critical Infrastructure Protection

    Science.gov (United States)

    2015-04-29

    ...in which we applied these adaptation patterns to an adaptive news web server intended to tolerate extremely heavy, unexpected loads. To address... collection of existing models used as benchmarks for OO-based refactoring and an existing web-based repository called REMODD to provide users with model... invariant properties. Specifically, we developed Avida-MDE (based on the Avida digital evolution platform) to support the automatic generation of software...

  19. A Component-based Software Development and Execution Framework for CAx Applications

    Directory of Open Access Journals (Sweden)

    N. Matsuki

    2004-01-01

    Full Text Available Digitalization of the manufacturing process and technologies is regarded as the key to increased competitive ability. The MZ-Platform infrastructure is a component-based software development framework, designed for supporting enterprises to enhance digitalized technologies using software tools and CAx components in a self-innovative way. In the paper we show the algorithm, system architecture, and a CAx application example on MZ-Platform. We also propose a new parametric data structure based on MZ-Platform.

  20. Software engineering and automatic continuous verification of scientific software

    Science.gov (United States)

    Piggott, M. D.; Hill, J.; Farrell, P. E.; Kramer, S. C.; Wilson, C. R.; Ham, D.; Gorman, G. J.; Bond, T.

    2011-12-01

    Software engineering of scientific code is challenging for a number of reasons, including pressure to publish and a lack of awareness of the pitfalls of software engineering among scientists. The Applied Modelling and Computation Group at Imperial College is a diverse group of researchers that employs best-practice software engineering methods whilst developing open source scientific software. Our main code is Fluidity - a multi-purpose computational fluid dynamics (CFD) code that can be used for a wide range of scientific applications, from earth-scale mantle convection, through basin-scale ocean dynamics, to laboratory-scale classic CFD problems, and is coupled to a number of other codes including nuclear radiation and solid modelling. Our software development infrastructure consists of a number of free tools that could be employed by any group that develops scientific code, and has been developed over a number of years with many lessons learnt. A single code base is developed by over 30 people, for which we use Bazaar for revision control, making good use of its strong branching and merging capabilities. Using features of Canonical's Launchpad platform, such as code review, blueprints for designing features, and bug reporting, gives the group, partners, and other Fluidity users an easy-to-use platform to collaborate, and allows the induction of new members of the group into an environment where software development forms a central part of their work. The code repository is coupled to an automated test and verification system which performs over 20,000 tests, including unit tests, short regression tests, code verification, and large parallel tests. Included in these tests are build tests on HPC systems, including local and UK National HPC services. Testing the code in this manner leads to a continuous verification process, not a discrete event performed once development has ceased. Much of the code verification is done via the "gold standard" of comparisons to analytical
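
    A minimal example of the "gold standard" style of verification test mentioned above: comparing a numerical solution against an analytical one at sampled points. The diffusion-like profile is illustrative, not one of Fluidity's actual test cases:

```python
import math

def analytical(x: float) -> float:
    # Illustrative closed-form benchmark: a 1-D diffusion-like profile.
    return math.erf(x / 2.0)

def check_against_analytical(numerical, tol=1e-3):
    # `numerical` is any callable x -> value produced by the solver under
    # test; every sampled point must sit within `tol` of the benchmark.
    for i in range(21):
        x = 0.1 * i
        error = abs(numerical(x) - analytical(x))
        assert error < tol, f"verification failed at x={x:.1f}: error={error:.2e}"

# A solver that reproduces the benchmark exactly passes trivially.
check_against_analytical(analytical)
```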

  1. Architecture of the local spatial data infrastructure for regional climate change research

    Science.gov (United States)

    Titov, Alexander; Gordov, Evgeny

    2013-04-01

    Georeferenced datasets (meteorological databases, modeling and reanalysis results, etc.) are actively used in the modeling and analysis of climate change on various spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets, as well as their size, which may reach tens of terabytes for a single dataset, studies of climate and environmental change require special software support based on the SDI approach. A dedicated architecture of a local spatial data infrastructure aimed at regional climate change analysis using modern web mapping technologies is presented. A geoportal is a key element of any SDI, allowing searches of geoinformation resources (datasets and services) using metadata catalogs, producing geospatial data selections by their parameters (data access functionality), and managing services and applications for cartographical visualization. It should be noted that, for objective reasons such as large dataset volumes, the complexity of the data models used, and the syntactic and semantic differences between datasets, the development of environmental geodata access, processing, and visualization services is quite a complex task. These circumstances were taken into account while developing the architecture of the local spatial data infrastructure as a universal framework providing geodata services. The architecture presented includes: 1. A storage model for big sets of regional georeferenced data that is effective in terms of search, access, retrieval, and subsequent statistical processing, allowing in particular the storage of frequently used values (like monthly and annual climate change indices, etc.), thus providing different temporal views of the datasets 2. A general architecture of the corresponding software components handling geospatial datasets within the storage model 3. A metadata catalog describing the datasets used in climate research in detail using the ISO 19115 and CF-convention standards, as a basic element of the

  2. Open-Source Software in Computational Research: A Case Study

    Directory of Open Access Journals (Sweden)

    Sreekanth Pannala

    2008-04-01

    Full Text Available A case study of the open-source (OS) development of the computational research software MFIX, used for multiphase computational fluid dynamics simulations, is presented here. The verification and validation steps required for constructing modern computational software, and the advantages of OS development in those steps, are discussed. The infrastructure used for enabling the OS development of MFIX is described. The impact of OS development on computational research and education in gas-solids flow, as well as the dissemination of information to other areas such as geophysical and volcanology research, is demonstrated. This study shows that the advantages of OS development were realized in the case of MFIX: verification by many users, which enhances software quality; the use of software as a means for accumulating and exchanging information; and the facilitation of peer review of the results of computational research.

  3. Bike Infrastructures

    DEFF Research Database (Denmark)

    Silva, Victor; Harder, Henrik; Jensen, Ole B.

    Bike Infrastructures aims to identify bicycle infrastructure typologies and design elements that can help promote cycling significantly. It is structured as case-study-based research in which three cycling infrastructures with distinct typologies were analyzed and compared. The three cases... ...the findings of this research project can also support bike-friendly design and planning, and cyclist advocacy...

  4. Quantitative Analysis of the Educational Infrastructure in Colombia Through the Use of a Georeferencing Software and Analytic Hierarchy Process

    Science.gov (United States)

    Cala Estupiñan, Jose Luis; María González Bernal, Lina; Ponz Tienda, Jose Luis; Gutierrez Bucheli, Laura Andrea; Alejandro Arboleda, Carlos

    2017-10-01

    The distribution policies of the national budget have shown an increasing trend of investment in education infrastructure. This makes it necessary to identify the territories with the greatest number of facilities (such as schools, colleges, universities, and libraries) and those lacking this type of infrastructure, in order to know where possible government intervention is required. This work is not intended to give a judgment on the qualitative state of the national infrastructure. It focuses, in terms of infrastructure, on the quantitative status of Colombia's educational sector, by identifying the territories with more facilities, such as schools, colleges, universities, and public libraries. To do this, a quantitative index is created to determine whether the coverage of educational infrastructure at the departmental level is sufficient, taking into account not only the number of facilities, but also the population and the area of influence each one has. The study is framed within a project of the University of the Andes called “Visible Infrastructure”. The index is obtained through an analytic hierarchy process (AHP) and subsequently a linear equation that reflects the variables investigated. The validation of this index is performed through correlations and regressions with social, economic, and cultural indicators determined by official entities. All the information on which the analysis is based is official and public. With the end of the armed conflict, it is necessary to focus the planning of public policies on closing the social gaps affecting the most vulnerable population.
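
    A short sketch of how AHP derives criterion weights from a pairwise-comparison matrix via its principal eigenvector; the matrix below is illustrative, not the study's actual judgments:

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    # Principal-eigenvector method: the priority weights are the dominant
    # eigenvector of the reciprocal pairwise-comparison matrix, normalized.
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    return principal / principal.sum()

# Illustrative comparisons between the criteria named in the abstract:
# number of facilities vs. population served vs. area of influence.
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(ahp_weights(M))  # roughly [0.65, 0.23, 0.12]
```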

  5. An Extensible Software Infrastructure for Computer Aided Custom Monitoring of Patients in Smart Homes

    OpenAIRE

    Ritwik Dutta; Marilyn Wolf

    2015-01-01

    This paper describes the tradeoffs and the design from scratch of a self-contained, easy-to-use health dashboard software system that provides customizable data tracking for patients in smart homes. The system is made up of different software modules and comprises a front-end and a back-end component. Built with HTML, CSS, and JavaScript, the front-end allows adding users, logging into the system, selecting metrics, and specifying health goals. The backend consists of a N...

  6. Copyright and personal use of CERN’s computing infrastructure

    CERN Multimedia

    IT Department

    2009-01-01

    (The French version will be available online shortly.) The rules covering the personal use of CERN’s computing infrastructure are defined in Operational Circular No. 5 and its Subsidiary Rules (see http://cern.ch/ComputingRules). All users of CERN’s computing infrastructure must comply with these rules, whether they access CERN’s computing facilities from within the Organization’s site or at another location. In particular, OC5 clause 17 requires that proprietary rights (the rights in software, music, video, etc.) must be respected. The user is liable for damages resulting from non-compliance. Recently, there have been several violations of OC5, where copyrighted material was discovered on public, world-readable disk space. Please ensure that all material under your responsibility (in particular in files owned by your account) respects proprietary rights, including with respect to the restriction of access by third parties. CERN Security Team

  7. caCORE: a common infrastructure for cancer informatics.

    Science.gov (United States)

    Covitz, Peter A; Hartel, Frank; Schaefer, Carl; De Coronado, Sherri; Fragoso, Gilberto; Sahni, Himanso; Gustafson, Scott; Buetow, Kenneth H

    2003-12-12

    Sites with substantive bioinformatics operations are challenged to build data processing and delivery infrastructure that provides reliable access and enables data integration. Locally generated data must be processed and stored such that relationships to external data sources can be presented. Consistency and comparability across data sets requires annotation with controlled vocabularies and, further, metadata standards for data representation. Programmatic access to the processed data should be supported to ensure the maximum possible value is extracted. Confronted with these challenges at the National Cancer Institute Center for Bioinformatics, we decided to develop a robust infrastructure for data management and integration that supports advanced biomedical applications. We have developed an interconnected set of software and services called caCORE. Enterprise Vocabulary Services (EVS) provide controlled vocabulary, dictionary and thesaurus services. The Cancer Data Standards Repository (caDSR) provides a metadata registry for common data elements. Cancer Bioinformatics Infrastructure Objects (caBIO) implements an object-oriented model of the biomedical domain and provides Java, Simple Object Access Protocol and HTTP-XML application programming interfaces. caCORE has been used to develop scientific applications that bring together data from distinct genomic and clinical science sources. caCORE downloads and web interfaces can be accessed from links on the caCORE web site (http://ncicb.nci.nih.gov/core). caBIO software is distributed under an open source license that permits unrestricted academic and commercial use. Vocabulary and metadata content in the EVS and caDSR, respectively, is similarly unrestricted, and is available through web applications and FTP downloads. http://ncicb.nci.nih.gov/core/publications contains links to the caBIO 1.0 class diagram and the caCORE 1.0 Technical Guide, which provide detailed information on the present caCORE architecture.

  8. Better software, better research: the challenge of preserving your research and your reputation

    Science.gov (United States)

    Chue Hong, N.

    2017-12-01

    Software is fundamental to research. From short, thrown-together temporary scripts, through an abundance of complex spreadsheets analysing collected data, to the hundreds of software engineers and millions of lines of code behind international efforts such as the Large Hadron Collider and the Square Kilometre Array, software has made an invaluable contribution to advancing our research knowledge. Within the earth and space sciences, data is being generated, collected, processed and analysed in ever greater amounts and detail. However the pace of this improvement leads to challenges around the persistence of research outputs and artefacts. A specific challenge in this field is that often experiments and measurements cannot be repeated, yet the infrastructure used to manage, store and process this data must be continually updated and developed: constant change just to stay still. The UK-based Software Sustainability Institute (SSI) aims to improve research software sustainability, working with researchers, funders, research software engineers, managers, and other stakeholders across the research spectrum. In this talk, I will present lessons learned and good practice based on the work of the Institute and its collaborators. I will summarise some of the work that is being done to improve the integration of infrastructure for managing research outputs, including around software citation and reward, extending data management plans, and improving researcher skills: "better software, better research". Ultimately, being a modern researcher in the geosciences requires you to efficiently balance the pursuit of new knowledge with making your work reusable and reproducible. And as scientists are placed under greater scrutiny about whether others can trust their results, the preservation of your artefacts has a key role in the preservation of your reputation.

  9. Biotechnology software in the digital age: are you winning?

    Science.gov (United States)

    Scheitz, Cornelia Johanna Franziska; Peck, Lawrence J; Groban, Eli S

    2018-01-16

    There is a digital revolution taking place and biotechnology companies are slow to adapt. Many pharmaceutical, biotechnology, and industrial bio-production companies believe that software must be developed and maintained in-house and that data are more secure on internal servers than on the cloud. In fact, most companies in this space continue to employ large IT and software teams and acquire computational infrastructure in the form of in-house servers. This is due to a fear of the cloud not sufficiently protecting in-house resources and the belief that their software is valuable IP. Over the next decade, the ability to quickly adapt to changing market conditions, with agile software teams, will quickly become a compelling competitive advantage. Biotechnology companies that do not adopt the new regime may lose on key business metrics such as return on invested capital, revenue, profitability, and eventually market share.

  10. Virtualization of the ATLAS software environment on a shared HPC system

    CERN Document Server

    Schnoor, Ulrike; The ATLAS collaboration

    2017-01-01

    High-Performance Computing (HPC) and other research cluster computing resources provided by universities can be useful supplements to the collaboration’s own WLCG computing resources for data analysis and production of simulated event samples. The shared HPC cluster "NEMO" at the University of Freiburg has been made available to local ATLAS users through the provisioning of virtual machines incorporating the ATLAS software environment analogously to a WLCG center. The talk describes the concept and implementation of virtualizing the ATLAS software environment to run both data analysis and production on the HPC host system which is connected to the existing Tier-3 infrastructure. Main challenges include the integration into the NEMO and Tier-3 schedulers in a dynamic, on-demand way, the scalability of the OpenStack infrastructure, as well as the automatic generation of a fully functional virtual machine image providing access to the local user environment, the dCache storage element and the parallel file sys...

  11. Enhanced computational infrastructure for data analysis at the DIII-D National Fusion Facility

    International Nuclear Information System (INIS)

    Schissel, D.P.; Peng, Q.; Schachter, J.; Terpstra, T.B.; Casper, T.A.; Freeman, J.; Jong, R.; Keith, K.M.; McHarg, B.B.; Meyer, W.H.; Parker, C.T.

    2000-01-01

    Recently a number of enhancements to the computer hardware infrastructure have been implemented at the DIII-D National Fusion Facility. Utilizing these improvements to the hardware infrastructure, software enhancements are focusing on streamlined analysis, automation, and graphical user interface (GUI) systems to enlarge the user base. The adoption of the load balancing software package LSF Suite by Platform Computing has dramatically increased the availability of CPU cycles and the efficiency of their use. Streamlined analysis has been aided by the adoption of the MDSplus system to provide a unified interface to analyzed DIII-D data. The majority of MDSplus data is made available in between pulses, giving the researcher critical information before setting up the next pulse. Work on data viewing and analysis tools focuses on efficient GUI design with object-oriented programming (OOP) for maximum code flexibility. Work to enhance the computational infrastructure at DIII-D has included a significant effort to aid the remote collaborator, since the DIII-D National Team consists of scientists from nine national laboratories, 19 foreign laboratories, 16 universities, and five industrial partnerships. As a result of this work, DIII-D data is available on a 24x7 basis from a set of viewing and analysis tools that can be run on either the collaborators' or DIII-D's computer systems. Additionally, a web-based data and code documentation system has been created to aid the novice and expert user alike
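
    A hedged sketch of between-pulse data access in the MDSplus style described above, using the MDSplus Python bindings; the server, tree, and node names are placeholders rather than DIII-D's actual configuration:

```python
from MDSplus import Connection  # MDSplus Python bindings

def fetch_signal(shot: int, node: str = r"\ip"):
    # Placeholder server and tree names; substitute the site's real ones.
    conn = Connection("mdsplus.example.org")
    conn.openTree("d3d", shot)
    # Retrieve the signal and its time base through the unified interface.
    data = conn.get(node).data()
    time = conn.get(f"dim_of({node})").data()
    return time, data
```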

  12. Enhanced Computational Infrastructure for Data Analysis at the DIII-D National Fusion Facility

    International Nuclear Information System (INIS)

    Schissel, D.P.; Peng, Q.; Schachter, J.; Terpstra, T.B.; Casper, T.A.; Freeman, J.; Jong, R.; Keith, K.M.; Meyer, W.H.; Parker, C.T.; McCharg, B.B.

    1999-01-01

    Recently a number of enhancements to the computer hardware infrastructure have been implemented at the DIII-D National Fusion Facility. Utilizing these improvements to the hardware infrastructure, software enhancements are focusing on streamlined analysis, automation, and graphical user interface (GUI) systems to enlarge the user base. The adoption of the load balancing software package LSF Suite by Platform Computing has dramatically increased the availability of CPU cycles and the efficiency of their use. Streamlined analysis has been aided by the adoption of the MDSplus system to provide a unified interface to analyzed DIII-D data. The majority of MDSplus data is made available in between pulses, giving the researcher critical information before setting up the next pulse. Work on data viewing and analysis tools focuses on efficient GUI design with object-oriented programming (OOP) for maximum code flexibility. Work to enhance the computational infrastructure at DIII-D has included a significant effort to aid the remote collaborator, since the DIII-D National Team consists of scientists from 9 national laboratories, 19 foreign laboratories, 16 universities, and 5 industrial partnerships. As a result of this work, DIII-D data is available on a 24 x 7 basis from a set of viewing and analysis tools that can be run either on the collaborators' or DIII-D's computer systems. Additionally, a Web-based data and code documentation system has been created to aid the novice and expert user alike

  13. Open geospatial infrastructure for data management and analytics in interdisciplinary research

    DEFF Research Database (Denmark)

    Jeppesen, Jacob Høxbroe; Ebeid, Emad Samuel Malki; Jacobsen, Rune Hylsberg

    2018-01-01

    The terms Internet of Things and Big Data are currently subject to much attention, though the specific impact of these terms on our practical lives is difficult to apprehend. Data-driven approaches do lead to new possibilities, and significant improvements within a broad range of domains can... The information and communications technology needed to promote the implementation of precision agriculture is limited by proprietary integrations and non-standardized data formats and connections. In this paper, an open geospatial data infrastructure is presented, based on standards defined by the Open... software, and complemented by open data from governmental offices along with ESA satellite imagery. Four use cases are presented, covering analysis of nearly 50 000 crop fields and providing seamless interaction with an emulated machine terminal. They act to showcase how the infrastructure...

  14. The Challenges of the "Software Support for Industrial Controls" Contract

    CERN Document Server

    Ninin, P

    2000-01-01

    ST division is currently specifying a 'Software Support for Industrial Controls' contract. The application of this contract and its success will require several changes in our habits for specifying, designing, and maintaining control systems. This paper summarizes some key concepts which should be respected in order to obtain maximum benefits from the future contract and to optimize the software activities in the division. The contract concerns the maintenance and development of the monitoring and control systems used for supervising CERN's technical infrastructure (electrical distribution, cooling water, air conditioning, safety, and access control). The systems concerned consist of computer and communication hardware and software, tailored to provide specific functionalities for the remote operation, command, and monitoring of equipment. All these systems use commercially available software and hardware such as SCADA, PLCs and associated drivers, controllers, fieldbuses, and networks. It is intended to cont...

  15. The Computational Infrastructure for Geodynamics as a Community of Practice

    Science.gov (United States)

    Hwang, L.; Kellogg, L. H.

    2016-12-01

    Computational Infrastructure for Geodynamics (CIG), geodynamics.org, originated in 2005 out of community recognition that the efforts of individual or small groups of researchers to develop scientifically-sound software is impossible to sustain, duplicates effort, and makes it difficult for scientists to adopt state-of-the art computational methods that promote new discovery. As a community of practice, participants in CIG share an interest in computational modeling in geodynamics and work together on open source software to build the capacity to support complex, extensible, scalable, interoperable, reliable, and reusable software in an effort to increase the return on investment in scientific software development and increase the quality of the resulting software. The group interacts regularly to learn from each other and better their practices formally through webinar series, workshops, and tutorials and informally through listservs and hackathons. Over the past decade, we have learned that successful scientific software development requires at a minimum: collaboration between domain-expert researchers, software developers and computational scientists; clearly identified and committed lead developer(s); well-defined scientific and computational goals that are regularly evaluated and updated; well-defined benchmarks and testing throughout development; attention throughout development to usability and extensibility; understanding and evaluation of the complexity of dependent libraries; and managed user expectations through education, training, and support. CIG's code donation standards provide the basis for recently formalized best practices in software development (geodynamics.org/cig/dev/best-practices/). Best practices include use of version control; widely used, open source software libraries; extensive test suites; portable configuration and build systems; extensive documentation internal and external to the code; and structured, human readable input formats.

  16. Network-Embedded Management and Applications Understanding Programmable Networking Infrastructure

    CERN Document Server

    Wolter, Ralf

    2013-01-01

    Despite the explosion of networking services and applications in the past decades, the basic technological underpinnings of the Internet have remained largely unchanged. At its heart are special-purpose appliances that connect us to the digital world, commonly known as switches and routers. Now, however, the traditional framework is being increasingly challenged by new methods that are jostling for a position in the next-generation Internet. The concept of a network that is becoming more programmable is one of the aspects taking center stage. This opens new possibilities to embed software applications inside the network itself and to manage networks and communications services with unprecedented ease and efficiency. In this edited volume, distinguished experts take the reader on a tour of different facets of programmable network infrastructure and the applications that exploit it. Presenting the state of the art in network embedded management and applications and programmable network infrastructure, the book c...

  17. The Earth System Grid Federation : an Open Infrastructure for Access to Distributed Geospatial Data

    Science.gov (United States)

    Cinquini, Luca; Crichton, Daniel; Mattmann, Chris; Harney, John; Shipman, Galen; Wang, Feiyi; Ananthakrishnan, Rachana; Miller, Neill; Denvil, Sebastian; Morgan, Mark; et al.

    2012-01-01

    The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).
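
    A minimal query against the federation's search API, whose shared REST conventions are part of the architecture described above; the URL shown is one public ESGF index node, and the parameters are commonly used search facets:

```python
import requests

def search_esgf(project="CMIP5", variable="tas", limit=5):
    # Query one ESGF index node; because the nodes share a common search
    # architecture and API, results span the whole federation.
    url = "https://esgf-node.llnl.gov/esg-search/search"
    params = {"project": project, "variable": variable,
              "format": "application/solr+json", "limit": limit}
    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()
    return [doc.get("id") for doc in resp.json()["response"]["docs"]]
```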

  18. Green(ing) infrastructure

    CSIR Research Space (South Africa)

    Van Wyk, Llewellyn V

    2014-03-01

    Full Text Available ...the generation of electricity from renewable sources such as wind, water and solar. Grey infrastructure – in the context of storm water management, grey infrastructure can be thought of as the hard, engineered systems to capture and convey runoff..., pumps, and treatment plants. Green infrastructure reduces energy demand by reducing the need to collect and transport storm water to a suitable discharge location. In addition, green infrastructure such as green roofs, street trees and increased...

  19. Freight railway transport: Critical variables to improve the transport applied to infrastructure costs and its associated traffic flow

    Energy Technology Data Exchange (ETDEWEB)

    Zakowska, L.; Pulawska-Obiedowska, S.

    2016-07-01

    Developed societies face the challenge, among others, of achieving mobility based on low-carbon, energy-efficient economic models, while making it accessible to the entire population. In this context, sustainable mobility seems to meet economic, social, and environmental needs while minimizing their negative impact. Three factors are relevant: (1) infrastructures; (2) more ecological and safer modes of transport; and (3) operations and services for passengers and freight. The objective of this research is to provide guidance for investment in sustainable transport infrastructures that are truly useful and effective. In particular, we have studied the case of the railway, using the following information: details of the infrastructure; cost of construction (per kilometre); maintenance cost; and life cycle. This information is relevant when considering possible business models. The methodology of this research focused on detailed analysis of infrastructure use and maintenance criteria, market opportunities for freight development, and the data available to validate the results obtained from the software tool produced in this work. Our research includes the following aspects: • evaluation of the traffic supported by the rail line; • relevant items to be considered in the rail infrastructure (defining the track, items can be grouped into two sets: civil and rail installations); • available rolling stock (locomotives and wagons are modelled so the user can enter data conveniently). In addition, our research includes the development of software, the Decision System Tool (DST), for studying the construction and maintenance costs of railway infrastructure; a toy example of such a cost calculation is sketched below. It is built on common open-source software, providing the user with interaction with the critical variables of the line. It has been adjusted using the following references: MOM PlanCargorail; EcoTransIT; and projects funded by the EU Framework Programme (New
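
    A toy sketch of the kind of line-cost calculation a DST-style tool performs, combining construction cost per kilometre, maintenance cost, and life cycle; the field names and figures are illustrative assumptions, not values from the study:

```python
from dataclasses import dataclass

@dataclass
class RailLineCost:
    length_km: float
    construction_cost_per_km: float        # civil works + rail installations
    maintenance_cost_per_km_year: float
    life_cycle_years: int

    def life_cycle_cost(self) -> float:
        # Total cost = one-off construction + maintenance over the life cycle.
        construction = self.length_km * self.construction_cost_per_km
        maintenance = (self.length_km * self.maintenance_cost_per_km_year
                       * self.life_cycle_years)
        return construction + maintenance

# Illustrative figures only, not calibrated values from the study.
line = RailLineCost(length_km=120, construction_cost_per_km=6.0e6,
                    maintenance_cost_per_km_year=4.0e4, life_cycle_years=40)
print(f"Life-cycle cost: {line.life_cycle_cost():,.0f} EUR")
```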

  20. Chromium Renderserver: Scalable and Open Source Remote Rendering Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Paul, Brian; Ahern, Sean; Bethel, E. Wes; Brugger, Eric; Cook,Rich; Daniel, Jamison; Lewis, Ken; Owen, Jens; Southard, Dale

    2007-12-01

    Chromium Renderserver (CRRS) is software infrastructure that provides the ability for one or more users to run and view image output from unmodified, interactive OpenGL and X11 applications on a remote, parallel computational platform equipped with graphics hardware accelerators via industry-standard Layer 7 network protocols and client viewers. The new contributions of this work include a solution to the problem of synchronizing X11 and OpenGL command streams, remote delivery of parallel hardware-accelerated rendering, and a performance analysis of several different optimizations that are generally applicable to a variety of rendering architectures. CRRS is fully operational, Open Source software.

  1. Software for virtual accelerator designing

    International Nuclear Information System (INIS)

    Kulabukhova, N.; Ivanov, A.; Korkhov, V.; Lazarev, A.

    2012-01-01

    The article discusses appropriate technologies for the software implementation of the Virtual Accelerator. The Virtual Accelerator is considered as a set of services and tools enabling transparent execution of computational software for modeling beam dynamics in accelerators on distributed computing resources. The distributed storage and information processing facilities utilized by the Virtual Accelerator follow a Service-Oriented Architecture (SOA) according to the cloud computing paradigm. Control system toolkits (such as EPICS and TANGO), computing modules (including high-performance computing), realization of the GUI with existing frameworks, and visualization of the data are discussed in the paper. The presented research consists of a software analysis for realizing the interaction between all levels of the Virtual Accelerator, together with samples of middleware implementation. A set of servers and clusters at St. Petersburg State University forms the infrastructure of the computing environment for the Virtual Accelerator design. The use of component-oriented technology to realize the interaction between the Virtual Accelerator levels is proposed. The article concludes with an overview and justification of the choice of technologies to be used for the design and implementation of the Virtual Accelerator. (authors)

  2. Deployment of the CMS software on the WLCG Grid

    International Nuclear Information System (INIS)

    Behrenhoff, W; Wissing, C; Kim, B; Blyweert, S; D'Hondt, J; Maes, J; Maes, M; Mulders, P Van; Villella, I; Vanelderen, L

    2011-01-01

    The CMS Experiment is taking high-energy collision data at CERN. The computing infrastructure used to analyse the data is distributed around the world in a tiered structure. In order to use the 7 Tier-1 sites, the 50 Tier-2 sites and a still-growing number of about 30 Tier-3 sites, the CMS software has to be available at those sites. Except for a very few sites, the deployment and removal of CMS software is managed centrally. Since the deployment team has no local accounts at the remote sites, all installation jobs have to be submitted as Grid jobs. Via a VOMS role, the job gets a high priority in the batch system and gains write privileges to the software area. Owing to the lack of interactive access, the installation jobs must be very robust against possible failures, so as not to leave behind a broken software installation. The CMS software is packaged in RPMs that are installed in the software area independently of the host OS. The apt-get tool is used to resolve package dependencies. This paper reports on recent deployment experience and the achieved performance.
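    The robustness requirement described above (an unattended Grid job must never leave a broken installation behind) is commonly met with a stage-then-publish pattern. The following Python sketch illustrates that pattern under assumed paths and a placeholder install step; it is not the actual CMS deployment tooling.

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

# Hypothetical software area; the real CMS site layout differs.
SOFTWARE_AREA = Path("/opt/sw-area")

def install_release(version: str) -> None:
    """Install into a staging directory first so that a failed job
    never leaves a broken, half-written release in the software area."""
    # Stage inside the software area so the final rename stays on one
    # filesystem and is therefore atomic on POSIX systems.
    staging = Path(tempfile.mkdtemp(dir=SOFTWARE_AREA, prefix=f".stage-{version}-"))
    try:
        # Placeholder for the real RPM-based installation step.
        subprocess.run(["echo", f"unpacking RPMs for {version}"], check=True, cwd=staging)
        # Publish with a single atomic rename; readers never observe
        # a partially installed release.
        staging.rename(SOFTWARE_AREA / version)
    except Exception:
        # On any failure, remove the staging area and leave the
        # published tree untouched.
        shutil.rmtree(staging, ignore_errors=True)
        raise
```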

  3. Creating safer coastal and port infrastructure with innovative physical and numerical modelling

    CSIR Research Space (South Africa)

    Tulsi, K

    2015-10-01

    Full Text Available Presentation slide residue; recoverable topics: physical and numerical modelling; breakwater monitoring; armour track analysis using 3D data points; vessel manoeuvring simulations for safe port design and operations (Simflex software); Integrated Port Operations Support System; virtual buoy.

  4. ggCyto: Next Generation Open-Source Visualization Software for Cytometry.

    Science.gov (United States)

    Van, Phu; Jiang, Wenxin; Gottardo, Raphael; Finak, Greg

    2018-06-01

    Open source software for computational cytometry has gained in popularity over the past few years. Efforts such as FlowCAP and the Lyoplate and Euroflow projects have highlighted the importance of standardizing both experimental and computational aspects of cytometry data analysis. The R/BioConductor platform hosts the largest collection of open source cytometry software covering all aspects of data analysis and providing infrastructure to represent and analyze cytometry data with all relevant experimental, gating, and cell population annotations, enabling fully reproducible data analysis. Data visualization frameworks to support this infrastructure have lagged behind. ggCyto is a new open-source BioConductor software package for cytometry data visualization built on ggplot2 that enables ggplot-like functionality with the core BioConductor flow cytometry data structures. Amongst its features are the ability to transform data and axes on-the-fly using cytometry-specific transformations, plot faceting by experimental meta-data variables, and partial matching of channel, marker and cell population names to the contents of the BioConductor cytometry data structures. We demonstrate the salient features of the package using publicly available cytometry data with complete reproducible examples in a supplementary material vignette. https://bioconductor.org/packages/devel/bioc/html/ggcyto.html. gfinak@fredhutch.org. Supplementary data are available at Bioinformatics online and at http://rglab.org/ggcyto/.

  5. Greening infrastructure

    CSIR Research Space (South Africa)

    Van Wyk, Llewellyn V

    2014-10-01

    Full Text Available The development and maintenance of infrastructure is crucial to improving economic growth and quality of life (WEF 2013). Urban infrastructure typically includes bulk services such as water, sanitation and energy (typically electricity and gas...

  6. BioContainers: an open-source and community-driven framework for software standardization

    Science.gov (United States)

    da Veiga Leprevost, Felipe; Grüning, Björn A.; Alves Aflitos, Saulo; Röst, Hannes L.; Uszkoreit, Julian; Barsnes, Harald; Vaudel, Marc; Moreno, Pablo; Gatto, Laurent; Weber, Jonas; Bai, Mingze; Jimenez, Rafael C.; Sachsenberg, Timo; Pfeuffer, Julianus; Vera Alvarez, Roberto; Griss, Johannes; Nesvizhskii, Alexey I.; Perez-Riverol, Yasset

    2017-01-01

    Motivation: BioContainers (biocontainers.pro) is an open-source and community-driven framework which provides platform-independent executable environments for bioinformatics software. BioContainers allows labs of all sizes to easily install bioinformatics software, maintain multiple versions of the same software and combine tools into powerful analysis pipelines. BioContainers is based on the popular open-source Docker and rkt container frameworks, which allow software to be installed and executed in an isolated and controlled environment. It also provides the infrastructure and basic guidelines to create, manage and distribute bioinformatics containers, with a special focus on omics technologies. These containers can be integrated into more comprehensive bioinformatics pipelines and different architectures (local desktop, cloud environments or HPC clusters). Availability and Implementation: The software is freely available at github.com/BioContainers/. Contact: yperez@ebi.ac.uk PMID:28379341

  7. BioContainers: an open-source and community-driven framework for software standardization.

    Science.gov (United States)

    da Veiga Leprevost, Felipe; Grüning, Björn A; Alves Aflitos, Saulo; Röst, Hannes L; Uszkoreit, Julian; Barsnes, Harald; Vaudel, Marc; Moreno, Pablo; Gatto, Laurent; Weber, Jonas; Bai, Mingze; Jimenez, Rafael C; Sachsenberg, Timo; Pfeuffer, Julianus; Vera Alvarez, Roberto; Griss, Johannes; Nesvizhskii, Alexey I; Perez-Riverol, Yasset

    2017-08-15

    BioContainers (biocontainers.pro) is an open-source and community-driven framework which provides platform-independent executable environments for bioinformatics software. BioContainers allows labs of all sizes to easily install bioinformatics software, maintain multiple versions of the same software and combine tools into powerful analysis pipelines. BioContainers is based on the popular open-source Docker and rkt container frameworks, which allow software to be installed and executed in an isolated and controlled environment. It also provides the infrastructure and basic guidelines to create, manage and distribute bioinformatics containers, with a special focus on omics technologies. These containers can be integrated into more comprehensive bioinformatics pipelines and different architectures (local desktop, cloud environments or HPC clusters). The software is freely available at github.com/BioContainers/. yperez@ebi.ac.uk. © The Author(s) 2017. Published by Oxford University Press.
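    As a rough illustration of how such containers are typically invoked, the following Python sketch shells out to the Docker CLI; the image name and tool arguments are invented for illustration and are not taken from the paper.

```python
import os
import subprocess

# Illustrative image name and tag; consult biocontainers.pro for real ones.
IMAGE = "biocontainers/example-tool:v1.0"

def run_in_container(*tool_args: str) -> str:
    """Run a containerized bioinformatics tool against files in the
    current directory, which is mounted into the container as /data."""
    cmd = [
        "docker", "run", "--rm",       # remove the container when done
        "-v", f"{os.getcwd()}:/data",  # share input/output files
        "-w", "/data",                 # run the tool inside /data
        IMAGE, *tool_args,
    ]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Example (hypothetical tool arguments):
# print(run_in_container("example-tool", "--version"))
```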

  8. Flowscapes : Infrastructure as landscape, landscape as infrastructure. Graduation Lab Landscape Architecture 2012/2013

    NARCIS (Netherlands)

    Nijhuis, S.; Jauslin, D.; De Vries, C.

    2012-01-01

    Flowscapes explores infrastructure as a type of landscape and landscape as a type of infrastructure, and is focused on landscape architectonic design of transportation-, green- and water infrastructures. These landscape infrastructures are considered armatures for urban and rural development. With

  9. ATLAS software stack on ARM64

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00529764; The ATLAS collaboration; Stewart, Graeme; Seuster, Rolf; Quadt, Arnulf

    2017-01-01

    This paper reports on the port of the ATLAS software stack onto new prototype ARM64 servers. This included building the “external” packages that the ATLAS software relies on. Patches were needed to introduce this new architecture into the build, as well as patches to correct platform-specific code that caused failures on non-x86 architectures. These patches were applied such that porting to further platforms will need no, or only very minor, adjustments. A few additional modifications were needed to account for the different operating system, Ubuntu instead of Scientific Linux 6 / CentOS 7. Selected results from the validation of the physics outputs on these ARM 64-bit servers will be shown. CPU, memory and IO intensive benchmarks using the ATLAS-specific environment and infrastructure have been performed, with a particular emphasis on performance versus energy consumption.

  10. ATLAS software stack on ARM64

    Science.gov (United States)

    Smith, Joshua Wyatt; Stewart, Graeme A.; Seuster, Rolf; Quadt, Arnulf; ATLAS Collaboration

    2017-10-01

    This paper reports on the port of the ATLAS software stack onto new prototype ARM64 servers. This included building the “external” packages that the ATLAS software relies on. Patches were needed to introduce this new architecture into the build, as well as patches to correct platform-specific code that caused failures on non-x86 architectures. These patches were applied such that porting to further platforms will need no, or only very minor, adjustments. A few additional modifications were needed to account for the different operating system, Ubuntu instead of Scientific Linux 6 / CentOS 7. Selected results from the validation of the physics outputs on these ARM 64-bit servers will be shown. CPU, memory and IO intensive benchmarks using the ATLAS-specific environment and infrastructure have been performed, with a particular emphasis on performance versus energy consumption.

  11. Protocol independent transmission method in software defined optical network

    Science.gov (United States)

    Liu, Yuze; Li, Hui; Hou, Yanfang; Qiu, Yajun; Ji, Yuefeng

    2016-10-01

    With the development of big data and cloud computing technology, the traditional software-defined network is facing new challenges (i.e., ubiquitous accessibility, higher bandwidth, more flexible management and greater security). Using a proprietary protocol or encoding format is one way to improve information security. However, a flow carried by a proprietary protocol or encoding cannot traverse the traditional IP network. In addition, ultra-high-definition video transmission services have once again become a hot topic. Traditionally, in the IP network, the Serial Digital Interface (SDI) signal must be compressed. This approach offers some advantages but also brings disadvantages such as signal degradation and high latency. To some extent, HD-SDI can also be regarded as a proprietary protocol, which needs transparent transmission such as an optical channel. However, traditional optical networks cannot support flexible traffic. In response to the aforementioned challenges for future networks, one immediate solution is to use NFV technology to abstract the network infrastructure and provide an all-optical switching topology graph for the SDN control plane. This paper proposes a new service-based software-defined optical network architecture, comprising an infrastructure layer, a virtualization layer, a service abstraction layer and an application layer. We then describe the corresponding service provisioning method that implements protocol-independent transport. Finally, we experimentally demonstrate that the proposed service provisioning method can be used to transmit an HD-SDI signal in the software-defined optical network.

  12. NASA JPL Distributed Systems Technology (DST) Object-Oriented Component Approach for Software Inter-Operability and Reuse

    Science.gov (United States)

    Hall, Laverne; Hung, Chaw-Kwei; Lin, Imin

    2000-01-01

    The purpose of this paper is to describe the NASA JPL Distributed Systems Technology (DST) Section's object-oriented component approach to open, interoperable systems software development and software reuse. It will address what is meant by the terminology object component software, give an overview of the component-based development approach and how it relates to infrastructure support of software architectures and promotes reuse, enumerate the benefits of this approach, and give examples of application prototypes demonstrating its usage and advantages. Utilization of the object-oriented component technology approach for system development and software reuse will apply to several areas within JPL, and possibly across other NASA Centers.

  13. Software development infrastructure for the HYBRID modeling and simulation project

    International Nuclear Information System (INIS)

    Epiney, Aaron S.; Kinoshita, Robert A.; Kim, Jong Suk; Rabiti, Cristian; Greenwood, M. Scott

    2016-01-01

    One of the goals of the HYBRID modeling and simulation project is to assess the economic viability of hybrid systems in a market that contains renewable energy sources like wind. The idea is that it is possible for the nuclear plant to sell non-electric energy cushions, which absorb (at least partially) the volatility introduced by the renewable energy sources. This system is currently modeled in the Modelica programming language. To assess the economics of the system, an optimization procedure searches for the minimal cost of electricity production. The RAVEN code is used as a driver for the whole problem. It is assumed that at this stage, the HYBRID modeling and simulation framework can be classified as non-safety “research and development” software. The associated quality level is Quality Level 3 software. This imposes low requirements on quality control, testing and documentation. The quality level could change as application development continues. Despite the low required quality level, a workflow for the HYBRID developers has been defined that includes a coding standard and some documentation and testing requirements. The repository performs automated unit testing of contributed models. The automated testing is achieved via an open-source Python package called BuildingsPy from Lawrence Berkeley National Lab. BuildingsPy runs Modelica simulation tests using Dymola in an automated manner and generates and runs unit tests from Modelica scripts written by developers. In order to ensure effective communication between the different national laboratories, a biweekly videoconference has been set up, where developers can report their progress and issues. In addition, periodic face-to-face meetings are organized to discuss high-level strategy decisions with management. A second means of communication is the developer email list, to which anybody can send emails that will be received by the collective of the developers and managers.
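    A minimal sketch of how BuildingsPy can drive such automated Modelica regression tests is given below; it assumes a Dymola installation and a library laid out as BuildingsPy expects, and the exact method names should be checked against the installed BuildingsPy version.

```python
# Sketch of automated Modelica unit testing with BuildingsPy;
# method names follow the BuildingsPy documentation but may vary by version.
from buildingspy.development import regressiontest

tester = regressiontest.Tester()  # drives Dymola under the hood
tester.setLibraryRoot(".")        # root of the Modelica library under test
retval = tester.run()             # generate and run the unit tests
raise SystemExit(retval)          # non-zero exit flags failing tests
```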

  14. Software development infrastructure for the HYBRID modeling and simulation project

    Energy Technology Data Exchange (ETDEWEB)

    Epiney, Aaron S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kinoshita, Robert A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kim, Jong Suk [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Greenwood, M. Scott [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    One of the goals of the HYBRID modeling and simulation project is to assess the economic viability of hybrid systems in a market that contains renewable energy sources like wind. The idea is that it is possible for the nuclear plant to sell non-electric energy cushions, which absorb (at least partially) the volatility introduced by the renewable energy sources. This system is currently modeled in the Modelica programming language. To assess the economics of the system, an optimization procedure searches for the minimal cost of electricity production. The RAVEN code is used as a driver for the whole problem. It is assumed that at this stage, the HYBRID modeling and simulation framework can be classified as non-safety “research and development” software. The associated quality level is Quality Level 3 software. This imposes low requirements on quality control, testing and documentation. The quality level could change as application development continues. Despite the low required quality level, a workflow for the HYBRID developers has been defined that includes a coding standard and some documentation and testing requirements. The repository performs automated unit testing of contributed models. The automated testing is achieved via an open-source Python package called BuildingsPy from Lawrence Berkeley National Lab. BuildingsPy runs Modelica simulation tests using Dymola in an automated manner and generates and runs unit tests from Modelica scripts written by developers. In order to ensure effective communication between the different national laboratories, a biweekly videoconference has been set up, where developers can report their progress and issues. In addition, periodic face-to-face meetings are organized to discuss high-level strategy decisions with management. A second means of communication is the developer email list, to which anybody can send emails that will be received by the collective of the developers and managers.

  15. OOI CyberInfrastructure - Next Generation Oceanographic Research

    Science.gov (United States)

    Farcas, C.; Fox, P.; Arrott, M.; Farcas, E.; Klacansky, I.; Krueger, I.; Meisinger, M.; Orcutt, J.

    2008-12-01

    Software has become a key enabling technology for scientific discovery, observation, modeling, and exploitation of natural phenomena. New value emerges from the integration of individual subsystems into networked federations of capabilities exposed to the scientific community. Such data-intensive interoperability networks are crucial for future scientific collaborative research, as they open up new ways of fusing data from different sources and across various domains, and of analysing wide geographic areas. The recently established NSF OOI program, through its CyberInfrastructure component, addresses this challenge by providing broad access from sensor networks for data acquisition up to computational grids for massive computations, and binding infrastructure facilitating policy management and governance of the emerging system-of-scientific-systems. We provide insight into the integration core of this effort, namely a hierarchical service-oriented architecture for a robust, performant, and maintainable implementation. We first discuss the relationship between data management and CI cross-cutting concerns such as identity management, policy and governance, which define the organizational contexts for data access and usage. Next, we detail critical services including data ingestion, transformation, preservation, inventory, and presentation. To address interoperability issues between data represented in various formats, we employ a semantic framework derived from the Earth System Grid technology, a canonical representation for scientific data based on DAP/OPeNDAP, and related data publishers such as ERDDAP. Finally, we briefly present the underlying transport, based on a messaging infrastructure over the AMQP protocol, and preservation, based on a distributed file system through SDSC iRODS.
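    As an illustration of the AMQP-based transport mentioned above, the following Python sketch publishes one observation with the pika client; the broker address, queue name and payload are illustrative assumptions, not details from the OOI system.

```python
import json
import pika

# Illustrative broker and routing details.
params = pika.ConnectionParameters(host="localhost")

connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue="instrument.data", durable=True)

# Publish one observation; downstream services consume and transform it.
payload = json.dumps({"sensor": "ctd-01", "temperature_c": 11.7})
channel.basic_publish(exchange="", routing_key="instrument.data", body=payload)
connection.close()
```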

  16. Software Defined Cyberinfrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Foster, Ian; Blaiszik, Ben; Chard, Kyle; Chard, Ryan

    2017-07-17

    Within and across thousands of science labs, researchers and students struggle to manage data produced in experiments, simulations, and analyses. Largely manual research data lifecycle management processes mean that much time is wasted, research results are often irreproducible, and data sharing and reuse remain rare. In response, we propose a new approach to data lifecycle management in which researchers are empowered to define the actions to be performed at individual storage systems when data are created or modified: actions such as analysis, transformation, copying, and publication. We term this approach software-defined cyberinfrastructure because users can implement powerful data management policies by deploying rules to local storage systems, much as software-defined networking allows users to configure networks by deploying rules to switches. We argue that this approach can enable a new class of responsive distributed storage infrastructure that will accelerate research innovation by allowing any researcher to associate data workflows with data sources, whether local or remote, for such purposes as data ingest, characterization, indexing, and sharing. We report on early experiments with this approach in the context of experimental science, in which a simple if-trigger-then-action (IFTA) notation is used to define rules.
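    The paper's IFTA rule notation is not reproduced here; the following Python sketch merely illustrates the general if-trigger-then-action idea using the watchdog library, with a hypothetical watched path and a placeholder action.

```python
import time
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class IftaRule(FileSystemEventHandler):
    """If a file is created (trigger), then run an action on it."""
    def on_created(self, event):
        if not event.is_directory:
            # Hypothetical action: in a real deployment this might be
            # metadata extraction, indexing, or publication.
            print(f"action: index {event.src_path}")

observer = Observer()
observer.schedule(IftaRule(), path="/data/incoming", recursive=False)  # hypothetical path
observer.start()
try:
    while True:
        time.sleep(1)       # keep watching until interrupted
except KeyboardInterrupt:
    observer.stop()
observer.join()
```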

  17. Maturity Model of Software Product with Educational Maturity Model

    OpenAIRE

    R.Manjula; J.Vaideeswaran

    2011-01-01

    Software product line engineering is an inter-disciplinary concept. It spans the dimensions of business, architecture, process, and the organization. Similarly, Education System engineering is also an inter-disciplinary concept, which spans the dimensions of academic, infrastructure, facilities, administration etc. Some of the potential benefits of this approach include continuous improvements in System quality and adhering to global standards. The increasing competency in IT and Educational Se...

  18. Executable research compendia in geoscience research infrastructures

    Science.gov (United States)

    Nüst, Daniel

    2017-04-01

    From generation through analysis and collaboration to communication, scientific research requires the right tools. Scientists create their own software using third-party libraries and platforms. Cloud computing, Open Science, public data infrastructures, and Open Source present scientists with unprecedented opportunities, nowadays often in a field "Computational X" (e.g. computational seismology) or X-informatics (e.g. geoinformatics) [0]. This increases complexity and generates more innovation, e.g. Environmental Research Infrastructures (environmental RIs [1]). Researchers in Computational X write their software relying on both source code (e.g. from https://github.com) and binary libraries (e.g. from package managers such as APT, https://wiki.debian.org/Apt, or CRAN, https://cran.r-project.org/). They download data from domain-specific (cf. https://re3data.org) or generic (e.g. https://zenodo.org) data repositories, and deploy computations remotely (e.g. European Open Science Cloud). The results themselves are archived, given persistent identifiers, connected to other works (e.g. using https://orcid.org/), and listed in metadata catalogues. A single researcher, intentionally or not, interacts with all sub-systems of RIs: data acquisition, data access, data processing, data curation, and community support [3]. To preserve computational research, [3] proposes the Executable Research Compendium (ERC), a container format closing the gap of dependency preservation by encapsulating the runtime environment. ERCs and RIs can be integrated for different uses: (i) Coherence: ERC services validate completeness, integrity and results; (ii) Metadata: ERCs connect the different parts of a piece of research and facilitate discovery; (iii) Exchange and Preservation: ERCs as usable building blocks are the shared and archived entity; (iv) Self-consistency: ERCs remove dependence on ephemeral sources; (v) Execution: ERC services create and execute a packaged analysis but integrate with

  19. Making Temporal Search More Central in Spatial Data Infrastructures

    Science.gov (United States)

    Corti, P.; Lewis, B.

    2017-10-01

    A temporally enabled Spatial Data Infrastructure (SDI) is a framework of geospatial data, metadata, users, and tools intended to provide an efficient and flexible way to use spatial information which includes the historical dimension. One of the key software components of an SDI is the catalogue service which is needed to discover, query, and manage the metadata. A search engine is a software system capable of supporting fast and reliable search, which may use any means necessary to get users to the resources they need quickly and efficiently. These techniques may include features such as full text search, natural language processing, weighted results, temporal search based on enrichment, visualization of patterns in distributions of results in time and space using temporal and spatial faceting, and many others. In this paper we will focus on the temporal aspects of search which include temporal enrichment using a time miner - a software engine able to search for date components within a larger block of text, the storage of time ranges in the search engine, handling historical dates, and the use of temporal histograms in the user interface to display the temporal distribution of search results.
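    A toy version of the time-miner idea can convey the concept: scan free text for four-digit years and collapse them into a range suitable for a search engine's temporal field. Real time miners handle far richer date grammars; this Python sketch is only a minimal illustration.

```python
import re

YEAR = re.compile(r"\b(1[5-9]\d{2}|20\d{2})\b")  # crude year matcher, 1500-2099

def mine_time_range(text: str):
    """Return the (earliest, latest) year mentioned in a block of text,
    or None if no date component is found."""
    years = [int(y) for y in YEAR.findall(text)]
    return (min(years), max(years)) if years else None

# Example: a metadata abstract mentioning a historical span.
print(mine_time_range("Maps of Boston surveyed between 1775 and 1802, reprinted 1890."))
# -> (1775, 1890)
```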

  20. Monitoring the US ATLAS Network Infrastructure with perfSONAR-PS

    CERN Document Server

    McKee, S; The ATLAS collaboration; Laurens, P; Severini, H; Wlodek, T; Wolff, S; Zurawski, J

    2012-01-01

    We will present our motivations for deploying and using the perfSONAR-PS Performance Toolkit at ATLAS sites in the United States and describe our experience in using it. This software creates a dedicated monitoring server, capable of collecting and performing a wide range of passive and active network measurements. Each independent instance is managed locally, but able to federate on a global scale; enabling a full view of the network infrastructure that spans domain boundaries. This information, available through web service interfaces, can easily be retrieved to create customized applications. USATLAS has developed a centralized “dashboard” offering network administrators, users, and decision makers the ability to see the performance of the network at a glance. The dashboard framework includes the ability to notify users (alarm) when problems are found, thus allowing rapid response to potential problems and making perfSONAR-PS crucial to the operation of our distributed computing infrastructure.

  1. Cloud Computing in Support of Applied Learning: A Baseline Study of Infrastructure Design at Southern Polytechnic State University

    Science.gov (United States)

    Conn, Samuel S.; Reichgelt, Han

    2013-01-01

    Cloud computing represents an architecture and paradigm of computing designed to deliver infrastructure, platforms, and software as constructible computing resources on demand to networked users. As campuses are challenged to better accommodate academic needs for applications and computing environments, cloud computing can provide an accommodating…

  2. The OpenUP project management methodology and software configuration for its support

    OpenAIRE

    Boukal, Tomáš

    2011-01-01

    The objective of this thesis is to familiarize readers with project management methodologies, in particular the OpenUP and RUP methodologies, and to reveal the potential of software support for such methodologies in general, with some parts focusing specifically on the OpenUP methodology. The contribution of this thesis is a mapping of the current software offerings that support project management, finding possible combinations of integrated software to optimize the infrastructure to faci...

  3. The Liquid Argon Software Toolkit (LArSoft): Goals, Status and Plan

    Energy Technology Data Exchange (ETDEWEB)

    Pordes, Rush [Fermilab; Snider, Erica [Fermilab

    2016-08-17

    LArSoft is a toolkit that provides a software infrastructure and algorithms for the simulation, reconstruction and analysis of events in Liquid Argon Time Projection Chambers (LArTPCs). It is used by the ArgoNeuT, LArIAT, MicroBooNE, DUNE (including the 35-ton prototype and ProtoDUNE) and SBND experiments. The LArSoft collaboration provides an environment for the development, use, and sharing of code across experiments. The ultimate goal is to develop fully automatic processes for the reconstruction and analysis of LArTPC events. The toolkit is based on the art framework and has a well-defined architecture for interfacing to other packages, including the GEANT4 and GENIE simulation software and the Pandora software development kit for pattern recognition. It is designed to facilitate and support the evolution of algorithms, including their transition to new computing platforms. The development of the toolkit is driven by the scientific stakeholders involved. The core infrastructure includes standard definitions of types and constants, means to input experiment geometries as well as meta- and event-data in several formats, and relevant general utilities. Examples of algorithms experiments have contributed to date are: photon propagation; particle identification; hit finding; track finding and fitting; and electromagnetic shower identification and reconstruction. We report on the status of the toolkit and plans for future work.

  4. GeoBolivia the initiator Spatial Data Infrastructure of the Plurinational State of Bolivia's Node

    Science.gov (United States)

    Molina Rodriguez, Raul Fernando; Lesage, Sylvain

    2014-05-01

    Started in 2011, the GeoBolivia project (www.geo.gob.bo) aims at building the Spatial Data Infrastructure of the Plurinational State of Bolivia (IDE-EPB by its Spanish initials), as an effort of the Vice Presidency of the State to give open access to the public geographic information of Bolivia. The first phase of the project has already been completed. It consisted of implementing an infrastructure and a geoportal for accessing the geographic information through WMS, WFS, WCS and CSW services. The project is currently in its second phase, dedicated to decentralizing the structure of IDE-EPB and promoting its use throughout the Bolivian State. The whole platform uses free software and open standards. As a complement, an on-line training module was developed to transfer the knowledge the project generated. The main software components used in the SDI are: gvSIG, QGIS and uDig as desktop GIS clients; PostgreSQL and PostGIS as the geographic database management system; geOrchestra as a framework containing the GeoServer map server, the GeoNetwork catalog server and the OpenLayers and MapFish GIS web clients; MapServer as a map server for generating OpenStreetMap tiles; Debian as the operating system; and Apache and Tomcat as web servers. Keywords: SDI, Bolivia, GIS, free software, catalog, gvSIG, QGIS, uDig, geOrchestra, OpenLayers, Mapfish, GeoNetwork, MapServer, GeoServer, OGC, WFS, WMS, WCS, CSW, WMC.
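    As an illustration of consuming such OGC services programmatically, the following Python sketch queries a WMS endpoint with the OWSLib library; the service URL is a placeholder, not GeoBolivia's actual endpoint.

```python
from owslib.wms import WebMapService

# Illustrative endpoint; consult the geoportal for the real service URL.
wms = WebMapService("https://example.org/geoserver/wms", version="1.1.1")

print(wms.identification.title)           # service title from GetCapabilities
for name, layer in wms.contents.items():  # layers advertised by the server
    print(name, "-", layer.title)
```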

  5. Digital divide, biometeorological data infrastructures and human vulnerability definition

    Science.gov (United States)

    Fdez-Arroyabe, Pablo; Lecha Estela, Luis; Schimt, Falko

    2018-05-01

    The design and implementation of any climate-related health service nowadays implies confronting the digital divide, since it requires having access to, and being able to use, complex technological devices, massive meteorological data, the user's geographic location and biophysical information. This article presents, in detail, the co-creation of a biometeorological data infrastructure, which is a complex platform formed by multiple components: a mainframe, a biometeorological model called Pronbiomet, a relational database management system, data procedures, communication protocols, different software packages, users, datasets and a mobile application. The system produces four daily world maps of the partial density of atmospheric oxygen and collects user feedback on their health condition. The infrastructure is shown to be a useful tool for delineating individual vulnerability to meteorological changes as one key factor in the definition of any biometeorological risk. This technological approach to studying weather-related health impacts is the initial seed for the definition of biometeorological profiles of persons, and for the development of customized climate services for users in the near future.

  6. Digital divide, biometeorological data infrastructures and human vulnerability definition

    Science.gov (United States)

    Fdez-Arroyabe, Pablo; Lecha Estela, Luis; Schimt, Falko

    2017-06-01

    The design and implementation of any climate-related health service nowadays implies confronting the digital divide, since it requires having access to, and being able to use, complex technological devices, massive meteorological data, the user's geographic location and biophysical information. This article presents, in detail, the co-creation of a biometeorological data infrastructure, which is a complex platform formed by multiple components: a mainframe, a biometeorological model called Pronbiomet, a relational database management system, data procedures, communication protocols, different software packages, users, datasets and a mobile application. The system produces four daily world maps of the partial density of atmospheric oxygen and collects user feedback on their health condition. The infrastructure is shown to be a useful tool for delineating individual vulnerability to meteorological changes as one key factor in the definition of any biometeorological risk. This technological approach to studying weather-related health impacts is the initial seed for the definition of biometeorological profiles of persons, and for the development of customized climate services for users in the near future.

  7. Digital divide, biometeorological data infrastructures and human vulnerability definition.

    Science.gov (United States)

    Fdez-Arroyabe, Pablo; Lecha Estela, Luis; Schimt, Falko

    2018-05-01

    The design and implementation of any climate-related health service nowadays implies confronting the digital divide, since it requires having access to, and being able to use, complex technological devices, massive meteorological data, the user's geographic location and biophysical information. This article presents, in detail, the co-creation of a biometeorological data infrastructure, which is a complex platform formed by multiple components: a mainframe, a biometeorological model called Pronbiomet, a relational database management system, data procedures, communication protocols, different software packages, users, datasets and a mobile application. The system produces four daily world maps of the partial density of atmospheric oxygen and collects user feedback on their health condition. The infrastructure is shown to be a useful tool for delineating individual vulnerability to meteorological changes as one key factor in the definition of any biometeorological risk. This technological approach to studying weather-related health impacts is the initial seed for the definition of biometeorological profiles of persons, and for the development of customized climate services for users in the near future.

  8. Armenia - Irrigation Infrastructure

    Data.gov (United States)

    Millennium Challenge Corporation — This study evaluates irrigation infrastructure rehabilitation in Armenia. The study separately examines the impacts of tertiary canals and other large infrastructure...

  9. Software usage in unsupervised digital doorway computing environments in disadvantaged South African communities: Focusing on youthful users

    CSIR Research Space (South Africa)

    Gush, K

    2011-01-01

    Full Text Available Digital Doorways provide computing infrastructure in low-income communities in South Africa. The unsupervised DD terminals offer various software applications, from entertainment through educational resources to research material, encouraging...

  10. Continuous integration and quality control for scientific software

    Science.gov (United States)

    Neidhardt, A.; Ettl, M.; Brisken, W.; Dassing, R.

    2013-08-01

    Modern software has to be stable, portable, fast and reliable. This is becoming more and more important for scientific software as well. But it requires a sophisticated way to inspect, check and evaluate the quality of source code with a suitable, automated infrastructure. A centralized server with a software repository and a version control system is one essential part, used to manage the code base and to control the different development versions. While each project can be compiled separately, the whole code base can also be compiled with one central “Makefile”. This is used to create automated, nightly builds. Additionally, all sources are inspected automatically with static code analysis and inspection tools, which check for well-known error situations, memory and resource leaks, performance issues, and style issues. In combination with an automatic documentation generator, it is possible to create the developer documentation directly from the code and the inline comments. All reports and generated information are presented as HTML pages on a web server. Because this environment increased the stability and quality of the software of the Geodetic Observatory Wettzell tremendously, it is now also available to scientific communities. One regular customer is already the developer group of the DiFX software correlator project.
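    A minimal Python sketch of the nightly pattern described above (build everything from one entry point, then run a static analyser and collect the report for later publication) might look as follows; the tool names and paths are assumptions, not the Wettzell setup.

```python
import subprocess
from pathlib import Path

REPORT_DIR = Path("reports")  # later served as HTML by the web server
REPORT_DIR.mkdir(exist_ok=True)

def nightly() -> None:
    # Build the whole code base via the central Makefile.
    subprocess.run(["make", "-C", "src", "all"], check=True)
    # Static inspection; cppcheck is one commonly used analyser and
    # writes its findings to stderr by default.
    with open(REPORT_DIR / "cppcheck.txt", "w") as out:
        subprocess.run(["cppcheck", "--enable=all", "src"],
                       stderr=out, check=False)  # report issues, don't fail the build

if __name__ == "__main__":
    nightly()
```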

  11. A roadmap to continuous integration for ATLAS software development

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00132984; The ATLAS collaboration; Elmsheuser, Johannes; Obreshkov, Emil; Krasznahorkay, Attila

    2017-01-01

    The ATLAS software infrastructure supports the efforts of more than 1000 developers working on a code base of 2200 packages with 4 million C++ and 1.4 million Python lines. The ATLAS offline code management system is the powerful, flexible framework for processing new package version requests, probing code changes in the Nightly Build System, migrating to new platforms and compilers, deploying production releases for worldwide access and supporting physicists with tools and interfaces for efficient software use. It maintains a multi-stream, parallel development environment with about 70 multi-platform branches of nightly releases and provides vast opportunities for testing new packages, for verifying patches to existing software and for migrating to new platforms and compilers. The evolution of the system is currently aimed at the adoption of modern continuous integration (CI) practices focused on building nightly releases early and often, with rigorous unit and integration testing. This paper describes the CI ...

  12. A Roadmap to Continuous Integration for ATLAS Software Development

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration; Obreshkov, Emil; Undrus, Alexander

    2016-01-01

    The ATLAS software infrastructure supports the efforts of more than 1000 developers working on a code base of 2200 packages with 4 million C++ and 1.4 million Python lines. The ATLAS offline code management system is the powerful, flexible framework for processing new package version requests, probing code changes in the Nightly Build System, migrating to new platforms and compilers, deploying production releases for worldwide access and supporting physicists with tools and interfaces for efficient software use. It maintains a multi-stream, parallel development environment with about 70 multi-platform branches of nightly releases and provides vast opportunities for testing new packages, for verifying patches to existing software and for migrating to new platforms and compilers. The evolution of the system is currently aimed at the adoption of modern continuous integration (CI) practices focused on building nightly releases early and often, with rigorous unit and integration testing. This presentation describes t...

  13. MobileCoDaC – A transportable control, data acquisition and communication infrastructure for Wendelstein 7-X

    International Nuclear Information System (INIS)

    Hennig, Christine; Bluhm, Torsten; Kühner, Georg; Laqua, Heike; Lewerentz, Marc; Müller, Ina; Pingel, Steffen; Riemann, Heike; Schacht, Jörg; Spring, Anett; Werner, Andreas; Wölk, Andreas

    2014-01-01

    Highlights: • MobileCoDaC is a transportable CoDaC infrastructure for Wendelstein 7-X. • It allows in situ testing and commissioning of components to be used at W7-X by providing the W7-X CoDaC infrastructure. • It has been used successfully for the test and commissioning of the HEXOS diagnostic at Forschungszentrum Jülich. - Abstract: MobileCoDaC is a test bed that allows in situ testing and commissioning of the control and data acquisition of components to be operated at Wendelstein 7-X. It is a minimized replica of the functionality of the complete W7-X CoDaC infrastructure and can be operated independently. MobileCoDaC contains a set of W7-X CoDaC servers, network infrastructure, and accessories for remote access. All hardware is mounted in a single transportable rack system. Moreover, it provides the software infrastructure and user applications for experiment preparation, experiment operation, troubleshooting and experiment data access. MobileCoDaC has been operated successfully for the test and commissioning of the control and data acquisition of the HEXOS (high efficiency extreme ultraviolet overview spectrometer) diagnostic at Forschungszentrum Jülich

  14. eCDL integration with commercial skills test information system (CSTIMS)

    Science.gov (United States)

    2012-11-30

    In coordination with the West Virginia Division of Motor Vehicles (WVDMV), the Rahall Transportation Institute (RTI) integrated the eCDL program with the CSTIMS, a software program owned by the American Motor Vehicles Administrators Association (AAMV...

  15. ENES the European Network for Earth System modelling and its infrastructure projects IS-ENES

    Science.gov (United States)

    Guglielmo, Francesca; Joussaume, Sylvie; Parinet, Marie

    2016-04-01

    The scientific community working on climate modelling is organized within the European Network for Earth System modelling (ENES). In the past decade, several European university departments, research centres, meteorological services, computer centres, and industrial partners engaged in the creation of ENES, with the purpose of working together and cooperating towards the further development of the network, by signing a Memorandum of Understanding. As of 2015, the consortium counts 47 partners. The climate modelling community, and thus ENES, faces challenges which are both science-driven, i.e. analysing the full complexity of the Earth System to improve our understanding and prediction of climate change, and of multi-faceted societal relevance, as a better representation of climate change on regional scales leads to improved understanding and prediction of impacts and to the development and provision of climate services. ENES, by promoting and endorsing projects and initiatives, helps in developing and evaluating state-of-the-art climate and Earth system models, facilitates model intercomparison studies, encourages exchanges of software and model results, and fosters the use of high-performance computing facilities dedicated to high-resolution multi-model experiments. ENES brings together public and private partners, integrates countries underrepresented in climate modelling studies, and reaches out to different user communities, thus enhancing European expertise and competitiveness. Given this need for sophisticated models, world-class high-performance computers, and state-of-the-art software solutions to make efficient use of models, data and hardware, a key role is played by the constitution and maintenance of a solid infrastructure that develops and provides services to the different user communities. ENES has investigated the infrastructural needs and has received funding from the EU FP7 programme for the IS-ENES (InfraStructure for ENES) phases I and II.

  16. Using the National Information Infrastructure for social science, education, and informed decision making

    Energy Technology Data Exchange (ETDEWEB)

    Tonn, B.E.

    1994-01-07

    The United States has aggressively embarked on the challenging task of building a National Information Infrastructure (NII). This infrastructure will have many levels, extending from the building-block capital stock that composes the telecommunications system to the multitude of higher-tier applications hardware and software tied to this system. This "White Paper" presents a vision for a second- and third-tier national information infrastructure that focuses exclusively on the needs of social science, education, and decision making (NII-SSEDM). NII-SSEDM will provide the necessary data, information, and automated decision support and educational tools needed to help this nation solve its most pressing social problems. The proposed system has five components: data collection systems; databases; statistical analysis and modeling tools; policy analysis and decision support tools; and materials and software specially designed for education. This paper contains: a vision statement for each component; comments on progress made on each component as of the early 1990s; and specific recommendations on how to achieve the goals described in the vision statements. The white paper also discusses how the NII-SSEDM could be used to address four major social concerns: ensuring economic prosperity; health care; reducing crime and violence; and K-12 education. Examples of near-term and mid-term goals (e.g., pre- and post-Year 2000) are presented for consideration. Although the development of NII-SSEDM will require a concerted effort by government, the private sector, schools, and numerous other organizations, the success of NII-SSEDM is predicated upon the identification of an institutional "champion" to acquire and husband key resources and provide strong leadership and guidance.

  17. SOA-based RFID public services infrastructure: architecture and its core services

    Institute of Scientific and Technical Information of China (English)

    Zeng Junfang; Li Ran; Luo Jin; Liu Yu

    2009-01-01

    Radio frequency identification (RFID) has prominent advantages compared with other auto-identification technologies. By combining RFID with network technology, physical object tracking and information sharing can be carried out in an innovative way. For open-loop RFID applications, an RFID public services infrastructure (PSI) is presented: the PSI architecture is designed, service modules are implemented, and a demonstrative application, a blood management and traceability system, is studied to verify PSI. Experimental results show the feasibility of the proposed architecture and the usability of the PSI framework software.

  18. Access control infrastructure for on-demand provisioned virtualised infrastructure services

    NARCIS (Netherlands)

    Demchenko, Y.; Ngo, C.; de Laat, C.; Smari, W.W.; Fox, G.C.

    2011-01-01

    Cloud technologies are emerging as a new way of provisioning virtualised computing and infrastructure services on-demand for collaborative projects and groups. Security in provisioning virtual infrastructure services should address two general aspects: supporting secure operation of the provisioning

  19. Sustainable Water Infrastructure

    Science.gov (United States)

    Resources for state and local environmental and public health officials, and water, infrastructure and utility professionals to learn about sustainable water infrastructure, sustainable water and energy practices, and their role.

  20. Growing the Blockchain information infrastructure

    DEFF Research Database (Denmark)

    Jabbar, Karim; Bjørn, Pernille

    2017-01-01

    In this paper, we present ethnographic data that unpacks the everyday work of some of the many infrastructuring agents who contribute to creating, sustaining and growing the Blockchain information infrastructure. We argue that this infrastructuring work takes the form of entrepreneurial actions, which are self-initiated and primarily directed at sustaining or increasing the initiator’s stake in the emerging information infrastructure. These entrepreneurial actions wrestle against the affordances of the installed base of the Blockchain infrastructure, and take the shape of engaging or circumventing activities. These activities purposefully aim at either influencing or working around the enablers and constraints afforded by the Blockchain information infrastructure, as its installed base is gaining inertia. This study contributes to our understanding of the purpose of infrastructuring, seen...

  1. INFRASTRUCTURE

    CERN Document Server

    A.Gaddi

    2011-01-01

    Between the end of March and June 2011, there was no detector downtime during proton fills due to CMS Infrastructures failures. This exceptional performance is a clear sign of the high-quality work done by the CMS Infrastructures unit and its supporting teams. Powering infrastructure: At the end of March, the EN/EL group observed a problem with the CMS 48 V system. The problem was a lack of isolation between the negative (return) terminal and earth. Although at that moment we did not see any loss of functionality, in the long term it would have led to severe disruption of the CMS power system. The 48 V system is critical to the operation of CMS: in addition to feeding the anti-panic lights, essential for the safety of the underground areas, it powers all the PLCs (Twidos) that control AC power to the racks and front-end electronics of CMS. A failure of the 48 V system would bring down the whole detector and lead to evacuation of the cavern. EN/EL technicians have carried out a careful search for the fault, ...

  2. INFRASTRUCTURE

    CERN Multimedia

    A. Gaddi and P. Tropea

    2011-01-01

    Most of the work relating to Infrastructure has been concentrated in the new CSC and RPC manufactory at building 904, on the Prevessin site. Brand-new gas distribution, powering and HVAC infrastructures are being deployed and the production of the first CSC chambers has started. Other activities at the CMS site concern the installation of a new small crane bridge in the Cooling technical room in USC55, in order to facilitate the intervention of the maintenance team in case of major failures of the chilled-water pumping units. The laser barrack in USC55 has also been the object of a study, requested by the ECAL community, for the new laser system that will be delivered in a few months. In addition, ordinary maintenance works have been performed during the short machine stops on all the main infrastructures at Point 5 and in preparation for the Year-End Technical Stop (YETS), when most of the systems will be carefully inspected in order to ensure smooth running through the crucial year 2012. After the incide...

  3. INFRASTRUCTURE

    CERN Multimedia

    A. Gaddi and P. Tropea

    2012-01-01

    The CMS Infrastructures teams are preparing for the LS1 activities. A long list of maintenance, consolidation and upgrade projects for CMS Infrastructures is on the table and is being discussed among Technical Coordination and sub-detector representatives. Apart from the activities concerning the cooling infrastructures (see below), two main projects have started: the refurbishment of the SX5 building, from storage area to RP storage and Muon stations laboratory; and the procurement of a new dry-gas (nitrogen and dry air) plant for inner detector flushing. We briefly present here the work done on the first item, leaving the second one for the next CMS Bulletin issue. The SX5 building is entering its third era, from main assembly building for CMS from 2000 to 2007, to storage building from 2008 to 2012, to RP storage and Muon laboratory during LS1 and beyond. A wall of concrete blocks has been erected to limit the RP zone, while the rest of the surface has been split between the ME1/1 and the CSC/DT laborat...

  4. Measuring CMS Software Performance in the first years of LHC collisions

    CERN Document Server

    Benelli, Gabriele; Pfeiffer, Andreas; Piparo, Danilo; Zemleris, Vidmantas

    2011-01-01

    The CMSSW software framework is a complex project enabling the CMS collaboration to investigate the fast-growing LHC collision data sample. A software performance suite of tools has been developed and integrated in CMSSW to keep track of CPU time, memory footprint and event size on disk. These three metrics are key constraints in software development in order to meet the computing requirements used in the planning and management of the CMS computing infrastructure. The performance suite allows the measurement and tracking of performance across the framework, publishing the results in a dedicated database. A web application makes the results easily accessible to software release managers, allowing for automatic integration in the CMSSW release-cycle quality assurance. The performance suite is also available to individual developers for dedicated code optimization, and the web application allows historical regressions and comparisons across releases. The performance suite tools and the performance of the CMSSW frame...
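    Two of the three metrics the suite tracks, CPU time and peak memory, can be measured from plain Python, as the following illustrative sketch shows; it is not the CMSSW performance suite itself.

```python
import resource  # Unix-only; peak-RSS accounting is not available on Windows
import time

def measure(func, *args, **kwargs):
    """Run func and report wall time, CPU time and peak memory (RSS)."""
    t0_wall, t0_cpu = time.perf_counter(), time.process_time()
    result = func(*args, **kwargs)
    wall = time.perf_counter() - t0_wall
    cpu = time.process_time() - t0_cpu
    # ru_maxrss is reported in kilobytes on Linux (bytes on macOS).
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print(f"wall={wall:.3f}s cpu={cpu:.3f}s peak_rss={peak}")
    return result

measure(sum, range(10_000_000))  # example workload
```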

  5. Failure to adapt infrastructure: is legal liability lurking for infrastructure stakeholders

    International Nuclear Information System (INIS)

    Gherbaz, S.

    2009-01-01

    'Full text:' Very little attention has been paid to potential legal liability for failing to adapt infrastructure to climate change-related risk. Amendments to laws, building codes and standards to take into account the potential impact of climate change on infrastructure assets are still at least some time away. Notwithstanding that amendments are still some time away, there is a real risk to infrastructure stakeholders for failing to adapt. The legal framework in Canada currently permits a court, in the right circumstances, to find certain infrastructure stakeholders legally liable for personal injury and property damage suffered by third parties as a result of climate change effects. This presentation will focus on the legal liability of owners (governmental and private sector), engineers, architects and contractors for failing to adapt infrastructure assets to climate change risk. It will answer commonly asked questions such as: Can I avoid liability by complying with existing laws, codes and standards? Do engineers and architects have a duty to warn owners that existing laws, codes and standards do not, in certain circumstances, adequately take into account the impact of climate change-related risks on an infrastructure asset? And do professional liability insurance policies commonly maintained by architects, engineers and other design professionals provide coverage for a design professional's failure to take into account climate change-related risks? (author)

  6. Scaling Agile Infrastructure to People

    CERN Document Server

    Jones, B; Traylen, S; Arias, N Barrientos

    2015-01-01

    When CERN migrated its infrastructure away from homegrown fabric management tools to emerging industry-standard open-source solutions, the immediate technical challenges and motivation were clear. The move to a multi-site Cloud Computing model meant that the tool chains that were growing around this ecosystem would be a good choice, the challenge was to leverage them. The use of open-source tools brings challenges other than merely how to deploy them. Homegrown software, for all the deficiencies identified at the outset of the project, has the benefit of growing with the organization. This paper will examine what challenges there were in adapting open-source tools to the needs of the organization, particularly in the areas of multi-group development and security. Additionally, the increase in scale of the plant required changes to how Change Management was organized and managed. Continuous Integration techniques are used in order to manage the rate of change across multiple groups, and the tools and workflow ...

  7. Scaling Agile Infrastructure to People

    Science.gov (United States)

    Jones, B.; McCance, G.; Traylen, S.; Barrientos Arias, N.

    2015-12-01

    When CERN migrated its infrastructure away from homegrown fabric management tools to emerging industry-standard open-source solutions, the immediate technical challenges and motivation were clear. The move to a multi-site Cloud Computing model meant that the tool chains that were growing around this ecosystem would be a good choice; the challenge was to leverage them. The use of open-source tools brings challenges other than merely how to deploy them. Homegrown software, for all the deficiencies identified at the outset of the project, has the benefit of growing with the organization. This paper will examine what challenges there were in adapting open-source tools to the needs of the organization, particularly in the areas of multi-group development and security. Additionally, the increase in scale of the plant required changes to how Change Management was organized and managed. Continuous Integration techniques are used in order to manage the rate of change across multiple groups, and the tools and workflow for this will be examined.

  8. Energy consumption in communication infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Dittmann, L.

    2012-11-15

    Although communication infrastructures (excluding computer and storage centers) consume "only" 2-4% of the global power usage, the concern arises from their growth rate of around 40%. Unless action is taken, the power required to operate the Internet, the cellular mobile network and the WiFi hotspots will be so significant that usage restrictions might be applied - and economic growth limited. Choosing between the evolutionary and the disruptive approach is not a real choice: implementing the disruptive approach has a timeline of at least 10 years, while the evolutionary approach is unlikely to cope with demand growth in a longer perspective. A more intensive use of optical technology is currently the best solution for the long-term future, but it requires a complete restructuring of the way networks are researched and implemented, as optics are unlikely to provide the same flexibility as the electronic/software solutions used in current networks. (Author)

  9. A Relational Database Model for Managing Accelerator Control System Software at Jefferson Lab

    International Nuclear Information System (INIS)

    Sally Schaffner; Theodore Larrieu

    2001-01-01

    The operations software group at the Thomas Jefferson National Accelerator Facility faces a number of challenges common to facilities which manage a large body of software developed in-house. Developers include members of the software group, operators, hardware engineers and accelerator physicists. One management problem has been ensuring that all software has an identified owner who is still working at the lab. In some cases, locating source code for "orphaned" software has also proven to be difficult. Other challenges include ensuring that working versions of all operational software are available, testing changes to operational software without impacting operations, upgrading infrastructure software (OS, compilers, interpreters, commercial packages, share/freeware, etc.), ensuring that appropriate documentation is available and up to date, underutilization of code reuse, input/output file management, and determining what other software will break if a software package is upgraded. This paper will describe a relational database model which has been developed to track this type of information and make it available to managers and developers. The model also provides a foundation for developing productivity-enhancing tools for automated building, versioning, and installation of software. This work was supported by the U.S. DOE contract No. DE-AC05-84ER40150
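
    The abstract does not reproduce the schema, but the tracking problems it lists map naturally onto a few relational tables. A minimal, hypothetical sketch in Python/sqlite3 - table and column names are invented for illustration - showing how a question like "which software is orphaned?" becomes a simple query:

```python
import sqlite3

# Hypothetical minimal schema illustrating the kind of information the
# paper describes tracking; the actual Jefferson Lab model is not shown here.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE owner (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    still_at_lab INTEGER NOT NULL DEFAULT 1
);
CREATE TABLE package (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    source_path TEXT,          -- where the code lives, avoiding 'orphans'
    owner_id INTEGER REFERENCES owner(id)
);
CREATE TABLE dependency (      -- what breaks if a package is upgraded
    package_id INTEGER REFERENCES package(id),
    depends_on_id INTEGER REFERENCES package(id)
);
""")
conn.execute("INSERT INTO owner (id, name) VALUES (1, 'A. Developer')")
conn.execute("INSERT INTO package (id, name, source_path, owner_id) "
             "VALUES (1, 'ioc_tools', '/cs/source/ioc', 1)")

# Find packages whose owner has left the lab ('orphaned' software).
rows = conn.execute("""
    SELECT p.name FROM package p JOIN owner o ON p.owner_id = o.id
    WHERE o.still_at_lab = 0
""").fetchall()
print(rows)
```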

  10. LCA as a Tool to Evaluate Green Infrastructure's Environmental Performance

    Science.gov (United States)

    Catalano De Sousa, M.; Erispaha, A.; Spatari, S.; Montalto, F.

    2011-12-01

    Decentralized approaches to managing urban stormwater through use of green infrastructure (GI) often lead to system-wide efficiency gains within the urban watershed's energy supply system. These efficiencies lead to direct greenhouse gas (GHG) emissions savings, and also restore some ecosystem functions within the urban landscape. We developed a consequential life cycle assessment (LCA) model to estimate the life cycle energy, global warming potential (GWP), and payback times for GI applied within a selected neighborhood in New York City. We applied the SimaPro LCA software and the economic input-output LCA (EIO-LCA) tool developed by Carnegie Mellon University. The results showed that, for the new intersection installation highlighted in this study, a conventional infrastructure construction would emit approximately 3 times more CO2 and use approximately 3 times more energy than a design using GI. Two GI benefits were analyzed with regard to retrofitting the existing intersection. The first was the savings in energy and CO2 at the wastewater treatment plant via the runoff reduction accrued from GI use. The second was the avoided environmental costs associated with an additional new grey infrastructure installation that would be needed to prevent combined sewer overflows in the absence of GI. The first benefit indicated a long payback time for a GI installation in terms of CO2 and energy demand (80 and 90 years, respectively), suggesting a slow energy and carbon recovery. Concerning the second benefit, however, GI proved to be a sustainable alternative considering the high CO2 releases (429 MTE) and energy demand (5.5 TJ) associated with grey infrastructure construction.
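
    The payback times quoted above follow from simple arithmetic: divide the one-time embodied burden of building the GI by the burden avoided each year. A sketch with illustrative numbers chosen only to reproduce the 80-year order of magnitude, not the study's actual inventory values:

```python
# Payback time = embodied burden of the GI installation divided by the
# burden avoided each year (e.g., treatment-plant energy saved by runoff
# reduction). Numbers below are illustrative, not from the study.
embodied_co2_tonnes = 120.0        # one-time emissions to build the GI
avoided_co2_tonnes_per_year = 1.5  # annual savings at the treatment plant

payback_years = embodied_co2_tonnes / avoided_co2_tonnes_per_year
print(f"CO2 payback time: {payback_years:.0f} years")  # -> 80 years
```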

  11. Web-site of the UGKK. The core of national spatial infrastructure

    International Nuclear Information System (INIS)

    Lacena, M.; Klobusiak, M.

    2005-01-01

    Geodetic and Cartographic Institute Bratislava (GKU), as an executive organization of the government department Geodesy, Cartography and Cadastre Authority of the Slovak Republic (Urad geodezie, kartografie a katastra na Slovensku, UGKK SR), is the provider and administrator of geodetic fundamentals and of the basic database of GIS reference data. It constitutes one of the most important elements of the spatial data infrastructure of the Slovak Republic. The open-source software UMN MapServer was selected for creating the web application. The web site of the UGKK SR, its structure, services and perspectives are discussed.

  12. R and D non-destructive damage monitoring and diagnosing system for civil infrastructures

    International Nuclear Information System (INIS)

    Ren Weixin; Abu Bakar Mohamad Diah; Cheng Hao

    1998-01-01

    Since civil infrastructures serve as the underpinnings of our highly industrialized society, and many of them are now decaying, it is time to consider how to maintain these widely spread infrastructures in order to prevent potential catastrophic events. Changes in use and the need to maintain an ageing system require improvements in instrumentation for sensing and recording, data acquisition for diagnosing possible damage, and algorithms for identifying and monitoring changes in structural characteristics. Researching and developing a real-time, in-service health detection and monitoring system has recently drawn worldwide attention for various types of structures. The paper conceives an integrated non-destructive damage monitoring and diagnosing system for civil infrastructures. The system is a high-technology, highly commercialized, integrated industrial product involving both research and development. The research activities of the system cover three core parts: structural modelling, structural system identification and damage criterion establishment. The development activities of the system include experimental measurements, data acquisition and processing, instrumentation set-up, computer visualisation, and software development. State-of-the-art theories and practices are systematically merged and integrated in the development of the system, and the system will be verified through real-world application to civil infrastructures. Our research results on the damage criterion based on changes in structural dynamic properties are also reported in the paper. (Author)
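
    A damage criterion based on changes in structural dynamic properties can be illustrated very simply: loss of stiffness lowers a structure's natural frequencies, so a sustained downward shift in the dominant frequency of a vibration record is a warning sign. A sketch on synthetic signals; the 5% threshold is an assumed criterion, not the paper's:

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the dominant frequency (Hz) of a vibration record."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

fs = 200.0                        # sampling rate, Hz
t = np.arange(0, 10, 1.0 / fs)
baseline = np.sin(2 * np.pi * 4.0 * t)   # healthy structure: 4.0 Hz mode
current = np.sin(2 * np.pi * 3.7 * t)    # stiffness loss shifts it down

f0, f1 = dominant_frequency(baseline, fs), dominant_frequency(current, fs)
shift = (f0 - f1) / f0
print(f"frequency shift: {shift:.1%}")
if shift > 0.05:                  # threshold is an assumed damage criterion
    print("possible damage: investigate")
```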

  13. The improvement of reading skills of L1 and ESL children using a Response to Intervention (RtI) Model.

    Science.gov (United States)

    Lipka, Orly; Siegel, Linda S

    2010-11-01

    This study examined the development of literacy skills in children in a district that used a Response to Intervention (RTI) model. The district included children whose first language was English and children who were learning English as a second language (ESL). Tasks measuring phonological awareness, lexical access, and syntactic awareness were administered when the children entered school in kindergarten at age 5. Reading, phonological processing, syntactic awareness, memory, and spelling were administered in grade 7. When the children entered school, significant numbers of them were at risk for literacy difficulties. After systematic instruction and annual monitoring of skills, their reading abilities improved to the extent that only a very small percentage had reading difficulties. The results demonstrated that early identification and intervention and frequent monitoring of basic skills can significantly reduce the incidence of reading problems in both the ESL and language majority children.

  14. The ATLAS High Level Trigger Infrastructure, Performance and Future Developments

    CERN Document Server

    The ATLAS collaboration

    2009-01-01

    The ATLAS High Level Trigger (HLT) is a distributed real-time software system that performs the final online selection of events produced during proton-proton collisions at the Large Hadron Collider (LHC). It is designed as a two-stage event filter running on a farm of commodity PC hardware. Currently the system consists of about 850 multi-core processing nodes that will be extended incrementally, following the increasing luminosity of the LHC, to about 2000 nodes, depending on the evolution of processor technology. Due to the complexity and similarity of the algorithms, a large fraction of the software is shared between the online and offline event reconstruction. The HLT Infrastructure serves as the interface between the two domains and provides common services for the trigger algorithms. The consequences of this design choice will be discussed and experiences from the operation of the ATLAS HLT during cosmic ray data taking and first beam in 2008 will be presented. Since the event processing time at the HL...

  15. An Empirical Evaluation of an Activity-Based Infrastructure for Supporting Cooperation in Software Engineering

    DEFF Research Database (Denmark)

    Tell, Paolo; Babar, Muhammad Ali

    2016-01-01

    Software engineering (SE) is predominantly a team effort that needs close cooperation among several people who may be geographically distributed. It has been recognized that appropriate tool support is a prerequisite to improve cooperation within SE teams. In an effort to contribute to this line...

  16. MFC Communications Infrastructure Study

    Energy Technology Data Exchange (ETDEWEB)

    Michael Cannon; Terry Barney; Gary Cook; George Danklefsen, Jr.; Paul Fairbourn; Susan Gihring; Lisa Stearns

    2012-01-01

    Unprecedented growth of required telecommunications services and telecommunications applications changes the way the INL does business today. High-speed connectivity coupled with a high demand for telephony and network services requires a robust communications infrastructure. The current state of the MFC communication infrastructure limits growth opportunities for current and future communication infrastructure services. This limitation is largely due to equipment capacity issues, an aging cabling infrastructure (external/internal fiber and copper cable) and inadequate space for telecommunication equipment. While some communication infrastructure improvements have been implemented over time, these projects have been completed without a clear overall plan and technology standard. This document identifies critical deficiencies in the current state of the communication infrastructure in operation at the MFC facilities and provides an analysis of the needs and deficiencies to be addressed in order to achieve the target architectural standards defined in STD-170. The intent of STD-170 is to provide a robust, flexible, long-term solution that aligns communications capabilities with the INL mission and fits the various programmatic growth and expansion needs.

  17. GSIMF: a web service based software and database management system for the next generation grids

    International Nuclear Information System (INIS)

    Wang, N; Ananthan, B; Gieraltowski, G; May, E; Vaniachine, A

    2008-01-01

    To process the vast amount of data from high energy physics experiments, physicists rely on Computational and Data Grids; yet the distribution, installation, and updating of a myriad of different versions of different programs over the Grid environment is complicated, time-consuming, and error-prone. Our Grid Software Installation Management Framework (GSIMF) is a set of Grid Services that has been developed for managing versioned and interdependent software applications and file-based databases over the Grid infrastructure. This set of Grid services provides a mechanism to install software packages on distributed Grid computing elements, thus automating the software and database installation management process on behalf of the users. This enables users to remotely install programs and tap into the computing power provided by Grids.
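
    At its core, installing versioned, interdependent packages means finding an order in which every package's dependencies are installed before the package itself. A minimal sketch of that step; the dependency graph and package names are invented, and GSIMF's actual Grid Service interfaces are not shown in the abstract:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical dependency graph: package -> set of packages it needs.
deps = {
    "analysis-app": {"physics-libs", "conditions-db"},
    "physics-libs": {"root"},
    "conditions-db": {"root"},
    "root": set(),
}

# static_order() yields each package only after everything it depends on.
for package in TopologicalSorter(deps).static_order():
    print(f"installing {package}")  # root, then libs/db, then the app
```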

  18. Multimodality image registration with software: state-of-the-art

    International Nuclear Information System (INIS)

    Slomka, Piotr J.; Baum, Richard P.

    2009-01-01

    Multimodality image integration of functional and anatomical data can be performed by means of dedicated hybrid imaging systems or by software image co-registration techniques. Hybrid positron emission tomography (PET)/computed tomography (CT) systems have found wide acceptance in oncological imaging, while software registration techniques have a significant role in patient-specific, cost-effective, and radiation dose-effective application of integrated imaging. Software techniques allow accurate (2-3 mm) rigid image registration of brain PET with CT and MRI. Nonlinear techniques are used in whole-body image registration, and recent developments allow for significantly accelerated computing times. Nonlinear software registration of PET with CT or MRI is required for multimodality radiation planning. Difficulties remain in the validation of nonlinear registration of soft tissue organs. The utilization of software-based multimodality image integration in a clinical environment is sometimes hindered by the lack of appropriate picture archiving and communication systems (PACS) infrastructure needed to efficiently and automatically integrate all available images into one common database. In cardiology applications, multimodality PET/single photon emission computed tomography and coronary CT angiography imaging is typically not required unless the results of one of the tests are equivocal. Software image registration is likely to be used in a complementary fashion with hybrid PET/CT or PET/magnetic resonance imaging systems. Software registration of stand-alone scans "paved the way" for the clinical application of hybrid scanners, demonstrating practical benefits of image integration before the hybrid dual-modality devices were available. (orig.)
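
    When corresponding landmarks are available, the rigid (rotation plus translation) alignment mentioned above has a classic closed-form solution, the Kabsch algorithm. A generic numpy sketch of that building block on synthetic 3-D points, offered as an illustration of rigid registration in general rather than of any clinical package discussed here:

```python
import numpy as np

def rigid_register(moving, fixed):
    """Least-squares rigid (rotation + translation) alignment of two
    corresponding 3-D point sets via the Kabsch algorithm."""
    mu_m, mu_f = moving.mean(axis=0), fixed.mean(axis=0)
    H = (moving - mu_m).T @ (fixed - mu_f)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                               # proper rotation only
    t = mu_f - R @ mu_m
    return R, t

rng = np.random.default_rng(0)
fixed = rng.normal(size=(50, 3))
angle = np.deg2rad(10)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
moving = (fixed - 2.0) @ R_true.T   # rotated and shifted copy of 'fixed'
R, t = rigid_register(moving, fixed)
print(np.allclose(moving @ R.T + t, fixed, atol=1e-8))  # True
```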

  19. Infrastructure: concept, types and value

    Directory of Open Access Journals (Sweden)

    Alexander E. Lantsov

    2013-01-01

    Full Text Available Research into the influence of infrastructure on the economic growth and development of countries has gained currency. However, the majority of authors leave out the problem of precisely defining the concept under study and its criteria. In this article, various approaches to defining the concept of «infrastructure» are presented, along with criteria and characteristics distinguishing infrastructure from other capital assets. Types of infrastructure such as personal, institutional, material, production, and social are considered, and the author's own definition of infrastructure is given.

  20. Analyzing water/wastewater infrastructure interdependencies

    International Nuclear Information System (INIS)

    Gillette, J. L.; Fisher, R. E.; Peerenboom, J. P.; Whitfield, R. G.

    2002-01-01

    This paper describes four general categories of infrastructure interdependencies (physical, cyber, geographic, and logical) as they apply to the water/wastewater infrastructure, and provides an overview of one of the analytic approaches and tools used by Argonne National Laboratory to evaluate interdependencies. Also discussed are the dimensions of infrastructure interdependency that create spatial, temporal, and system representation complexities that make analyzing the water/wastewater infrastructure particularly challenging. An analytical model developed to incorporate the impacts of interdependencies on infrastructure repair times is briefly addressed

  1. Regulation of gas infrastructure expansion

    International Nuclear Information System (INIS)

    De Joode, J.

    2012-01-01

    The topic of this dissertation is the regulation of gas infrastructure expansion in the European Union (EU). While the gas market has been liberalised, the gas infrastructure has largely remained in the regulated domain. However, not necessarily all gas infrastructure facilities - such as gas storage facilities, LNG import terminals and certain gas transmission pipelines - need to be regulated, as there may be scope for competition. In practice, the choice of regulation of gas infrastructure expansion varies among different types of gas infrastructure facilities and across EU Member States. Based on a review of economic literature and on a series of in-depth case studies, this study explains these differences in choices of regulation from differences in policy objectives, differences in local circumstances and differences in the intrinsic characteristics of the infrastructure projects. An important conclusion is that there is potential for a larger role for competition in gas infrastructure expansion.

  2. The ALICE Software Release Validation cluster

    International Nuclear Information System (INIS)

    Berzano, D; Krzewicki, M

    2015-01-01

    One of the most important steps of the software lifecycle is Quality Assurance: this process comprises both automatic tests and manual reviews, and all of them must pass successfully before the software is approved for production. Some tests, such as source code static analysis, are executed on a single dedicated service; in High Energy Physics, a full simulation and reconstruction chain on a distributed computing environment, backed with a sample “golden” dataset, is also necessary for the quality sign-off. The ALICE experiment uses dedicated and virtualized computing infrastructures for the Release Validation in order not to taint the production environment (i.e. CVMFS and the Grid) with non-validated software and validation jobs: the ALICE Release Validation cluster is a disposable virtual cluster appliance based on CernVM and the Virtual Analysis Facility, capable of deploying on demand, and with a single command, a dedicated virtual HTCondor cluster with an automatically scalable number of virtual workers on any cloud supporting the standard EC2 interface. Input and output data are externally stored on EOS, and a dedicated CVMFS service is used to provide the software to be validated. We will show how the Release Validation cluster deployment and disposal are completely transparent for the Release Manager, who simply triggers the validation from the ALICE build system's web interface. CernVM 3, based entirely on CVMFS, makes it possible to boot any snapshot of the operating system in time: we will show how this allows us to certify each ALICE software release for an exact CernVM snapshot, addressing the problem of Long-Term Data Preservation by ensuring a consistent environment for software execution and data reprocessing in the future. (paper)
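
    Since the appliance scales its worker pool through the standard EC2 interface, the scale-out step can be pictured with any EC2-compatible client. A sketch using boto3; the endpoint, image id and instance type are placeholders rather than ALICE's actual values:

```python
import boto3

# Scale out validation workers via the standard EC2 API, as the cluster
# appliance does on any EC2-compatible cloud. AMI id, instance type and
# endpoint below are placeholders, not ALICE's values.
ec2 = boto3.client("ec2", endpoint_url="https://cloud.example.org/ec2")

response = ec2.run_instances(
    ImageId="ami-00000000",        # CernVM-based worker image (placeholder)
    InstanceType="m1.large",
    MinCount=1,
    MaxCount=10,                   # grow the HTCondor worker pool
)
for instance in response["Instances"]:
    print("started worker", instance["InstanceId"])
```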

  3. Department of Energy's Virtual Lab Infrastructure for Integrated Earth System Science Data

    Science.gov (United States)

    Williams, D. N.; Palanisamy, G.; Shipman, G.; Boden, T.; Voyles, J.

    2014-12-01

    The U.S. Department of Energy (DOE) Office of Biological and Environmental Research (BER) Climate and Environmental Sciences Division (CESD) produces a diversity of data, information, software, and model codes across its research and informatics programs and facilities. This information includes raw and reduced observational and instrumentation data, model codes, model-generated results, and integrated data products. Currently, most of this data and information are prepared and shared for program specific activities, corresponding to CESD organization research. A major challenge facing BER CESD is how best to inventory, integrate, and deliver these vast and diverse resources for the purpose of accelerating Earth system science research. This talk provides a concept for a CESD Integrated Data Ecosystem and an initial roadmap for its implementation to address this integration challenge in the "Big Data" domain. Towards this end, a new BER Virtual Laboratory Infrastructure will be presented, which will include services and software connecting the heterogeneous CESD data holdings, and constructed with open source software based on industry standards, protocols, and state-of-the-art technology.

  4. Building safeguards infrastructure

    International Nuclear Information System (INIS)

    Stevens, Rebecca S.; McClelland-Kerr, John

    2009-01-01

    Much has been written in recent years about the nuclear renaissance - the rebirth of nuclear power as a clean and safe source of electricity around the world. Those who question the nuclear renaissance often cite the risk of proliferation, accidents or an attack on a facility as concerns, all of which merit serious consideration. The integration of these three areas - sometimes referred to as 3S, for safety, security and safeguards - is essential to supporting the growth of nuclear power, and the infrastructure that supports them should be strengthened. The focus of this paper will be on the role safeguards plays in the 3S concept and how to support the development of the infrastructure necessary to support safeguards. The objective of this paper has been to provide a working definition of safeguards infrastructure, and to discuss examples of how building safeguards infrastructure is presented in several models. The guidelines outlined in the milestones document provide a clear path for establishing both the safeguards and the related infrastructures needed to support the development of nuclear power. The model employed by the INSEP program of engaging with partner states on safeguards-related topics that are of current interest to the level of nuclear development in that state provides another way of approaching the concept of building safeguards infrastructure. The Next Generation Safeguards Initiative is yet another approach that underscored five principal areas for growth, and the United States commitment to working with partners to promote this growth both at home and abroad.

  5. LEMON - LHC Era Monitoring for Large-Scale Infrastructures

    International Nuclear Information System (INIS)

    Babik, Marian; Hook, Nicholas; Lansdale, Thomas Hector; Lenkes, Daniel; Siket, Miroslav; Waldron, Denis; Fedorko, Ivan

    2011-01-01

    At the present time computer centres are facing a massive rise in virtualization and cloud computing, as these solutions bring advantages to service providers and consolidate computer centre resources. However, as a result the monitoring complexity is increasing. Computer centre management requires not only monitoring of servers, network equipment and associated software, but also collection of additional environment and facilities data (e.g. temperature, power consumption, cooling efficiency, etc.) to have a good overview of the infrastructure performance. The LHC Era Monitoring (Lemon) system addresses these requirements for a very large-scale infrastructure. The Lemon agent, which collects data on every client and forwards the samples to the central measurement repository, provides a flexible interface that allows rapid development of new sensors. The system also allows reporting on behalf of remote devices such as switches and power supplies. Online and historical data can be visualized via a web-based interface or retrieved via command-line tools. The Lemon Alarm System component can be used for notifying the operator about error situations. In this article, an overview of Lemon monitoring is provided together with a description of the CERN Lemon production instance. No direct comparison is made with other monitoring tools.
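
    The agent-plus-repository pattern is easy to picture: each client periodically runs its sensors and forwards timestamped samples to a central collection point. A toy Python analogue of that loop follows; the URL and payload format are invented, and this is not Lemon's real sensor API or wire protocol:

```python
import os
import time
import urllib.request

def load_sensor():
    """Sample one metric: the 1-minute load average (Unix only)."""
    return {"metric": "load1", "value": os.getloadavg()[0]}

def run_agent(repository_url, interval_s=60):
    """Collect samples on the client and forward them to a central
    measurement repository - a toy analogue of the Lemon agent loop."""
    while True:
        sample = load_sensor()
        sample["ts"] = int(time.time())
        data = str(sample).encode()
        urllib.request.urlopen(repository_url, data=data)  # HTTP POST
        time.sleep(interval_s)

# run_agent("http://lemon-repository.example.org/samples")  # placeholder URL
```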

  6. An Extensible Open-Source Compiler Infrastructure for Testing

    Energy Technology Data Exchange (ETDEWEB)

    Quinlan, D; Ur, S; Vuduc, R

    2005-12-09

    Testing forms a critical part of the development process for large-scale software, and there is growing need for automated tools that can read, represent, analyze, and transform the application's source code to help carry out testing tasks. However, the support required to compile applications written in common general purpose languages is generally inaccessible to the testing research community. In this paper, we report on an extensible, open-source compiler infrastructure called ROSE, which is currently in development at Lawrence Livermore National Laboratory. ROSE specifically targets developers who wish to build source-based tools that implement customized analyses and optimizations for large-scale C, C++, and Fortran90 scientific computing applications (on the order of a million lines of code or more). However, much of this infrastructure can also be used to address problems in testing, and ROSE is by design broadly accessible to those without a formal compiler background. This paper details the interactions between testing of applications and the ways in which compiler technology can aid in the understanding of those applications. We emphasize the particular aspects of ROSE, such as support for the general analysis of whole programs, that are particularly well-suited to the testing research community and the scale of the problems that community solves.
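
    ROSE itself targets C, C++ and Fortran, but the idea of a source-based tool that parses a program and runs a customized analysis over its syntax tree can be shown compactly with Python's ast module. A loose, language-shifted illustration (a testing-flavoured check for undocumented functions), not ROSE's API:

```python
import ast

source = '''
def tested(x):
    """Divide by two."""
    return x / 2

def untested(y):
    return y * 3
'''

# Parse the program, walk its AST, and report a property useful when
# planning tests: which functions lack documentation.
tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        has_doc = ast.get_docstring(node) is not None
        print(f"{node.name}: documented={has_doc}")
```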

  7. Trends and Potentials of the Smart Grid Infrastructure: From ICT Sub-System to SDN-Enabled Smart Grid Architecture

    Directory of Open Access Journals (Sweden)

    Jaebeom Kim

    2015-10-01

    Full Text Available Context and situational awareness are key features and trends of the smart grid and enable adaptable, flexible and extendable smart grid services. However, the traditional hardware-dependent communication infrastructure is not designed to identify the flow and context of data, and it focuses only on packet forwarding using a pre-defined network configuration profile. Thus, the current network infrastructure may not dynamically adapt to the various business models and services of the smart grid system. To solve this problem, software-defined networking (SDN) is being considered in the smart grid, but the design, architecture and system model need to be optimized for the smart grid environment. In this paper, we investigate the state-of-the-art smart grid information subsystem, communication infrastructure and its emerging trends and potentials, called an SDN-enabled smart grid. We present an abstract business model, candidate SDN applications and a common architecture of the SDN-enabled smart grid. Further, we compare recent studies into the SDN-enabled smart grid depending on their service functionalities, and we describe further challenges of the SDN-enabled smart grid network infrastructure.

  8. Infrastructure needs for waste management

    International Nuclear Information System (INIS)

    Takahashi, M.

    2001-01-01

    National infrastructures are needed to safely and economically manage radioactive wastes. Considerable experience has been accumulated in industrialized countries for predisposal management of radioactive wastes, and legal, regulatory and technical infrastructures are in place. Drawing on this experience, international organizations can assist in transferring this knowledge to developing countries to build their waste management infrastructures. Infrastructure needs for disposal of long lived radioactive waste are more complex, due to the long time scale that must be considered. Challenges and infrastructure needs, particularly for countries developing geologic repositories for disposal of high level wastes, are discussed in this paper. (author)

  9. Infrastructure for Rapid Development of Java GUI Programs

    Science.gov (United States)

    Jones, Jeremy; Hostetter, Carl F.; Wheeler, Philip

    2006-01-01

    The Java Application Shell (JAS) is a software framework that accelerates the development of Java graphical-user-interface (GUI) application programs by enabling the reuse of common, proven GUI elements, as distinguished from writing custom code for GUI elements. JAS is a software infrastructure upon which Java interactive application programs and graphical user interfaces (GUIs) for those programs can be built as sets of plug-ins. JAS provides an application-programming interface that is extensible by application-specific plug-ins that describe and encapsulate both specifications of a GUI and application-specific functionality tied to the specified GUI elements. The desired GUI elements are specified in Extensible Markup Language (XML) descriptions instead of in compiled code. JAS reads and interprets these descriptions, then creates and configures a corresponding GUI from a standard set of generic, reusable GUI elements. These elements are then attached (again, according to the XML descriptions) to application-specific compiled code and scripts. An application program constructed by use of JAS as its core can be extended by writing new plug-ins and replacing existing plug-ins. Thus, JAS solves many problems that Java programmers generally solve anew for each project, thereby reducing development and testing time.
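
    JAS is a Java framework and its XML schema is not given in this summary, but the central idea - a GUI specified in XML and assembled at runtime from generic, reusable widgets - translates directly. A Python/tkinter analogue with invented tag and attribute names:

```python
import tkinter as tk
import xml.etree.ElementTree as ET

# GUI specified in XML and built from generic widgets rather than
# hand-written layout code; the tags and attributes here are invented
# for the sketch, not JAS's actual schema.
SPEC = """
<window title="Demo">
  <label text="Ready."/>
  <button text="Run" action="run"/>
</window>
"""

# Application-specific functionality attached to the specified elements.
ACTIONS = {"run": lambda: print("application-specific code runs here")}

def build(spec):
    root_el = ET.fromstring(spec)
    win = tk.Tk()
    win.title(root_el.get("title", ""))
    for el in root_el:
        if el.tag == "label":
            tk.Label(win, text=el.get("text", "")).pack()
        elif el.tag == "button":
            tk.Button(win, text=el.get("text", ""),
                      command=ACTIONS[el.get("action")]).pack()
    return win

# build(SPEC).mainloop()  # uncomment to run where a display is available
```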

  10. Modeling the Hydrologic Effects of Large-Scale Green Infrastructure Projects with GIS

    Science.gov (United States)

    Bado, R. A.; Fekete, B. M.; Khanbilvardi, R.

    2015-12-01

    Impervious surfaces in urban areas generate excess runoff, which in turn causes flooding, combined sewer overflows, and degradation of adjacent surface waters. Municipal environmental protection agencies have shown a growing interest in mitigating these effects with 'green' infrastructure practices that partially restore the perviousness and water holding capacity of urban centers. Assessment of the performance of current and future green infrastructure projects is hindered by the lack of adequate hydrological modeling tools; conventional techniques fail to account for the complex flow pathways of urban environments, and detailed analyses are difficult to prepare for the very large domains in which green infrastructure projects are implemented. Currently, no standard toolset exists that can rapidly and conveniently predict runoff, consequent inundations, and sewer overflows at a city-wide scale. We demonstrate how streamlined modeling techniques can be used with open-source GIS software to efficiently model runoff in large urban catchments. Hydraulic parameters and flow paths through city blocks, roadways, and sewer drains are automatically generated from GIS layers, and ultimately urban flow simulations can be executed for a variety of rainfall conditions. With this methodology, users can understand the implications of large-scale land use changes and green/gray storm water retention systems on hydraulic loading, peak flow rates, and runoff volumes.
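
    The per-surface runoff that such a tool derives from GIS land-cover layers can be approximated, at first order, with the rational method, Q = C · i · A. The sketch below is a deliberately simplified stand-in for the streamlined techniques the abstract describes; areas, coefficients and storm intensity are illustrative:

```python
# Rational-method peak runoff, Q = C * i * A - a first-order version of
# the per-surface calculation a GIS runoff tool performs.
SURFACES = [
    # (area in hectares, runoff coefficient C)
    (12.0, 0.90),   # rooftops and roadways (impervious)
    (3.5, 0.30),    # green infrastructure cells (pervious)
]
intensity_mm_per_hr = 25.0

# Q in cubic metres per hour: 1 mm of rain over 1 ha is 10 m3.
q_total = sum(c * intensity_mm_per_hr * area * 10.0 for area, c in SURFACES)
print(f"peak runoff: {q_total:.0f} m3/hr")
```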

  11. Pro Linux system administration learn to build systems for your business using free and open source software

    CERN Document Server

    Matotek, Dennis; Lieverdink, Peter

    2017-01-01

    This book aims to ease the entry of businesses to the world of zero-cost software running on Linux. It takes a layered, component-based approach to open source business systems, while training system administrators as the builders of business infrastructure.

  12. Climate Science's Globally Distributed Infrastructure

    Science.gov (United States)

    Williams, D. N.

    2016-12-01

    The Earth System Grid Federation (ESGF) is primarily funded by the Department of Energy's (DOE's) Office of Science (the Office of Biological and Environmental Research [BER] Climate Data Informatics Program and the Office of Advanced Scientific Computing Research Next Generation Network for Science Program), the National Oceanic and Atmospheric Administration (NOAA), the National Aeronautics and Space Administration (NASA), and the National Science Foundation (NSF), the European Infrastructure for the European Network for Earth System Modeling (IS-ENES), and the Australian National University (ANU). Support also comes from other U.S. federal and international agencies. The federation works across multiple worldwide data centers and spans seven international network organizations to provide users with the ability to access, analyze, and visualize data using a globally federated collection of networks, computers, and software. Its architecture employs a series of geographically distributed peer nodes that are independently administered and united by common federation protocols and application programming interfaces (APIs). The full ESGF infrastructure has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the Coupled Model Intercomparison Project (CMIP; output used by the Intergovernmental Panel on Climate Change assessment reports), multiple model intercomparison projects (MIPs; endorsed by the World Climate Research Programme [WCRP]), and the Accelerated Climate Modeling for Energy (ACME; ESGF is included in the overarching ACME workflow process to store model output). ESGF is a successful example of integration of disparate open-source technologies into a cohesive functional system that serves the needs of the global climate science community. Data served by ESGF includes not only model output but also observational data from satellites and instruments, reanalysis, and generated images.

  13. Perspectives in understanding open access to research data - infrastructure and technology challenges

    Science.gov (United States)

    Bigagli, Lorenzo; Sondervan, Jeroen

    2014-05-01

    The Policy RECommendations for Open Access to Research Data in Europe (RECODE) project, started in February 2013 with a duration of two years, has the objective of identifying a series of targeted and over-arching policy recommendations for Open Access to European research data, based on existing good practice and addressing such hindering factors as stakeholder fragmentation, technical and infrastructural issues, ethical and legal issues, and financial and institutional policies. In this work we focus on the technical and infrastructural aspect, where by "infrastructure" we mean the technological assets (hardware and software), the human resources, and all the policies, processes, procedures and training for managing and supporting its continuous operation and evolution. The context targeted by RECODE includes heterogeneous networks, initiatives, projects and communities that are fragmented by discipline, geography, stakeholder category (publishers, academics, repositories, etc.) as well as other boundaries. Many of these organizations are already addressing key technical and infrastructural barriers to Open Access to research data. Such barriers may include: lack of automatic mechanisms for policy enforcement, lack of metadata and data models supporting open access, obsolescence of infrastructures, scarce awareness of new technological solutions, and lack of training and/or expertise in IT and semantics. However, these organizations often work in isolation, or with limited contact with one another. RECODE has addressed these challenges, and the possible solutions to mitigate them, by engaging all the identified stakeholders in a number of ways, including an online questionnaire, case study interviews, a literature review, and a workshop. The conclusions have been validated by the RECODE Advisory Board and

  14. 6. The Global Infrastructure Development Sector

    OpenAIRE

    2017-01-01

    Studies of global infrastructure development often omit a perspective on the infrastructure development industry itself. Infrastructure development is the industry that turns infrastructure ideas into physical reality — contractors, engineering firms, hardware suppliers, and so on. Consequently, market penetration, cost functions, scale and scope economies, and other competitive variables that characterize infrastructure development have a direct effect on its economics. Vibrant competition a...

  15. Cyber and physical infrastructure interdependencies.

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, Laurence R.; Kelic, Andjelka; Warren, Drake E.

    2008-09-01

    The goal of the work discussed in this document is to understand the risk to the nation of cyber attacks on critical infrastructures. The large body of research results on cyber attacks against physical infrastructure vulnerabilities has not resulted in clear understanding of the cascading effects a cyber-caused disruption can have on critical national infrastructures and the ability of these affected infrastructures to deliver services. This document discusses current research and methodologies aimed at assessing the translation of a cyber-based effect into a physical disruption of infrastructure and thence into quantification of the economic consequences of the resultant disruption and damage. The document discusses the deficiencies of the existing methods in correlating cyber attacks with physical consequences. The document then outlines a research plan to correct those deficiencies. When completed, the research plan will result in a fully supported methodology to quantify the economic consequences of events that begin with cyber effects, cascade into other physical infrastructure impacts, and result in degradation of the critical infrastructure's ability to deliver services and products. This methodology enables quantification of the risks to national critical infrastructure of cyber threats. The work addresses the electric power sector as an example of how the methodology can be applied.

  16. E-Infrastructure Concertation Meeting

    CERN Multimedia

    Katarina Anthony

    2010-01-01

    The 8th e-Infrastructure Concertation Meeting was held in the Globe from 4 to 5 November to discuss the development of Europe’s distributed computing and storage resources. E-Infrastructures have become an indispensable tool for scientific research, linking researchers to virtually unlimited e-resources like the grid. The recent e-Infrastructure Concertation Meeting brought together e-Science project leaders to discuss the development of this tool in the European context. The meeting was part of an ongoing initiative to develop a world-class e-infrastructure resource that would establish European leadership in e-Science. The e-Infrastructure Concertation Meeting was organised by the Commission Services (EC) with the support of e-ScienceTalk. “The Concertation meeting at CERN has been a great opportunity for e-ScienceTalk to meet many of the 38 new proje...

  17. The future of infrastructure security :

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, Pablo; Turnley, Jessica Glicken; Parrott, Lori K.

    2013-05-01

    Sandia National Laboratories hosted a workshop on the future of infrastructure security on February 27-28, 2013, in Albuquerque, NM. The 17 participants came from backgrounds as diverse as federal policy, the insurance industry, infrastructure management, and technology development. The purpose of the workshop was to surface key issues, identify directions forward, and lay groundwork for cross-sectoral and cross-disciplinary collaborations. The workshop addressed issues such as the problem space (what is included in infrastructure problems?), the general types of threats to infrastructure (such as acute or chronic, system-inherent or exogenously imposed) and definitions of secure and resilient infrastructures. The workshop concluded with a consideration of stakeholders and players in the infrastructure world, and identification of specific activities that could be undertaken by the Department of Homeland Security (DHS) and other players.

  18. Use of Docker for deployment and testing of astronomy software

    Science.gov (United States)

    Morris, D.; Voutsinas, S.; Hambly, N. C.; Mann, R. G.

    2017-07-01

    We describe preliminary investigations of using Docker for the deployment and testing of astronomy software. Docker is a relatively new containerization technology that is developing rapidly and being adopted across a range of domains. It is based upon virtualization at operating system level, which presents many advantages in comparison to the more traditional hardware virtualization that underpins most cloud computing infrastructure today. A particular strength of Docker is its simple format for describing and managing software containers, which has benefits for software developers, system administrators and end users. We report on our experiences from two projects - a simple activity to demonstrate how Docker works, and a more elaborate set of services that demonstrates more of its capabilities and what they can achieve within an astronomical context - and include an account of how we solved problems through interaction with Docker's very active open source development community, which is currently the key to the most effective use of this rapidly-changing technology.
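
    The simplicity that makes Docker attractive for deployment shows even in a few lines of the Docker SDK for Python: pull a stock image and run a throwaway container, capturing its output. The image here is a generic Python base, standing in for whatever astronomy software stack a project would package on top:

```python
import docker  # the Docker SDK for Python ("pip install docker")

# Run a short-lived container and capture its output; requires a local
# Docker daemon. The image name is generic, not one from the paper.
client = docker.from_env()
output = client.containers.run(
    "python:3.11-slim",
    ["python", "-c", "print('hello from a container')"],
    remove=True,                  # clean up the container afterwards
)
print(output.decode())
```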

  19. User and Machine Authentication and Authorization Infrastructure for Distributed Wireless Sensor Network Testbeds

    Directory of Open Access Journals (Sweden)

    Gerald Wagenknecht

    2013-03-01

    Full Text Available The intention of an authentication and authorization infrastructure (AAI) is to simplify and unify access to different web resources. With a single login, a user can access web applications at multiple organizations. The Shibboleth authentication and authorization infrastructure is a standards-based, open source software package for web single sign-on (SSO) across or within organizational boundaries. It allows service providers to make fine-grained authorization decisions for individual access of protected online resources. The Shibboleth system is a widely used AAI, but only supports protection of browser-based web resources. We have implemented a Shibboleth AAI extension to protect web services using Simple Object Access Protocol (SOAP). Besides user authentication for browser-based web resources, this extension also provides user and machine authentication for web service-based resources. Although implemented for a Shibboleth AAI, the architecture can be easily adapted to other AAIs.

  20. Central Region Green Infrastructure

    Data.gov (United States)

    Minnesota Department of Natural Resources — This Green Infrastructure data is comprised of 3 similar ecological corridor data layers - Metro Conservation Corridors, green infrastructure analysis in counties...

  1. A Grid-Based Cyber Infrastructure for High Performance Chemical Dynamics Simulations

    Directory of Open Access Journals (Sweden)

    Khadka Prashant

    2008-10-01

    Full Text Available Chemical dynamics simulation is an effective means to study atomic-level motions of molecules, collections of molecules, liquids, surfaces, interfaces of materials, and chemical reactions. To make chemical dynamics simulations globally accessible to a broad range of users, a cyber infrastructure was recently developed that provides an online portal to VENUS, a popular chemical dynamics simulation program package, allowing people to submit simulation jobs to be executed on the web server machine. In this paper, we report new developments of the cyber infrastructure that improve its quality of service: dispatching the submitted simulation jobs from the web server machine onto a cluster of workstations for execution, and adding an animation tool optimized for animating the simulation results. The separation of the server machine from the simulation-running machines improves service quality by increasing the capacity to serve more requests simultaneously with even reduced web response time, and allows the execution of large-scale, time-consuming simulation jobs on the powerful workstation cluster. With the addition of an animation tool, the cyber infrastructure automatically converts, upon the selection of the user, some simulation results into an animation file that can be viewed in standard web browsers without requiring installation of any special software on the user's computer. Since animation is essential for understanding the results of chemical dynamics simulations, this animation capacity provides a better way of understanding the details of the chemical dynamics. By combining computing resources at locations under different administrative controls, this cyber infrastructure constitutes a grid environment providing physically and administratively distributed functionalities through a single easy-to-use online portal.

  2. 75 FR 67989 - Agency Information Collection Activities: Office of Infrastructure Protection; Infrastructure...

    Science.gov (United States)

    2010-11-04

    ... DEPARTMENT OF HOMELAND SECURITY [Docket No. DHS-2010-0084] Agency Information Collection Activities: Office of Infrastructure Protection; Infrastructure Protection Stakeholder Input Project--Generic... comments; New Information Collection Request: 1670-NEW. SUMMARY: The Department of Homeland Security...

  3. Scientific Software - the role of best practices and recommendations

    Science.gov (United States)

    Fritzsch, Bernadette; Bernstein, Erik; Castell, Wolfgang zu; Diesmann, Markus; Haas, Holger; Hammitzsch, Martin; Konrad, Uwe; Lähnemann, David; McHardy, Alice; Pampel, Heinz; Scheliga, Kaja; Schreiber, Andreas; Steglich, Dirk

    2017-04-01

    In Geosciences - like in most other communities - scientific work strongly depends on software. For big data analysis, existing (closed or open source) program packages are often mixed with newly developed codes. Different versions of software components and varying configurations can influence the result of data analysis. This often makes reproducibility of results and reuse of codes very difficult. Policies for publication and documentation of used and newly developed software, along with best practices, can help tackle this problem. Within the Helmholtz Association a Task Group "Access to and Re-use of scientific software" was implemented by the Open Science Working Group in 2016. The aim of the Task Group is to foster the discussion about scientific software in the Open Science context and to formulate recommendations for the production and publication of scientific software, ensuring open access to it. As a first step, a workshop gathered interested scientists from institutions across Germany. The workshop brought together various existing initiatives from different scientific communities to analyse current problems, share established best practices and come up with possible solutions. The subjects in the working groups covered a broad range of themes, including technical infrastructures, standards and quality assurance, citation of software and reproducibility. Initial recommendations are presented and discussed in the talk. They are the foundation for further discussions in the Helmholtz Association and the Priority Initiative "Digital Information" of the Alliance of Science Organisations in Germany. The talk aims to inform about the activities and to link with other initiatives on the national or international level.

  4. Multimodality image registration with software: state-of-the-art

    Energy Technology Data Exchange (ETDEWEB)

    Slomka, Piotr J. [Cedars-Sinai Medical Center, AIM Program/Department of Imaging, Los Angeles, CA (United States); University of California, David Geffen School of Medicine, Los Angeles, CA (United States); Baum, Richard P. [Center for PET, Department of Nuclear Medicine, Bad Berka (Germany)

    2009-03-15

    Multimodality image integration of functional and anatomical data can be performed by means of dedicated hybrid imaging systems or by software image co-registration techniques. Hybrid positron emission tomography (PET)/computed tomography (CT) systems have found wide acceptance in oncological imaging, while software registration techniques have a significant role in patient-specific, cost-effective, and radiation dose-effective application of integrated imaging. Software techniques allow accurate (2-3 mm) rigid image registration of brain PET with CT and MRI. Nonlinear techniques are used in whole-body image registration, and recent developments allow for significantly accelerated computing times. Nonlinear software registration of PET with CT or MRI is required for multimodality radiation planning. Difficulties remain in the validation of nonlinear registration of soft tissue organs. The utilization of software-based multimodality image integration in a clinical environment is sometimes hindered by the lack of appropriate picture archiving and communication systems (PACS) infrastructure needed to efficiently and automatically integrate all available images into one common database. In cardiology applications, multimodality PET/single photon emission computed tomography and coronary CT angiography imaging is typically not required unless the results of one of the tests are equivocal. Software image registration is likely to be used in a complementary fashion with hybrid PET/CT or PET/magnetic resonance imaging systems. Software registration of stand-alone scans "paved the way" for the clinical application of hybrid scanners, demonstrating practical benefits of image integration before the hybrid dual-modality devices were available. (orig.)

  5. Using a CRIS for e-Infrastructure: e-Infrastructure for Scholarly Publications

    Directory of Open Access Journals (Sweden)

    E Dijk

    2010-05-01

    Full Text Available Scholarly publications are a major part of the research infrastructure. One way to make output available is to store the publications in Open Access Repositories (OAR). A Current Research Information System (CRIS) that conforms to the standard CERIF (Common European Research Information Format) could be a key component in the e-infrastructure. A CRIS provides the structure and makes it possible to interoperate the CRIS metadata at every stage of the research cycle. The international DRIVER projects are creating a European repository infrastructure. Knowledge Exchange has launched a project to develop a metadata exchange format for publications between CRIS and OAR systems.

  6. Software and the Scientist: Coding and Citation Practices in Geodynamics

    Science.gov (United States)

    Hwang, Lorraine; Fish, Allison; Soito, Laura; Smith, MacKenzie; Kellogg, Louise H.

    2017-11-01

    In geodynamics, as in other scientific areas, computation has become a core component of research, complementing field observation, laboratory analysis, experiment, and theory. Computational tools for data analysis, mapping, visualization, modeling, and simulation are essential for all aspects of the scientific workflow. Specialized scientific software is often developed by geodynamicists for their own use, and this effort represents a distinctive intellectual contribution. Drawing on a geodynamics community that focuses on developing and disseminating scientific software, we assess the current practices of software development and attribution, as well as attitudes about the need and best practices for software citation. We analyzed publications by participants in the Computational Infrastructure for Geodynamics and conducted mixed-method surveys of the solid earth geophysics community. From this we learned that coding skills are typically learned informally. Participants considered good code to be trusted, reusable, readable, and not overly complex, and considered a good coder one who participates in the community in an open and reasonable manner, contributing to both long- and short-term community projects. Participants strongly supported citing software, reflected by the high rate at which software packages were named in the literature and the high rate of citations in the references. However, clear instructions from developers on how to cite, and education of users on what to cite, are lacking. In addition, citations did not always lead to discoverability of the resource. A unique identifier for the software package itself, community education, and citation tools would contribute to better attribution practices.

  7. Making green infrastructure healthier infrastructure.

    Science.gov (United States)

    Lõhmus, Mare; Balbus, John

    2015-01-01

    Increasing urban green and blue structure is often pointed out to be critical for sustainable development and climate change adaptation, which has led to the rapid expansion of greening activities in cities throughout the world. This process is likely to have a direct impact on the citizens' quality of life and public health. However, alongside numerous benefits, green and blue infrastructure also has the potential to create unexpected, undesirable, side-effects for health. This paper considers several potentially harmful public health effects that might result from increased urban biodiversity, urban bodies of water, and urban tree cover projects. It does so with the intent of improving awareness and motivating preventive measures when designing and initiating such projects. Although biodiversity has been found to be associated with physiological benefits for humans in several studies, efforts to increase the biodiversity of urban environments may also promote the introduction and survival of vector or host organisms for infectious pathogens with resulting spread of a variety of diseases. In addition, more green connectivity in urban areas may potentiate the role of rats and ticks in the spread of infectious diseases. Bodies of water and wetlands play a crucial role in the urban climate adaptation and mitigation process. However, they also provide habitats for mosquitoes and toxic algal blooms. Finally, increasing urban green space may also adversely affect citizens allergic to pollen. Increased awareness of the potential hazards of urban green and blue infrastructure should not be a reason to stop or scale back projects. Instead, incorporating public health awareness and interventions into urban planning at the earliest stages can help ensure that green and blue infrastructure achieves full potential for health promotion.

  8. Software Development in the Water Sciences: a view from the divide (Invited)

    Science.gov (United States)

    Miles, B.; Band, L. E.

    2013-12-01

    While training in statistical methods is an important part of many earth scientists' training, these scientists often learn the bulk of their software development skills in an ad hoc, just-in-time manner. Yet to carry out contemporary research scientists are spending more and more time developing software. Here I present perspectives - as an earth sciences graduate student with professional software engineering experience - on the challenges scientists face adopting software engineering practices, with an emphasis on areas of the science software development lifecycle that could benefit most from improved engineering. This work builds on experience gained as part of the NSF-funded Water Science Software Institute (WSSI) conceptualization award (NSF Award # 1216817). Throughout 2013, the WSSI team held a series of software scoping and development sprints with the goals of: (1) adding features to better model green infrastructure within the Regional Hydro-Ecological Simulation System (RHESSys); and (2) infusing test-driven agile software development practices into the processes employed by the RHESSys team. The goal of efforts such as the WSSI is to ensure that investments by current and future scientists in software engineering training will enable transformative science by improving both scientific reproducibility and researcher productivity. Experience with the WSSI indicates: (1) the potential for achieving this goal; and (2) while scientists are willing to adopt some software engineering practices, transformative science will require continued collaboration between domain scientists and cyberinfrastructure experts for the foreseeable future.

  9. Packaging of control system software

    International Nuclear Information System (INIS)

    Zagar, K.; Kobal, M.; Saje, N.; Zagar, A.; Sabjan, R.; Di Maio, F.; Stepanov, D.

    2012-01-01

    Control system software consists of several parts - the core of the control system, drivers for integration of devices, configuration for user interfaces, alarm system, etc. Once the software is developed and configured, it must be installed to computers where it runs. Usually, it is installed on an operating system whose services it needs, and also in some cases dynamically links with the libraries it provides. An operating system can be quite complex in itself - for example, a typical Linux distribution consists of several thousand packages. To manage this complexity, we have decided to rely on the Red Hat Package Management system (RPM) to package control system software, and also ensure it is properly installed (i.e., that dependencies are also installed, and that scripts are run after installation if any additional actions need to be performed). As dozens of RPM packages need to be prepared, we are reducing the amount of effort and improving consistency between packages through a Maven-based infrastructure that assists in packaging (e.g., automated generation of RPM SPEC files, including automated identification of dependencies). So far, we have used it to package EPICS, Control System Studio (CSS) and several device drivers. We perform extensive testing on Red Hat Enterprise Linux 5.5, but we have also verified that packaging works on CentOS and Scientific Linux. In this article, we describe in greater detail the systematic system of packaging we are using, and its particular application for the ITER CODAC Core System. (authors)
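
    As a rough illustration of the automation described - generating RPM SPEC files from declared package metadata - consider this sketch; the metadata fields, template, and package names are hypothetical, not the project's actual Maven tooling.

        # Minimal sketch: render an RPM SPEC file from package metadata (names hypothetical).
        SPEC_TEMPLATE = """\
        Name:           {name}
        Version:        {version}
        Release:        1%{{?dist}}
        Summary:        {summary}
        License:        {license}
        Requires:       {requires}

        %description
        {summary}

        %files
        {files}
        """

        def render_spec(meta):
            # Declared dependencies become a comma-separated Requires: line.
            return SPEC_TEMPLATE.format(
                name=meta["name"],
                version=meta["version"],
                summary=meta["summary"],
                license=meta["license"],
                requires=", ".join(meta["dependencies"]),
                files="\n".join(meta["files"]),
            )

        meta = {
            "name": "css-core",
            "version": "3.1.0",
            "summary": "Control System Studio core",
            "license": "EPL",
            "dependencies": ["java-1.6.0-openjdk"],
            "files": ["/opt/css/*"],
        }
        print(render_spec(meta))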

  10. Distribution of Trauma Care Facilities in Oman in Relation to High-Incidence Road Traffic Injury Sites: Pilot study.

    Science.gov (United States)

    Al-Kindi, Sara M; Naiem, Ahmed A; Taqi, Kadhim M; Al-Gheiti, Najla M; Al-Toobi, Ikhtiyar S; Al-Busaidi, Nasra Q; Al-Harthy, Ahmed Z; Taqi, Alaa M; Ba-Alawi, Sharif A; Al-Qadhi, Hani A

    2017-11-01

    Road traffic injuries (RTIs) are considered a major public health problem worldwide. In Oman, high numbers of RTIs and RTI-related deaths are frequently registered. This study aimed to evaluate the distribution of trauma care facilities in Oman with regard to their proximity to RTI-prevalent areas. This descriptive pilot study analysed RTI data recorded in the national Royal Oman Police registry from January to December 2014. The distribution of trauma care facilities was analysed by calculating distances between areas of peak RTI incidence and the closest trauma centre using Google Earth and Google Maps software (Google Inc., Googleplex, Mountain View, California, USA). A total of 32 trauma care facilities were identified. Four facilities (12.5%) were categorised as class V trauma centres. Of the facilities in Muscat, 42.9% were ranked as class IV or V. There were no class IV or V facilities in Musandam, Al-Wusta or Al-Buraimi. General surgery, orthopaedic surgery and neurosurgery services were available in 68.8%, 59.3% and 12.5% of the centres, respectively. Emergency services were available in 75.0% of the facilities. Intensive care units were available in 11 facilities, with four located in Muscat. The mean distance between an RTI hotspot and the nearest trauma care facility was 34.7 km; however, the mean distance to the nearest class IV or V facility was 83.3 km. The distribution and quality of trauma care facilities in Oman needs modification. It is recommended that certain centres upgrade their levels of trauma care in order to reduce RTI-associated morbidity and mortality in Oman.
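
    The distance computations can be reproduced programmatically with a great-circle (haversine) approximation; the coordinates below are placeholders, not actual hotspot or facility locations from the study.

        from math import radians, sin, cos, asin, sqrt

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance in km between two (lat, lon) points."""
            lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
            a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
            return 2 * 6371.0 * asin(sqrt(a))  # mean Earth radius ~6371 km

        # Placeholder coordinates for one RTI hotspot and two candidate trauma facilities.
        hotspot = (23.588, 58.384)
        facilities = {"Facility A": (23.614, 58.545), "Facility B": (22.933, 57.526)}
        nearest = min(facilities, key=lambda f: haversine_km(*hotspot, *facilities[f]))
        print(nearest, round(haversine_km(*hotspot, *facilities[nearest]), 1), "km")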

  11. Telecom infrastructure leasing

    International Nuclear Information System (INIS)

    Henley, R.

    1995-01-01

    Slides to accompany a discussion about leasing telecommunications infrastructure, including radio/microwave tower space, radio control buildings, paging systems and communications circuits, were presented. The structure of Alberta Power Limited was described within the ATCO group of companies. Corporate goals and management practices and priorities were summarized. Lessons and experiences in the infrastructure leasing business were reviewed

  12. Activity Theory applied to Global Software Engineering: Theoretical Foundations and Implications for Tool Builders

    DEFF Research Database (Denmark)

    Tell, Paolo; Ali Babar, Muhammad

    2012-01-01

    Although a plethora of tools are available for Global Software Engineering (GSE) teams, it is being realized increasingly that the most prevalent desktop metaphor underpinning the majority of tools has several inherent limitations. We have proposed that Activity-Based Computing (ABC) can be a pr...... in building supporting infrastructure for GSE, and describe a proof of concept prototype.

  13. Workshop and conference on Grand Challenges applications and software technology

    Energy Technology Data Exchange (ETDEWEB)

    1993-12-31

    On May 4-7, 1993, nine federal agencies sponsored a four-day meeting on Grand Challenge applications and software technology. The objective was to bring High-Performance Computing and Communications (HPCC) Grand Challenge applications research groups supported under the federal HPCC program together with HPCC software technologists to: discuss multidisciplinary computational science research issues and approaches, identify major technology challenges facing users and providers, and refine software technology requirements for Grand Challenge applications research. The first day and a half focused on applications. Presentations were given by speakers from universities, national laboratories, and government agencies actively involved in Grand Challenge research. Five areas of research were covered: environmental and earth sciences; computational physics; computational biology, chemistry, and materials sciences; computational fluid and plasma dynamics; and applications of artificial intelligence. The next day and a half was spent in working groups in which the applications researchers were joined by software technologists. Nine breakout sessions took place: I/O, Data, and File Systems; Parallel Programming Paradigms; Performance Characterization and Evaluation of Massively Parallel Processing Applications; Program Development Tools; Building Multidisciplinary Applications; Algorithms and Libraries I; Algorithms and Libraries II; Graphics and Visualization; and National HPCC Infrastructure.

  14. Software Testing and Verification in Climate Model Development

    Science.gov (United States)

    Clune, Thomas L.; Rood, RIchard B.

    2011-01-01

    Over the past 30 years most climate models have grown from relatively simple representations of a few atmospheric processes to a complex multi-disciplinary system. Computer infrastructure over that period has gone from punch card mainframes to modern parallel clusters. Model implementations have become complex, brittle, and increasingly difficult to extend and maintain. Existing verification processes for model implementations rely almost exclusively upon some combination of detailed analysis of output from full climate simulations and system-level regression tests. In addition to being quite costly in terms of developer time and computing resources, these testing methodologies are limited in terms of the types of defects that can be detected, isolated and diagnosed. Mitigating these weaknesses of coarse-grained testing with finer-grained "unit" tests has been perceived as cumbersome and counter-productive. In the commercial software sector, recent advances in tools and methodology have led to a renaissance for systematic fine-grained testing. We discuss the availability of analogous tools for scientific software and examine benefits that similar testing methodologies could bring to climate modeling software. We describe the unique challenges faced when testing complex numerical algorithms and suggest techniques to minimize and/or eliminate the difficulties.
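
    The fine-grained "unit" testing advocated here amounts to checking numerical kernels against analytic cases within a stated tolerance; the kernel below is a generic finite-difference derivative, not code from any climate model.

        import numpy as np

        def central_difference(f, x, h=1e-5):
            """Second-order central-difference approximation to f'(x)."""
            return (f(x + h) - f(x - h)) / (2.0 * h)

        def test_derivative_of_sine():
            # d/dx sin(x) = cos(x); the scheme should agree to roughly h**2 accuracy.
            x = 0.7
            err = abs(central_difference(np.sin, x) - np.cos(x))
            assert err < 1e-8, f"derivative error too large: {err}"

        test_derivative_of_sine()
        print("derivative kernel within tolerance")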

  15. Transport Infrastructure and Economic Growth: Spatial Effects

    Directory of Open Access Journals (Sweden)

    Artyom Gennadyevich Isaev

    2015-09-01

    Full Text Available The author specifies an empirical framework based on the neoclassical growth model in order to examine the impact of transport infrastructure on economic growth in Russian regions during the period 2000-2013. Two different effects of infrastructure are considered. First, infrastructure is viewed as part of a region's own production function. Second, infrastructure generates a spillover effect on adjacent regions' economic performance, which can be negative or positive. Results imply that road infrastructure has a positive influence on regional growth, but the sign of the railroad infrastructure coefficient depends on whether or not the congestion effect is considered. A negative spillover effect is shown to exist in the case of road infrastructure. This apparently means that rapid road infrastructure development in some regions moves mobile factors of production away from adjacent regions, retarding their economic development. The spillover effect of railroad infrastructure is significant and negative, again only if the congestion effect is considered. The results of estimation for the Far East and Baikal Regions separately demonstrate no significant effect of either type of infrastructure on economic performance and a negative spillover effect of road infrastructure
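
    The estimation framework - regional growth regressed on a region's own infrastructure and a spatially lagged term built from a row-standardized weight matrix W - can be sketched with synthetic data; all numbers below are invented, not the study's estimates.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 50                                # regions
        W = rng.random((n, n)); np.fill_diagonal(W, 0.0)
        W = W / W.sum(axis=1, keepdims=True)  # row-standardized spatial weights

        infra = rng.random(n)                 # own-region infrastructure endowment
        spill = W @ infra                     # average infrastructure of neighboring regions
        growth = 0.5 * infra - 0.2 * spill + rng.normal(0, 0.05, n)

        # OLS with an intercept: growth ~ own infrastructure + spatially lagged infrastructure
        X = np.column_stack([np.ones(n), infra, spill])
        beta, *_ = np.linalg.lstsq(X, growth, rcond=None)
        print("intercept, own effect, spillover effect:", np.round(beta, 3))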

  16. The requirements and challenges in preventing of road traffic injury in Iran. A qualitative study.

    Science.gov (United States)

    Khorasani-Zavareh, Davoud; Mohammadi, Reza; Khankeh, Hamid Reza; Laflamme, Lucie; Bikmoradi, Ali; Haglund, Bo J A

    2009-12-23

    Road traffic injuries (RTIs) are a major public health problem, especially in low- and middle-income countries. Among middle-income countries, Iran has one of the highest mortality rates from RTIs. Action is critical to combat this major public health problem. Stakeholders involved in RTI control are of key importance and their perceptions of barriers and facilitators are a vital source of knowledge. The aim of this study was to explore barriers to the prevention of RTIs and provide appropriate suggestions for prevention, based on the perceptions of stakeholders, victims and road-users as regards RTIs. Thirty-eight semi-structured interviews were conducted with informants in the field of RTI prevention including: police officers; public health professionals; experts from the road administrators; representatives from the General Governor, the car industry, firefighters; experts from Emergency Medical Service and the Red Crescent; and some motorcyclists and car drivers as well as victims of RTIs. A qualitative approach using grounded theory method was employed to analyze the material gathered. The core variable was identified as "The lack of a system approach to road-user safety". The following barriers in relation to RTI prevention were identified as: human factors; transportation system; and organizational coordination. Suggestions for improvement included education (for the general public and targeted group training), more effective legislation, more rigorous law enforcement, improved engineering in road infrastructure, and an integrated organization to supervise and coordinate preventive activities. The major barriers identified in this study were human factors and efforts to change human behaviour were suggested by means of public education campaigns and stricter law enforcement. However, the lack of a system approach to RTI prevention was also an important concern. There is an urgent need for both an integrated system to coordinate RTI activities and prevention

  17. The requirements and challenges in preventing of road traffic injury in Iran. A qualitative study

    Directory of Open Access Journals (Sweden)

    Laflamme Lucie

    2009-12-01

    Full Text Available Abstract Background Road traffic injuries (RTIs are a major public health problem, especially in low- and middle-income countries. Among middle-income countries, Iran has one of the highest mortality rates from RTIs. Action is critical to combat this major public health problem. Stakeholders involved in RTI control are of key importance and their perceptions of barriers and facilitators are a vital source of knowledge. The aim of this study was to explore barriers to the prevention of RTIs and provide appropriate suggestions for prevention, based on the perceptions of stakeholders, victims and road-users as regards RTIs. Methods Thirty-eight semi-structured interviews were conducted with informants in the field of RTI prevention including: police officers; public health professionals; experts from the road administrators; representatives from the General Governor, the car industry, firefighters; experts from Emergency Medical Service and the Red Crescent; and some motorcyclists and car drivers as well as victims of RTIs. A qualitative approach using grounded theory method was employed to analyze the material gathered. Results The core variable was identified as "The lack of a system approach to road-user safety". The following barriers in relation to RTI prevention were identified as: human factors; transportation system; and organizational coordination. Suggestions for improvement included education (for the general public and targeted group training, more effective legislation, more rigorous law enforcement, improved engineering in road infrastructure, and an integrated organization to supervise and coordinate preventive activities. Conclusion The major barriers identified in this study were human factors and efforts to change human behaviour were suggested by means of public education campaigns and stricter law enforcement. However, the lack of a system approach to RTI prevention was also an important concern. There is an urgent need for both

  18. Infrastructures for healthcare

    DEFF Research Database (Denmark)

    Langhoff, Tue Odd; Amstrup, Mikkel Hvid; Mørck, Peter

    2018-01-01

    The Danish General Practitioners Database has over more than a decade developed into a large-scale successful information infrastructure supporting medical research in Denmark. Danish general practitioners produce the data, by coding all patient consultations according to a certain set of classif...... synergy into account, if not to risk breaking down the fragile nature of otherwise successful information infrastructures supporting research on healthcare....

  19. Building safeguards infrastructure

    International Nuclear Information System (INIS)

    McClelland-Kerr, J.; Stevens, J.

    2010-01-01

    Much has been written in recent years about the nuclear renaissance - the rebirth of nuclear power as a clean and safe source of electricity around the world. Those who question the nuclear renaissance often cite the risk of proliferation, accidents or an attack on a facility as concerns, all of which merit serious consideration. The integration of three areas - sometimes referred to as 3S, for safety, security and safeguards - is essential to supporting the clean and safe growth of nuclear power, and the infrastructure that supports these three areas should be robust. The focus of this paper will be on the development of the infrastructure necessary to support safeguards, and the integration of safeguards infrastructure with other elements critical to ensuring nuclear energy security

  20. System Architecture Development for Energy and Water Infrastructure Data Management and Geovisual Analytics

    Science.gov (United States)

    Berres, A.; Karthik, R.; Nugent, P.; Sorokine, A.; Myers, A.; Pang, H.

    2017-12-01

    Building an integrated data infrastructure that can meet the needs of sustainable energy-water resource management requires a robust data management and geovisual analytics platform, capable of cross-domain scientific discovery and knowledge generation. Such a platform can facilitate the investigation of diverse complex research and policy questions for emerging priorities in Energy-Water Nexus (EWN) science areas. Using advanced data analytics, machine learning techniques, multi-dimensional statistical tools, and interactive geovisualization components, such a multi-layered federated platform is being developed: the Energy-Water Nexus Knowledge Discovery Framework (EWN-KDF). This platform utilizes several enterprise-grade software design concepts and standards such as extensible service-oriented architecture, open standard protocols, an event-driven programming model, an enterprise service bus, and adaptive user interfaces to provide strategic value to the integrative computational and data infrastructure. EWN-KDF is built on the Compute and Data Environment for Science (CADES) environment at Oak Ridge National Laboratory (ORNL).

  1. Site development and demands on infrastructure

    International Nuclear Information System (INIS)

    Nieke, K.F.

    1976-01-01

    All sub-fields which form the infrastructure are examined, the infrastructure being indispensable for the site development of a nuclear power plant. The main emphasis is put on the technical infrastructure, but the social infrastructure is dealt with, too. The most important sub-fields are: traffic connections, energy supply, external communications, foundation, building measures. (UA) [de

  2. Making green infrastructure healthier infrastructure

    Directory of Open Access Journals (Sweden)

    Mare Lõhmus

    2015-11-01

    Full Text Available Increasing urban green and blue structure is often pointed out to be critical for sustainable development and climate change adaptation, which has led to the rapid expansion of greening activities in cities throughout the world. This process is likely to have a direct impact on the citizens’ quality of life and public health. However, alongside numerous benefits, green and blue infrastructure also has the potential to create unexpected, undesirable side-effects for health. This paper considers several potential harmful public health effects that might result from increased urban biodiversity, urban bodies of water, and urban tree cover projects. It does so with the intent of improving awareness and motivating preventive measures when designing and initiating such projects. Although biodiversity has been found to be associated with physiological benefits for humans in several studies, efforts to increase the biodiversity of urban environments may also promote the introduction and survival of vector or host organisms for infectious pathogens with resulting spread of a variety of diseases. In addition, more green connectivity in urban areas may potentiate the role of rats and ticks in the spread of infectious diseases. Bodies of water and wetlands play a crucial role in the urban climate adaptation and mitigation process. However, they also provide habitats for mosquitoes and toxic algal blooms. Finally, increasing urban green space may also adversely affect citizens allergic to pollen. Increased awareness of the potential hazards of urban green and blue infrastructure should not be a reason to stop or scale back projects. Instead, incorporating public health awareness and interventions into urban planning at the earliest stages can help ensure that green and blue infrastructure achieves its full potential for health promotion.

  3. Open Polar Server (OPS): An Open Source Infrastructure for the Cryosphere Community

    Directory of Open Access Journals (Sweden)

    Weibo Liu

    2016-03-01

    Full Text Available The Center for Remote Sensing of Ice Sheets (CReSIS) at the University of Kansas has collected approximately 1000 terabytes (TB) of radar depth sounding data over the Arctic and Antarctic ice sheets since 1993 in an effort to map the thickness of the ice sheets and ultimately understand the impacts of climate change and sea level rise. In addition to data collection, the storage, management, and public distribution of the dataset are also primary roles of CReSIS. The Open Polar Server (OPS) project developed a free and open source infrastructure to store, manage, analyze, and distribute the data collected by CReSIS in an effort to replace its current data storage and distribution approach. The OPS infrastructure includes a spatial database management system (DBMS), map and web server, JavaScript geoportal, and MATLAB application programming interface (API) for the inclusion of data created by the cryosphere community. Open source software including GeoServer, PostgreSQL, PostGIS, OpenLayers, ExtJS, GeoExt and others are used to build a system that modernizes the CReSIS data distribution for the entire cryosphere community and creates a flexible platform for future development. Usability analysis demonstrates that the OPS infrastructure provides an improved end user experience. In addition, interpolating glacier topography is provided as an application example of the system.
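
    As a rough feel for how such a PostGIS-backed system is queried, consider this hedged sketch using psycopg2; the connection parameters, table, and column names are hypothetical, not the actual OPS schema.

        import psycopg2  # PostgreSQL driver; assumes a PostGIS-enabled database

        conn = psycopg2.connect(host="localhost", dbname="ops", user="reader", password="...")
        with conn, conn.cursor() as cur:
            # Hypothetical table of radar depth-sounding points with measured ice thickness.
            cur.execute("""
                SELECT ST_AsText(geom), thickness_m
                FROM radar_points
                WHERE ST_DWithin(geom::geography,
                                 ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
                                 %s)        -- search radius in meters
                LIMIT 5;
            """, (-45.0, 70.0, 50000))
            for wkt, thickness in cur.fetchall():
                print(wkt, thickness)
        conn.close()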

  4. New infrastructures, new landscapes

    Directory of Open Access Journals (Sweden)

    Chiara Nifosì

    2014-06-01

    Full Text Available New infrastructures, new landscapes. Abstract: The paper discusses a recent Italian project built on a common premise: the relevance of the existing maritime landscape as a non-negotiable value. The study discussed in detail is a feasibility study for the new port in Monfalcone. National infrastructural policies emphasize competitiveness and connection as a central issue in the cultural, economic and political development of communities, based on networks and system development along the passageways that make up the European infrastructural armour; the two are considered at the same time as cause and effect of "territorialisation". These two views are obviously mutually dependent. It is hard to imagine strong attractiveness outside the network, and being part of the latter encourages competitiveness. Nonetheless, this has proved to be conflictual when landscape values and the related attractiveness are considered. The case study project presented pursues the ambition to promote a new approach to realizing large infrastructures; its double role is to improve connectivity and to generate a lasting and positive impact on the local regions. It deals with issues of inter-modality and the construction of nodes and lines connecting Europe and its markets. Inverting the usual approach, which considers the landscape project as a way to mitigate or compensate for the infrastructure, the goal is to succeed in realizing large infrastructural works by conceiving them as an occasion to reinterpret a region or, as extraordinary opportunities, to build new landscapes. The strategy proposed consists in achieving structural images based on the reinforcement of the environmental and historical-landscape systems. Starting from the reinterpretation of the local maritime context and resources, it is possible not just to preserve the attractiveness of a specific landscape but also to conceive infrastructure in a more efficient way.

  5. Coupling Sensing Hardware with Data Interrogation Software for Structural Health Monitoring

    Directory of Open Access Journals (Sweden)

    Charles R. Farrar

    2006-01-01

    Full Text Available The process of implementing a damage detection strategy for aerospace, civil and mechanical engineering infrastructure is referred to as structural health monitoring (SHM). The authors' approach is to address the SHM problem in the context of a statistical pattern recognition paradigm. In this paradigm, the process can be broken down into four parts: (1) Operational Evaluation, (2) Data Acquisition and Cleansing, (3) Feature Extraction and Data Compression, and (4) Statistical Model Development for Feature Discrimination. These processes must be implemented through hardware or software and, in general, some combination of these two approaches will be used. This paper will discuss each portion of the SHM process with particular emphasis on the coupling of a general purpose data interrogation software package for structural health monitoring with a modular wireless sensing and processing platform. More specifically, this paper will address the need to take an integrated hardware/software approach to developing SHM solutions.
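
    Steps (3) and (4) of the paradigm can be illustrated with a toy example: extract a damage-sensitive feature (here simply the signal RMS) and discriminate it against a healthy baseline distribution; the data and the 3-sigma threshold are illustrative only.

        import numpy as np

        def rms(signal):
            """Root-mean-square amplitude: a simple damage-sensitive feature."""
            return np.sqrt(np.mean(np.square(signal)))

        rng = np.random.default_rng(1)
        baseline = [rms(rng.normal(0, 1.0, 1024)) for _ in range(100)]  # healthy condition
        mu, sigma = np.mean(baseline), np.std(baseline)

        new_signal = rng.normal(0, 1.6, 1024)   # amplified response, e.g. after damage
        feature = rms(new_signal)

        # Statistical model for feature discrimination: flag 3-sigma outliers.
        if abs(feature - mu) > 3 * sigma:
            print(f"feature {feature:.2f} is an outlier -> possible damage")
        else:
            print(f"feature {feature:.2f} within healthy baseline")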

  6. CERN printing infrastructure

    International Nuclear Information System (INIS)

    Otto, R; Sucik, J

    2008-01-01

    For many years CERN had a very sophisticated print server infrastructure [13] which supported several different protocols (AppleTalk, IPX and TCP/IP) and many different printing standards. Today's situation differs a lot: we have a much more homogenous network infrastructure, where TCP/IP is used everywhere and we have less printer models, which almost all work using current standards (i.e. they all provide PostScript drivers). This change gave us the possibility to review the printing architecture aiming at simplifying the infrastructure in order to achieve full automation of the service. The new infrastructure offers both: LPD service exposing print queues to Linux and Mac OS X computers and native printing for Windows based clients. The printer driver distribution is automatic and native on Windows and automated by custom mechanisms on Linux, where the appropriate Foomatic drivers are configured. Also the process of printer registration and queue creation is completely automated following the printer registration in the network database. At the end of 2006 we have moved all (∼1200) CERN printers and all users' connections at CERN to the new service. This paper will describe the new architecture and summarize the process of migration

  7. Eucalyptus: an open-source cloud computing infrastructure

    International Nuclear Information System (INIS)

    Nurmi, Daniel; Wolski, Rich; Grzegorczyk, Chris; Obertelli, Graziano; Soman, Sunil; Youseff, Lamia; Zagorodnov, Dmitrii

    2009-01-01

    Utility computing, elastic computing, and cloud computing are all terms that refer to the concept of dynamically provisioning processing time and storage space from a ubiquitous 'cloud' of computational resources. Such systems allow users to acquire and release the resources on demand and provide ready access to data from processing elements, while relegating the physical location and exact parameters of the resources. Over the past few years, such systems have become increasingly popular, but nearly all current cloud computing offerings are either proprietary or depend upon software infrastructure that is invisible to the research community. In this work, we present Eucalyptus, an open-source software implementation of cloud computing that utilizes compute resources that are typically available to researchers, such as clusters and workstation farms. In order to foster community research exploration of cloud computing systems, the design of Eucalyptus emphasizes modularity, allowing researchers to experiment with their own security, scalability, scheduling, and interface implementations. In this paper, we outline the design of Eucalyptus, describe our own implementations of the modular system components, and provide results from experiments that measure performance and scalability of a Eucalyptus installation currently deployed for public use. The main contribution of our work is the presentation of the first research-oriented open-source cloud computing system focused on enabling methodical investigations into the programming, administration, and deployment of systems exploring this novel distributed computing model.
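
    Since Eucalyptus exposes an EC2-compatible interface, an instance can in principle be requested with a standard client; the endpoint URL, credentials, and image ID below are placeholders, not a real deployment.

        import boto3  # works against EC2-compatible endpoints such as a Eucalyptus cloud

        ec2 = boto3.client(
            "ec2",
            endpoint_url="https://eucalyptus.example.org:8773/services/compute",  # placeholder
            region_name="eucalyptus",
            aws_access_key_id="AKIA...",          # placeholder credentials
            aws_secret_access_key="...",
        )

        # Request a single small instance from a hypothetical machine image.
        resp = ec2.run_instances(ImageId="emi-12345678", MinCount=1, MaxCount=1,
                                 InstanceType="m1.small")
        print(resp["Instances"][0]["InstanceId"])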

  8. EV Charging Infrastructure Roadmap

    International Nuclear Information System (INIS)

    Karner, Donald; Garetson, Thomas; Francfort, Jim

    2016-01-01

    As highlighted in the U.S. Department of Energy's EV Everywhere Grand Challenge, vehicle technology is advancing toward an objective to “… produce plug-in electric vehicles that are as affordable and convenient for the average American family as today's gasoline-powered vehicles …” [1] by developing more efficient drivetrains, greater battery energy storage per dollar, and lighter-weight vehicle components and construction. With this technology advancement and improved vehicle performance, the objective for charging infrastructure is to promote vehicle adoption and maximize the number of electric miles driven. The EV Everywhere Charging Infrastructure Roadmap (hereafter referred to as Roadmap) looks forward and assumes that the technical challenges and vehicle performance improvements set forth in the EV Everywhere Grand Challenge will be met. The Roadmap identifies and prioritizes deployment of charging infrastructure in support of this charging infrastructure objective for the EV Everywhere Grand Challenge.

  9. Infrastructure Gap in South Asia: Inequality of Access to Infrastructure Services

    OpenAIRE

    Biller, Dan; Andrés, Luis; Herrera Dappe, Matías

    2014-01-01

    The South Asia region is home to the largest pool of individuals living under the poverty line, coupled with a fast-growing population. The importance of access to basic infrastructure services for welfare and the quality of life is clear. Yet the South Asia region's rates of access to infrastructure (sanitation, electricity, telecom, and transport) are closer to those of Sub-Saharan Africa...

  10. Transport Infrastructure Slot Allocation

    NARCIS (Netherlands)

    Koolstra, K.

    2005-01-01

    In this thesis, transport infrastructure slot allocation has been studied, focusing on selection slot allocation, i.e. on longer-term slot allocation decisions determining the traffic patterns served by infrastructure bottlenecks, rather than timetable-related slot allocation problems. The

  11. The Satellite Data Thematic Core Service within the EPOS Research Infrastructure

    Science.gov (United States)

    Manunta, Michele; Casu, Francesco; Zinno, Ivana; De Luca, Claudio; Buonanno, Sabatino; Zeni, Giovanni; Wright, Tim; Hooper, Andy; Diament, Michel; Ostanciaux, Emilie; Mandea, Mioara; Walter, Thomas; Maccaferri, Francesco; Fernandez, Josè; Stramondo, Salvatore; Bignami, Christian; Bally, Philippe; Pinto, Salvatore; Marin, Alessandro; Cuomo, Antonio

    2017-04-01

    EPOS, the European Plate Observing System, is a long-term plan to facilitate the integrated use of data, data products, software and services, available from distributed Research Infrastructures (RI), for solid Earth science in Europe. Indeed, EPOS integrates a large number of existing European RIs belonging to several fields of the Earth science, from seismology to geodesy, near fault and volcanic observatories as well as anthropogenic hazards. The EPOS vision is that the integration of the existing national and trans-national research infrastructures will increase access and use of the multidisciplinary data recorded by the solid Earth monitoring networks, acquired in laboratory experiments and/or produced by computational simulations. The establishment of EPOS will foster the interoperability of products and services in the Earth science field to a worldwide community of users. Accordingly, the EPOS aim is to integrate the diverse and advanced European Research Infrastructures for solid Earth science, and build on new e-science opportunities to monitor and understand the dynamic and complex solid-Earth System. One of the EPOS Thematic Core Services (TCS), referred to as Satellite Data, aims at developing, implementing and deploying advanced satellite data products and services, mainly based on Copernicus data (namely Sentinel acquisitions), for the Earth science community. This work intends to present the technological enhancements, fostered by EPOS, to deploy effective satellite services in a harmonized and integrated way. In particular, the Satellite Data TCS will deploy five services, EPOSAR, GDM, COMET, 3D-Def and MOD, which are mainly based on the exploitation of SAR data acquired by the Sentinel-1 constellation and designed to provide information on Earth surface displacements. In particular, the planned services will provide both advanced DInSAR products (deformation maps, velocity maps, deformation time series) and value-added measurements (source model

  12. Physical resources and infrastructure

    NARCIS (Netherlands)

    Foeken, D.W.J.; Hoorweg, J.; Foeken, D.W.J.; Obudho, R.A.

    2000-01-01

    This chapter describes the main physical characteristics as well as the main physical and social infrastructure features of Kenya's coastal region. Physical resources include relief, soils, rainfall, agro-ecological zones and natural resources. Aspects of the physical infrastructure discussed are

  13. Carbon emissions of infrastructure development.

    Science.gov (United States)

    Müller, Daniel B; Liu, Gang; Løvik, Amund N; Modaresi, Roja; Pauliuk, Stefan; Steinhoff, Franciska S; Brattebø, Helge

    2013-10-15

    Identifying strategies for reconciling human development and climate change mitigation requires an adequate understanding of how infrastructures contribute to well-being and greenhouse gas emissions. While direct emissions from infrastructure use are well-known, information about indirect emissions from their construction is highly fragmented. Here, we estimated the carbon footprint of the existing global infrastructure stock in 2008, assuming current technologies, to be 122 (-20/+15) Gt CO2. The average per-capita carbon footprint of infrastructures in industrialized countries (53 (± 6) t CO2) was approximately 5 times larger than that of developing countries (10 (± 1) t CO2). A globalization of Western infrastructure stocks using current technologies would cause approximately 350 Gt CO2 from materials production, which corresponds to about 35-60% of the remaining carbon budget available until 2050 if the average temperature increase is to be limited to 2 °C, and could thus compromise the 2 °C target. A promising but poorly explored mitigation option is to build new settlements using less emissions-intensive materials, for example by urban design; however, this strategy is constrained by a lack of bottom-up data on material stocks in infrastructures. Infrastructure development must be considered in post-Kyoto climate change agreements if developing countries are to participate on a fair basis.
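
    The 35-60% figure can be sanity-checked with simple arithmetic, as the short calculation below shows: it implies a remaining budget of roughly 580-1000 Gt CO2.

        materials_emissions = 350.0         # Gt CO2 from globalizing Western infrastructure stocks
        share_low, share_high = 0.35, 0.60  # stated share of the remaining carbon budget

        # Implied remaining budget range (Gt CO2) consistent with those shares.
        budget_high = materials_emissions / share_low   # 350 / 0.35 = 1000
        budget_low = materials_emissions / share_high   # 350 / 0.60 ~= 583
        print(f"implied remaining budget: {budget_low:.0f}-{budget_high:.0f} Gt CO2")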

  14. EV Charging Infrastructure Roadmap

    Energy Technology Data Exchange (ETDEWEB)

    Karner, Donald [Electric Transportation Inc., Rogers, AR (United States); Garetson, Thomas [Electric Transportation Inc., Rogers, AR (United States); Francfort, Jim [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-08-01

    As highlighted in the U.S. Department of Energy’s EV Everywhere Grand Challenge, vehicle technology is advancing toward an objective to “… produce plug-in electric vehicles that are as affordable and convenient for the average American family as today’s gasoline-powered vehicles …” [1] by developing more efficient drivetrains, greater battery energy storage per dollar, and lighter-weight vehicle components and construction. With this technology advancement and improved vehicle performance, the objective for charging infrastructure is to promote vehicle adoption and maximize the number of electric miles driven. The EV Everywhere Charging Infrastructure Roadmap (hereafter referred to as Roadmap) looks forward and assumes that the technical challenges and vehicle performance improvements set forth in the EV Everywhere Grand Challenge will be met. The Roadmap identifies and prioritizes deployment of charging infrastructure in support of this charging infrastructure objective for the EV Everywhere Grand Challenge

  15. Developing an infrastructure index : phase I.

    Science.gov (United States)

    2010-04-01

    Over the past decade the American Society of Civil Engineers has used the Infrastructure Report Card to raise awareness of infrastructure issues. Aging and deteriorating infrastructure has recently been highlighted in the popular media. However, ...

  16. Infrastructure Engineering and Deployment Division

    Data.gov (United States)

    Federal Laboratory Consortium — Volpe's Infrastructure Engineering and Deployment Division advances transportation innovation by being leaders in infrastructure technology, including vehicles and...

  17. Improving the quality of EMI Releases by leveraging the EMI Testing Infrastructure

    International Nuclear Information System (INIS)

    Aiftimiei, C; Ceccanti, A; Dongiovanni, D; Giacomini, F; Meglio, A Di

    2012-01-01

    What is an EMI Release? What is its life cycle? How is its quality assured through a continuous integration and large scale acceptance testing? These are the main questions that this article will answer, by presenting the EMI release management process with emphasis on the role played by the Testing Infrastructure in improving the quality of the middleware provided by the project. The European Middleware Initiative (EMI) is a close collaboration of four major European technology providers: ARC, gLite, UNICORE and dCache. Its main objective is to deliver a consolidated set of components for deployment in EGI (as part of the Unified Middleware Distribution, UMD), PRACE and other DCIs. The harmonized set of EMI components thus enables the interoperability and integration between Grids. EMI aims at creating an effective environment that satisfies the requirements of the scientific communities relying on it. The EMI distribution is organized in periodic major releases whose development and maintenance follow a 5-phase yearly cycle: i) requirements collection and analysis; ii) development and test planning; iii) software development, testing and certification; iv) release certification and validation and v) release and maintenance. In this article we present in detail the implementation of operational and infrastructural resources supporting the certification and validation phase of the release. The main goal of this phase is to harmonize into a single release the strongly inter-dependent products coming from various development teams through parallel certification paths. To achieve this goal the continuous integration and large scale acceptance testing performed on the EMI Testing Infrastructure plays a key role. The purpose of this infrastructure is to provide a system where both the production and the release candidate product versions are deployed. On this system inter-component testing by different product team testers can concurrently take place. The Testing

  18. ETICS meta-data software editing - from check out to commit operations

    International Nuclear Information System (INIS)

    Begin, M-E; Sancho, G D-A; Ronco, S D; Gentilini, M; Ronchieri, E; Selmi, M

    2008-01-01

    People involved in modular projects need to improve the build software process, planning the correct execution order and detecting circular dependencies. The lack of suitable tools may cause delays in the development, deployment and maintenance of the software. Experience in such projects has shown that the use of version control and build systems is not able to support the development of the software efficiently, due to a large number of errors each of which causes the breaking of the build process. Common causes of errors are for example the adoption of new libraries, libraries incompatibility, the extension of the current project in order to support new software modules. In this paper, we describe a possible solution implemented in ETICS, an integrated infrastructure for the automated configuration, build and test of Grid and distributed software. ETICS has defined meta-data software abstractions, from which it is possible to download, build and test software projects, setting for instance dependencies, environment variables and properties. Furthermore, the meta-data information is managed by ETICS reflecting the version control system philosophy, because of the existence of a meta-data repository and the handling of a list of operations, such as check out and commit. All the information related to a specific piece of software is stored in the repository only when it is considered to be correct. By means of this solution, we introduce a sort of flexibility inside the ETICS system, allowing users to work according to their needs. Moreover, by introducing this functionality, ETICS will behave like a version control system for the management of the meta-data
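
    The commit discipline described - metadata persisted only once it is considered correct - can be caricatured as follows; the class and its validation rule are hypothetical, not the actual ETICS data model.

        class MetadataRepository:
            """Toy store with version-control-style check out / commit semantics."""

            def __init__(self):
                self._store = {}       # committed (trusted) metadata, keyed by project name

            def checkout(self, project):
                # Hand back a mutable working copy; the stored version stays untouched.
                return dict(self._store.get(project, {"name": project}))

            def commit(self, project, working_copy):
                # Persist only metadata that passes validation (hypothetical rule).
                if "dependencies" not in working_copy:
                    raise ValueError("refusing to commit: dependencies not declared")
                self._store[project] = dict(working_copy)

        repo = MetadataRepository()
        wc = repo.checkout("org.glite.example")
        wc["dependencies"] = ["log4j"]
        repo.commit("org.glite.example", wc)
        print(repo.checkout("org.glite.example"))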

  19. Building an evaluation infrastructure

    DEFF Research Database (Denmark)

    Brandrup, Morten; Østergaard, Kija Lin

    Infrastructuring does not happen by itself; it must be supported. In this paper, we present a feedback mechanism implemented as a smartphone-based application, inspired by the concept of infrastructure probes, which supports the in situ elicitation of feedback. This is incorporated within an eval...

  20. Flowscapes: Designing infrastructure as landscape

    OpenAIRE

    Nijhuis, S.; Jauslin, D.T.; Van der Hoeven, F.D.

    2015-01-01

    Social, cultural and technological developments of our society are demanding a fundamental review of the planning and design of its landscapes and infrastructures, in particular in relation to environmental issues and sustainability. Transportation, green and water infrastructures are important agents that facilitate processes that shape the built environment and its contemporary landscapes. With movement and flows at the core, these landscape infrastructures facilitate aesthetic, functional,...

  1. A Modular Repository-based Infrastructure for Simulation Model Storage and Execution Support in the Context of In Silico Oncology and In Silico Medicine.

    Science.gov (United States)

    Christodoulou, Nikolaos A; Tousert, Nikolaos E; Georgiadi, Eleni Ch; Argyri, Katerina D; Misichroni, Fay D; Stamatakos, Georgios S

    2016-01-01

    The plethora of available disease prediction models and the ongoing process of their application into clinical practice - following their clinical validation - have created new needs regarding their efficient handling and exploitation. Consolidation of software implementations, descriptive information, and supportive tools in a single place, offering persistent storage as well as proper management of execution results, is a priority, especially with respect to the needs of large healthcare providers. At the same time, modelers should be able to access these storage facilities under special rights, in order to upgrade and maintain their work. In addition, the end users should be provided with all the necessary interfaces for model execution and effortless result retrieval. We therefore propose a software infrastructure, based on a tool, model and data repository that handles the storage of models and pertinent execution-related data, along with functionalities for execution management, communication with third-party applications, user-friendly interfaces to access and use the infrastructure with minimal effort and basic security features.

  2. Chef infrastructure automation cookbook

    CERN Document Server

    Marschall, Matthias

    2013-01-01

    Chef Infrastructure Automation Cookbook contains practical recipes on everything you will need to automate your infrastructure using Chef. The book is packed with illustrated code examples to automate your server and cloud infrastructure. The book first shows you the simplest way to achieve a certain task. Then it explains every step in detail, so that you can build your knowledge about how things work. Eventually, the book shows you additional things to consider for each approach. That way, you can learn step-by-step and build profound knowledge on how to go about your configuration management.

  3. Frameworks for Performing on Cloud Automated Software Testing Using Swarm Intelligence Algorithm: Brief Survey

    Directory of Open Access Journals (Sweden)

    Mohammad Hossain

    2018-04-01

    Full Text Available This paper surveys cloud-based automated software testing tools that are able to perform black-box testing, white-box testing, as well as unit and integration testing as a whole. In this paper, we discuss a few of the available automated software testing frameworks on the cloud. These frameworks are found to be more efficient and cost-effective because they execute test suites over a distributed cloud infrastructure. One framework's effectiveness was attributed to having a module that accepts manual test cases from users and prioritizes them accordingly. Software testing, in general, accounts for as much as 50% of the total effort of a software development project. To lessen this effort, one of the frameworks discussed in this paper used swarm intelligence algorithms. It uses the Ant Colony Algorithm for complete path coverage to minimize time and Bee Colony Optimization (BCO) for regression testing to ensure backward compatibility.
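
    A heavily simplified sketch of the pheromone mechanism behind ant-colony-style test prioritization follows: tests that keep revealing faults accumulate pheromone and move to the front of the suite. The update rule, rates, and fault model are invented for illustration.

        import random

        tests = ["t1", "t2", "t3", "t4"]
        pheromone = {t: 1.0 for t in tests}
        EVAPORATION, DEPOSIT = 0.8, 1.0

        def run_test(name):
            # Stand-in for a real test run: pretend t3 is the fault-revealing test.
            return name == "t3" and random.random() < 0.7  # True means a fault was found

        for _ in range(20):                      # repeated regression cycles
            for t in tests:
                pheromone[t] *= EVAPORATION      # evaporation forgets stale history
                if run_test(t):
                    pheromone[t] += DEPOSIT      # reinforce fault-revealing tests

        order = sorted(tests, key=pheromone.get, reverse=True)
        print("suggested execution order:", order)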

  4. Geographic Hotspots of Critical National Infrastructure.

    Science.gov (United States)

    Thacker, Scott; Barr, Stuart; Pant, Raghav; Hall, Jim W; Alderson, David

    2017-12-01

    Failure of critical national infrastructures can result in major disruptions to society and the economy. Understanding the criticality of individual assets and the geographic areas in which they are located is essential for targeting investments to reduce risks and enhance system resilience. Within this study we provide new insights into the criticality of real-life critical infrastructure networks by integrating high-resolution data on infrastructure location, connectivity, interdependence, and usage. We propose a metric of infrastructure criticality in terms of the number of users who may be directly or indirectly disrupted by the failure of physically interdependent infrastructures. Kernel density estimation is used to integrate spatially discrete criticality values associated with individual infrastructure assets, producing a continuous surface from which statistically significant infrastructure criticality hotspots are identified. We develop a comprehensive and unique national-scale demonstration for England and Wales that utilizes previously unavailable data from the energy, transport, water, waste, and digital communications sectors. The testing of 200,000 failure scenarios identifies that hotspots are typically located around the periphery of urban areas where there are large facilities upon which many users depend or where several critical infrastructures are concentrated in one location. © 2017 Society for Risk Analysis.
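
    A weighted kernel density estimate is the core of the hotspot construction; the sketch below, with synthetic asset locations and criticality weights standing in for disrupted-user counts, shows the idea.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(2)
        assets = rng.uniform(0, 100, size=(2, 200))   # x, y locations of 200 assets
        criticality = rng.pareto(2.0, 200) + 1        # heavy-tailed per-asset user counts

        # Weighted KDE turns discrete criticality values into a continuous surface.
        kde = gaussian_kde(assets, weights=criticality)

        # Evaluate on a coarse grid and report the densest cell as a candidate hotspot.
        xs, ys = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
        density = kde(np.vstack([xs.ravel(), ys.ravel()]))
        i = np.argmax(density)
        print("hotspot near:", (round(xs.ravel()[i], 1), round(ys.ravel()[i], 1)))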

  5. CERN printing infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Otto, R; Sucik, J [CERN, Geneva (Switzerland)], E-mail: Rafal.Otto@cern.ch, E-mail: Juraj.Sucik@cern.ch

    2008-07-15

    For many years CERN had a very sophisticated print server infrastructure [13] which supported several different protocols (AppleTalk, IPX and TCP/IP) and many different printing standards. Today's situation differs a lot: we have a much more homogenous network infrastructure, where TCP/IP is used everywhere and we have less printer models, which almost all work using current standards (i.e. they all provide PostScript drivers). This change gave us the possibility to review the printing architecture aiming at simplifying the infrastructure in order to achieve full automation of the service. The new infrastructure offers both: LPD service exposing print queues to Linux and Mac OS X computers and native printing for Windows based clients. The printer driver distribution is automatic and native on Windows and automated by custom mechanisms on Linux, where the appropriate Foomatic drivers are configured. Also the process of printer registration and queue creation is completely automated following the printer registration in the network database. At the end of 2006 we have moved all (∼1200) CERN printers and all users' connections at CERN to the new service. This paper will describe the new architecture and summarize the process of migration.

  6. Model for Railway Infrastructure Management Organization

    Directory of Open Access Journals (Sweden)

    Gordan Stojić

    2012-03-01

    Full Text Available The provision of appropriate quality rail services has an important role in terms of railway infrastructure: quality of infrastructure maintenance, regulation of railway traffic, line capacity, speed, safety, train station organization, the allowable line loads and other infrastructure parameters. The analysis of experiences in transforming railway systems points to the conclusion that there is no unique solution in terms of the choice of institutional rail infrastructure management modes, although more than nineteen years have passed since the beginning of the implementation of Directive 91/440/EEC. Depending on the approach to the process of restructuring the national railway company, the adopted regulations and caution in their implementation, the existence or absence of a clearly defined transport strategy, and the willingness to liberalize the transport market, there are several different ways of institutional management of railway infrastructure. A hybrid model for the selection of modes of institutional rail infrastructure management was developed based on the theory of artificial intelligence, the theory of fuzzy sets and the theory of multicriteria optimization. Keywords: management, railway infrastructure, organizational structure, hybrid model

  7. Data Centre Infrastructure & Data Storage @ Facebook

    CERN Multimedia

    CERN. Geneva; Garson, Matt; Kauffman, Mike

    2018-01-01

    Several speakers from the Facebook company will present their take on the infrastructure of their Data Center and Storage facilities, as follows: 10:00 - Facebook Data Center Infrastructure, by Delfina Eberly, Mike Kauffman and Veerendra Mulay Insight into how Facebook thinks about data center design, including electrical and cooling systems, and the technology and tooling used to manage data centers. 11:00 - Storage at Facebook, by Matt Garson An overview of Facebook infrastructure, focusing on different storage systems, in particular photo/video storage and storage for data analytics. About the speakers Mike Kauffman, Director, Data Center Site Engineering Delfina Eberly, Infrastructure, Site Services Matt Garson, Storage at Facebook Veerendra Mulay, Infrastructure

  8. Trustworthiness Measurement Algorithm for TWfMS Based on Software Behaviour Entropy

    Directory of Open Access Journals (Sweden)

    Qiang Han

    2018-03-01

    Full Text Available As the virtual mirror of complex real-time business processes of organisations’ underlying information systems, the workflow management system (WfMS) has emerged in recent decades as a new self-autonomous paradigm in the open, dynamic, distributed computing environment. In order to construct a trustworthy workflow management system (TWfMS), the design of a software behaviour trustworthiness measurement algorithm is an urgent task for researchers. Accompanying the trustworthiness mechanism, the measurement algorithm, which must cope with uncertain software behaviour trustworthiness information of the WfMS, should be provided as an infrastructure. Based on the framework presented in our research prior to this paper, we firstly introduce a formal model for the WfMS trustworthiness measurement, with the main property reasoning based on calculus operators. Secondly, this paper proposes a novel measurement algorithm derived from the software behaviour entropy of the calculus operators, through the principle of maximum entropy (POME) and data mining methods. Thirdly, the trustworthiness measurement algorithm for incomplete software behaviour tests and runtime information is discussed and compared by means of a detailed explanation. Finally, we provide conclusions and discuss certain future research areas of the TWfMS.
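
    The entropy at the heart of the algorithm can be grounded in the standard Shannon formula over observed behaviour events; the toy workflow trace and the normalization are illustrative, not the paper's calculus operators.

        from collections import Counter
        from math import log2

        def behaviour_entropy(events):
            """Shannon entropy (bits) of an observed event stream."""
            counts = Counter(events)
            n = len(events)
            return -sum((c / n) * log2(c / n) for c in counts.values())

        trace = ["start", "assign", "approve", "assign", "approve", "archive"]
        h = behaviour_entropy(trace)
        h_max = log2(len(set(trace)))   # a uniform distribution maximizes entropy
        print(f"entropy={h:.3f} bits, normalized={h / h_max:.3f}")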

  9. End-to-end Information Flow Security Model for Software-Defined Networks

    Directory of Open Access Journals (Sweden)

    D. Ju. Chaly

    2015-01-01

    Full Text Available Software-defined networks (SDNs) are a novel paradigm of networking which became an enabler technology for many modern applications such as network virtualization, policy-based access control and many others. Software can provide flexibility and fast-paced innovation in networking; however, it has a complex nature. In this connection there is an increasing need for means of assuring its correctness and security. Abstract models for SDN can tackle these challenges. This paper addresses confidentiality and some integrity properties of SDNs. These are critical properties for multi-tenant SDN environments, since the network management software must ensure that no confidential data of one tenant are leaked to other tenants in spite of using the same physical infrastructure. We define a notion of end-to-end security in the context of software-defined networks and propose a semantic model where reasoning is possible about confidentiality, and we can check that confidential information flows do not interfere with non-confidential ones. We show that the model can be extended in order to reason about networks with secure and insecure links which can arise, for example, in wireless environments. The article is published in the authors' wording.
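
    One minimal way to phrase the confidentiality check is as a label-based rule: confidential data must not leave its tenant's slice of the shared infrastructure. The labels and rule below are an invented simplification of the paper's semantic model.

        # Each endpoint carries a (tenant, level) label; flows are vetted before installation.
        def flow_allowed(src_label, dst_label):
            src_tenant, src_level = src_label
            dst_tenant, dst_level = dst_label
            # Confidential data must stay within its own tenant's slice.
            if src_level == "confidential" and src_tenant != dst_tenant:
                return False
            return True

        print(flow_allowed(("tenantA", "confidential"), ("tenantA", "confidential")))  # True
        print(flow_allowed(("tenantA", "confidential"), ("tenantB", "public")))        # False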

  10. TCIA Secure Cyber Critical Infrastructure Modernization.

    Energy Technology Data Exchange (ETDEWEB)

    Keliiaa, Curtis M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    The Sandia National Laboratories (Sandia Labs) tribal cyber infrastructure assurance initiative was developed in response to growing national cybersecurity concerns in the sixteen Department of Homeland Security (DHS) defined critical infrastructure sectors [1]. Technical assistance is provided for the secure modernization of critical infrastructure and key resources from a cyber-ecosystem perspective with an emphasis on enhanced security, resilience, and protection. Our purpose is to address national critical infrastructure challenges as a shared responsibility.

  11. High available and fault tolerant mobile communications infrastructure

    DEFF Research Database (Denmark)

    Beiroumi, Mohammad Zib

    2006-01-01

    using rollback or replication techniques inapplicable. This dissertation presents a novel failure recovery approach based on a behavioral model of the communication protocols. The new recovery method is able to deal with software and hardware faults and is particularly suitable for mobile communications...... as it is the case for many recovery techniques. In addition, the method does not require any modification to mobile clients. The Communicating Extended Finite State Machine (CEFSM) is used to model the behavior of the infrastructure applications. The model based recovery scheme is integrated in the application...... and uses the client/server model to save the application state information during failure-free execution on a stable storage and retrieve them when needed during recovery. When and what information to be saved/retrieved is determined by the behavioral model of the application. To practically evaluate...
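
    The recovery idea described - saving state-machine state to stable storage during failure-free execution and restoring it after a crash - can be caricatured in a few lines; the states, events, and storage format are invented, not the dissertation's CEFSM model.

        import json

        class SessionMachine:
            """Toy extended finite state machine for a mobile client session."""
            TRANSITIONS = {("idle", "attach"): "attached", ("attached", "detach"): "idle"}

            def __init__(self, state="idle"):
                self.state = state

            def fire(self, event, stable_store):
                self.state = self.TRANSITIONS[(self.state, event)]
                # Save only at model-defined recovery points (here: every transition).
                stable_store["checkpoint"] = json.dumps({"state": self.state})

        store = {}
        m = SessionMachine()
        m.fire("attach", store)
        # ... crash: rebuild the machine from the last checkpoint on stable storage ...
        recovered = SessionMachine(**json.loads(store["checkpoint"]))
        print("recovered in state:", recovered.state)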

  12. Software Infrastructure for exploratory visualization and data analysis: past, present, and future

    International Nuclear Information System (INIS)

    Silva, C T; Freire, J

    2008-01-01

    Future advances in science depend on our ability to comprehend the vast amounts of data being produced and acquired, and scientific visualization is a key enabling technology in this endeavor. We posit that visualization should be better integrated with the data exploration process instead of being done after the fact - when all the science is done - simply to generate presentations of the findings. An important barrier to a wider adoption of visualization is complexity: the design of effective visualizations is a complex, multistage process that requires deep understanding of existing techniques, and how they relate to human cognition. We envision visualization software tools evolving into 'scientific discovery' environments that support the creative tasks in the discovery pipeline, from data acquisition and simulation to hypothesis testing and evaluation, and that enable the publication of results that can be reproduced and verified

  13. Data quality can make or break a research infrastructure

    Science.gov (United States)

    Pastorello, G.; Gunter, D.; Chu, H.; Christianson, D. S.; Trotta, C.; Canfora, E.; Faybishenko, B.; Cheah, Y. W.; Beekwilder, N.; Chan, S.; Dengel, S.; Keenan, T. F.; O'Brien, F.; Elbashandy, A.; Poindexter, C.; Humphrey, M.; Papale, D.; Agarwal, D.

    2017-12-01

    Research infrastructures (RIs) commonly support observational data provided by multiple, independent sources. Uniformity in the data distributed by such RIs is important in most applications, e.g., in comparative studies using data from two or more sources. Achieving uniformity in terms of data quality is challenging, especially considering that many data issues are unpredictable and cannot be detected until a first occurrence of the issue. As a result, many data quality control activities within RIs require a manual, human-in-the-loop element, making quality control an expensive activity. Our motivating example is the FLUXNET2015 dataset - a collection of ecosystem-level carbon, water, and energy fluxes between land and atmosphere from over 200 sites around the world, some sites with over 20 years of data. About 90% of the human effort to create the dataset was spent on data-quality-related activities. Based on this experience, we have been working on solutions to increase the automation of data quality control procedures. Since it is nearly impossible to fully automate all quality related checks, we have been drawing from experience with techniques used in software development, which shares a few common constraints. In both managing scientific data and writing software, human time is a precious resource; code bases, like science datasets, can be large, complex, and full of errors; both scientific and software endeavors can be pursued by individuals, but collaborative teams can accomplish a lot more. The lucrative and fast-paced nature of the software industry fueled the creation of methods and tools to increase automation and productivity within these constraints. Issue tracking systems, methods for translating problems into automated tests, and powerful version control tools are a few examples. Terrestrial and aquatic ecosystems research relies heavily on many types of observational data. As the volume of data collected increases, ensuring data quality is becoming an unwieldy

  14. Optimally Reorganizing Navy Shore Infrastructure

    National Research Council Canada - National Science Library

    Kerman, Mitchell

    1997-01-01

    ...), but infrastructure reductions continue to lag force structure reductions. The United States Navy's recent initiatives to reduce its shore infrastructure costs include "regionalization", "outsourcing," and "homebasing...

  15. OPEN SOURCE SOFTWARE, FREE SOFTWARE?

    Directory of Open Access Journals (Sweden)

    Nur Aini Rakhmawati

    2006-01-01

    Full Text Available The entry into force of Indonesia's Intellectual Property Rights Law (HAKI) has given rise to a new alternative: using open source software. The use of open source software is spreading in line with current global issues in Information and Communication Technology (ICT). Several organizations and companies have begun to take open source software into consideration. There are many conceptions of what open source software is, ranging from free-of-charge software to unlicensed software. Not all of the claims about open source software are true; it is therefore necessary to introduce the concept of open source software, from its history and licenses to how to choose a license, as well as the considerations in selecting among existing open source software. Keywords: License, Open Source, HAKI

  16. ActivitySim: large-scale agent based activity generation for infrastructure simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gali, Emmanuel [Los Alamos National Laboratory; Eidenbenz, Stephan [Los Alamos National Laboratory; Mniszewski, Sue [Los Alamos National Laboratory; Cuellar, Leticia [Los Alamos National Laboratory; Teuscher, Christof [PORTLAND STATE UNIV

    2008-01-01

    The United States' Department of Homeland Security aims to model, simulate, and analyze critical infrastructure and its interdependencies across multiple sectors such as electric power, telecommunications, water distribution, transportation, etc. We introduce ActivitySim, an activity simulator for a population of millions of individual agents, each characterized by a set of demographic attributes based on US census data. ActivitySim generates daily schedules for each agent consisting of a sequence of activities, such as sleeping, shopping, and working, each scheduled at a geographic location, such as a business or private residence, that is appropriate for the activity type and for the personal situation of the agent. ActivitySim has been developed as part of a larger effort to understand the interdependencies among national infrastructure networks and the demand profiles that emerge from the different activities of individuals in baseline scenarios as well as emergency scenarios, such as hurricane evacuations. We present the scalable software engineering principles underlying ActivitySim, the socio-technical modeling paradigms that drive the activity generation, and proof-of-principle results for a scenario of 2.6 M agents in the Twin Cities, MN area.
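
    A toy version of such demographically driven schedule generation is sketched below. It is only a plausible illustration under invented names and rules (build_daily_schedule, the attribute keys, the 30% shopping probability), not ActivitySim's actual, calibrated algorithm.

        import random

        def build_daily_schedule(agent):
            """Generate a crude daily activity sequence from demographic attributes.

            `agent` is a dict of census-style attributes; the rules here are
            placeholders for the calibrated models a real generator would use.
            """
            schedule = [("sleep", "home", 0, 7)]          # (activity, location, start, end)
            if agent.get("employed"):
                schedule.append(("work", "business", 9, 17))
            if agent.get("age", 0) < 18:
                schedule.append(("school", "school", 8, 15))
            if random.random() < 0.3:                     # 30% chance of a shopping trip
                schedule.append(("shop", "retail", 18, 19))
            schedule.append(("sleep", "home", 22, 24))
            return schedule

        print(build_daily_schedule({"age": 34, "employed": True}))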

  17. Developing Sustainable Urban Water-Energy Infrastructures: Applying a Multi-Sectoral Social-Ecological-Infrastructural Systems (SEIS) Framework

    Science.gov (United States)

    Ramaswami, A.

    2016-12-01

    Urban infrastructure - broadly defined to include the systems that provide water, energy, food, shelter, transportation-communication, sanitation and green/public spaces in cities - has a tremendous impact on the environment and on human well-being (Ramaswami et al., 2016; Ramaswami et al., 2012). Aggregated globally, these sectors contribute 90% of global greenhouse gas (GHG) emissions and 96% of global water withdrawals, and urban infrastructure contributions to such impacts are beginning to dominate. Cities are therefore becoming the action arena for infrastructure transformations that can achieve high levels of service delivery while reducing environmental impacts and enhancing human well-being. Achieving sustainable urban infrastructure transitions requires information about the engineered infrastructure and its interaction with the natural (ecological-environmental) and social sub-systems. In this paper, we apply a multi-sector, multi-scalar Social-Ecological-Infrastructural Systems (SEIS) framework that describes the interactions among biophysical engineered infrastructures, the natural environment and the social system in a systems approach to inform urban infrastructure transformations. We apply the SEIS framework to inform water and energy sector transformations in cities to achieve environmental and human health benefits realized at multiple scales - local, regional and global. Local scales address pollution, health, well-being and inequity within the city; regional scales address regional pollution and scarcity, as well as supply risks in the water-energy sectors; global impacts include greenhouse gas emissions and climate impacts. Different actors shape infrastructure transitions, including households, businesses, and policy actors. We describe the development of novel cross-sectoral strategies at the water-energy nexus in cities, focusing on the water, waste and energy sectors, in a case study of Delhi, India. Ramaswami, A.; Russell, A.G.; Culligan, P.J.; Sharma, K

  18. A comprehensive Software Copy Protection and Digital Rights Management platform

    Directory of Open Access Journals (Sweden)

    Ayman Mohammad Bahaa-Eldin

    2014-09-01

    Full Text Available This article proposes a powerful and flexible system for Software Copy Protection (SCP) and Digital Rights Management (DRM) based on Public Key Infrastructure (PKI) standards. Software protection is achieved through a multi-phase methodology with both static and dynamic processing of the executable file. The system defeats most of the attacks and cracking techniques and makes sure that the protected software is never in a flat form, with a suitable portion of it always being encrypted during execution. A novel performance-tuning algorithm is proposed to lower the overhead of the protection process to its minimum depending on the software's dynamic execution behavior. All system calls to access resources and objects such as files and input/output devices are intercepted and encapsulated with secure rights management code to enforce the required license model. The system can be integrated with hardware authentication techniques (like dongles) and with Internet-based activation and DRM servers over the cloud. The system is flexible enough to apply any model of licensing, including state-based licenses such as expiration dates and numbers of trials. The use of a standard markup language (XrML) to describe the license makes it easier to apply new licensing operations like re-sale and content rental.
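
    The state-based license models mentioned (expiration dates, trial counts) reduce to simple checks at interception time. The sketch below is a hedged illustration with invented names (License, is_valid, consume_trial); it is not the article's system.

        from datetime import date

        class License:
            """Minimal state-based license: an expiry date plus a trial counter."""

            def __init__(self, expires, max_trials):
                self.expires = expires
                self.max_trials = max_trials
                self.used_trials = 0

            def is_valid(self):
                # Both conditions of the license model must hold.
                return date.today() <= self.expires and self.used_trials < self.max_trials

            def consume_trial(self):
                if not self.is_valid():
                    raise PermissionError("license expired or trials exhausted")
                self.used_trials += 1

        lic = License(expires=date(2030, 1, 1), max_trials=3)
        lic.consume_trial()   # each protected launch consumes one trial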

  19. Site Support Program Plan Infrastructure Program

    International Nuclear Information System (INIS)

    1995-01-01

    The Fiscal Year 1996 Infrastructure Program Site Support Program Plan addresses the mission objectives, workscope, work breakdown structures (WBS), management approach, and resource requirements for the Infrastructure Program. Attached to the plan are appendices that provide more detailed information associated with scope definition. The Hanford Site's infrastructure has served the Site for nearly 50 years during defense materials production. Now with the challenges of the new environmental cleanup mission, Hanford's infrastructure must meet current and future mission needs in a constrained budget environment, while complying with more stringent environmental, safety, and health regulations. The infrastructure requires upgrading, streamlining, and enhancement in order to successfully support the site mission of cleaning up the Site, research and development, and economic transition

  20. Clarkesville Green Infrastructure Implementation Strategy

    Science.gov (United States)

    The report outlines the 2012 technical assistance for Clarkesville, GA to develop a Green Infrastructure Implementation Strategy, which provides the basic building blocks for a green infrastructure plan:

  1. Railway infrastructure security

    CERN Document Server

    Sforza, Antonio; Vittorini, Valeria; Pragliola, Concetta

    2015-01-01

    This comprehensive monograph addresses crucial issues in the protection of railway systems, with the objective of enhancing the understanding of railway infrastructure security. Based on analyses by academics, technology providers, and railway operators, it explains how to assess terrorist and criminal threats, design countermeasures, and implement effective security strategies. In so doing, it draws upon a range of experiences from different countries in Europe and beyond. The book is the first to be devoted entirely to this subject. It will serve as a timely reminder of the attractiveness of the railway infrastructure system as a target for criminals and terrorists and, more importantly, as a valuable resource for stakeholders and professionals in the railway security field aiming to develop effective security based on a mix of methodological, technological, and organizational tools. Besides researchers and decision makers in the field, the book will appeal to students interested in critical infrastructur...

  2. Crossing the borders and the cultural gaps for educating PhDs in software engineering

    DEFF Research Database (Denmark)

    Sørensen, Lene Tolstrup; Knutas, Antti; Seffah, Ahmed

    2017-01-01

    PhDs and educators. While large universities and research centres have the required expertise and infrastructure to provide cost-effective training by research as well as to cover a wide spectrum of software engineering topics, the situation in small universities with limited resources...... is challenging. This is even more difficult for some countries where the discipline of software engineering is totally new, which is the case in emerging countries. This paper describes the Pathways to PhDs project funded by the European Commission. The long-term aim is to support the development, modernization...... and international visibility and excellence of higher education, namely education by research at the PhD level in Europe, while helping partner countries to develop new PhD programs and consolidate existing ones in the field of computing in the area of software engineering. This paper presents the creation...

  3. Network science, nonlinear science and infrastructure systems

    CERN Document Server

    2007-01-01

    Network Science, Nonlinear Science and Infrastructure Systems has been written by leading scholars in these areas. Its express purpose is to develop common theoretical underpinnings to better solve modern infrastructural problems. It is felt by many who work in these fields that many modern communication problems, ranging from transportation networks to telecommunications, Internet, supply chains, etc., are fundamentally infrastructure problems. Moreover, these infrastructure problems would benefit greatly from a confluence of theoretical and methodological work done in the areas of Network Science, Dynamical Systems and Nonlinear Science. This book is dedicated to the formulation of infrastructural tools that will better solve these types of infrastructural problems.

  4. Development of a Free-Flight Simulation Infrastructure

    Science.gov (United States)

    Miles, Eric S.; Wing, David J.; Davis, Paul C.

    1999-01-01

    In anticipation of a projected rise in demand for air transportation, NASA and the FAA are researching new air-traffic-management (ATM) concepts that fall under the paradigm known broadly as "free flight". This paper documents the software development and engineering efforts in progress by Seagull Technology to develop a free-flight simulation (FFSIM) that is intended to help NASA researchers test mature-state concepts for free flight, otherwise referred to in this paper as distributed air/ground traffic management (DAG TM). Under development is a distributed, human-in-the-loop simulation tool that is comprehensive in its consideration of current and envisioned communication, navigation and surveillance (CNS) components, and will allow evaluation of critical air and ground traffic management technologies from an overall systems perspective. The FFSIM infrastructure is designed to incorporate all three major components of the ATM triad: aircraft flight decks, air traffic control (ATC), and (eventually) airline operational control (AOC) centers.

  5. Monitoring the US ATLAS Network Infrastructure with perfSONAR-PS

    International Nuclear Information System (INIS)

    McKee, Shawn; Lake, Andrew; Laurens, Philippe; Severini, Horst; Wlodek, Tomasz; Wolff, Stephen; Zurawski, Jason

    2012-01-01

    Global scientific collaborations, such as ATLAS, continue to push the network requirements envelope. Data movement in this collaboration will routinely include the exchange of petabytes of datasets between the collection and analysis facilities in the coming years. These requirements place a high emphasis on networks functioning at peak efficiency and availability; the lack thereof could mean critical delays in the overall scientific progress of distributed data-intensive experiments like ATLAS. Network operations staff routinely must deal with problems deep in the infrastructure; this may be as benign as replacing a failing piece of equipment, or as complex as dealing with a multi-domain path that is experiencing data loss. In either case, it is crucial that effective monitoring and performance analysis tools are available to ease the burden of management. We will report on our experiences deploying and using the perfSONAR-PS Performance Toolkit at ATLAS sites in the United States. This software creates a dedicated monitoring server, capable of collecting and performing a wide range of passive and active network measurements. Each independent instance is managed locally, but is able to federate on a global scale, enabling a full view of the network infrastructure that spans domain boundaries. This information, available through web service interfaces, can easily be retrieved to create customized applications. The US ATLAS collaboration has developed a centralized “dashboard” offering network administrators, users, and decision makers the ability to see the performance of the network at a glance. The dashboard framework includes the ability to notify users (alarm) when problems are found, thus allowing rapid response to potential problems and making perfSONAR-PS crucial to the operation of our distributed computing infrastructure.
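
    The general pattern behind such a dashboard - polling a measurement web service and flagging underperforming hosts - can be sketched as follows. The endpoint URL, JSON fields, and threshold are hypothetical placeholders, not the actual perfSONAR-PS interface.

        import json
        import urllib.request

        # Hypothetical endpoint; a real deployment would expose its own
        # measurement-archive URL and response schema.
        URL = "https://ps.example.org/measurements?tool=throughput&last=3600"

        def fetch_slow_hosts(url=URL):
            """Poll a monitoring web service and flag hosts below a threshold."""
            with urllib.request.urlopen(url) as resp:
                samples = json.loads(resp.read())       # assumed: a list of dicts
            return [s["host"] for s in samples
                    if s.get("mbps", 0) < 100]          # assumed alarm threshold

        # A dashboard could color these hosts red and notify operators:
        # print(fetch_slow_hosts())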

  6. Towards sustainability: An interoperability outline for a Regional ARC based infrastructure in the WLCG and EGEE infrastructures

    International Nuclear Information System (INIS)

    Field, L; Gronager, M; Johansson, D; Kleist, J

    2010-01-01

    Interoperability of grid infrastructures is becoming increasingly important with the emergence of large-scale grid infrastructures based on national and regional initiatives. To achieve interoperability of grid infrastructures, adaptations and bridging of many different systems and services need to be tackled. A grid infrastructure offers services for authentication, authorization, accounting, monitoring, and operation, in addition to the services for handling data and computations. This paper presents an outline of the work done to integrate the Nordic Tier-1 and Tier-2s, which for the compute part are based on the ARC middleware, into the WLCG grid infrastructure co-operated by the EGEE project. In particular, a thorough description of the integration of the compute services is presented.

  7. Site Support Program Plan Infrastructure Program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-09-26

    The Fiscal Year 1996 Infrastructure Program Site Support Program Plan addresses the mission objectives, workscope, work breakdown structures (WBS), management approach, and resource requirements for the Infrastructure Program. Attached to the plan are appendices that provide more detailed information associated with scope definition. The Hanford Site's infrastructure has served the Site for nearly 50 years during defense materials production. Now with the challenges of the new environmental cleanup mission, Hanford's infrastructure must meet current and future mission needs in a constrained budget environment, while complying with more stringent environmental, safety, and health regulations. The infrastructure requires upgrading, streamlining, and enhancement in order to successfully support the site mission of cleaning up the Site, research and development, and economic transition.

  8. Automated Testing Infrastructure and Result Comparison for Geodynamics Codes

    Science.gov (United States)

    Heien, E. M.; Kellogg, L. H.

    2013-12-01

    The geodynamics community uses a wide variety of codes on a wide variety of both software and hardware platforms to simulate geophysical phenomena. These codes are generally variants of finite difference or finite element calculations involving Stokes flow or wave propagation. A significant problem is that codes of even low complexity will return different results depending on the platform due to slight differences in hardware, software, compiler, and libraries. Furthermore, changes to the codes during development may affect solutions in unexpected ways such that previously validated results are altered. The Computational Infrastructure for Geodynamics (CIG) is funded by the NSF to enhance the capabilities of the geodynamics community through software development. CIG has recently done extensive work in setting up an automated testing and result validation system based on the BaTLab system developed at the University of Wisconsin, Madison. This system uses 16 variants of Linux and Mac platforms on both 32- and 64-bit processors to test several CIG codes, and has also recently been extended to support testing on the XSEDE TACC (Texas Advanced Computing Center) Stampede cluster. In this work we give an overview of the system design and demonstrate how automated testing and validation occur and results are reported. We also examine several results from the system from different codes and discuss how changes in compilers and libraries affect the results. Finally, we detail some result comparison tools for different types of output (scalar fields, velocity fields, seismogram data), and discuss within what margins different results can be considered equivalent.
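
    Result comparison across platforms typically reduces to checking agreement within numerical tolerances rather than exact equality. The snippet below is a hedged sketch of such a check using NumPy; the tolerances and the size of the example fields are illustrative, not CIG's actual margins.

        import numpy as np

        def fields_equivalent(ref, test, rtol=1e-6, atol=1e-12):
            """Compare two simulation output fields within floating-point margins.

            Platform differences (compiler, libraries, hardware) perturb results
            at roundoff level, so exact equality is the wrong test.
            """
            return np.allclose(ref, test, rtol=rtol, atol=atol)

        # A perturbation at the 1e-9 level, as one might see between compilers:
        ref = np.ones((4, 4))
        test = ref + 1e-9 * np.random.randn(4, 4)
        print(fields_equivalent(ref, test))  # True: differences are within margins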

  9. Regional Social Infrastructure Management as the Instrument for Improving the Quality of Life in the Ural Federal District

    Directory of Open Access Journals (Sweden)

    Valentina Sergeyevna Antonyuk

    2015-09-01

    Full Text Available The article analyzes the processes of the social sphere and the effective operation of social infrastructure in order to improve the quality of life of the population in the Russian regions. Particular attention is paid to the role of the organizational and managerial component affecting the usage performance of infrastructure objects, including the regulation of the institutions of social infrastructure, planning, and software. The purpose of the study is to evaluate the effectiveness of the management of social infrastructure through the conjugation of immediate results (the dynamics of indicators of social services) and outcomes (parameters of the quality of life of the population). The hypothesis of the study was the violation of the principle of infra-systematicity in the infrastructural support of the improvement of the quality of life in the Russian regions, due to a lack of effectiveness of public administration. In the study, the following methodological approaches are used: structural, factoral, systematic and evolutionary approaches to justify the conception, develop the methodology and determine the impact of changes in the parameters of social infrastructure availability for the provided services on shifts in indexes of quality of life. The paper proposes a quantitative evaluation of the effectiveness of organizational management based on a diagnosis of the adequacy of the implementation of the principle of infra-systematicity in the functioning of social infrastructure, on the basis of the elasticity coefficient. The proposed approach and the received analytical data on the health, education, commerce, housing, culture and sport fields have allowed the authors to rank the regions of the Ural Federal District and highlight areas of insufficient effectiveness of the organizational and management tool for improvement of life quality. The findings of the research may serve as a core for practical recommendations for executive bodies of administrative units of
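
    The elasticity coefficient the authors rely on can be written, in the standard economic form (our assumption of the exact definition; the paper's formulation may differ), as the ratio of the relative change in a quality-of-life outcome Q to the relative change in a social-service indicator S:

        E = \frac{\Delta Q / Q}{\Delta S / S}

    Under this reading, E close to or above 1 means service improvements translate proportionally into quality-of-life gains, while E well below 1 flags the insufficient organizational and managerial effectiveness the article describes.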

  10. Eucalyptus: an open-source cloud computing infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Nurmi, Daniel; Wolski, Rich; Grzegorczyk, Chris; Obertelli, Graziano; Soman, Sunil; Youseff, Lamia; Zagorodnov, Dmitrii, E-mail: rich@cs.ucsb.ed [Computer Science Department, University of California, Santa Barbara, CA 93106 (United States) and Eucalyptus Systems Inc., 130 Castilian Dr., Goleta, CA 93117 (United States)

    2009-07-01

    Utility computing, elastic computing, and cloud computing are all terms that refer to the concept of dynamically provisioning processing time and storage space from a ubiquitous 'cloud' of computational resources. Such systems allow users to acquire and release the resources on demand and provide ready access to data from processing elements, while relegating the physical location and exact parameters of the resources. Over the past few years, such systems have become increasingly popular, but nearly all current cloud computing offerings are either proprietary or depend upon software infrastructure that is invisible to the research community. In this work, we present Eucalyptus, an open-source software implementation of cloud computing that utilizes compute resources that are typically available to researchers, such as clusters and workstation farms. In order to foster community research exploration of cloud computing systems, the design of Eucalyptus emphasizes modularity, allowing researchers to experiment with their own security, scalability, scheduling, and interface implementations. In this paper, we outline the design of Eucalyptus, describe our own implementations of the modular system components, and provide results from experiments that measure performance and scalability of a Eucalyptus installation currently deployed for public use. The main contribution of our work is the presentation of the first research-oriented open-source cloud computing system focused on enabling methodical investigations into the programming, administration, and deployment of systems exploring this novel distributed computing model.

  11. Exploring Cognition Using Software Defined Radios for NASA Missions

    Science.gov (United States)

    Mortensen, Dale J.; Reinhart, Richard C.

    2016-01-01

    NASA missions typically operate using a communication infrastructure that requires significant schedule planning with limited flexibility when the needs of the mission change. Parameters such as modulation, coding scheme, frequency, and data rate are fixed for the life of the mission. This is due to antiquated hardware and software for both the space and ground assets and a very complex set of mission profiles. Automated techniques already in place at commercial telecommunication companies are being explored by NASA to determine their usability for reducing cost and increasing science return. Adding cognition, the ability to learn from past decisions and adjust behavior, is also being investigated. Software Defined Radios are an ideal way to implement cognitive concepts. Cognition can be considered in many different aspects of the communication system. Radio functions, such as frequency, modulation, data rate, coding and filters, can be adjusted based on measurements of signal degradation. Data delivery mechanisms and route changes based on past successes and failures can be made to deliver the data to the end user more efficiently. Automated antenna pointing can be added to improve gain or coverage, or to adjust the target. Scheduling improvements and automation to reduce the dependence on humans provide more flexible capabilities. The Cognitive Communications project, funded by the Space Communication and Navigation Program, is exploring these concepts and using the SCaN Testbed on board the International Space Station to implement them as they evolve. The SCaN Testbed contains three Software Defined Radios and a flight computer. These four computing platforms, along with a tracking antenna system and the supporting ground infrastructure, will be used to implement various concepts in a system similar to those used by missions. Multiple universities and SBIR companies are supporting this investigation. This paper will describe the cognitive system ideas under consideration and
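
    One concrete cognitive behavior mentioned - adjusting modulation and coding from measurements of signal degradation - can be sketched as a lookup against measured SNR. This is an illustrative assumption (the thresholds, mode names, and select_mode function are invented), not the SCaN Testbed's logic.

        # Adaptive modulation/coding selection driven by measured link quality.
        # Thresholds (dB) and mode names are invented for illustration.
        MODES = [
            (12.0, ("16APSK", "7/8")),   # strong link: high-rate mode
            (6.0,  ("QPSK",   "3/4")),
            (0.0,  ("BPSK",   "1/2")),   # weak link: most robust mode
        ]

        def select_mode(snr_db):
            """Pick the highest-throughput mode the measured SNR supports."""
            for threshold, mode in MODES:
                if snr_db >= threshold:
                    return mode
            return MODES[-1][1]  # below all thresholds: fall back to most robust

        print(select_mode(8.3))   # -> ('QPSK', '3/4')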

  12. Linear infrastructure impacts on landscape hydrology.

    Science.gov (United States)

    Raiter, Keren G; Prober, Suzanne M; Possingham, Hugh P; Westcott, Fiona; Hobbs, Richard J

    2018-01-15

    The extent of roads and other forms of linear infrastructure is burgeoning worldwide, but their impacts are inadequately understood and thus poorly mitigated. Previous studies have identified many potential impacts, including alterations to the hydrological functions and soil processes upon which ecosystems depend. However, these impacts have seldom been quantified at a regional level, particularly in arid and semi-arid systems where the gap in knowledge is greatest, and the impacts potentially the most severe. To explore the effects of extensive track, road, and rail networks on surface hydrology at a regional level we assessed over 1000 km of linear infrastructure, including approx. 300 locations where ephemeral streams crossed linear infrastructure, in the largely intact landscapes of Australia's Great Western Woodlands. We found a high level of association between linear infrastructure and altered surface hydrology, with erosion and pooling 5 and 6 times as likely to occur on-road as off-road on average (1.06 erosional and 0.69 pooling features km⁻¹ on vehicle tracks, compared with 0.22 and 0.12 km⁻¹ off-road, respectively). Erosion severity was greater in the presence of tracks, and 98% of crossings of ephemeral streamlines showed some evidence of impact on water movement (flow impedance (62%); diversion of flows (73%); flow concentration (76%); and/or channel initiation (31%)). Infrastructure type, pastoral land use, culvert presence, soil clay content and erodibility, mean annual rainfall, rainfall erosivity, topography and bare soil cover influenced the frequency and severity of these impacts. We conclude that linear infrastructure frequently affects ephemeral stream flows and intercepts natural overland and near-surface flows, artificially changing site-scale moisture regimes, with some parts of the landscape becoming abnormally wet and other parts becoming water-starved. In addition, linear infrastructure frequently triggers or exacerbates erosion

  13. Vehicle speed guidance strategy at signalized intersection based on cooperative vehicle infrastructure system

    Directory of Open Access Journals (Sweden)

    Fengyuan JIA

    2017-10-01

    Full Text Available In order to reduce the stopping time of vehicles at signalized intersections, and given the difficulty, or even impossibility, of obtaining the real-time queue length of an intersection in third- and fourth-tier cities in China, a speed guidance strategy based on a cooperative vehicle infrastructure system is put forward and studied. To validate the strategy, traffic signal timing data for the intersection of Hengshan Road and North Fengming Lake Road in Wuhu was collected by a vehicular traffic signal reminder system designed for this purpose. Simulation experiments using the acquired data were performed with the software VISSIM. The simulation results demonstrate that the strategy can effectively decrease link travel time under both high and low traffic flow, with average reductions of 9.2% and 13.0%, respectively, and that the effect under low traffic flow is better than that under high traffic flow. The strategy improves the efficiency of traffic at a signalized intersection and provides an idea for the application of vehicle speed guidance based on cooperative vehicle infrastructure systems.
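
    The core of such a guidance strategy is computing an advisory speed from the distance to the stop line and the signal-phase countdown broadcast by the roadside unit. Below is a hedged sketch of that computation; the function and parameter names are ours, not the paper's.

        def advisory_speed(dist_m, time_to_green_s, v_min=5.0, v_max=16.7):
            """Advise a speed (m/s) so the vehicle arrives as the light turns green.

            dist_m: distance to the stop line; time_to_green_s: seconds until green.
            v_max ~ 60 km/h urban limit; v_min is the slowest practical crawl.
            """
            if time_to_green_s <= 0:
                return v_max                       # light is green: proceed at the limit
            v = dist_m / time_to_green_s           # speed that meets the green onset
            if v > v_max:
                return None                        # cannot make it: prepare to stop
            return max(v, v_min)                   # clamp to a practical minimum

        print(advisory_speed(dist_m=150, time_to_green_s=12))  # 12.5 m/s = 45 km/h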

  14. IHE cross-enterprise document sharing for imaging: interoperability testing software

    Directory of Open Access Journals (Sweden)

    Renaud Bérubé

    2010-09-01

    Full Text Available Abstract Background With the deployment of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. The EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems, so interoperability between peers is essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners and that provides test data and test plans. Results In this paper we describe software used to test systems that are involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross-Enterprise Document Sharing for Imaging (XDS-I) integration profile. We describe its architecture and functionality; we also discuss the challenges encountered and the design solutions selected. Conclusions The EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow it to evolve easily alongside web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, by developers to understand specification ambiguities, or to resolve implementation difficulties.

  15. Indonesian infrastructure development

    International Nuclear Information System (INIS)

    Djojohadikusumo, H.S.

    1991-01-01

    It is with the achievement of a competitive advantage as a motivating factor that the Indonesian coal industry is engaged in infrastructure development including both small regionally trade-based terminals and high capacity capesize bulk terminals to support large scale coal exports. The unique characteristics of Indonesian coal quality, low production costs and the optimization of transport economics in accordance with vessel size provides great incentives for the European and U.S. market. This paper reports on the infrastructure development, Indonesian coal resources, and coal exports

  16. VADMC: The Infrastructure

    Directory of Open Access Journals (Sweden)

    Le Sidaner Pierre

    2012-09-01

    Full Text Available The Virtual Atomic and Molecular Data Centre (VAMDC; http://www.vamdc.eu) is a European-Union-funded collaboration between several groups involved in the generation, evaluation, and use of atomic and molecular data. VAMDC aims at building a secure, documented, flexible and interoperable e-Science environment-based interface to existing atomic and molecular databases. The global infrastructure of this project uses technologies derived from the International Virtual Observatory Alliance (IVOA). The infrastructure, as well as the first database prototypes, will be described.

  17. Momentum in Transformation of Technical Infrastructure

    DEFF Research Database (Denmark)

    Nielsen, Susanne Balslev; Elle, Morten

    1999-01-01

    Current infrastructure holds considerable momentum, and this momentum is a barrier to transformation towards more sustainable technologies and more sustainable styles of network management. Using the sewage sector in Denmark as an example of a technical infrastructure system, this paper argues...... that there are technical, economic and social aspects to the current infrastructure's momentum....

  18. Investigating interoperability of the LSST data management software stack with Astropy

    Science.gov (United States)

    Jenness, Tim; Bosch, James; Owen, Russell; Parejko, John; Sick, Jonathan; Swinbank, John; de Val-Borro, Miguel; Dubois-Felsmann, Gregory; Lim, K.-T.; Lupton, Robert H.; Schellart, Pim; Krughoff, K. S.; Tollerud, Erik J.

    2016-07-01

    The Large Synoptic Survey Telescope (LSST) will be an 8.4m optical survey telescope sited in Chile and capable of imaging the entire sky twice a week. The data rate of approximately 15TB per night and the requirements to both issue alerts on transient sources within 60 seconds of observing and create annual data releases mean that automated data management systems and data processing pipelines are a key deliverable of the LSST construction project. The LSST data management software has been in development since 2004 and is based on a C++ core with a Python control layer. The software consists of nearly a quarter of a million lines of code covering the system from fundamental WCS and table libraries to pipeline environments and distributed process execution. The Astropy project began in 2011 as an attempt to bring together disparate open source Python projects and build a core standard infrastructure that can be used and built upon by the astronomy community. This project has been phenomenally successful since it began and has grown to be the de facto standard for Python software in astronomy. Astropy brings with it considerable expectations from the community on how astronomy Python software should be developed, and it is clear that by the time LSST is fully operational in the 2020s many of the prospective users of the LSST software stack will expect it to be fully interoperable with Astropy. In this paper we describe the overlap between the LSST science pipeline software and Astropy software and investigate areas where the LSST software provides new functionality. We also discuss the possibilities of re-engineering the LSST science pipeline software to build upon Astropy, including the option of contributing affiliated packages.

  19. HwPMI: An Extensible Performance Monitoring Infrastructure for Improving Hardware Design and Productivity on FPGAs

    Directory of Open Access Journals (Sweden)

    Andrew G. Schmidt

    2012-01-01

    Full Text Available Designing hardware cores for FPGAs can quickly become a complicated task, difficult even for experienced engineers. With the addition of more sophisticated development tools and maturing high-level language-to-gates techniques, designs can be rapidly assembled; however, when the design is evaluated on the FPGA, the performance may not be what was expected. Therefore, an engineer may need to augment the design to include performance monitors to better understand the bottlenecks in the system or to aid in debugging the design. Unfortunately, identifying what to monitor and adding the infrastructure to retrieve the monitored data can be a challenging and time-consuming task. Our work alleviates this effort. We present the Hardware Performance Monitoring Infrastructure (HwPMI), which includes a collection of software tools and hardware cores that can be used to profile the current design, recommend and insert performance monitors directly into the HDL or netlist, and retrieve the monitored data with minimal invasiveness to the design. Three applications are used to demonstrate and evaluate HwPMI’s capabilities. The results are highly encouraging as the infrastructure adds numerous capabilities while requiring minimal effort by the designer and low resource overhead to the existing design.

  20. NSLS-II High Level Application Infrastructure And Client API Design

    International Nuclear Information System (INIS)

    Shen, G.; Yang, L.; Shroff, K.

    2011-01-01

    The beam commissioning software framework of the NSLS-II project adopts a client/server-based architecture to replace the more traditional monolithic high-level application approach. It is an open-structure platform, and we aim to provide a narrow API set for client applications. With this narrow API, existing applications developed in different languages under different architectures can be ported to our platform with minor modifications. This paper describes the system infrastructure design, the client API, system integration, and the latest progress. As a new 3rd generation synchrotron light source with ultra-low emittance, NSLS-II poses new requirements and challenges for controlling and manipulating the beam. A use case study and a theoretical analysis have been performed to clarify the requirements and challenges for the high level application (HLA) software environment. To satisfy those requirements and challenges, an adequate system architecture for the software framework is critical for beam commissioning, study and operation. The existing traditional approaches are self-consistent and monolithic. Some of them have adopted the concept of a middle layer to separate low-level hardware processing from numerical algorithm computing, physics modelling, data manipulation, plotting, and error handling. However, none of the existing approaches can satisfy the requirements. A new design has been proposed by introducing service-oriented architecture technology. The HLA is a combination of tools for accelerator physicists and operators, as in the traditional approach. In NSLS-II, these include monitoring applications and control routines. A scripting environment is very important for the latter part of the HLA, and both parts are designed based on a common set of APIs. Physicists and operators are users of these APIs, while control system engineers and a few accelerator physicists are the developers of these APIs. With our client/server mode based approach, we leave how to retrieve information to the
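
    A "narrow API" of the kind described can be illustrated by a handful of client-side verbs that hide the client/server plumbing. The sketch below uses invented names (HlaClient, get, set); it is not the NSLS-II API.

        # Sketch of a narrow high-level-application client API: a few verbs,
        # with transport, naming, and server details hidden behind them.
        # All names here are illustrative, not the NSLS-II interface.

        class HlaClient:
            """Thin client; service details live behind the server endpoint."""

            def __init__(self, server_url):
                self.server_url = server_url

            def get(self, element, field):
                """Read a physics-level quantity, e.g. get('QF1', 'k1')."""
                raise NotImplementedError("server call goes here")

            def set(self, element, field, value):
                """Write a setpoint through the service layer."""
                raise NotImplementedError("server call goes here")

        # An existing application, whatever its language or architecture, ports
        # to the platform by mapping its data access onto this narrow verb set.
        client = HlaClient("https://hla.example.org")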

  1. European environmental research infrastructures are going for common 30 years strategy

    Science.gov (United States)

    Asmi, Ari; Konjin, Jacco; Pursula, Antti

    2014-05-01

    Environmental research infrastructures are facilities, resources, systems and related services that are used by research communities to conduct top-level research. Environmental research addresses processes at very different time scales, and supporting research infrastructures must be designed as long-term facilities in order to meet the requirements of continuous environmental observation, measurement and analysis. This longevity makes the environmental research infrastructures ideal structures to support long-term development in the environmental sciences. The ENVRI project is a collaborative action of the major European (ESFRI) environmental research infrastructures working towards increased co-operation and interoperability between the infrastructures. One of the key products of the ENVRI project is to combine the long-term plans of the individual infrastructures into a common strategy describing the vision and planned actions. The envisaged vision for environmental research infrastructures toward 2030 is to support the holistic understanding of our planet and its behavior. The development of a 'Standard Model of the Planet' is a common ambition, a challenge to define an environmental standard model: a framework of all interactions within the Earth System, from solid earth to near space. Indeed, scientists feel challenged to contribute to a 'Standard Model of the Planet' with data, models, algorithms and discoveries. Understanding the Earth System as an interlinked system requires a systems approach, and the environmental sciences are rapidly moving to become a system-level science, mainly because modern science, engineering and society are increasingly facing complex problems that can only be understood in the context of the full overall system. The strategy of the supporting collaborating research infrastructures is based on developing three key factors for the environmental sciences: the technological, the cultural and the human capital. The technological

  2. Nested barriers to low-carbon infrastructure investment

    Science.gov (United States)

    Granoff, Ilmi; Hogarth, J. Ryan; Miller, Alan

    2016-12-01

    Low-carbon, 'green' economic growth is necessary to simultaneously improve human welfare and avoid the worst impacts of climate change and environmental degradation. Infrastructure choices underpin both the growth and the carbon intensity of the economy. This Perspective explores the barriers to investing in low-carbon infrastructure and some of the policy levers available to overcome them. The barriers to decarbonizing infrastructure 'nest' within a set of barriers to infrastructure development more generally that cause spending on infrastructure--low-carbon or not--to fall more than 70% short of optimal levels. Developing countries face additional barriers such as currency and political risks that increase the investment gap. Low-carbon alternatives face further barriers, such as commercialization risk and financial and public institutions designed for different investment needs. While the broader barriers to infrastructure investment are discussed in other streams of literature, they are often disregarded in literature on renewable energy diffusion or climate finance, which tends to focus narrowly on the project costs of low- versus high-carbon options. We discuss how to overcome the barriers specific to low-carbon infrastructure within the context of the broader infrastructure gap.

  3. Developing a grid infrastructure in Cuba

    Energy Technology Data Exchange (ETDEWEB)

    Lopez Aldama, D.; Dominguez, M.; Ricardo, H.; Gonzalez, A.; Nolasco, E.; Fernandez, E.; Fernandez, M.; Sanchez, M.; Suarez, F.; Nodarse, F.; Moreno, N.; Aguilera, L.

    2007-07-01

    A grid infrastructure was deployed at Centro de Gestion de la Informacion y Desarrollo de la Energia (CUBAENERGIA) in the framework of the EELA project and of a national initiative for developing a Cuban Network for Science. A stand-alone model was adopted to overcome connectivity limitations. The e-infrastructure is based on gLite-3.0 middleware and is fully compatible with the EELA infrastructure. The work then focused on grid applications. The application GATE was deployed from the very beginning for biomedical users. Further, two applications were deployed on the local grid infrastructure: MOODLE for e-learning and AERMOD for the assessment of local dispersion of atmospheric pollutants. Additionally, our local grid infrastructure was made interoperable with a Java-based distributed system for bioinformatics calculations. This experience could be considered a suitable approach for national networks with weak Internet connections. (Author)

  4. Building Resilient Cloud Over Unreliable Commodity Infrastructure

    OpenAIRE

    Kedia, Piyus; Bansal, Sorav; Deshpande, Deepak; Iyer, Sreekanth

    2012-01-01

    Cloud Computing has emerged as a successful computing paradigm for efficiently utilizing managed compute infrastructure such as high speed rack-mounted servers, connected with high speed networking, and reliable storage. Usually such infrastructure is dedicated, physically secured and has reliable power and networking infrastructure. However, much of our idle compute capacity is present in unmanaged infrastructure like idle desktops, lab machines, physically distant server machines, and lapto...

  5. Siting Urban Agriculture as a Green Infrastructure Strategy for Land Use Planning in Austin, TX

    Directory of Open Access Journals (Sweden)

    Charles M. Rogers

    2016-04-01

    Full Text Available Green infrastructure refers to a type of land use design that mimics the natural water cycle by using the infiltration capacities of vegetation, soils, and other natural processes to mitigate stormwater runoff. As a multifunctional landscape, urban agriculture should be seen as a highly beneficial tool for urban planning not only because of its ability to function as a green stormwater management strategy, but also due to the multiple social and environmental benefits it provides. In 2012, the city of Austin adopted a major planning approach titled the “Imagine Austin Comprehensive Plan” (IACP), outlining the city’s vision for future growth and land use up to 2039. The plan explicitly addresses the adoption of green infrastructure as a target for future land use, with urban agriculture as a central component. Addressing this area of land use planning will require tools that can locate suitable areas within the city ideal for the development of green infrastructure. In this study, a process was developed to create a spatially explicit method of siting urban agriculture as a green infrastructure tool in hydrologically sensitive areas, or areas prone to runoff, in east Austin. The method uses geospatial software to spatially analyze open access datasets that include land use, a digital elevation model, and prime farmland soils. Through this method a spatial relationship can be made between areas of high surface runoff and where the priority placement of urban farms should be sited as a useful component of green infrastructure. Planners or geospatial analysts could use such information, along with other significant factors and community input, to aid decision makers in the placement of urban agriculture. This spatially explicit approach for siting potential urban farms will support the integration of urban agriculture into the land use planning of Austin.
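
    The siting method amounts to a raster overlay: intersect runoff-prone cells with permissive land use and prime soils. Below is a minimal NumPy sketch of that overlay logic; the array names and thresholds are our assumptions, not the study's exact criteria.

        import numpy as np

        # Toy 5x5 rasters standing in for the study's GIS layers.
        runoff_index = np.random.rand(5, 5)          # modeled surface-runoff intensity
        open_land    = np.random.rand(5, 5) > 0.5    # True where land use permits farming
        prime_soil   = np.random.rand(5, 5) > 0.6    # True on prime-farmland soils

        # Suitability = hydrologically sensitive AND open AND good soil.
        suitable = (runoff_index > 0.7) & open_land & prime_soil

        rows, cols = np.nonzero(suitable)            # candidate cells for urban farms
        print(list(zip(rows, cols)))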

  6. A Bootstrap Approach of Benchmarking Organizational Maturity Model of Software Product With Educational Maturity Model

    OpenAIRE

    R.Manjula; J.Vaideeswaran

    2012-01-01

    Software product line engineering is an inter-disciplinary concept. It spans the dimensions of business, architecture, process, and organization. Similarly, education system engineering is also an inter-disciplinary concept, spanning the dimensions of academics, infrastructure, facilities, administration, etc. Some of the potential benefits of this approach include continuous improvements in system quality and adherence to global standards. The increasing competency in IT and Educati...

  7. Making Energy Infrastructure

    DEFF Research Database (Denmark)

    Schick, Lea; Winthereik, Brit Ross

    2016-01-01

    in a pragmatic present and in an unprecedented future; between being tied to the specific site of the competition and belonging to no place in particular; and not least between being predominantly an art project and primarily an infrastructure project. Remarkable differences between cosmopolitics and smooth...... politics appear here, especially compared to the literature analysing the roles played by art and design when imagining new ways of living with energy. Oscillation between smooth politics and cosmopolitics may provide a generative way forward for actors wishing to engage in the infrastructuring...

  8. Infrastructure expenditures and costs. Practical guidelines to calculate total infrastructure costs for five modes of transport. Final report

    International Nuclear Information System (INIS)

    2005-11-01

    Transport infrastructures in general, and the Trans-European Transport Network (TEN-T) in particular, play an important role in achieving the medium- and long-term objectives of the European Union. In view of this, the Commission has recently adopted a revision of the guidelines for the TEN-T. The main consequences of this revision are the need for a better understanding of the investments made by the Member States in the TEN-T and the need for ensuring optimal consistency in the reporting by the Member States of such investments. With Regulation number 1108/70, the Council of the European Communities introduced an accounting system for expenditure on infrastructure in respect of transport by rail, road and inland waterways. The purpose of this regulation is to introduce a standard and permanent accounting system for infrastructure expenditures. However, maritime and aviation infrastructure were not included. Further, the need for an effective and easy-to-apply classification for infrastructure investments covering all five transport modes was still pending. Therefore, DG TREN has commissioned ECORYS Transport and CE Delft to study the expenditures and costs of infrastructure, to propose an adequate classification of expenditures, and to propose a method for translating data on expenditures into data on costs. The objectives of the present study are threefold: to set out a classification of infrastructure expenditures, in order to increase knowledge of expenditures related to transport infrastructures and to support a better understanding of fixed and variable infrastructure costs; to detail the various components of such expenditures for five modes of transportation, which would enable the monitoring of infrastructure expenditures and costs; and to set up a methodology to move from annual series of expenditures to costs, including fixed and variable elements.

  9. The Fermilab data storage infrastructure

    International Nuclear Information System (INIS)

    Jon A Bakken et al.

    2003-01-01

    Fermilab, in collaboration with the DESY laboratory in Hamburg, Germany, has created a petabyte-scale data storage infrastructure to meet the requirements of experiments to store and access large data sets. The Fermilab data storage infrastructure consists of the following major storage and data transfer components: the Enstore mass storage system, the dCache distributed data cache, and FTP and GridFTP, primarily for external data transfers. This infrastructure provides data throughput sufficient for transferring data from experiments' data acquisition systems. It also allows access to data in the Grid framework.

  10. Flowscapes : Designing infrastructure as landscape

    NARCIS (Netherlands)

    Nijhuis, S.; Jauslin, D.T.; Van der Hoeven, F.D.

    2015-01-01

    Social, cultural and technological developments of our society are demanding a fundamental review of the planning and design of its landscapes and infrastructures, in particular in relation to environmental issues and sustainability. Transportation, green and water infrastructures are important

  11. The RAGE Game Software Components Repository for Supporting Applied Game Development

    Directory of Open Access Journals (Sweden)

    Krassen Stefanov

    2017-09-01

    Full Text Available This paper presents the architecture of the RAGE repository, which is a unique and dedicated infrastructure providing access to a wide variety of advanced technology components for applied game development. The RAGE project, which is the principal Horizon 2020 research and innovation project on applied gaming, develops up to three dozen software components (RAGE software assets) that are reusable across a wide diversity of game engines, game platforms and programming languages. The RAGE repository provides storage space for assets and their artefacts and is designed as an asset life-cycle management system for defining, publishing, updating, searching and packaging these assets for distribution. It will be embedded in a social platform for asset developers and other users. A dedicated Asset Repository Manager provides the main functionality of the repository and its integration with other systems. Tools supporting the Asset Manager are presented and discussed. When the RAGE repository is in full operation, applied game developers will be able to easily enhance the quality of their games by including selected advanced game software assets. Making the RAGE repository system and its variety of software assets available aims to enhance the coherence and decisiveness of the applied game industry.

  12. Private investments in new infrastructures

    NARCIS (Netherlands)

    Baarsma, B.; Poort, J.P.; Teulings, C.N.; de Nooij, M.

    2004-01-01

    The Lisbon Strategy demands large investments in transport projects, broadband networks and energy infrastructure. Despite the widely-acknowledged need for investments in new infrastructures, European and national public funds are scarce in the current economic climate. Moreover, both policy-makers

  13. Infrastructuring When You Don’t

    DEFF Research Database (Denmark)

    Bolmsten, Johan; Dittrich, Yvonne

    2011-01-01

    infrastructures. Such infrastructures enable integration between different applications and tasks but, at the same time, introduce constraints to ensure interoperability. How can the advantages of End-User Development be kept without jeopardizing the integration between different applications? The article...

  14. N2R vs. DR Network Infrastructure Evaluation

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Roost, Lars Jessen; Toft, Per Nesager

    2007-01-01

    Recent development of Internet-based services has set higher requirements for network infrastructures in terms of more bandwidth, lower delays and greater reliability. Theoretical research within the area of Structural Quality of Service (SQoS) has introduced a new type of infrastructure which meet...... these requirements: N2R infrastructures. This paper contributes to the ongoing research with a case study from North Jutland. An evaluation of three N2R infrastructures compared to a Double Ring (DR) infrastructure provides valuable information on the practical applicability of N2R infrastructures. In order...... to study whether N2R infrastructures perform better than the DR infrastructure, a distribution network was established based on geographical information system (GIS) data. Nodes were placed with respect to demographic and geographical factors. The established distribution network was investigated with respect...

  15. Configuring the Orion Guidance, Navigation, and Control Flight Software for Automated Sequencing

    Science.gov (United States)

    Odegard, Ryan G.; Siliwinski, Tomasz K.; King, Ellis T.; Hart, Jeremy J.

    2010-01-01

    The Orion Crew Exploration Vehicle is being designed with greater automation capabilities than any other crewed spacecraft in NASA's history. The Guidance, Navigation, and Control (GN&C) flight software architecture is designed to provide a flexible and evolvable framework that accommodates increasing levels of automation over time. Within the GN&C flight software, a data-driven approach is used to configure software. This approach allows data reconfiguration and updates to automated sequences without requiring recompilation of the software. Because of the great dependency of the automation and the flight software on the configuration data, the data management is a vital component of the processes for software certification, mission design, and flight operations. To enable the automated sequencing and data configuration of the GN&C subsystem on Orion, a desktop database configuration tool has been developed. The database tool allows the specification of the GN&C activity sequences, the automated transitions in the software, and the corresponding parameter reconfigurations. These aspects of the GN&C automation on Orion are all coordinated via data management, and the database tool provides the ability to test the automation capabilities during the development of the GN&C software. In addition to providing the infrastructure to manage the GN&C automation, the database tool has been designed with capabilities to import and export artifacts for simulation analysis and documentation purposes. Furthermore, the database configuration tool, currently used to manage simulation data, is envisioned to evolve into a mission planning tool for generating and testing GN&C software sequences and configurations. A key enabler of the GN&C automation design, the database tool allows both the creation and maintenance of the data artifacts, as well as serving the critical role of helping to manage, visualize, and understand the data-driven parameters both during software development
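
    The data-driven configuration approach described, where automated sequences live in data rather than compiled code, can be sketched as a table-driven dispatcher. The sketch below is our illustration (the sequence steps, fields, and handler names are invented), not Orion's flight software.

        import json

        # The sequence and its parameters live in data, so updating the mission
        # profile means editing this structure (or the database it came from),
        # not recompiling flight code. Steps and names are invented.
        SEQUENCE_JSON = """
        [
          {"activity": "coast",       "params": {"attitude": "sun-pointing"}},
          {"activity": "burn",        "params": {"dv_mps": 3.2, "duration_s": 14}},
          {"activity": "reconfigure", "params": {"nav_filter": "lunar"}}
        ]
        """

        HANDLERS = {
            "coast":       lambda p: print("holding attitude", p["attitude"]),
            "burn":        lambda p: print("burn", p["dv_mps"], "m/s for", p["duration_s"], "s"),
            "reconfigure": lambda p: print("switching nav filter to", p["nav_filter"]),
        }

        for step in json.loads(SEQUENCE_JSON):
            HANDLERS[step["activity"]](step["params"])  # dispatch purely from data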

  16. Perm State University HPC-hardware and software services: capabilities for aircraft engine aeroacoustics problems solving

    Science.gov (United States)

    Demenev, A. G.

    2018-02-01

    The present work analyzes the high-performance computing (HPC) infrastructure capabilities available for solving aircraft engine aeroacoustics problems at Perm State University. We explore here the ability to develop new computational aeroacoustics methods/solvers for computer-aided engineering (CAE) systems to handle complicated industrial problems of engine noise prediction. Leading aircraft engine engineering companies, including “UEC-Aviadvigatel” JSC (our industrial partner in Perm, Russia), require such methods/solvers to optimize the geometry of aircraft engines for fan noise reduction. We analysed Perm State University HPC hardware resources and software services with a view to their efficient use. The results demonstrate that the Perm State University HPC infrastructure is mature enough to face industrial-scale problems of developing a CAE system with HPC methods and CFD solvers.

  17. SCaN Testbed Software Development and Lessons Learned

    Science.gov (United States)

    Kacpura, Thomas J.; Varga, Denise M.

    2012-01-01

    National Aeronautics and Space Administration (NASA) has developed an on-orbit, adaptable, Software Defined Radio (SDR) Space Telecommunications Radio System (STRS)-based testbed facility to conduct a suite of experiments to advance technologies, reduce risk, and enable future mission capabilities on the International Space Station (ISS). The SCaN Testbed Project will provide NASA, industry, other Government agencies, and academic partners the opportunity to develop and field communications, navigation, and networking technologies in the laboratory and space environment based on reconfigurable SDR platforms and the STRS Architecture. The SDRs are a new technology for NASA, and the support infrastructure they require is different from legacy, fixed-function radios. SDRs offer the ability to reconfigure on-orbit communications by changing software for new waveforms and operating systems to enable new capabilities or fix anomalies, an option not previously available. They are not stand-alone devices, but require a new approach to effectively control them and flow data. This requires extensive software to be developed to utilize the full potential of these reconfigurable platforms. The paper focuses on development, integration and testing as related to the avionics processor system, and the software required to command, control, monitor, and interact with the SDRs, as well as the other communication payload elements. An extensive effort was required to develop the flight software and meet the NASA requirements for software quality and safety. The flight avionics must be radiation tolerant, and these processors have limited capability in comparison to terrestrial counterparts. A big challenge was that there are three SDRs onboard, and interfacing with multiple SDRs simultaneously complicated the effort. The effort also includes ground software, which is a key element for both commanding the payload and displaying data created by the payload. The verification of

  18. E-mobility charging infrastructure. Wish and reality

    Energy Technology Data Exchange (ETDEWEB)

    Wunnerlich, Stephan [EnBW Energie Baden-Wuerttemberg AG, Karlsruhe (Germany)

    2013-06-01

    An adequate charging infrastructure is necessary for the success of electric vehicles. The wishful thinking is to quickly build up a charging infrastructure for electric vehicles as soon as they are launched. The wishful thinking is to build up a cheap and easy-to-handle infrastructure in order to keep it cheap and simple for the customer. The wishful thinking is that the process of building up such an infrastructure is smooth and based on clear rules, regulations and standards. The wishful thinking is that public charging infrastructure operators can earn money with the sale of kWh or with marketing their public charging stations. Reality shows a different picture: public charging infrastructure is expensive to install and to manage, it is difficult to operate as well, there are only a few electric cars on the street, and operators cannot earn enough money by selling electricity or marketing. Only a large number of electric vehicles and new and innovative solutions can help to overcome this gap between wish and reality. (orig.)

  19. INFRASTRUCTURE

    CERN Multimedia

    A. Gaddi

    2012-01-01

    The CMS Infrastructures teams are constantly ensuring the smooth operation of the different services during this critical period when the detector is taking data at full speed. A single failure would spoil hours of high-luminosity beam and everything is put in place to avoid such an eventuality. In the meantime, however, the fast-approaching LS1 requires that we take a look at the various activities to take place from the end of the year onwards. The list of infrastructure consolidation and upgrade tasks is already long and will touch all the services (cooling, gas, inerting, powering, etc.). The definitive list will be available just before the LS1 start. One activity performed by the CMS cooling team that is worth mentioning is the maintenance of the cooling circuits at the CMS Electronics Integration Centre (EIC) at building 904. The old chiller has been replaced by a three-unit cooling plant that also serves the HVAC system for the new CSC and RPC factories. The commissioning of this new plant has tak...

  20. INFRASTRUCTURE

    CERN Multimedia

    Andrea Gaddi

    2010-01-01

    In addition to the intense campaign of replacement of the leaky bushing on the Endcap circuits, other important activities have also been completed, with the aim of enhancing the overall reliability of the cooling infrastructures at CMS. Remaining with the Endcap circuit, the regulating valve that supplies cold water to the primary side of the circuit heat-exchanger is not well adapted in flow capability, and a new part has been ordered, to be installed during a stop of LHC. The instrumentation monitoring the refilling rate of the circuits has been enhanced and we can now detect leaks as small as 0.5 cc/sec on circuits that have nominal flow rates of some 20 litres/sec. Another activity, starting now that the technical stop is over, is the collection of spare parts that are difficult to find on the market. These will be stored at P5 with the aim of reducing down-time in case of component failure. Concerning the ventilation infrastructures, it has been noticed that in winter time the relative humidity leve...

  1. Status report of the SRT radiotelescope control software: the DISCOS project

    Science.gov (United States)

    Orlati, A.; Bartolini, M.; Buttu, M.; Fara, A.; Migoni, C.; Poppi, S.; Righini, S.

    2016-08-01

    The Sardinia Radio Telescope (SRT) is a 64-m fully-steerable radio telescope. It is provided with an active surface to correct for gravitational deformations, allowing observations from 300 MHz to 100 GHz. At present, three receivers are available: a coaxial LP-band receiver (305-410 MHz and 1.5-1.8 GHz), a C-band receiver (5.7-7.7 GHz) and a 7-feed K-band receiver (18-26.5 GHz). Several back-ends are also available in order to perform the different data acquisition and analysis procedures requested by scientific projects. The design and development of the SRT control software started in 2004, and now belongs to a wider project called DISCOS (Development of the Italian Single-dish COntrol System), which provides a common infrastructure to the three Italian radio telescopes (Medicina, Noto and SRT dishes). DISCOS is based on the Alma Common Software (ACS) framework, and currently consists of more than 500k lines of code. It is organized in a common core and three specific product lines, one for each telescope. Recent developments, carried out after the conclusion of the technical commissioning of the instrument (October 2013), consisted of the addition of several new features in many parts of the observing pipeline, spanning from the motion control to the digital back-ends for data acquisition and data formatting; we briefly describe such improvements. More importantly, in the last two years we have supported the astronomical validation of the SRT radio telescope, leading to the opening of the first public call for proposals in late 2015. During this period, while assisting both the engineering and the scientific staff, we massively employed the control software and were able to test all of its features: in this process we received our first feedback from the users and we could verify how the system performed in a real-life scenario, drawing the first conclusions about the overall system stability and performance. We examine how the system behaves in terms of network

  2. Enhancing the Earth System Grid Authentication Infrastructure through Single Sign-On and Autoprovisioning

    Energy Technology Data Exchange (ETDEWEB)

    Siebenlist, Frank [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Williams, Dean N. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2009-01-01

    Climate scientists face an overarching need to efficiently access and manipulate climate model data. Increasingly, researchers must assemble and analyze large datasets that are archived in different formats on disparate platforms and must extract portions of datasets to compute statistical or diagnostic metrics in place. The need for a common virtual environment in which to access both climate model datasets and analysis tools is therefore keenly felt. The software infrastructure to support such an environment must not only provide ready access to climate data but must also facilitate the use of visualization software, diagnostic algorithms, and related resources. To this end, the Earth System Grid Center for Enabling Technologies (ESG-CET) was established in 2006 by the Scientific Discovery through Advanced Computing program of the U.S. Department of Energy through the Office of Advanced Scientific Computing Research and the Office of Biological and Environmental Research within the Office of Science. ESG-CET is working to advance climate science by developing computational resources for accessing and managing model data that are physically located in distributed multiplatform archives. In this paper, we discuss recent development and implementation efforts by the Earth System Grid (ESG) concerning its security infrastructure. ESG's requirements are to make user logon as easy as possible and to facilitate the integration of security services and Grid components for both developers and system administrators. To meet that goal, we leverage existing primary authentication mechanisms, deploy a 'lightweight' but secure OpenID WebSSO, deploy a 'lightweight' X.509-PKI, and use autoprovisioning to ease the burden of security configuration management. We are close to completing the associated development and deployment.
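
    As a small illustration of one building block in such a security stack, the sketch below checks the validity window and issuer of a user's X.509 credential with the widely used cryptography Python package. This is a generic sketch, not ESG code; the expected issuer name is an invented placeholder.

```python
# Minimal X.509 credential check: validity window plus issuer common name.
from datetime import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID

def credential_ok(pem_bytes, expected_issuer_cn):
    cert = x509.load_pem_x509_certificate(pem_bytes)
    now = datetime.utcnow()  # cert validity fields here are naive UTC datetimes
    issuer_cn = cert.issuer.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
    return (cert.not_valid_before <= now <= cert.not_valid_after
            and issuer_cn == expected_issuer_cn)

# Usage (hypothetical issuer name):
#   credential_ok(open("usercert.pem", "rb").read(), "ESG Example CA")
```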

  3. Broadband for all closing the infrastructure gap

    CSIR Research Space (South Africa)

    Roux, K

    2015-10-01

    Full Text Available than just addressing the infrastructure issue. The CSIR is mapping the country’s broadband infrastructure to understand where the largest gaps are, and is developing models for how those gaps in broadband infrastructure can be closed. In this presentation...

  4. An Open Computing Infrastructure that Facilitates Integrated Product and Process Development from a Decision-Based Perspective

    Science.gov (United States)

    Hale, Mark A.

    1996-01-01

    Computer applications for design have evolved rapidly over the past several decades, and significant payoffs are being achieved by organizations through reductions in design cycle times. These applications are overwhelmed by the requirements imposed during complex, open engineering systems design. Organizations are faced with a number of different methodologies, numerous legacy disciplinary tools, and a very large amount of data. Yet they are also faced with few interdisciplinary tools for design collaboration or methods for achieving the revolutionary product designs required to maintain a competitive advantage in the future. These organizations are looking for a software infrastructure that integrates current corporate design practices with newer simulation and solution techniques. Such an infrastructure must be robust to changes in both corporate needs and enabling technologies. In addition, this infrastructure must be user-friendly, modular and scalable. This need is the motivation for the research described in this dissertation. The research is focused on the development of an open computing infrastructure that facilitates product and process design. In addition, this research explicitly deals with human interactions during design through a model that focuses on the role of a designer as that of decision-maker. The research perspective here is taken from that of design as a discipline with a focus on Decision-Based Design, Theory of Languages, Information Science, and Integration Technology. Given this background, a Model of IPPD is developed and implemented along the lines of a traditional experimental procedure: with the steps of establishing context, formalizing a theory, building an apparatus, conducting an experiment, reviewing results, and providing recommendations. Based on this Model, Design Processes and Specification can be explored in a structured and implementable architecture. An architecture for exploring design called DREAMS (Developing Robust

  5. Development of a lunar infrastructure

    Science.gov (United States)

    Burke, J. D.

    If humans are to reside continuously and productively on the Moon, they must be surrounded and supported there by an infrastructure having some attributes of the support systems that have made advanced civilization possible on Earth. Building this lunar infrastructure will, in a sense, be an investment. Creating it will require large resources from Earth, but once it exists it can do much to limit the further demands of a lunar base for Earthside support. What is needed for a viable lunar infrastructure? This question can be approached from two directions. The first is to examine history, which is essentially a record of growing information structures among humans on Earth (tribes, agriculture, specialization of work, education, ethics, arts and sciences, cities and states, technology). The second approach is much less secure but may provide useful insights: it is to examine the minimal needs of a small human community - not just for physical survival but for a stable existence with a net product output. This paper presents a summary, based on present knowledge of the Moon and of the likely functions of a human community there, of some of these infrastructure requirements, and also discusses possible ways to proceed toward meeting early infrastructure needs.

  6. Financing for Infrastructure Investment in G-20 Countries.

    OpenAIRE

    Sengupta, Ramprasad; Mukherjee, Sacchidananda; Gupta, Manish

    2015-01-01

    This study looks into various sources of financing infrastructure and the demands for infrastructure investment, and highlights the mismatch between demand and supply of funds for infrastructure financing in India. In order to address this mismatch, and given the constraints of traditional sources of infrastructure finance in India, this paper suggests a credit enhancement scheme (CES) as an alternative framework for mobilizing long-term infrastructure finance. It suggests scaling up CES as...

  7. Reliability Analysis and Optimal Release Problem Considering Maintenance Time of Software Components for an Embedded OSS Porting Phase

    Science.gov (United States)

    Tamura, Yoshinobu; Yamada, Shigeru

    OSS (open source software) systems, which serve as key components of critical infrastructures in our social life, are still ever-expanding. In particular, embedded OSS systems have been gaining a lot of attention in the embedded system area, e.g., Android, BusyBox, TRON, etc. However, poor handling of quality problems and customer support inhibits the progress of embedded OSS. Also, it is difficult for developers to assess the reliability and portability of embedded OSS on a single-board computer. In this paper, we propose a method of software reliability assessment based on flexible hazard rates for embedded OSS. Also, we analyze actual data of software failure-occurrence time-intervals to show numerical examples of software reliability assessment for embedded OSS. Moreover, we compare the proposed hazard rate model for embedded OSS with typical conventional hazard rate models by using comparison criteria of goodness-of-fit. Furthermore, we discuss the optimal software release problem for the porting phase based on the total expected software maintenance cost.
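
    For orientation, the sketch below works through the kind of optimal-release computation such problems build on, using the classical Goel-Okumoto NHPP mean value function m(t) = a(1 - e^(-bt)) rather than the authors' flexible hazard-rate model; all cost constants are illustrative.

```python
# Worked sketch of a conventional optimal software release computation.
import math

a, b = 100.0, 0.05           # expected total faults, fault detection rate
c1, c2, c3 = 1.0, 50.0, 0.5  # fix cost per fault pre/post release, test cost per day

def m(t):
    """Expected number of faults detected by time t (Goel-Okumoto NHPP)."""
    return a * (1.0 - math.exp(-b * t))

def total_cost(t):
    # pre-release fixes + post-release fixes of residual faults + testing time
    return c1 * m(t) + c2 * (a - m(t)) + c3 * t

# Crude grid search for the release time minimising expected total cost.
t_opt = min((total_cost(t), t) for t in range(0, 2001))[1]
print("optimal release day ~", t_opt)  # ~124 days for these constants
```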

  8. A GeoServices Infrastructure for Near-Real-Time Access to Suomi NPP Satellite Data

    Science.gov (United States)

    Evans, J. D.; Valente, E. G.; Hao, W.; Chettri, S.

    2012-12-01

    The new Suomi National Polar-orbiting Partnership (NPP) satellite extends NASA's moderate-resolution, multispectral observations with a suite of powerful imagers and sounders to support a broad array of research and applications. However, NPP data products consist of a complex set of data and metadata files in highly specialized formats, which NPP's operational ground segment delivers to users only after several hours' delay. This severely limits their use in critical applications such as weather forecasting, emergency / disaster response, search and rescue, and other activities that require near-real-time access to satellite observations. Alternative approaches, based on distributed Direct Broadcast facilities, can reduce the delay in NPP data delivery from hours to minutes, and can make products more directly usable by practitioners in the field. To assess and fulfill this potential, we are developing a suite of software that couples Direct Broadcast data feeds with a streamlined, scalable processing chain and geospatial Web services, so as to permit many more time-sensitive applications to use NPP data. The resulting geoservices infrastructure links a variety of end-user tools and applications to NPP data from different sources, and to other rapidly-changing geospatial data. By using well-known, standard software interfaces (such as OGC Web Services or OPeNDAP), this infrastructure serves a variety of end-user analysis and visualization tools, giving them access into datasets of arbitrary size and resolution and allowing them to request and receive tailored products on demand. The standards-based approach may also streamline data sharing among independent satellite receiving facilities, thus helping them to interoperate in providing frequent, composite views of continent-scale or global regions. To enable others to build similar or derived systems, the service components we are developing (based in part on the Community Satellite Processing Package (CSPP) from
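
    To show the standards-based access pattern the abstract relies on, the sketch below composes an OGC WMS 1.3.0 GetMap request against a hypothetical endpoint serving near-real-time NPP imagery. The endpoint URL and layer name are invented; the request parameters follow the published WMS 1.3.0 conventions.

```python
# Composing a WMS 1.3.0 GetMap URL with the standard library only.
from urllib.parse import urlencode

BASE = "https://example.org/npp/wms"  # hypothetical service URL
params = {
    "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
    "LAYERS": "viirs_true_color",       # hypothetical layer name
    "CRS": "EPSG:4326",
    "BBOX": "35.0,-120.0,45.0,-110.0",  # lat/lon axis order for EPSG:4326 in 1.3.0
    "WIDTH": "1024", "HEIGHT": "1024",
    "FORMAT": "image/png",
}
print(BASE + "?" + urlencode(params))
```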

  9. panMetaDocs, eSciDoc, and DOIDB—An Infrastructure for the Curation and Publication of File-Based Datasets for GFZ Data Services

    Directory of Open Access Journals (Sweden)

    Damian Ulbricht

    2016-03-01

    Full Text Available The GFZ German Research Centre for Geosciences is the national laboratory for Geosciences in Germany. As part of the Helmholtz Association, providing and maintaining large-scale scientific infrastructures is an essential part of GFZ activities. This includes the generation of significant volumes and numbers of research data, which subsequently become source materials for data publications. The development and maintenance of data systems is a key component of GFZ Data Services to support state-of-the-art research. A challenge lies not only in the diversity of scientific subjects and communities, but also in the different types and manifestations of how data are managed by research groups and individual scientists. The data repository of GFZ Data Services provides a flexible IT infrastructure for data storage and publication, including the minting of digital object identifiers (DOIs). It was built as a modular system of several independent software components linked together through Application Programming Interfaces (APIs) provided by the eSciDoc framework. The principal application software components are panMetaDocs for data management and DOIDB for logging and moderating data publication activities. Wherever possible, existing software solutions were integrated or adapted. A summary of our experiences made in operating this service is given. Data are described through comprehensive landing pages and supplementary documents, like journal articles or data reports, thus augmenting the scientific usability of the service.
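
    As a hypothetical sketch of the kind of API interaction such a DOI-logging component moderates, the snippet below registers a dataset DOI and its landing page over HTTP. The endpoint, payload schema and identifier are invented for illustration; they are not the actual DOIDB interface.

```python
# Hypothetical DOI registration call using only the standard library.
import json
import urllib.request

payload = {
    "doi": "10.5880/EXAMPLE.2016.001",  # illustrative identifier
    "url": "https://dataservices.example/landing/EXAMPLE.2016.001",
    "title": "Example file-based dataset",
}
req = urllib.request.Request(
    "https://doidb.example/api/registrations",  # hypothetical endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would submit the registration.
```

    Keeping registration behind a single moderated API call is what allows a repository to log and review publication activity centrally, as the abstract describes.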

  10. Financing and Managing Infrastructure in Africa

    OpenAIRE

    Mthuli Ncube

    2010-01-01

    This paper discusses various ways of financing infrastructure under public private partnership (PPP) arrangements in Africa. The paper presents the standard literature on the relationship between infrastructure investment and economic growth, highlighting the contradictory findings in the literature. Stylised facts about the state of infrastructure in Africa, compared with other regions such as Asia and Latin America, are also presented. Examples of how PPPs structures work are discussed incl...

  11. ReSS: Resource Selection Service for National and Campus Grid Infrastructure

    International Nuclear Information System (INIS)

    Mhashilkar, Parag; Garzoglio, Gabriele; Levshina, Tanya; Timm, Steve

    2010-01-01

    The Open Science Grid (OSG) offers access to around one hundred Compute Elements (CEs) and Storage Elements (SEs) via standard Grid interfaces. The Resource Selection Service (ReSS) is a push-based workload management system that is integrated with the OSG information systems and resources. ReSS integrates standard Grid tools such as Condor, as a brokering service, and the gLite CEMon, for gathering and publishing resource information in GLUE Schema format. ReSS is used in OSG by Virtual Organizations (VOs) such as the Dark Energy Survey (DES), DZero and the Engagement VO. ReSS is also used as a Resource Selection Service for Campus Grids, such as FermiGrid. VOs use ReSS to automate the resource selection in their workload management systems to run jobs over the grid. In the past year, the system has been enhanced to enable publication and selection of storage resources and of any special software or software libraries (like MPI libraries) installed at computing resources. In this paper, we discuss the Resource Selection Service, its typical usage on the two scales of a National Cyber Infrastructure Grid, such as OSG, and of a campus Grid, such as FermiGrid.
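
    A toy sketch of the matchmaking idea follows: resources publish attribute sets (GLUE-like), and a job's requirements expression selects the matching ones. This mimics Condor ClassAd-style matching in plain Python; it is not ReSS code, and all attribute names are invented.

```python
# Toy resource selection: publish attributes, filter by job requirements.
resources = [
    {"name": "CE_A", "free_slots": 120, "mpi": True,  "storage_gb": 500},
    {"name": "CE_B", "free_slots": 4,   "mpi": False, "storage_gb": 50},
]

def requirements(r):
    """The job's 'Requirements' expression, evaluated per resource."""
    return r["free_slots"] >= 10 and r["mpi"] and r["storage_gb"] >= 100

matches = [r["name"] for r in resources if requirements(r)]
print("eligible resources:", matches)  # -> ['CE_A']
```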

  12. ReSS: Resource Selection Service for National and Campus Grid Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Mhashilkar, Parag; Garzoglio, Gabriele; Levshina, Tanya; Timm, Steve, E-mail: parag@fnal.go, E-mail: garzogli@fnal.go, E-mail: tlevshin@fnal.go, E-mail: timm@fnal.go [Fermi National Accelerator Laboratory, P O Box 500, Batavia, IL - 60510 (United States)

    2010-04-01

    The Open Science Grid (OSG) offers access to around one hundred Compute Elements (CEs) and Storage Elements (SEs) via standard Grid interfaces. The Resource Selection Service (ReSS) is a push-based workload management system that is integrated with the OSG information systems and resources. ReSS integrates standard Grid tools such as Condor, as a brokering service, and the gLite CEMon, for gathering and publishing resource information in GLUE Schema format. ReSS is used in OSG by Virtual Organizations (VOs) such as the Dark Energy Survey (DES), DZero and the Engagement VO. ReSS is also used as a Resource Selection Service for Campus Grids, such as FermiGrid. VOs use ReSS to automate the resource selection in their workload management systems to run jobs over the grid. In the past year, the system has been enhanced to enable publication and selection of storage resources and of any special software or software libraries (like MPI libraries) installed at computing resources. In this paper, we discuss the Resource Selection Service, its typical usage on the two scales of a National Cyber Infrastructure Grid, such as OSG, and of a campus Grid, such as FermiGrid.

  13. ReSS: Resource Selection Service for National and Campus Grid Infrastructure

    International Nuclear Information System (INIS)

    Mhashilkar, Parag; Garzoglio, Gabriele; Levshina, Tanya; Timm, Steve

    2009-01-01

    The Open Science Grid (OSG) offers access to around one hundred Compute Elements (CEs) and Storage Elements (SEs) via standard Grid interfaces. The Resource Selection Service (ReSS) is a push-based workload management system that is integrated with the OSG information systems and resources. ReSS integrates standard Grid tools such as Condor, as a brokering service, and the gLite CEMon, for gathering and publishing resource information in GLUE Schema format. ReSS is used in OSG by Virtual Organizations (VOs) such as the Dark Energy Survey (DES), DZero and the Engagement VO. ReSS is also used as a Resource Selection Service for Campus Grids, such as FermiGrid. VOs use ReSS to automate the resource selection in their workload management systems to run jobs over the grid. In the past year, the system has been enhanced to enable publication and selection of storage resources and of any special software or software libraries (like MPI libraries) installed at computing resources. In this paper, we discuss the Resource Selection Service, its typical usage on the two scales of a National Cyber Infrastructure Grid, such as OSG, and of a campus Grid, such as FermiGrid.

  14. Building analytical platform with Big Data solutions for log files of PanDA infrastructure

    Science.gov (United States)

    Alekseev, A. A.; Barreiro Megino, F. G.; Klimentov, A. A.; Korchuganova, T. A.; Maeno, T.; Padolski, S. V.

    2018-05-01

    The paper describes the implementation of a high-performance system for the processing and analysis of log files for the PanDA infrastructure of the ATLAS experiment at the Large Hadron Collider (LHC), responsible for the workload management of the order of 2M daily jobs across the Worldwide LHC Computing Grid. The solution is based on the ELK technology stack, which includes several components: Filebeat, Logstash, ElasticSearch (ES), and Kibana. Filebeat is used to collect data from logs. Logstash processes the data and exports it to Elasticsearch. ES is responsible for centralized data storage. Data accumulated in ES can be viewed using Kibana. These components were integrated with the PanDA infrastructure and replaced previous log processing systems for increased scalability and usability. The authors describe all the components and their configuration tuning for the current tasks, the scale of the actual system, and give several real-life examples of how this centralized log processing and storage service is used to showcase the advantages for daily operations.
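
    A minimal sketch of the collection-and-index step under the stated stack follows: parsing a simplified, invented PanDA-style log line and indexing it into Elasticsearch with the official Python client. In the described production setup this role is played by Filebeat and Logstash rather than hand-written code.

```python
# Parse a (simplified) log line and index it into Elasticsearch.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local ES node

line = "2018-05-14 10:22:31 INFO pilot jobid=3 status=finished"
ts = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
doc = {
    "@timestamp": ts.replace(tzinfo=timezone.utc).isoformat(),
    "level": line.split()[2],                 # e.g. 'INFO'
    "message": line.split(maxsplit=3)[3],     # remainder of the line
}
es.index(index="panda-logs", document=doc)    # index name is illustrative
```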

  15. New software system to improve AGU membership management

    Science.gov (United States)

    McEntee, Chris

    2012-06-01

    Almost 2 years ago, AGU began investigating how it could more efficiently manage member and customer records as well as support processes that currently run on multiple systems. I am pleased to announce that on 25 June, as the result of intense efforts, AGU will migrate to a new database software system that will house the majority of AGU operations. AGU staff will have more tools at their disposal to assist members, and members will have more intuitive and user-friendly options when using the online interface to update their profiles or make purchases. I am particularly excited about this major improvement to our infrastructure because it better positions AGU to achieve goals in its strategic plan.

  16. Infrastructure monitoring with spaceborne SAR sensors

    CERN Document Server

    ANGHEL, ANDREI; CACOVEANU, REMUS

    2017-01-01

    This book presents a novel non-intrusive infrastructure monitoring technique based on the detection and tracking of scattering centers in spaceborne SAR images. The methodology essentially consists of refocusing each available SAR image on an imposed 3D point cloud associated to the envisaged infrastructure element and identifying the reliable scatterers to be monitored by means of four dimensional (4D) tomography. The methodology described in this book provides a new perspective on infrastructure monitoring with spaceborne SAR images, is based on a standalone processing chain, and brings innovative technical aspects relative to conventional approaches. The book is intended primarily for professionals and researchers working in the area of critical infrastructure monitoring by radar remote sensing.

  17. Nuclear power infrastructure and planning

    International Nuclear Information System (INIS)

    2005-01-01

    There are several stages in the process of introducing nuclear power in a country. These include feasibility studies; technology evaluation; request for proposals and proposal evaluation; project and contracts development and financing; supply, construction, and commissioning; and finally operation. The IAEA is developing guidance aimed at providing criteria for assessing the minimum infrastructure necessary for: a) a host country to consider when engaging in the implementation of nuclear power, or b) a supplier country to consider when assessing whether the recipient country would be in an acceptable condition to begin the implementation of nuclear power. There are Member States that may be denied the benefits of nuclear energy if the infrastructure requirements are too large or onerous for the national economy. However, if co-operation could be achieved, the infrastructure burden could be shared and economic benefits gained by several countries acting jointly. The IAEA is developing guidance on the potential for sharing of nuclear power infrastructure among countries adopting or extending nuclear power programmes.

  18. Computing infrastructure for ATLAS data analysis in the Italian Grid cloud

    International Nuclear Information System (INIS)

    Andreazza, A; Annovi, A; Martini, A; Barberis, D; Brunengo, A; Corosu, M; Campana, S; Girolamo, A Di; Carlino, G; Doria, A; Merola, L; Musto, E; Ciocca, C; Jha, M K; Cobal, M; Pascolo, F; Salvo, A De; Luminari, L; Sanctis, U De; Galeazzi, F

    2011-01-01

    ATLAS data are distributed centrally to Tier-1 and Tier-2 sites. The first stages of data selection and analysis take place mainly at Tier-2 centres, with the final, iterative and interactive, stages taking place mostly at Tier-3 clusters. The Italian ATLAS cloud consists of a Tier-1, four Tier-2s, and Tier-3 sites at each institute. Tier-3s that are grid-enabled are used to test code that will then be run on a larger scale at Tier-2s. All Tier-3s offer interactive data access to their users and the possibility to run PROOF. This paper describes the hardware and software infrastructure choices taken, the operational experience after 10 months of LHC data, and discusses site performances.

  19. Infrastructures and Life-Cycle Cost-Benefit Analysis

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    2012-01-01

    Design and maintenance of infrastructures using Life-Cycle Cost-Benefit analysis is discussed in this paper, with special emphasis on user costs. This is of great importance for several types of infrastructure, such as bridges and highways. Repair and/or failure of infrastructures will usually result...

  20. Grid Computing Making the Global Infrastructure a Reality

    CERN Document Server

    Fox, Geoffrey C; Hey, Anthony J G

    2003-01-01

    Grid computing is applying the resources of many computers in a network to a single problem at the same time. Grid computing appears to be a promising trend for three reasons: (1) its ability to make more cost-effective use of a given amount of computer resources; (2) as a way to solve problems that can't be approached without an enormous amount of computing power; and (3) because it suggests that the resources of many computers can be cooperatively and perhaps synergistically harnessed and managed as a collaboration toward a common objective. A number of corporations, professional groups, university consortiums, and other groups have developed or are developing frameworks and software for managing grid computing projects. The European Community (EU) is sponsoring a project for a grid for high-energy physics, earth observation, and biology applications. In the United States, the National Technology Grid is prototyping a computational grid for infrastructure and an access grid for people. Sun Microsystems offers Gri...

  1. Prioritizing Infrastructure Investments in Panama : Pilot Application of the World Bank Infrastructure Prioritization Framework

    OpenAIRE

    Marcelo, Darwin; Mandri-Perrott, Cledan; House, Schuyler

    2016-01-01

    Infrastructure services are significant determinants of economic development, social welfare, trade, and public health. As such, they typically feature strongly in national development plans. While governments may receive many infrastructure project proposals, resources are often insufficient to finance the full set of proposals in the short term. Leading up to 2020, an estimated US$836 ...

  2. DASISH Reference Model for SSH Data Infrastructures

    NARCIS (Netherlands)

    Fihn, Johan; Gnadt, Timo; Hoogerwerf, M.L.; Jerlehag, Birger; Lenkiewicz, Przemek; Priddy, M.; Shepherdson, John

    2016-01-01

    The current “rising tide of scientific data” accelerates the need for e-infrastructures to support the lifecycle of data in research, from creation to reuse [RTW]. Different types of e-infrastructures address this need. Consortia like GÉANT and EGI build technical infrastructures for networking and

  3. Infrastructure and Agricultural Growth in Nigeria | Ighodaro ...

    African Journals Online (AJOL)

    The provision of infrastructure in Nigeria, particularly physical infrastructure, is characterized by the predominance of public enterprises, except for the telecommunications sector in recent times. The empirical part of the study revealed different relative response rates of the different components of infrastructure used in the study to ...

  4. IFC and infrastructure - investing in power

    International Nuclear Information System (INIS)

    Chaudhry, Vijay

    1992-01-01

    Adequate infrastructure is essential to a country's growth. It provides a foundation which enables the economy to function. Until recently, most governments provided the physical infrastructure of industry: transport, communications, and power systems. Today, the trend is for governments to regulate monopolies while taking maximum advantage of private sector investment, decision-making and management. The private sector is increasingly being recognized as having the capacity to operate infrastructure projects more efficiently. (author)

  5. Urban Green Infrastructure: German Experience

    Directory of Open Access Journals (Sweden)

    Diana Olegovna Dushkova

    2016-06-01

    Full Text Available The paper presents a concept of urban green infrastructure and analyzes the features of its implementation in the urban development programmes of German cities. We analyzed the most shared articles devoted to urban green infrastructure to identify different approaches to defining this term. The study is based on materials of field research in the cities of Berlin and Leipzig in 2014-2015 and on international and national scientific publications. During the preparation of the paper, consultations were held with experts from scientific institutions and the administrations of Berlin and Leipzig, as well as with local experts from environmental organizations of both cities. Using the German cities of Berlin and Leipzig as examples, this paper identifies how the concept can be implemented in a programme of urban development. It presents the main elements of the green city model, which include mitigation of negative anthropogenic impact on the environment within the framework of urban sustainable development. An essential part of this is a comprehensive ecological policy as the major tool for implementing the green urban infrastructure concept. This ecological policy should embody not only specific ecological measures but also a greening of all urban infrastructure elements, as well as the implementation of sustainable living with a greater awareness of the resources used in everyday life and the development of environmental thinking among urban citizens. Urban green infrastructure is a unity of four main components: green building, green transportation, eco-friendly waste management, and green transport routes and ecological corridors. Experience in the development of urban green infrastructure in Germany can be useful for improving the environmental situation in Russian cities.

  6. Global Software Engineering: A Software Process Approach

    Science.gov (United States)

    Richardson, Ita; Casey, Valentine; Burton, John; McCaffery, Fergal

    Our research has shown that many companies are struggling with the successful implementation of global software engineering, due to temporal, cultural and geographical distance, which causes a range of factors to come into play. For example, cultural, project management and communication difficulties continually cause problems for software engineers and project managers. While the implementation of efficient software processes can be used to improve the quality of the software product, published software process models do not cater explicitly for the recent growth in global software engineering. Our thesis is that global software engineering factors should be included in software process models to ensure their continued usefulness in global organisations. Based on extensive global software engineering research, we have developed a software process, Global Teaming, which includes specific practices and sub-practices. The purpose is to ensure that requirements for successful global software engineering are stipulated so that organisations can ensure successful implementation of global software engineering.

  7. COOPEUS - connecting research infrastructures in environmental sciences

    Science.gov (United States)

    Koop-Jakobsen, Ketil; Waldmann, Christoph; Huber, Robert

    2015-04-01

    The COOPEUS project was initiated in 2012, bringing together 10 research infrastructures (RIs) in environmental sciences from the EU and US in order to improve the discovery, access, and use of environmental information and data across scientific disciplines and across geographical borders. The COOPEUS mission is to facilitate readily accessible research infrastructure data to advance our understanding of Earth systems through an international community-driven effort, by: bringing together both user communities and top-down directives to address evolving societal and scientific needs; removing technical, scientific, cultural and geopolitical barriers for data use; and coordinating the flow, integrity and preservation of information. A survey of data availability was conducted among the COOPEUS research infrastructures for the purpose of discovering impediments to open international and cross-disciplinary sharing of environmental data. The survey showed that the majority of data offered by the COOPEUS research infrastructures is available via the internet (>90%), but the accessibility of these data differs significantly among research infrastructures; only 45% offer open access to their data, whereas the remaining infrastructures offer restricted access, e.g. they do not release raw or sensitive data, demand user registration, or require permission prior to the release of data. These rules and regulations are often installed as a form of standard practice, whereas formal data policies are lacking in 40% of the infrastructures, primarily in the EU. In order to improve this situation COOPEUS has established a common data-sharing policy, which has been agreed upon by all the COOPEUS research infrastructures. To investigate the existing opportunities for improving interoperability among environmental research infrastructures, COOPEUS explored the opportunities with the GEOSS common infrastructure (GCI) by holding a hands-on workshop. Through exercises directly registering resources

  8. Servicing HEP experiments with a complete set of ready integrated and configured common software components

    International Nuclear Information System (INIS)

    Roiser, Stefan; Gaspar, Ana; Perrin, Yves; Kruzelecki, Karol

    2010-01-01

    The LCG Applications Area at CERN provides basic software components for the LHC experiments such as ROOT, POOL, COOL which are developed in house and also a set of 'external' software packages (70) which are needed in addition such as Python, Boost, Qt, CLHEP, etc. These packages target many different areas of HEP computing such as data persistency, math, simulation, grid computing, databases, graphics, etc. Other packages provide tools for documentation, debugging, scripting languages and compilers. All these packages are provided in a consistent manner on different compilers, architectures and operating systems. The Software Process and Infrastructure project (SPI) [1] is responsible for the continuous testing, coordination, release and deployment of these software packages. The main driving force for the actions carried out by SPI are the needs of the LHC experiments, but also other HEP experiments could profit from the set of consistent libraries provided and receive a stable and well tested foundation to build their experiment software frameworks. This presentation will first provide a brief description of the tools and services provided for the coordination, testing, release, deployment and presentation of LCG/AA software packages and then focus on a second set of tools provided for outside LHC experiments to deploy a stable set of HEP related software packages both as binary distribution or from source.

  9. Servicing HEP experiments with a complete set of ready integrated and configured common software components

    Energy Technology Data Exchange (ETDEWEB)

    Roiser, Stefan; Gaspar, Ana; Perrin, Yves [CERN, CH-1211 Geneva 23, PH Department, SFT Group (Switzerland); Kruzelecki, Karol, E-mail: stefan.roiser@cern.c, E-mail: ana.gaspar@cern.c, E-mail: yves.perrin@cern.c, E-mail: karol.kruzelecki@cern.c [CERN, CH-1211 Geneva 23, PH Department, LBC Group (Switzerland)

    2010-04-01

    The LCG Applications Area at CERN provides basic software components for the LHC experiments such as ROOT, POOL, COOL which are developed in house and also a set of 'external' software packages (70) which are needed in addition such as Python, Boost, Qt, CLHEP, etc. These packages target many different areas of HEP computing such as data persistency, math, simulation, grid computing, databases, graphics, etc. Other packages provide tools for documentation, debugging, scripting languages and compilers. All these packages are provided in a consistent manner on different compilers, architectures and operating systems. The Software Process and Infrastructure project (SPI) [1] is responsible for the continuous testing, coordination, release and deployment of these software packages. The main driving force for the actions carried out by SPI are the needs of the LHC experiments, but also other HEP experiments could profit from the set of consistent libraries provided and receive a stable and well tested foundation to build their experiment software frameworks. This presentation will first provide a brief description of the tools and services provided for the coordination, testing, release, deployment and presentation of LCG/AA software packages and then focus on a second set of tools provided for outside LHC experiments to deploy a stable set of HEP related software packages both as binary distribution or from source.

  10. Reproducibility in Research: Systems, Infrastructure, Culture

    Directory of Open Access Journals (Sweden)

    Tom Crick

    2017-11-01

    Full Text Available The reproduction and replication of research results has become a major issue for a number of scientific disciplines. In computer science and related computational disciplines such as systems biology, the challenges closely revolve around the ability to implement (and exploit) novel algorithms and models. Taking a new approach from the literature and applying it to a new codebase frequently requires local knowledge missing from the published manuscripts and transient project websites. Alongside this issue, benchmarking and the lack of open, transparent and fair benchmark sets present another barrier to the verification and validation of claimed results. In this paper, we outline several recommendations to address these issues, driven by specific examples from a range of scientific domains. Based on these recommendations, we propose a high-level prototype open automated platform for scientific software development which effectively abstracts specific dependencies from the individual researcher and their workstation, allowing easy sharing and reproduction of results. This new e-infrastructure for reproducible computational science offers the potential to incentivise a culture change and drive the adoption of new techniques to improve the quality and efficiency – and thus reproducibility – of scientific exploration.
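
    A minimal sketch of one ingredient of such a platform follows: capturing the exact package versions behind a computational result so that others can rebuild the environment. It uses only the Python standard library; the lockfile name is an arbitrary choice.

```python
# Record the installed package set as a simple version-pinned lockfile.
from importlib.metadata import distributions

def freeze_environment(path="environment.lock"):
    pins = sorted(f"{d.metadata['Name']}=={d.version}" for d in distributions())
    with open(path, "w") as fh:
        fh.write("\n".join(pins) + "\n")
    return path

freeze_environment()  # archive this file alongside the published results
```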

  11. Protection of Mobile Agents Execution Using a Modified Self-Validating Branch-Based Software Watermarking with External Sentinel

    Science.gov (United States)

    Tomàs-Buliart, Joan; Fernández, Marcel; Soriano, Miguel

    Critical infrastructures are usually controlled by software entities. To monitor the correct functioning of these entities, a solution based on the use of mobile agents is proposed. Some proposals to detect modifications of mobile agents, such as digital signature of code, exist, but they are oriented to protecting software against modification or to verifying that an agent has been executed correctly. The aim of our proposal is to guarantee that the software is being executed correctly by a non-trusted host. The way proposed to achieve this objective is to improve the Self-Validating Branch-Based Software Watermarking by Myles et al. The proposed modification is the incorporation of an external element, called a sentinel, which controls branch targets. This technique, applied to mobile agents, can guarantee the correct operation of an agent or, at least, detect suspicious behaviour of a malicious host during the execution of the agent, instead of detecting it only after the execution of the agent has finished.
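
    The toy illustration below conveys the sentinel idea, not the authors' actual construction: before every sensitive branch, the agent reports the intended target to an external sentinel, which checks it against the expected control flow and flags tampering by a malicious host. All names are invented.

```python
# Toy sentinel validating branch targets against expected control flow.
EXPECTED = {("check_balance", "debit"), ("check_balance", "reject")}

class Sentinel:
    def validate(self, source, target):
        if (source, target) not in EXPECTED:
            raise RuntimeError(f"tampered branch {source} -> {target}")

sentinel = Sentinel()  # in the scheme, this element lives outside the host

def check_balance(amount, balance):
    target = "debit" if amount <= balance else "reject"
    sentinel.validate("check_balance", target)  # external control-flow check
    return target

print(check_balance(5, 10))   # -> 'debit'
print(check_balance(50, 10))  # -> 'reject'
```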

  12. Data Updating Methods for Spatial Data Infrastructure that Maintain Infrastructure Quality and Enable its Sustainable Operation

    Science.gov (United States)

    Murakami, S.; Takemoto, T.; Ito, Y.

    2012-07-01

    The Japanese government, local governments and businesses are working closely together to establish spatial data infrastructures in accordance with the Basic Act on the Advancement of Utilizing Geospatial Information (NSDI Act, established in August 2007). Spatial data infrastructures are urgently required not only to accelerate computerization of the public administration, but also to help restoration and reconstruction of the areas struck by the Great East Japan Earthquake and future disaster prevention and reduction. For the construction of a spatial data infrastructure, various guidelines have been formulated. But after an infrastructure is constructed, there is the problem of maintaining it. In one case, an organization updates its spatial data only once every several years because of budget problems. Departments and sections update the data on their own without careful consideration. That upsets the quality control of the entire data system and the system loses integrity, which is crucial to a spatial data infrastructure. To ensure quality, ideally, it is desirable to update the data of the entire area every year. But that is virtually impossible, considering the recent budget crunch. The method we suggest is to update only the spatial data items of higher importance, rather than updating all the items across the board, in order to maintain quality. Roads and buildings are two such features that greatly change with time; we have explored a method of partially updating the data of these two geographical features while ensuring the accuracy of locations. Using this method, data on roads and buildings can be updated almost in real time, or at least within a year. The method will help increase the availability of a spatial data infrastructure. We have conducted an experiment on the spatial data infrastructure of a municipality using those data. As a result, we have found that it is possible to update the data of both features almost in real time.
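
    A minimal sketch of the importance-driven selection step follows: only feature classes whose importance rank meets a threshold (here, roads and buildings) are refreshed in a given cycle, keeping the rest of the infrastructure stable. The feature catalogue and ranks are invented for illustration.

```python
# Select only high-importance feature classes for the current update cycle.
FEATURES = {
    "roads":      {"importance": 1, "last_updated": "2011-04"},
    "buildings":  {"importance": 1, "last_updated": "2011-07"},
    "vegetation": {"importance": 3, "last_updated": "2008-02"},
}

def select_for_update(features, max_rank=1):
    return [name for name, f in features.items() if f["importance"] <= max_rank]

print(select_for_update(FEATURES))  # -> ['roads', 'buildings']
```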

  13. Infrastructure Commons in Economic Perspective

    Science.gov (United States)

    Frischmann, Brett M.

    This chapter briefly summarizes a theory (developed in substantial detail elsewhere)1 that explains why there are strong economic arguments for managing and sustaining infrastructure resources in an openly accessible manner. This theory facilitates a better understanding of two related issues: how society benefits from infrastructure resources and how decisions about how to manage or govern infrastructure resources affect a wide variety of public and private interests. The key insights from this analysis are that infrastructure resources generate value as inputs into a wide range of productive processes and that the outputs from these processes are often public goods and nonmarket goods that generate positive externalities that benefit society as a whole. Managing such resources in an openly accessible manner may be socially desirable from an economic perspective because doing so facilitates these downstream productive activities. For example, managing the Internet infrastructure in an openly accessible manner facilitates active citizen involvement in the production and sharing of many different public and nonmarket goods. Over the last decade, this has led to increased opportunities for a wide range of citizens to engage in entrepreneurship, political discourse, social network formation, and community building, among many other activities. The chapter applies these insights to the network neutrality debate and suggests how the debate might be reframed to better account for the wide range of private and public interests at stake.

  14. Enabling Research without Geographical Boundaries via Collaborative Research Infrastructures

    Science.gov (United States)

    Gesing, S.

    2016-12-01

    Collaborative research infrastructures on a global scale for earth and space sciences face a plethora of challenges, from technical implementations to organizational aspects. Science gateways - also known as virtual research environments (VREs) or virtual laboratories - address part of such challenges by providing end-to-end solutions to help researchers focus on their specific research questions without the need to become acquainted with the technical details of the complex underlying infrastructures. In general, they provide a single point of entry to tools and data irrespective of organizational boundaries and thus make scientific discoveries easier and faster. The importance of science gateways has been recognized at national as well as international level by funding bodies and by organizations. For example, the US NSF has just funded a Science Gateways Community Institute, which offers support, consultancy and openly accessible software repositories for users and developers; Horizon 2020 provides funding for virtual research environments in Europe, which has led to projects such as VRE4EIC (A Europe-wide Interoperable Virtual Research Environment to Empower Multidisciplinary Research Communities and Accelerate Innovation and Collaboration); national or continental research infrastructures such as XSEDE in the USA, Nectar in Australia and EGI in Europe support the development and uptake of science gateways; and the global initiatives International Coalition on Science Gateways, the RDA Virtual Research Environment Interest Group, as well as the IEEE Technical Area on Science Gateways, have been founded to provide global leadership on future directions for science gateways in general and to facilitate awareness of science gateways. This presentation will give an overview of these projects and initiatives aimed at supporting domain researchers and developers with measures for the efficient creation of science gateways, for increasing their usability and sustainability

  15. Infrastructure development for ASEAN economic integration

    OpenAIRE

    Bhattacharyay, Biswa Nath

    2009-01-01

    With a population of 600 million, ASEAN is considered to be one of the most diverse regions in the world. It is also one of the world's fastest growing regions. ASEAN's aim is to evolve into an integrated economic community by 2015. Crucial to achieving this ambitious target is cooperation in infrastructure development for physical connectivity, particularly in cross-border infrastructure. This paper provides an overview of the quantity and quality of existing infrastructure in ASEAN member c...

  16. Critical Infrastructure Protection: Maintenance is National Security

    Directory of Open Access Journals (Sweden)

    Kris Hemme

    2015-10-01

    Full Text Available U.S. critical infrastructure protection (CIP) necessitates both the provision of security from internal and external threats and the repair of physically damaged critical infrastructure which may disrupt services. For years, the U.S. infrastructure has been deteriorating, triggering enough damage and loss of life to give cause for major concern. CIP is typically only addressed after a major disaster or catastrophe due to the extreme scrutiny that follows these events. In fact, CIP has been addressed repeatedly since Presidential Decision Directive 63 (PDD-63), signed by President Bill Clinton on May 22, 1998.[1] This directive highlighted critical infrastructure as “a growing potential vulnerability” and recognized that the United States has to view the U.S. national infrastructure from a security perspective due to its importance to national and economic security. CIP must be addressed in a preventive, rather than reactive, manner.[2] As such, there are sixteen critical infrastructure sectors, each with its own protection plan and unique natural and man-made threats, deteriorations, and risks. A disaster or attack on any one of these critical infrastructures could cause serious damage to national security and possibly lead to the collapse of the entire infrastructure. [1] The White House, Presidential Decision Directive/NSC–63 (Washington D.C.: The White House, May 22, 1998): 1–18, available at: http://www.epa.gov/watersecurity/tools/trainingcd/Guidance/pdd-63.pdf. [2] Ibid, 1.

  17. A Framework for Discussing e-Research Infrastructure Sustainability

    Directory of Open Access Journals (Sweden)

    Daniel S Katz

    2014-07-01

    Full Text Available e-Research infrastructure is increasingly important in the conduct of science and engineering research, and in many disciplines has become an essential part of the research infrastructure. However, this e-Research infrastructure does not appear from a vacuum; it needs both intent and effort first to be created and then to be sustained over time. Research cultures and practices in many disciplines have not adapted to this new paradigm, due in part to the absence of a deep understanding of the elements of e-Research infrastructure and the characteristics that influence their sustainability. This paper outlines a set of contexts in which e-Research infrastructure can be discussed, proposes characteristics that must be considered to sustain infrastructure elements, and highlights models that may be used to create and sustain e-Research infrastructure. We invite feedback on the proposed characteristics and models presented herein.

  18. OSiRIS: a distributed Ceph deployment using software defined networking for multi-institutional research

    Science.gov (United States)

    McKee, Shawn; Kissel, Ezra; Meekhof, Benjeman; Swany, Martin; Miller, Charles; Gregorowicz, Michael

    2017-10-01

    We report on the first year of the OSiRIS project (NSF Award #1541335; UM, IU, MSU and WSU), which is targeting the creation of a distributed Ceph storage infrastructure coupled with software-defined networking to provide high-performance access for well-connected locations on any participating campus. The project's goal is to provide a single scalable, distributed storage infrastructure that allows researchers at each campus to read, write, manage and share data directly from their own computing locations. The NSF CC*DNI DIBBS program which funded OSiRIS is seeking solutions to the challenges of multi-institutional collaborations involving large amounts of data, and we are exploring the creative use of Ceph and networking to address those challenges. While OSiRIS will eventually serve a broad range of science domains, its first adopter is the LHC ATLAS detector project via the ATLAS Great Lakes Tier-2 (AGLT2) jointly located at the University of Michigan and Michigan State University. Part of our presentation will cover how ATLAS is using the OSiRIS infrastructure and our experiences integrating our first user community. The presentation will also review the motivations for and goals of the project, the technical details of the OSiRIS infrastructure, the challenges in providing such an infrastructure, and the technical choices made to address those challenges. We will conclude with our plans for the remaining four years of the project and our vision for what we hope to deliver by the project's end.
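
    The abstract includes no code; purely to illustrate the kind of direct, programmatic object access a Ceph-backed store such as OSiRIS offers researchers, the sketch below uses the standard librados Python binding. The pool name, object name, and configuration path are placeholders, not OSiRIS settings.

      import rados  # librados Python binding shipped with Ceph

      # Connect using a cluster configuration file (path is illustrative).
      cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
      cluster.connect()

      # 'research-pool' is a hypothetical pool name, not an OSiRIS default.
      ioctx = cluster.open_ioctx('research-pool')
      ioctx.write_full('dataset-001', b'measurement payload')  # write an object
      print(ioctx.read('dataset-001'))                         # read it back

      ioctx.close()
      cluster.shutdown()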

  19. Cyberwarfare on the Electricity Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Murarka, N.; Ramesh, V.C.

    2000-03-20

    The report analyzes the possibility of cyberwarfare on the electricity infrastructure. The ongoing deregulation of the electricity industry makes the power grid all the more vulnerable to cyber attacks. The report models the components of the power system information system, potential threats, and protective measures, and thereby offers a framework for infrastructure protection.

  20. Algorithms and Architectures for Elastic-Wave Inversion Final Report CRADA No. TC02144.0

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lindtjorn, O. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-08-15

    This was a collaborative effort between Lawrence Livermore National Security, LLC as manager and operator of Lawrence Livermore National Laboratory (LLNL) and Schlumberger Technology Corporation (STC), to perform a computational feasibility study that investigates hardware platforms and software algorithms applicable to STC for Reverse Time Migration (RTM) / Reverse Time Inversion (RTI) of 3-D seismic data.

  1. iSDS: a self-configurable software-defined storage system for enterprise

    Science.gov (United States)

    Chen, Wen-Shyen Eric; Huang, Chun-Fang; Huang, Ming-Jen

    2018-01-01

    Storage is one of the most important aspects of IT infrastructure for various enterprises. But enterprises are interested in more than just data storage; they are interested in such things as more reliable data protection, higher performance and reduced resource consumption. Traditional enterprise-grade storage satisfies these requirements at high cost, because it is usually designed and constructed around customised field-programmable gate arrays to achieve high-end functionality. However, in this ever-changing environment, enterprises request storage with more flexible deployment and at lower cost. Moreover, the rise of new application fields, such as social media, big data, video streaming services etc., makes operational tasks for administrators more complex. In this article, a new storage system called intelligent software-defined storage (iSDS), based on software-defined storage, is described. More specifically, this approach advocates using software to replace features provided by traditional customised chips. To alleviate the management burden, it also advocates applying machine learning to automatically configure storage to meet the dynamic requirements of workloads running on the storage. This article focuses on the analysis feature of the iSDS cluster by detailing its architecture and design.
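
    As a loose sketch of the self-configuration idea described above (the abstract does not disclose iSDS internals, so all details here are assumptions), the fragment below maps observed workload metrics to a storage profile; in the real system a learned model would replace the hand-written rules.

      # Illustrative only: thresholds and profile names are invented, and a
      # trained classifier would stand in for these hand-written rules in iSDS.
      def recommend_profile(read_ratio, avg_io_size_kb, iops):
          if read_ratio > 0.9 and avg_io_size_kb <= 8:
              return {'cache': 'aggressive-read', 'tier': 'ssd', 'replicas': 2}
          if avg_io_size_kb >= 256:
              return {'cache': 'write-through', 'tier': 'hdd-streaming', 'replicas': 2}
          return {'cache': 'adaptive', 'tier': 'hybrid', 'replicas': 3}

      # Example: a small-block, read-heavy database workload.
      print(recommend_profile(read_ratio=0.95, avg_io_size_kb=4, iops=20000))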

  2. Internationalization of infrastructure companies

    Directory of Open Access Journals (Sweden)

    Frederico Araujo Turolla

    2009-03-01

    The decision of infrastructure firms to go international is not a simple one. Unlike firms in most other sectors, investment requires large amounts of capital, involves significant transaction costs, and raises issues that are specific to the destination country. In spite of the risks, several infrastructure groups have been investing abroad and have widened the foreign share of their receipts. The study proposed herein is a refinement of the established theory of international business, with support from industrial organization theory, namely infrastructure economics. The methodology is theoretical-empirical, since it starts from two existing theories. Hypotheses relate the degree of internationalization (GI) to a set of determinants of internationalization. In conclusion, with the exception of economies of density and scale, which did not prove relevant, all other variables behaved as expected.

  3. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data, and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
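
    Because the design stores gold and system annotations as RDF and scores them with SPARQL, evaluation can be reproduced with off-the-shelf semantic web tooling. The sketch below uses the rdflib Python library and an invented ex: vocabulary (the project's actual OWL ontology is not reproduced here) to extract (document, mutation) pairs from both graphs and compute precision and recall.

      from rdflib import Graph

      # Hypothetical annotation vocabulary; the project's real ontology differs.
      QUERY = """
      PREFIX ex: <http://example.org/ann#>
      SELECT ?doc ?mut WHERE {
        ?a a ex:Annotation ; ex:document ?doc ; ex:mutation ?mut .
      }
      """

      def pairs(path):
          g = Graph()
          g.parse(path, format='turtle')
          return {(str(doc), str(mut)) for doc, mut in g.query(QUERY)}

      gold, system = pairs('gold.ttl'), pairs('system.ttl')
      tp = len(gold & system)
      precision = tp / len(system) if system else 0.0
      recall = tp / len(gold) if gold else 0.0
      print(f'P={precision:.3f} R={recall:.3f}')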

  5. The TSO Logic and G2 Software Product

    Science.gov (United States)

    Davis, Derrick D.

    2014-01-01

    This internship assignment for spring 2014 was at John F. Kennedy Space Center (KSC), in NASA's Engineering and Technology (NE) group in support of the Control and Data Systems Division (NE-C) within the Systems Hardware Engineering Branch (NE-C4). The primary focus was system integration and benchmarking utilizing two separate computer software products. The first half of the 2014 internship was spent assisting NE-C4's Electronics and Embedded Systems Engineer, Kelvin Ruiz, and fellow intern Scott Ditto with the evaluation of a new piece of software, called G2. It is developed by the Gensym Corporation and was introduced to the group as a tool for monitoring launch environments. All fellow interns and employees of the G2 group worked together to better understand the significance of the G2 application and how KSC can benefit from its capabilities. The second stage of the spring project was to assist with the ongoing integration of a benchmarking tool developed by a group of engineers from a Canadian organization known as TSO Logic. Guided by NE-C4's Computer Engineer, Allen Villorin, NASA 2014 interns put forth great effort in helping to integrate TSO's software into the Spaceport Processing Systems Development Laboratory (SPSDL) for further testing and evaluation. The TSO Logic group claims that their software is designed for monitoring and reducing energy consumption at in-house server farms and large data centers, and that it allows data centers to control the power state of servers without impacting availability or performance and without changes to infrastructure; the focus of the assignment was to test this claim. TSO's Aaron Rallo, Founder and CEO, and Chris Tivel, CTO, both came to KSC to assist with the installation of their software in the SPSDL laboratory. TSO's software was installed onto 24 individual workstations running three different operating systems. The workstations were divided into three groups of 8, with each group having its

  6. A sociotechnical framework for understanding infrastructure breakdown and repair

    Energy Technology Data Exchange (ETDEWEB)

    Sims, Benjamin H [Los Alamos National Laboratory

    2009-01-01

    This paper looks at how and why infrastructure is repaired. With a new era of infrastructure spending underway, policymakers need to understand and anticipate the particular technical and political challenges posed by infrastructure repair. In particular, as infrastructure problems are increasingly in the public eye with current economic stimulus efforts, the question has increasingly been asked: why has it been so difficult for the United States to devote sustained resources to maintaining and upgrading its national infrastructure? This paper provides a sociotechnical framework for understanding the challenges of infrastructure repair, and demonstrates this framework using a case study of seismic retrofit of freeway bridges in California. The design of infrastructure is quite different from other types of design work, even when new infrastructure is being designed. Infrastructure projects are almost always situated within, and must work with, existing infrastructure networks. As a result, compared to the design of more discrete technological artifacts, the design of infrastructure systems requires a great deal of attention to interfaces, as well as the adaptation of designs to the constraints imposed by existing systems. Also, because of their scale, infrastructural technologies engage with social life at a level where explicit political agendas may play a central role in the design process. The design and building of infrastructure is therefore often an enormously complex feat of sociotechnical engineering, in which technical and political agendas are negotiated together until an outcome is reached that allows the project to move forward. These sociotechnical settlements often result in a complex balancing of powerful interests around infrastructural artifacts; at the same time, less powerful interests have historically often been excluded or marginalized from such settlements.

  7. The Anatomy of Digital Trade Infrastructures

    DEFF Research Database (Denmark)

    Rukanova, Boriana; Zinner Henriksen, Helle; Henningsson, Stefan

    2017-01-01

    In global supply chains, information about transactions resides in fragmented pockets within business and government systems. The introduction of digital trade infrastructures (DTI) that transcend organizational and system domains is driven by the prospect of reducing this information fragmentation, thereby enabling improved security and efficiency in the trade process. To understand the problem at hand and build cumulative knowledge about its resolution, a way to conceptualize the different digital trade infrastructure initiatives is needed. This paper develops the Digital Trade Infrastructure Framework

  8. Critical infrastructure system security and resiliency

    CERN Document Server

    Biringer, Betty; Warren, Drake

    2013-01-01

    Security protections for critical infrastructure nodes are intended to minimize the risks resulting from an initiating event, whether it is an intentional malevolent act or a natural hazard. With an emphasis on protecting an infrastructure's ability to perform its mission or function, Critical Infrastructure System Security and Resiliency presents a practical methodology for developing an effective protection system that can either prevent undesired events or mitigate the consequences of such events.Developed at Sandia National Labs, the authors' analytical approach and

  9. Global Land Transport Infrastructure Requirements

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-06-01

    Over the next four decades, global passenger and freight travel is expected to double over 2010 levels. In order to accommodate this growth, it is expected that the world will need to add nearly 25 million paved road lane-kilometres and 335 000 rail track kilometres. In addition, it is expected that between 45 000 square kilometres and 77 000 square kilometres of new parking spaces will be added to accommodate vehicle stock growth. These land transport infrastructure additions, when combined with operations, maintenance and repairs, are expected to cost as much as USD 45 trillion by 2050. This publication reports on the International Energy Agency’s (IEA) analysis of infrastructure requirements to support projected road and rail travel through 2050, using the IEA Mobility Model. It considers land transport infrastructure additions to support travel growth to 2050. It also considers potential savings if countries pursue “avoid and shift” policies: in this scenario, cumulative global land transport infrastructure spending could decrease as much as USD 20 trillion by 2050 over baseline projections.

  10. The Creation and Development of Innovative Infrastructure in the Danube Countries

    Directory of Open Access Journals (Sweden)

    Liudmila Rosca-Sadurschi

    2014-08-01

    Entrepreneurship development is supported by a developed, innovative infrastructure. The purpose of the business infrastructure is to create favorable conditions for entrepreneurship development by providing complete and targeted support to businesses in various areas. A training system provides for the creation and development of innovation infrastructure objects. This article therefore conducts a comparative analysis of the elements of innovation infrastructure and of how they develop in different countries. The elements of innovation infrastructure analyzed are: information infrastructure (access to information); financial infrastructure (financial resources); staff-training infrastructure (qualified staff); material and technical infrastructure; consulting infrastructure (expert consultation); and marketing infrastructure.

  11. Software engineering architecture-driven software development

    CERN Document Server

    Schmidt, Richard F

    2013-01-01

    Software Engineering: Architecture-driven Software Development is the first comprehensive guide to the underlying skills embodied in the IEEE's Software Engineering Body of Knowledge (SWEBOK) standard. Standards expert Richard Schmidt explains the traditional software engineering practices recognized for developing projects for government or corporate systems. Software engineering education often lacks standardization, with many institutions focusing on implementation rather than design as it impacts product architecture. Many graduates join the workforce with incomplete skil

  12. WRF4G project: Adaptation of WRF Model to Distributed Computing Infrastructures

    Science.gov (United States)

    Cofino, Antonio S.; Fernández Quiruelas, Valvanuz; García Díez, Markel; Blanco Real, Jose C.; Fernández, Jesús

    2013-04-01

    demonstrate the ability of Grid infrastructures to solve a scientific problem of interest and relevance to meteorology (implying a high computational cost), we will perform a high-resolution hindcast over Southwestern Europe with ERA-Interim re-analysis as boundary and initial conditions. The production of an atmospheric hindcast at high resolution will provide an appropriate assessment of the possibilities and uncertainties of the WRF model for the evaluation and forecasting of weather, energy and natural hazards. [1] http://www.meteo.unican.es/software/wrf4g

  13. The Node Monitoring Component of a Scalable Systems Software Environment

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Samuel James [Iowa State Univ., Ames, IA (United States)

    2006-01-01

    This research describes Fountain, a suite of programs used to monitor the resources of a cluster. A cluster is a collection of individual computers that are connected via a high-speed communication network. They are traditionally used by users who desire more resources, such as processing power and memory, than any single computer can provide. A common drawback to effectively utilizing such a large-scale system is the management infrastructure, which often does not scale well as the system grows. Large-scale parallel systems provide new research challenges in the area of systems software, the programs or tools that manage the system from boot-up to running a parallel job. The approach presented in this thesis utilizes a collection of separate components that communicate with each other to achieve a common goal. While systems software comprises a broad array of components, this thesis focuses on the design choices for a node monitoring component. We will describe Fountain, an implementation of the Scalable Systems Software (SSS) node monitor specification. It is targeted at aggregate node monitoring for clusters, focusing on both scalability and fault tolerance as its design goals. It leverages widely used technologies such as XML and HTTP to present an interface to other components in the SSS environment.
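
    Fountain's wire format is not given in the abstract; purely to illustrate the "XML over HTTP" node-monitoring pattern it describes, the sketch below serves a node's load average as a small XML document (Unix-only, since os.getloadavg is a POSIX call).

      import os
      import socket
      import time
      import xml.etree.ElementTree as ET
      from http.server import BaseHTTPRequestHandler, HTTPServer

      class MetricsHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              # Collect a few node metrics and render them as XML.
              one, five, fifteen = os.getloadavg()
              root = ET.Element('node', name=socket.gethostname())
              ET.SubElement(root, 'load', one=f'{one:.2f}',
                            five=f'{five:.2f}', fifteen=f'{fifteen:.2f}')
              ET.SubElement(root, 'timestamp').text = str(int(time.time()))
              body = ET.tostring(root)
              self.send_response(200)
              self.send_header('Content-Type', 'application/xml')
              self.send_header('Content-Length', str(len(body)))
              self.end_headers()
              self.wfile.write(body)

      if __name__ == '__main__':
          HTTPServer(('', 8080), MetricsHandler).serve_forever()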

  14. Building infrastructure to prevent disasters like Hurricane Maria

    Science.gov (United States)

    Bandaragoda, C.; Phuong, J.; Mooney, S.; Stephens, K.; Istanbulluoglu, E.; Pieper, K.; Rhoads, W.; Edwards, M.; Pruden, A.; Bales, J.; Clark, E.; Brazil, L.; Leon, M.; McDowell, W. G.; Horsburgh, J. S.; Tarboton, D. G.; Jones, A. S.; Hutton, E.; Tucker, G. E.; McCready, L.; Peckham, S. D.; Lenhardt, W. C.; Idaszak, R.

    2017-12-01

    Recovery efforts from natural disasters can be more efficient with data-driven information on current needs and future risks. We aim to advance open-source software infrastructure to support scientific investigation and data-driven decision making, with a prototype system using a water quality assessment developed to investigate post-Hurricane Maria drinking water contamination in Puerto Rico. The widespread disruption of water treatment processes and uncertain drinking water quality within distribution systems in Puerto Rico poses a risk to human health. However, there is no existing digital infrastructure to scientifically determine the impacts of the hurricane. After every natural disaster, it is difficult to answer elementary questions on how to provide high quality water supplies and health services. This project will archive and make accessible data on environmental variables unique to Puerto Rico and damage caused by Hurricane Maria, and will begin to address time-sensitive needs of citizens. The initial focus is to work directly with public utilities to collect and archive samples of biological and inorganic drinking water quality. Our goal is to advance understanding of how the severity of a hazard to human health (e.g., no access to safe culinary water) is related to the sophistication, connectivity, and operations of the physical and related digital infrastructure systems. By rapidly collecting data in the early stages of recovery, we will test the design of an integrated cyberinfrastructure system for usability of environmental and health data to understand the impacts of natural disasters. We will test and stress the CUAHSI HydroShare data publication mechanisms and capabilities to (1) assess the spatial and temporal presence of waterborne pathogens in public water systems impacted by a natural disaster, (2) demonstrate the usability of HydroShare as a clearinghouse to centralize selected datasets related to Hurricane Maria, and (3) develop a

  15. The TENCompetence Infrastructure: A Learning Network Implementation

    Science.gov (United States)

    Vogten, Hubert; Martens, Harrie; Lemmers, Ruud

    The TENCompetence project developed a first release of a Learning Network infrastructure to support individuals, groups and organisations in professional competence development. This Learning Network infrastructure was released as open source to the community, thereby allowing users and organisations to use and contribute to its development as they see fit. The infrastructure consists of client applications providing the user experience and server components that provide services to these clients. These services implement the domain model (Koper 2006) by provisioning the entities of the domain model (see also Sect. 18.4) and henceforth will be referenced as domain entity services.

  16. System for critical infrastructure security based on multispectral observation-detection module

    Science.gov (United States)

    Trzaskawka, Piotr; Kastek, Mariusz; Życzkowski, Marek; Dulski, Rafał; Szustakowski, Mieczysław; Ciurapiński, Wiesław; Bareła, Jarosław

    2013-10-01

    Recent terrorist attacks and possibilities of such actions in future have forced to develop security systems for critical infrastructures that embrace sensors technologies and technical organization of systems. The used till now perimeter protection of stationary objects, based on construction of a ring with two-zone fencing, visual cameras with illumination are efficiently displaced by the systems of the multisensor technology that consists of: visible technology - day/night cameras registering optical contrast of a scene, thermal technology - cheap bolometric cameras recording thermal contrast of a scene and active ground radars - microwave and millimetre wavelengths that record and detect reflected radiation. Merging of these three different technologies into one system requires methodology for selection of technical conditions of installation and parameters of sensors. This procedure enables us to construct a system with correlated range, resolution, field of view and object identification. Important technical problem connected with the multispectral system is its software, which helps couple the radar with the cameras. This software can be used for automatic focusing of cameras, automatic guiding cameras to an object detected by the radar, tracking of the object and localization of the object on the digital map as well as target identification and alerting. Based on "plug and play" architecture, this system provides unmatched flexibility and simplistic integration of sensors and devices in TCP/IP networks. Using a graphical user interface it is possible to control sensors and monitor streaming video and other data over the network, visualize the results of data fusion process and obtain detailed information about detected intruders over a digital map. System provide high-level applications and operator workload reduction with features such as sensor to sensor cueing from detection devices, automatic e-mail notification and alarm triggering. The paper presents

  17. MONITORING MECHANISM FOR INVESTMENT DEVELOPMENT OF REGIONS’ INFRASTRUCTURE

    Directory of Open Access Journals (Sweden)

    Halyna Leshuk

    2017-09-01

    The subject of the research is the theoretical and methodological principles of the mechanism for monitoring the investment development of regions' infrastructure. The objectives of the research are to generalize the theoretical and methodological bases of this monitoring mechanism and to analyse current trends in the investment development of infrastructure in the regions of Ukraine, identifying positive and negative trends. Methodology. The article deals with theoretical and methodological approaches to defining the conceptual foundations of the mechanism for monitoring the investment development of the regions' infrastructure, using general scientific methods of analysis: systematization and generalization, induction and deduction. Results. It is proposed to interpret monitoring of the investment development of regional infrastructure (IDRI) as a systematic and complex measurement of indicators of regional infrastructure development and of the number of implemented investment projects, together with monitoring of compliance with the strategic regional programs and concepts developed, which will ultimately help to regulate detected deviations effectively and efficiently and to make appropriate decisions. The IDRI monitoring mechanism should also provide the possibility of creating a system for collecting and analysing data on the assessment of infrastructure objects by the territorial community, which will allow potential investors to draw not only on analytical monitoring data from regional authorities but also to take into account the public interest in a particular region. The general principles of the monitoring mechanism for investment development of the regions' infrastructure are proposed in the following directions: complex and systematic monitoring and data collection concerning the development of the regions' infrastructure, while the aggregate

  18. Information infrastructure(s) boundaries, ecologies, multiplicity

    CERN Document Server

    Mongili, Alessandro

    2014-01-01

    This book marks an important contribution to the fascinating debate on the role that information infrastructures and boundary objects play in contemporary life, bringing to the fore the concern of how cooperation across different groups is enabled, but also constrained, by the material and immaterial objects connecting them. As such, the book itself is situated at the crossroads of various paths and genealogies, all focusing on the problem of the intersection between different levels of scale...

  19. Software FMEA analysis for safety-related application software

    International Nuclear Information System (INIS)

    Park, Gee-Yong; Kim, Dong Hoon; Lee, Dong Young

    2014-01-01

    Highlights: • We develop a modified FMEA analysis suited to software architecture. • A template of failure modes for a specific software language is established. • A detailed-level software FMEA analysis of nuclear safety software is presented. - Abstract: A method of software safety analysis for safety-related application software is described in this paper. The target software system is the software code installed in an Automatic Test and Interface Processor (ATIP) in a digital reactor protection system (DRPS). For the ATIP software safety analysis, an overall safety or hazard analysis is first performed over the software architecture and modules, and then a detailed safety analysis based on the software FMEA (Failure Modes and Effects Analysis) method is applied to the ATIP program. For an efficient analysis, the software FMEA is carried out based on a so-called failure-mode template extracted from the function blocks used in the function block diagram (FBD) of the ATIP software. The software safety analysis by software FMEA, applied to the ATIP software code, which had already been integrated and passed through a very rigorous system test procedure, proved able to provide very valuable results (i.e., software defects) that could not be identified during the various system tests
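
    The failure-mode template idea can be pictured as a lookup from FBD block type to its characteristic failure modes, expanded over every block instance into FMEA worksheet rows. A schematic sketch follows; the block types are standard IEC 61131-3 names, but the failure modes and worksheet columns are illustrative, not the paper's actual template.

      # Illustrative failure-mode template keyed by FBD block type.
      FAILURE_MODES = {
          'AND':  ['output stuck high', 'output stuck low'],
          'TON':  ['timer never expires', 'timer expires early'],
          'MOVE': ['value not transferred', 'stale value propagated'],
      }

      def fmea_rows(blocks):
          """Expand (instance, block_type) pairs into worksheet rows; the analyst
          fills in the effect and severity columns afterwards."""
          return [
              {'instance': name, 'type': btype, 'failure_mode': mode,
               'effect': 'TBD', 'severity': 'TBD'}
              for name, btype in blocks
              for mode in FAILURE_MODES.get(btype, ['unclassified failure'])
          ]

      for row in fmea_rows([('TRIP_LOGIC_1', 'AND'), ('TEST_TIMER', 'TON')]):
          print(row)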

  20. Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    Energy Technology Data Exchange (ETDEWEB)

    Duque, Earl P.N. [J.M. Smith International, LLC, Rutherford, NJ (United States). DBA Intelligent Light; Whitlock, Brad J. [J.M. Smith International, LLC, Rutherford, NJ (United States). DBA Intelligent Light

    2017-08-25

    High performance computers have for many years been on a trajectory that gives them extraordinary compute power with the addition of more and more compute cores. At the same time, other system parameters such as the amount of memory per core and bandwidth to storage have remained constant or have barely increased. This creates an imbalance in the computer, giving it the ability to compute a lot of data that it cannot reasonably save out due to time and storage constraints. While technologies have been invented to mitigate this problem (burst buffers, etc.), software has been adapting to employ in situ libraries which perform data analysis and visualization on simulation data while it is still resident in memory. This avoids ever having to pay the cost of writing many terabytes of data files. Instead, in situ processing enables the creation of more concentrated data products such as statistics, plots, and data extracts, which are all far smaller than the full-sized volume data. With the increasing popularity of in situ methods, multiple in situ infrastructures have been created, each with its own mechanism for integrating with a simulation. To make it easier to instrument a simulation with multiple in situ infrastructures and include custom analysis algorithms, this project created the SENSEI framework.
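
    The in situ pattern itself is simple to picture: instead of writing the full field at every step, the simulation hands its in-memory data to an analysis routine that emits small products. The loop below is a generic sketch of that pattern, not the SENSEI API (which is adaptor-based and C++-centric).

      import numpy as np

      def in_situ_stats(step, field):
          # Reduce the full in-memory field to a few scalars; these concentrated
          # products replace full-sized volume dumps.
          return {'step': step, 'min': float(field.min()),
                  'max': float(field.max()), 'mean': float(field.mean())}

      def run(nsteps=100, shape=(64, 64, 64)):
          field = np.random.rand(*shape)         # stand-in for simulation state
          products = []
          for step in range(nsteps):
              field = np.roll(field, 1, axis=0)  # stand-in for a solver update
              products.append(in_situ_stats(step, field))
          return products                        # small enough to keep every step

      print(run(nsteps=3)[0])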

  1. Narrating national geo information infrastructures : Balancing infrastructures and innovation

    NARCIS (Netherlands)

    Koerten, H.; Veenswijk, M.

    2009-01-01

    This paper examines narratives relating to the development of National Geo Information Infrastructures (NGII) in ethnographic research on a Dutch NGII project which was monitored throughout its course. We used an approach which focuses on narratives concerning the environment, groups and practice

  2. School infrastructure performance indicator system (SIPIS)

    CSIR Research Space (South Africa)

    Gibberd, Jeremy T

    2007-05-01

    This paper describes the School Infrastructure Performance Indicator System (SIPIS) project which explores how an indicator system could be developed for school infrastructure in South Africa. It outlines the key challenges faced by the system...

  3. Virtualization of the ATLAS software environment on a shared HPC system

    CERN Document Server

    Gamel, Anton Josef; The ATLAS collaboration

    2017-01-01

    The shared HPC cluster NEMO at the University of Freiburg has been made available to local ATLAS users through the provisioning of virtual machines incorporating the ATLAS software environment, analogously to a WLCG center. This concept allows both data analysis and production to run on the HPC host system, which is connected to the existing Tier2/Tier3 infrastructure. The schedulers of the two clusters were integrated in a dynamic, on-demand way. An automatically generated, fully functional virtual machine image provides access to the local user environment. The performance in the virtualized environment is evaluated for typical High-Energy Physics applications.

  4. Automated software for hydraulic simulation of pipeline operation

    Directory of Open Access Journals (Sweden)

    Hurgin Roman

    2018-01-01

    The design of modern water supply systems in large cities, as well as their management via the renovation of hydraulic models, poses time-consuming tasks to researchers, and coping with these tasks requires specific approaches. When tackling them, water services companies come across a great deal of information about the various objects of water infrastructure, the majority of which are located underground. In such cases, modern computer-aided design (CAD) systems containing various components come to help. These systems help to solve a wide array of problems using existing information about pipelines, supporting analysis and optimization of their basic parameters. CAD software is becoming an integral part of water supply system management in large cities, and its capabilities allow engineering and operating companies not only to collect all the necessary data concerning the water supply systems in any given city, but also to conduct research aimed at improving various parameters of these systems, including optimization of their hydraulic properties, which directly determine the quality of water. This paper contains an analysis of automated CAD software for the hydraulic design and management of city water supply systems, intended to ensure the safe and efficient operation of these systems. The authors select the most suitable software for providing hydraulic compatibility of old and new sections of water supply ring mains after selective or continuous draw-in renovation and the decrease in diameter of distribution networks against the background of decreasing water consumption in cities.
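
    At the core of any such hydraulic simulation is a head-loss calculation. A minimal sketch using the Darcy-Weisbach relation h_f = f (L/D) v^2 / (2g) follows, with a fixed friction factor for brevity; production tools derive f from the Reynolds number and pipe roughness instead.

      import math

      def head_loss_m(q_m3s, d_m, length_m, f=0.02, g=9.81):
          """Darcy-Weisbach head loss for a circular pipe; f is assumed constant
          here, whereas real solvers compute it from flow regime and roughness."""
          v = 4.0 * q_m3s / (math.pi * d_m ** 2)   # mean velocity from discharge
          return f * (length_m / d_m) * v ** 2 / (2.0 * g)

      # 50 L/s through 1 km of 300 mm main: roughly 1.7 m of head loss.
      print(f'{head_loss_m(0.05, 0.3, 1000.0):.2f} m')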

  5. Transformation of technical infrastructure

    DEFF Research Database (Denmark)

    Nielsen, Susanne Balslev

    The scope of the project is to investigate the possibilities of, and the barriers to, a transformation of technical infrastructure concerning energy, water and waste. It focuses on urban ecology as a transformation strategy. The theoretical background of the project is theories about infrastructure, the evolution of large technological systems and theories about organisational and technological transformation processes. The empirical work consists of three analyses at three different levels: socio-technical descriptions of each sector, an investigation of one municipality and investigations of one workshop...

  6. Automated tools and techniques for distributed Grid Software Development of the testbed infrastructure

    CERN Document Server

    Aguado Sanchez, C

    2007-01-01

    Grid technology is becoming more and more important as the new paradigm for sharing computational resources across different organizations in a secure way. The great power of this solution requires the definition of a generic stack of services and protocols, and this is the scope of the different Grid initiatives. As a result of international collaboration on its development, the Open Grid Forum created the Open Grid Services Architecture (OGSA), which aims to define the common set of services that will enable interoperability across the different implementations. This master thesis has been developed in this framework, as part of the two European-funded projects ETICS and OMII-Europe. The main objective is to contribute to the design and maintenance of large distributed development projects with automated tools that enable the implementation of software engineering techniques oriented towards achieving an acceptable level of quality in the release process. Specifically, this thesis develops the testbed concept a...

  7. Infrastructural urbanism that learns from place

    DEFF Research Database (Denmark)

    Carruth, Susan

    2015-01-01

    Conventionally, energy ‘infrastructure’ denotes a physical system of pipes, cables, generators, plants, transformers, sockets, and pylons; however, recent architectural research emerging within the loosely defined movement of Infrastructural Urbanism has reframed infrastructure as a symbiotic system of flows...

  8. Assessment of Road Infrastructures Pertaining to Malaysian Experience

    Directory of Open Access Journals (Sweden)

    Samsuddin Norshakina

    2016-01-01

    Road infrastructure contributes to many severe accidents, and it needs supervision in order to improve road safety levels. The number of fatalities has increased annually, and road authorities should seriously consider conducting programs or activities to periodically monitor, restore or improve road infrastructure. Implementation of road safety audits may reduce fatalities among road users and maintain road safety at acceptable standards. This paper discusses aspects of road infrastructure in Malaysia. The research examines the impact of road hazards identified during the observations and the impact of road infrastructure types on road accidents. The F050 (Jalan Kluang-Batu Pahat) road case study showed that infrastructure risk is closely related to the number of accidents: as infrastructure risk increases, the number of road accidents also increases. It was also found that different road zones along Jalan Kluang-Batu Pahat showed different levels of intersection volume due to the number of road intersections. Thus, it is hoped that implementing continuous assessment of road infrastructure may reduce road accidents and fatalities among drivers and the community.

  9. Laboratory and software applications for clinical trials: the global laboratory environment.

    Science.gov (United States)

    Briscoe, Chad

    2011-11-01

    The Applied Pharmaceutical Software Meeting is held annually. It is sponsored by The Boston Society, a not-for-profit organization that coordinates a series of meetings within the global pharmaceutical industry. The meeting generally focuses on laboratory applications, but in recent years has expanded to include some software applications for clinical trials. The 2011 meeting emphasized the global laboratory environment. Global clinical trials generate massive amounts of data in many locations that must be centralized and processed for efficient analysis. Thus, the meeting had a strong focus on establishing networks and systems for dealing with the computer infrastructure to support such environments. In addition to the globally installed laboratory information management system, electronic laboratory notebook and other traditional laboratory applications, cloud computing is quickly becoming the answer to provide efficient, inexpensive options for managing the large volumes of data and computing power, and thus it served as a central theme for the meeting.

  10. Problem-Oriented Simulation Packages and Computational Infrastructure for Numerical Studies of Powerful Gyrotrons

    International Nuclear Information System (INIS)

    Damyanova, M; Sabchevski, S; Vasileva, E; Balabanova, E; Zhelyazkov, I; Dankov, P; Malinov, P

    2016-01-01

    Powerful gyrotrons are necessary as sources of strong microwaves for electron cyclotron resonance heating (ECRH) and electron cyclotron current drive (ECCD) of magnetically confined plasmas in various reactors (most notably ITER) for controlled thermonuclear fusion. Adequate physical models and efficient problem-oriented software packages are essential tools for numerical studies, analysis, optimization and computer-aided design (CAD) of such high-performance gyrotrons operating in a CW mode and delivering output power of the order of 1-2 MW. In this report we present the current status of our simulation tools (physical models, numerical codes, pre- and post-processing programs, etc.) as well as the computational infrastructure on which they are being developed, maintained and executed. (paper)
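
    As a quick numerical check of the operating regime these tools target, the electron cyclotron frequency scales as f_ce = eB / (2π γ m_e), roughly 28 GHz per tesla, and ECRH gyrotrons for ITER work near 170 GHz. A small sketch:

      import math

      E = 1.602176634e-19      # elementary charge, C
      M_E = 9.1093837015e-31   # electron rest mass, kg

      def cyclotron_freq_ghz(b_tesla, gamma=1.0):
          # f_ce = e*B / (2*pi*gamma*m_e); gamma > 1 accounts for the
          # relativistic electron beam inside a real gyrotron cavity.
          return E * b_tesla / (2.0 * math.pi * gamma * M_E) / 1e9

      print(f'{cyclotron_freq_ghz(6.1):.0f} GHz')  # ~171 GHz near 6.1 T (cold beam)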

  11. Detection and Identification of People at Critical Infrastructure Facilities of Traffic Buildings

    Directory of Open Access Journals (Sweden)

    Rastislav PIRNÍK

    2014-12-01

    This paper focuses on the identification of persons entering critical infrastructure facilities and the subsequent detection of their movement within parts of those facilities. It explains some of the technologies and approaches to processing specific image information within existing building systems. The article describes the proposed algorithm for the detection of persons. It brings a fresh approach to the detection of moving objects (groups of persons) in enclosed areas, focusing on securing freely accessible places in buildings. Based on the designed identification algorithm, with the presupposed use of a 3D application, the motion trajectories of persons in a delimited space can be identified automatically. The application was created with an open-source software tool using the OpenCV library.
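
    The paper's own detection algorithm is only outlined in the abstract; as a baseline for the same task, OpenCV ships a stock HOG pedestrian detector that can be run on a corridor camera in a few lines. The camera index and display handling below are illustrative.

      import cv2

      # Stock HOG + linear-SVM people detector bundled with OpenCV; the paper's
      # own algorithm is not reproduced here.
      hog = cv2.HOGDescriptor()
      hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

      cap = cv2.VideoCapture(0)  # index 0: first local camera; a stream URL also works
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
          for (x, y, w, h) in rects:
              cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
          cv2.imshow('persons', frame)
          if cv2.waitKey(1) & 0xFF == ord('q'):
              break
      cap.release()
      cv2.destroyAllWindows()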

  12. Development of a public health nursing data infrastructure.

    Science.gov (United States)

    Monsen, Karen A; Bekemeier, Betty; P Newhouse, Robin; Scutchfield, F Douglas

    2012-01-01

    An invited group of national public health nursing (PHN) scholars, practitioners, policymakers, and other stakeholders met in October 2010, identifying a critical need for a national PHN data infrastructure to support PHN research. This article summarizes the strengths, limitations, and gaps specific to PHN data and proposes a research agenda for the development of a PHN data infrastructure. Future implications are suggested, such as issues related to the development of the proposed PHN data infrastructure and future research possibilities enabled by the infrastructure. Such a data infrastructure has the potential to improve accountability and measurement, to demonstrate the value of PHN services, and to improve population health. © 2012 Wiley Periodicals, Inc.

  13. Investment opportunities in infrastructure regardless of financial crisis

    OpenAIRE

    ILIE Georgeta

    2009-01-01

    During these times of dramatic change and financial market disorder, the challenge of infrastructure development is being drawn further into the spotlight. Infrastructure will rise in importance over the coming years. The availability and quality of infrastructure directly affect where business operations are located and expanded. In this context, roads and power generation are the most urgent infrastructure needs. The paper reveals a few economic characteristics of the current stage of infrastru...

  14. Digital Trade Infrastructures: A Framework for Analysis

    Directory of Open Access Journals (Sweden)

    Boriana Boriana

    2018-04-01

    In global supply chains, information about transactions resides in fragmented pockets within business and government systems. The lack of reliable, accurate and complete information makes it hard to detect risks (such as safety, security, compliance and commercial risks) and at the same time makes international trade inefficient. The introduction of digital infrastructures that transcend organizational and system domains is driven by the prospect of reducing the fragmentation of information, thereby enabling improved security and efficiency in the trading process. This article develops a digital trade infrastructure framework through an empirically grounded analysis of four digital infrastructures in the trade domain, using the conceptual lens of digital infrastructure.

  15. Architecture Design of Healthcare Software-as-a-Service Platform for Cloud-Based Clinical Decision Support Service.

    Science.gov (United States)

    Oh, Sungyoung; Cha, Jieun; Ji, Myungkyu; Kang, Hyekyung; Kim, Seok; Heo, Eunyoung; Han, Jong Soo; Kang, Hyunggoo; Chae, Hoseok; Hwang, Hee; Yoo, Sooyoung

    2015-04-01

    To design a cloud computing-based Healthcare Software-as-a-Service (SaaS) Platform (HSP) for delivering healthcare information services with low cost, high clinical value, and high usability. We analyzed the architecture requirements of an HSP, including the interface, business services, cloud SaaS, quality attributes, privacy and security, and multi-lingual capacity. For cloud-based SaaS services, we focused on Clinical Decision Service (CDS) content services, basic functional services, and mobile services. Microsoft's Azure cloud computing for Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) was used. The functional and software views of an HSP were designed in a layered architecture. External systems can be interfaced with the HSP using SOAP and REST/JSON. The multi-tenancy model of the HSP was designed as a shared database, with a separate schema for each tenant through a single application, although healthcare data can be physically located on a cloud or in a hospital, depending on regulations. The CDS services were categorized into rule-based services for medications, alert registration services, and knowledge services. We expect that cloud-based HSPs will allow small and mid-sized hospitals, in addition to large-sized hospitals, to adopt information infrastructures and health information technology with low system operation and maintenance costs.
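
    The shared-database, schema-per-tenant model described above can be pictured as routing every connection through a tenant-specific schema search path. A sketch with psycopg2 against PostgreSQL follows; database, role, and schema naming are illustrative, not the HSP's actual configuration.

      import psycopg2

      def connect_for_tenant(tenant_id):
          """One shared database, one schema per tenant: a single application,
          with isolation coming from the search path. Names are illustrative."""
          conn = psycopg2.connect(dbname='hsp', user='hsp_app', host='localhost')
          with conn.cursor() as cur:
              # int() guards the identifier; psycopg2 renders the parameter as a
              # quoted literal, which PostgreSQL accepts in SET search_path.
              cur.execute('SET search_path TO %s', (f'tenant_{int(tenant_id)}',))
          conn.commit()
          return conn

      conn = connect_for_tenant(42)
      with conn.cursor() as cur:
          cur.execute('SELECT current_schemas(false)')
          print(cur.fetchone())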

  16. Security infrastructures: towards the INDECT system security

    OpenAIRE

    Stoianov, Nikolai; Urueña, Manuel; Niemiec, Marcin; Machník, Petr; Maestro, Gema

    2012-01-01

    This paper provides an overview of the security infrastructures being deployed inside the INDECT project. These security infrastructures can be organized in five main areas: Public Key Infrastructure, Communication security, Cryptography security, Application security and Access control, based on certificates and smartcards. This paper presents the new ideas and deployed testbeds for these five areas. In particular, it explains the hierarchical architecture of the INDECT PKI...

  17. Sandia software guidelines: Software quality planning

    Energy Technology Data Exchange (ETDEWEB)

    1987-08-01

    This volume is one in a series of Sandia Software Guidelines intended for use in producing quality software within Sandia National Laboratories. In consonance with the IEEE Standard for Software Quality Assurance Plans, this volume identifies procedures to follow in producing a Software Quality Assurance Plan for an organization or a project, and provides an example project SQA plan. 2 figs., 4 tabs.

  18. Transportation infrastructure resiliency : a review of transportation infrastructure resiliency in light of future impacts of climate change

    Science.gov (United States)

    2013-08-06

    The threat of global climate change and its impact on our world's infrastructure is a rapidly growing reality. Particularly, as seen in recent storm events such as Hurricanes Katrina and Sandy in the United States, transportation infrastructure is o...

  19. Software

    Energy Technology Data Exchange (ETDEWEB)

    Macedo, R.; Budd, G.; Ross, E.; Wells, P.

    2010-07-15

    The software section of this journal presented new software programs that have been developed to help in the exploration and development of hydrocarbon resources. Software provider IHS Inc. has made additions to its geological and engineering analysis software tool, IHS PETRA, a product used by geoscientists and engineers to visualize, analyze and manage well production, well log, drilling, reservoir, seismic and other related information. IHS PETRA also includes a directional well module and a decline curve analysis module to improve analysis capabilities in unconventional reservoirs. Petris Technology Inc. has developed software to help manage large volumes of data. PetrisWinds Enterprise (PWE) helps users find and manage wellbore data, including conventional wireline and MWD core data; analyses of core photos and images; waveforms and NMR; and external file documentation. Ottawa-based Ambercore Software Inc. has been collaborating with Nexen on the Petroleum iQ software for steam assisted gravity drainage (SAGD) producers. Petroleum iQ integrates geology and geophysics data with engineering data in 3D and 4D. Calgary-based Envirosoft Corporation has developed software that reduces the costly and time-consuming effort required to comply with Directive 39 of the Alberta Energy Resources Conservation Board; the product includes emissions modelling software. Houston-based Seismic Micro-Technology (SMT) has developed the Kingdom software, which features the latest in seismic interpretation. Holland-based Joa Oil and Gas and Calgary-based Computer Modelling Group have both supplied the petroleum industry with advanced reservoir simulation software that enables reservoir interpretation. The 2010 software survey included a guide to new software applications designed to facilitate petroleum exploration, drilling and production activities. Oil and gas producers can use the products for a range of functions, including reservoir characterization and accounting. In

  20. Rolling vibes : continuous transport infrastructure monitoring

    NARCIS (Netherlands)

    Seraj, Fatjon

    2017-01-01

    Transport infrastructure is a people-to-people technology, in the sense that it is built by people to serve people by facilitating transportation, connection and communication. People have improved infrastructure by applying simple methods derived from their sensing and thinking. Since the early ages,