WorldWideScience

Sample records for facility infrastructure tier-3

  1. Analysis Facility infrastructure (TIER3) for ATLAS High Energy physics experiment

    International Nuclear Information System (INIS)

    Gonzalez de la Hoz, S.; March, L.; Ros, E.; Sanchez, J.; Amoros, G.; Fassi, F.; Fernandez, A.; Kaci, M.; Lamas, A.; Salt, J.

    2007-01-01

    The ATLAS project has been asked to define the scope and role of Tier-3 resources (facilities or centres) within the existing ATLAS computing model, activities and facilities. This document attempts to address these questions by describing Tier-3 resources generally, and their relationship to the ATLAS Software and Computing Project. Originally the tiered computing model came out of the MONARC work (see http://monarc.web.cern.ch/MONARC/) and was predicated upon the network being a scarce resource. In this model the tiered hierarchy ranged from the Tier-0 (CERN) down to the desktop or workstation (Tier-3). The focus on defining the roles of each tiered component has evolved, with the initial emphasis on the definition and roles of the Tier-0 (CERN) and the Tier-1s (national centres). The various LHC projects, including ATLAS, then evolved the tiered hierarchy to include Tier-2s (regional centres) as part of their projects. Tier-3s, on the other hand, have (implicitly and sometimes explicitly) been defined as whatever an institution can construct to support its physics goals using institutional and otherwise leveraged resources, and therefore have not been considered part of the official ATLAS Research Program computing resources nor under its control; there is no formal MoU process to designate sites as Tier-3s and no formal control of the program over Tier-3 resources. Tier-3s are the responsibility of individual institutions to define, fund, deploy and support. Having noted this, however, we must also recognize that Tier-3s will exist and will have implications for how our computing model should support ATLAS physicists. Tier-3 users will want to access data and simulations and will want to enable their Tier-3 resources to support their analysis and simulation work. Tier-3s are an important resource for physicists analysing LHC (Large Hadron Collider) data. This document will define how Tier-3s should best interact with the ATLAS computing model, detail the

  2. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    Science.gov (United States)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making an AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This paper describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocation of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
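
    As an illustration of the dynamic provisioning described above, the sketch below launches a worker VM through the OpenStack API and then registers it with a Torque server. This is a minimal sketch under assumed names: the clouds.yaml entry, image and flavour IDs, network UUID and node name are placeholders, not the configuration used by the authors.

        import subprocess
        import openstack  # openstacksdk

        IMAGE_ID = "REPLACE-WITH-SL-IMAGE-ID"      # base Scientific Linux image (placeholder)
        FLAVOR_ID = "REPLACE-WITH-FLAVOR-ID"       # placeholder
        NETWORK_ID = "REPLACE-WITH-NETWORK-UUID"   # placeholder

        # Credentials are read from a clouds.yaml entry named "research-cloud" (assumption).
        conn = openstack.connect(cloud="research-cloud")

        server = conn.compute.create_server(
            name="worker-001",
            image_id=IMAGE_ID,
            flavor_id=FLAVOR_ID,
            networks=[{"uuid": NETWORK_ID}],
        )
        server = conn.compute.wait_for_server(server)  # block until the VM is ACTIVE

        # Add the new node to the dynamic Torque cluster (qmgr must be available on the
        # machine running this script and the new hostname must be resolvable).
        subprocess.run(["qmgr", "-c", f"create node {server.name}"], check=True)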

  3. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    International Nuclear Information System (INIS)

    Limosani, Antonio; Boland, Lucien; Crosby, Sean; Huang, Joanna; Sevior, Martin; Coddington, Paul; Zhang, Shunde; Wilson, Ross

    2014-01-01

    The Australian Government is making an AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This paper describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocation of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.

  4. Analysis facility infrastructure (Tier-3) for ATLAS experiment

    International Nuclear Information System (INIS)

    Gonzalez de la Hoz, S.; March, L.; Ros, E.; Sanchez, J.; Amoros, G.; Fassi, F.; Fernandez, A.; Kaci, M.; Lamas, A.; Salt, J.

    2008-01-01

    In the ATLAS computing model the tiered hierarchy ranged from the Tier-0 (CERN) down to desktops or workstations (Tier-3). The focus on defining the roles of each tiered component has evolved with the initial emphasis on the Tier-0 and Tier-1 definition and roles. The various LHC (Large Hadron Collider) projects, including ATLAS, then evolved the tiered hierarchy to include Tier-2s (regional centres) as part of their projects. Tier-3 centres, on the other hand, have been defined as whatever an institution could construct to support its physics goals using institutional and otherwise leveraged resources, and therefore have not been considered part of the official ATLAS computing resources. However, Tier-3 centres are going to exist and will have implications for how the computing model should support ATLAS physicists. Tier-3 users will want to access LHC data and simulations and will want to enable their resources to support their analysis and simulation work. This document will define how IFIC (Instituto de Fisica Corpuscular de Valencia), after discussing with the ATLAS Tier-3 task force, should interact with the ATLAS computing model, detail the conditions under which Tier-3 centres can expect some level of support and set reasonable expectations for the scope and support of ATLAS Tier-3 sites. (orig.)

  5. Analysis facility infrastructure (Tier-3) for ATLAS experiment

    CERN Document Server

    González de la Hoz, S; Ros, E; Sánchez, J; Amorós, G; Fassi, F; Fernández, A; Kaci, M; Lamas, A; Salt, J

    2008-01-01

    In the ATLAS computing model the tiered hierarchy ranged from the Tier-0 (CERN) down to desktops or workstations (Tier-3). The focus on defining the roles of each tiered component has evolved with the initial emphasis on the Tier-0 and Tier-1 definition and roles. The various LHC (Large Hadron Collider) projects, including ATLAS, then evolved the tiered hierarchy to include Tier-2s (regional centres) as part of their projects. Tier-3 centres, on the other hand, have been defined as whatever an institution could construct to support its physics goals using institutional and otherwise leveraged resources, and therefore have not been considered part of the official ATLAS computing resources. However, Tier-3 centres are going to exist and will have implications for how the computing model should support ATLAS physicists. Tier-3 users will want to access LHC data and simulations and will want to enable their resources to support their analysis and simulation work. This document will define how IFIC (Insti...

  6. VM-based infrastructure for simulating different cluster and storage solutions used on ATLAS Tier-3 sites

    International Nuclear Information System (INIS)

    Belov, S; Kadochnikov, I; Korenkov, V; Kutouski, M; Oleynik, D; Petrosyan, A

    2012-01-01

    The current ATLAS Tier-3 infrastructure consists of a variety of sites of different sizes and with a mix of local resource management systems (LRMS) and mass storage system (MSS) implementations. The Tier-3 monitoring suite, having been developed in order to satisfy the needs of Tier-3 site administrators and to aggregate Tier-3 monitoring information on the global VO level, needs to be validated for various combinations of LRMS and MSS solutions along with the corresponding Ganglia plugins. For this purpose a testbed infrastructure, which allows simulation of various computational cluster and storage solutions, has been set up at JINR (Dubna, Russia). This infrastructure provides the ability to run testbeds with various LRMS and MSS implementations, and with the capability to quickly redeploy particular testbeds or their components. Performance of specific components is not a critical issue for development and validation, whereas easy management and deployment are crucial. Therefore virtual machines were chosen for implementation of the validation infrastructure which, though initially developed for the Tier-3 monitoring project, can be exploited for other purposes. Load generators for simulation of the computing activities at the farm were developed as part of this task. The paper will cover the concrete implementation, including deployment scenarios, hypervisor details and load simulators.
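
    The load generators mentioned above are not described in detail in this record; the following minimal sketch shows one way such a generator could emulate CPU- and I/O-bound activity on a testbed worker node. The job duration, process count and file size are arbitrary illustrative values.

        import multiprocessing
        import os
        import tempfile
        import time

        def cpu_task(seconds):
            """Busy-loop to emulate a CPU-bound analysis job."""
            end = time.time() + seconds
            x = 0.0
            while time.time() < end:
                x += 1.0  # trivial arithmetic keeps one core busy
            return x

        def io_task(megabytes):
            """Write and re-read a scratch file to emulate staging activity."""
            with tempfile.NamedTemporaryFile(delete=False) as f:
                f.write(os.urandom(megabytes * 1024 * 1024))
                path = f.name
            with open(path, "rb") as f:
                f.read()
            os.remove(path)

        if __name__ == "__main__":
            with multiprocessing.Pool(processes=4) as pool:
                pool.map(cpu_task, [60] * 4)  # four 60-second CPU bursts
            io_task(256)                      # one 256 MB write/read cycle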

  7. ATLAS Tier-3 within IFIC-Valencia analysis facility

    CERN Document Server

    Villaplana, M; The ATLAS collaboration; Fernández, A; Salt, J; Lamas, A; Fassi, F; Kaci, M; Oliver, E; Sánchez, J; Sánchez-Martínez, V

    2012-01-01

    The ATLAS Tier-3 at IFIC-Valencia is attached to a Tier-2 that has 50% of the Spanish Federated Tier-2 resources. In its design, the Tier-3 includes a GRID-aware part that shares some of the features of IFIC Tier-2 such as using Lustre as a file system. ATLAS users, 70% of IFIC users, also have the possibility of analysing data with a PROOF farm and storing them locally. In this contribution we discuss the design of the analysis facility as well as the monitoring tools we use to control and improve its performance. We also comment on how the recent changes in the ATLAS computing GRID model affect IFIC. Finally, how this complex system can coexist with the other scientific applications running at IFIC (non-ATLAS users) is presented.
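
    As an illustration of the interactive PROOF-based analysis mentioned above, the sketch below opens a PROOF session from PyROOT and processes a chain of locally stored files; the tree name, file path and selector are hypothetical placeholders rather than IFIC's actual setup.

        import ROOT

        # PROOF-Lite uses the cores of the local machine; a dedicated farm would be
        # addressed as "user@proof-master.example.org" (hypothetical host name).
        proof = ROOT.TProof.Open("lite://")

        chain = ROOT.TChain("physics")                    # placeholder tree name
        chain.Add("/lustre/site/atlas/user/sample.root")  # placeholder Lustre path
        chain.SetProof()                                  # route processing through PROOF
        chain.Process("MySelector.C+")                    # user-provided TSelector (placeholder)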

  8. INFN Tier-1 Testbed Facility

    International Nuclear Information System (INIS)

    Gregori, Daniele; Cavalli, Alessandro; Dell'Agnello, Luca; Dal Pra, Stefano; Prosperini, Andrea; Ricci, Pierpaolo; Ronchieri, Elisabetta; Sapunenko, Vladimir

    2012-01-01

    INFN-CNAF, located in Bologna, is the Information Technology Center of National Institute of Nuclear Physics (INFN). In the framework of the Worldwide LHC Computing Grid, INFN-CNAF is one of the eleven worldwide Tier-1 centers to store and reprocessing Large Hadron Collider (LHC) data. The Italian Tier-1 provides the resources of storage (i.e., disk space for short term needs and tapes for long term needs) and computing power that are needed for data processing and analysis to the LHC scientific community. Furthermore, INFN Tier-1 houses computing resources for other particle physics experiments, like CDF at Fermilab, SuperB at Frascati, as well as for astro particle and spatial physics experiments. The computing center is a very complex infrastructure, the hardaware layer include the network, storage and farming area, while the software layer includes open source and proprietary software. Software updating and new hardware adding can unexpectedly deteriorate the production activity of the center: therefore a testbed facility has been set up in order to reproduce and certify the various layers of the Tier-1. In this article we describe the testbed and the checks performed.

  9. Managing a tier-2 computer centre with a private cloud infrastructure

    International Nuclear Information System (INIS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-01-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.
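
    Because the infrastructure exposes an EC2-compatible endpoint, instances can in principle be started with standard client tooling; the sketch below uses boto3 against a hypothetical endpoint, with placeholder credentials and image ID. None of these values come from the INFN-Torino setup.

        import boto3

        # Endpoint URL, keys and image ID are placeholders for illustration only.
        ec2 = boto3.client(
            "ec2",
            endpoint_url="https://cloud.example.org:8773/services/Cloud",
            aws_access_key_id="ACCESS_KEY",
            aws_secret_access_key="SECRET_KEY",
            region_name="site",
        )

        resp = ec2.run_instances(
            ImageId="ami-00000001", MinCount=1, MaxCount=1, InstanceType="m1.small"
        )
        print(resp["Instances"][0]["InstanceId"])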

  10. Evolution of the Atlas data and computing model for a Tier-2 in the EGI infrastructure

    CERN Document Server

    Fernandez, A; The ATLAS collaboration; AMOROS, G; VILLAPLANA, M; FASSI, F; KACI, M; LAMAS, A; OLIVER, E; SALT, J; SANCHEZ, J; SANCHEZ, V

    2012-01-01

    During the last few years the ATLAS computing model has moved from a more strict design, where every Tier2 had a liaison and a network dependence on a Tier1, to a more meshed approach where every cloud can be connected. Evolution of the ATLAS data models requires changes in the ATLAS Tier2 policy for data replication, dynamic data caching and remote data access. It also requires rethinking the network infrastructure to enable any Tier2 and associated Tier3 to easily connect to any Tier1 or Tier2. Tier2s are becoming more and more important in the ATLAS computing model, as they allow more data to be readily accessible for analysis jobs by all users, independently of their geographical location. The Tier2 disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulation jobs. Tier2s are going to be used more effic...

  11. Optimization of HEP Analysis Activities Using a Tier2 Infrastructure

    International Nuclear Information System (INIS)

    Arezzini, S; Bagliesi, G; Boccali, T; Ciampa, A; Mazzoni, E; Coscetti, S; Sarkar, S; Taneja, S

    2012-01-01

    While the model for a Tier2 is well understood and implemented within the HEP community, a refined design for analysis-specific sites has not been agreed upon as clearly. We aim to describe the solutions adopted at INFN Pisa, the biggest Tier2 in the Italian HEP community. A standard Tier2 infrastructure is optimized for Grid CPU and storage access, while a more interactive-oriented use of the resources is beneficial to the final data analysis step. In this step, POSIX file storage access is easier for the average physicist, and has to be provided in a real or emulated way. Modern analysis techniques use advanced statistical tools (like RooFit and RooStat), which can make use of multi-core systems. The infrastructure has to provide or create on demand computing nodes with many cores available, on top of the existing and less elastic Tier2 flat CPU infrastructure. Finally, users do not want to have to deal with data placement policies at the various sites, and hence a transparent WAN file access, again with a POSIX layer, must be provided, making use of the soon-to-be-installed 10 Gbit/s regional lines. Even if standalone systems with such features are possible and exist, the implementation of an analysis site as a virtual layer over an existing Tier2 requires novel solutions; the ones used in Pisa are described here.
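
    To make the multi-core analysis point concrete, the sketch below runs a generic RooFit fit parallelised with the NumCPU option, reading its input over an XRootD URL as one example of transparent remote file access. The server name, file path, tree and branch names are assumptions for illustration and not part of the Pisa setup.

        import ROOT

        # Remote file opened through an XRootD door (placeholder server and path);
        # the file is assumed to contain a TTree "events" with a branch "x".
        f = ROOT.TFile.Open("root://se.example.org//store/user/analysis/tree.root")
        tree = f.Get("events")

        x = ROOT.RooRealVar("x", "x", 0.0, 10.0)
        data = ROOT.RooDataSet("data", "data", ROOT.RooArgSet(x), ROOT.RooFit.Import(tree))

        mean = ROOT.RooRealVar("mean", "mean", 5.0, 0.0, 10.0)
        sigma = ROOT.RooRealVar("sigma", "sigma", 1.0, 0.1, 5.0)
        model = ROOT.RooGaussian("model", "gaussian", x, mean, sigma)

        # NumCPU(n) parallelises the likelihood evaluation over n cores.
        model.fitTo(data, ROOT.RooFit.NumCPU(4))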

  12. Tier-3 Monitoring Software Suite (T3MON) proposal

    CERN Document Server

    Andreeva, J; The ATLAS collaboration; Klimentov, A; Korenkov, V; Oleynik, D; Panitkin, S; Petrosyan, A

    2011-01-01

    The ATLAS Distributed Computing activities have so far concentrated on the “central” part of the computing system of the experiment, namely the first 3 tiers (the CERN Tier0, the 10 Tier1 centres and the 60+ Tier2s). This is a coherent system to perform data processing and management on a global scale, hosting (re)processing and simulation activities down to group and user analysis. Many ATLAS Institutes and National Communities have built (or plan to build) Tier-3 facilities. The definition of the Tier-3 concept has been outlined (REFERENCE). Tier-3 centres consist of non-pledged resources, mostly dedicated to data analysis by the geographically close or local scientific groups. Tier-3 sites comprise a range of architectures and many do not possess Grid middleware, which would render application of Tier-2 monitoring systems useless. This document describes a strategy to develop a software suite for monitoring of the Tier3 sites. This software suite will enable local monitoring of the Tier3 sites and the global vie...
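
    Fabric monitoring of this kind typically builds on Ganglia's gmond daemon, which publishes cluster state as XML on TCP port 8649; the sketch below polls that port and extracts a couple of host metrics. The head-node name and the chosen metrics are placeholders, and this is not the actual T3MON implementation.

        import socket
        import xml.etree.ElementTree as ET

        def read_gmond(host="t3-headnode.example.org", port=8649):
            """Fetch the cluster-state XML that gmond serves on its TCP port."""
            chunks = []
            with socket.create_connection((host, port), timeout=10) as sock:
                while True:
                    data = sock.recv(65536)
                    if not data:
                        break
                    chunks.append(data)
            return ET.fromstring(b"".join(chunks))

        root = read_gmond()
        for node in root.iter("HOST"):
            metrics = {m.get("NAME"): m.get("VAL") for m in node.iter("METRIC")
                       if m.get("NAME") in ("load_one", "mem_free")}
            print(node.get("NAME"), metrics)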

  13. Tier II Chemical Storage Facilities

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility

    Facilities that store hazardous chemicals above certain quantities must submit an annual emergency and hazardous chemical inventory on a Tier II form. This is a...

  14. Illustrative Example of Distributed Analysis in ATLAS Spanish Tier-2 and Tier-3 centers

    CERN Document Server

    Oliver, E; The ATLAS collaboration; González de la Hoz, S; Kaci, M; Lamas, A; Salt, J; Sánchez, J; Villaplana, M

    2011-01-01

    Data taking in ATLAS has been going on for more than one year. The necessity of a computing infrastructure for data storage, access by thousands of users, and the processing of hundreds of millions of events has been confirmed in this period. Fortunately, this task has been managed by the GRID infrastructure and by the manpower that has also been developing specific GRID tools for the ATLAS community. An example of a physics analysis using this infrastructure, a search for the decay of a heavy resonance into a ttbar pair, is shown, concretely using the ATLAS Spanish Tier-2 and the IFIC Tier-3. At this moment, the ATLAS Distributed Computing group is working to improve the connectivity among centers in order to be ready for the foreseen increase in ATLAS activity in the coming years.

  15. Evolution of the ATLAS data and computing model for a Tier2 in the EGI infrastructure

    CERN Document Server

    Fernández Casaní, A; The ATLAS collaboration; González de la Hoz, S; Salt Cairols, J; Fassi, F; Kaci, M; Lamas, A; Oliver, E; Sánchez, J; Sánchez, V

    2012-01-01

    Since the start of the LHC pp collisions in 2010, the ATLAS computing model has moved from a more strict design, where every Tier2 had a liaison and a network dependence on a Tier1, to a more meshed approach where every cloud can be connected. Evolution of the ATLAS data models requires changes in the ATLAS Tier2 policy for data replication, dynamic data caching and remote data access. It also requires rethinking the network infrastructure to enable any Tier2 and associated Tier3 to easily connect to any Tier1 or Tier2. Tier2s are becoming more and more important in the ATLAS computing model, as they allow more data to be readily accessible for analysis jobs by all users, independently of their geographical location. The Tier2 disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulation jobs. Tier2s are going to be used more efficiently. In this way Tier1s and Tier2s are becoming more equivalent for t...

  16. Status and Trends in Networking at LHC Tier1 Facilities

    Science.gov (United States)

    Bobyshev, A.; DeMar, P.; Grigaliunas, V.; Bigrow, J.; Hoeft, B.; Reymund, A.

    2012-12-01

    The LHC is entering its fourth year of production operation. Most Tier1 facilities have been in operation for almost a decade, when development and ramp-up efforts are included. LHC's distributed computing model is based on the availability of high capacity, high performance network facilities for both the WAN and LAN data movement, particularly within the Tier1 centers. As a result, the Tier1 centers tend to be on the leading edge of data center networking technology. In this paper, we analyze past and current developments in Tier1 LAN networking, as well as extrapolating where we anticipate networking technology is heading. Our analysis will include examination into the following areas: • Evolution of Tier1 centers to their current state • Evolving data center networking models and how they apply to Tier1 centers • Impact of emerging network technologies (e.g. 10GE-connected hosts, 40GE/100GE links, IPv6) on Tier1 centers • Trends in WAN data movement and emergence of software-defined WAN network capabilities • Network virtualization

  17. Status and Trends in Networking at LHC Tier1 Facilities

    International Nuclear Information System (INIS)

    Bobyshev, A; DeMar, P; Grigaliunas, V; Bigrow, J; Hoeft, B; Reymund, A

    2012-01-01

    The LHC is entering its fourth year of production operation. Most Tier1 facilities have been in operation for almost a decade, when development and ramp-up efforts are included. LHC's distributed computing model is based on the availability of high capacity, high performance network facilities for both the WAN and LAN data movement, particularly within the Tier1 centers. As a result, the Tier1 centers tend to be on the leading edge of data center networking technology. In this paper, we analyze past and current developments in Tier1 LAN networking, as well as extrapolating where we anticipate networking technology is heading. Our analysis will include examination into the following areas: • Evolution of Tier1 centers to their current state • Evolving data center networking models and how they apply to Tier1 centers • Impact of emerging network technologies (e.g. 10GE-connected hosts, 40GE/100GE links, IPv6) on Tier1 centers • Trends in WAN data movement and emergence of software-defined WAN network capabilities • Network virtualization

  18. Status and trends in networking at LHC Tier1 facilities

    Energy Technology Data Exchange (ETDEWEB)

    Bobyshev, A. [Fermilab]; DeMar, P. [Fermilab]; Grigaliunas, V. [Fermilab]; Bigrow, J. [Brookhaven]; Hoeft, B. [KIT, Karlsruhe]; Reymund, A. [KIT, Karlsruhe]

    2012-06-22

    The LHC is entering its fourth year of production operation. Most Tier1 facilities have been in operation for almost a decade, when development and ramp-up efforts are included. LHC's distributed computing model is based on the availability of high capacity, high performance network facilities for both the WAN and LAN data movement, particularly within the Tier1 centers. As a result, the Tier1 centers tend to be on the leading edge of data center networking technology. In this paper, we analyze past and current developments in Tier1 LAN networking, as well as extrapolating where we anticipate networking technology is heading. Our analysis will include examination into the following areas: • Evolution of Tier1 centers to their current state • Evolving data center networking models and how they apply to Tier1 centers • Impact of emerging network technologies (e.g. 10GE-connected hosts, 40GE/100GE links, IPv6) on Tier1 centers • Trends in WAN data movement and emergence of software-defined WAN network capabilities • Network virtualization

  19. Analysis of internal network requirements for the distributed Nordic Tier-1

    DEFF Research Database (Denmark)

    Behrmann, G.; Fischer, L.; Gamst, Mette

    2010-01-01

    The Tier-1 facility operated by the Nordic DataGrid Facility (NDGF) differs significantly from other Tier-1s in several aspects: it is not located at one or a few locations but is instead distributed throughout the Nordic countries, and it is not under the governance of a single organisation but is instead built from resources under the control of a number of different national organisations. Being physically distributed makes the design and implementation of the networking infrastructure a challenge. NDGF has its own internal OPN connecting the sites participating in the distributed Tier-1. To assess...

  20. Conceptual design of an ALICE Tier-2 centre. Integrated into a multi-purpose computing facility

    Energy Technology Data Exchange (ETDEWEB)

    Zynovyev, Mykhaylo

    2012-06-29

    This thesis discusses the issues and challenges associated with the design and operation of a data analysis facility for a high-energy physics experiment at a multi-purpose computing centre. In the spotlight is a Tier-2 centre within the distributed computing model of the ALICE experiment at the Large Hadron Collider at CERN in Geneva, Switzerland. The design steps examined in the thesis include analysis and optimization of the I/O access patterns of the user workload, integration of the storage resources, and development of techniques for effective system administration and operation of the facility in a shared computing environment. A number of I/O performance issues on multiple levels of the I/O subsystem, introduced by the use of hard disks for data storage, have been addressed by means of exhaustive benchmarking and thorough analysis of the I/O of the user applications in the ALICE software framework. Defining the set of requirements for the storage system, describing the potential performance bottlenecks and single points of failure, and examining possible ways to avoid them allows one to develop guidelines for selecting how to integrate the storage resources. A solution for preserving a specific software stack for the experiment in a shared environment is presented, along with its effects on user workload performance. The proposal of a flexible model to deploy and operate the ALICE Tier-2 infrastructure and applications in a virtual environment, through adoption of cloud computing technology and the 'Infrastructure as Code' concept, completes the thesis. Scientific software applications can be efficiently computed in a virtual environment, and there is an urgent need to adapt the infrastructure for effective usage of cloud resources.

  1. Conceptual design of an ALICE Tier-2 centre. Integrated into a multi-purpose computing facility

    International Nuclear Information System (INIS)

    Zynovyev, Mykhaylo

    2012-01-01

    This thesis discusses the issues and challenges associated with the design and operation of a data analysis facility for a high-energy physics experiment at a multi-purpose computing centre. In the spotlight is a Tier-2 centre within the distributed computing model of the ALICE experiment at the Large Hadron Collider at CERN in Geneva, Switzerland. The design steps examined in the thesis include analysis and optimization of the I/O access patterns of the user workload, integration of the storage resources, and development of techniques for effective system administration and operation of the facility in a shared computing environment. A number of I/O performance issues on multiple levels of the I/O subsystem, introduced by the use of hard disks for data storage, have been addressed by means of exhaustive benchmarking and thorough analysis of the I/O of the user applications in the ALICE software framework. Defining the set of requirements for the storage system, describing the potential performance bottlenecks and single points of failure, and examining possible ways to avoid them allows one to develop guidelines for selecting how to integrate the storage resources. A solution for preserving a specific software stack for the experiment in a shared environment is presented, along with its effects on user workload performance. The proposal of a flexible model to deploy and operate the ALICE Tier-2 infrastructure and applications in a virtual environment, through adoption of cloud computing technology and the 'Infrastructure as Code' concept, completes the thesis. Scientific software applications can be efficiently computed in a virtual environment, and there is an urgent need to adapt the infrastructure for effective usage of cloud resources.

  2. Security audits of multi-tier virtual infrastructures in public infrastructure clouds

    DEFF Research Database (Denmark)

    Bleikertz, Sören; Schunter, Matthias; Probst, Christian W.

    2010-01-01

    Cloud computing has gained remarkable popularity in recent years among a wide spectrum of consumers, ranging from small start-ups to governments. However, its benefits in terms of flexibility, scalability, and low upfront investments are shadowed by security challenges which inhibit its adoption. Managed through a web-services interface, users can configure highly flexible but complex cloud computing environments. Furthermore, misconfiguration of such cloud services poses a severe security risk that can lead to security incidents, e.g., erroneous exposure of services due to faulty network security configurations. In this article we present a novel approach to the security assessment of the end-user configuration of multi-tier architectures deployed on infrastructure clouds such as Amazon EC2. In order to perform this assessment for the currently deployed configuration, we automated

  3. The US-CMS Tier-1 Center Network Evolving toward 100Gbps

    International Nuclear Information System (INIS)

    Bobyshev, A; DeMar, P

    2011-01-01

    Fermilab hosts the US Tier-1 Center for the LHC's Compact Muon Solenoid (CMS) experiment. The Tier-1s are the central points for the processing and movement of LHC data. They sink raw data from the Tier-0 at CERN, process and store it locally, and then distribute the processed data to Tier-2s for simulation studies and analysis. The Fermilab Tier-1 Center is the largest of the CMS Tier-1s, accounting for roughly 35% of the experiment's Tier-1 computing and storage capacity. Providing capacious, resilient network services, both in terms of local network infrastructure and off-site data movement capabilities, presents significant challenges. This article will describe the current architecture, status, and near-term plans for network support of the US-CMS Tier-1 facility.

  4. Distributed Analysis Experience using Ganga on an ATLAS Tier2 infrastructure

    International Nuclear Information System (INIS)

    Fassi, F.; Cabrera, S.; Vives, R.; Fernandez, A.; Gonzalez de la Hoz, S.; Sanchez, J.; March, L.; Salt, J.; Kaci, M.; Lamas, A.; Amoros, G.

    2007-01-01

    The ATLAS detector will explore the high-energy frontier of particle physics, collecting the proton-proton collisions delivered by the LHC (Large Hadron Collider). Starting in spring 2008, the LHC will produce more than 10 petabytes of data per year. The tiered hierarchy adopted for the computing model at the LHC comprises the Tier-0 (CERN) and the Tier-1 and Tier-2 centres distributed around the world. The ATLAS Distributed Analysis (DA) system has the goal of enabling physicists to perform Grid-based analysis on distributed data using distributed computing resources. The IFIC Tier-2 facility is participating in several aspects of DA. In support of the ATLAS DA activities a prototype is being tested, deployed and integrated. The analysis data processing applications are based on the Athena framework. GANGA, developed by the LHCb and ATLAS experiments, allows simple switching between testing on a local batch system and large-scale processing on the Grid, hiding Grid complexities. GANGA provides physicists with an integrated environment for job preparation, bookkeeping and archiving, and job splitting and merging. The experience with the deployment, configuration and operation of the DA prototype will be presented. Experience gained using the DA system and GANGA in top physics analysis will be described. (Author)
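
    As a rough illustration of the GANGA workflow described above, the snippet below is meant to be typed inside a Ganga session (where Job, Executable, the backends and splitters are pre-loaded). The executable name and splitter arguments are placeholders, and switching j.backend from Local() to a Grid backend such as LCG() is what realises the local-to-Grid transition mentioned in the abstract.

        # Inside a Ganga session; the names below are placeholders, not the IFIC configuration.
        j = Job()
        j.name = "da_prototype_test"
        j.application = Executable(exe="run_analysis.sh")     # placeholder user script
        j.backend = Local()                                   # switch to LCG() for Grid running
        j.splitter = ArgSplitter(args=[["0"], ["1"], ["2"]])  # three subjobs with different args
        j.submit()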

  5. STAR load balancing and tiered-storage infrastructure strategy for ultimate db access

    International Nuclear Information System (INIS)

    Arkhipkin, D; Lauret, J; Betts, W; Didenko, L; Van Buren, G

    2011-01-01

    In recent years, the STAR experiment's database demands have grown in accord not only with simple facility growth, but also with a growing physics program. In addition to the accumulated metadata from a decade of operations, refinements to detector calibrations force user analyses to access database information after data production. Users may access any year's data at any point in time, causing near-random access to the queried metadata, contrary to time-organized production cycles. Moreover, complex online event selection algorithms created a query scarcity (sparsity) scenario for offline production, further impacting performance. Fundamental changes in our hardware approach were hence necessary to improve query speed. Initial strategic improvements focused on developing fault-tolerant, load-balanced access to a multi-slave infrastructure. Beyond that, we explored, tested and quantified the benefits of introducing a tiered storage architecture composed of conventional drives, solid-state disks, and memory-resident databases, as well as leveraging the use of smaller database services fitting in memory. The results of our extensive testing in real-life usage are presented.
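
    A minimal sketch of the load-balanced, fault-tolerant read access described above: it tries MySQL replicas in random order and returns the first successful result. Host names, credentials and the database name are placeholders and do not reflect STAR's actual configuration.

        import random
        import pymysql

        REPLICAS = ["db-slave1.example.org", "db-slave2.example.org", "db-slave3.example.org"]

        def query_with_fallback(sql, params=()):
            """Try replicas in random order; return rows from the first one that answers."""
            for host in random.sample(REPLICAS, len(REPLICAS)):
                try:
                    conn = pymysql.connect(host=host, user="reader", password="secret",
                                           database="calibrations", connect_timeout=3)
                    try:
                        with conn.cursor() as cur:
                            cur.execute(sql, params)
                            return cur.fetchall()
                    finally:
                        conn.close()
                except pymysql.MySQLError:
                    continue  # replica unreachable or overloaded, try the next one
            raise RuntimeError("no database replica reachable")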

  6. Integrated Facilities and Infrastructure Plan.

    Energy Technology Data Exchange (ETDEWEB)

    Reisz Westlund, Jennifer Jill

    2017-03-01

    Our facilities and infrastructure are a key element of our capability-based science and engineering foundation. The focus of the Integrated Facilities and Infrastructure Plan is the development and implementation of a comprehensive plan to sustain the capabilities necessary to meet national research, design, and fabrication needs for Sandia National Laboratories’ (Sandia’s) comprehensive national security missions both now and into the future. A number of Sandia’s facilities have reached the end of their useful lives and many others are not suitable for today’s mission needs. Due to the continued aging and surge in utilization of Sandia’s facilities, deferred maintenance has continued to increase. As part of our planning focus, Sandia is committed to halting the growth of deferred maintenance across its sites through demolition, replacement, and dedicated funding to reduce the backlog of maintenance needs. Sandia will become more agile in adapting existing space and changing how space is utilized in response to the changing requirements. This Integrated Facilities & Infrastructure (F&I) Plan supports the Sandia Strategic Plan’s strategic objectives, specifically Strategic Objective 2: Strengthen our Laboratories’ foundation to maximize mission impact, and Strategic Objective 3: Advance an exceptional work environment that enables and inspires our people in service to our nation. The Integrated F&I Plan is developed through a planning process model to understand the F&I needs, analyze solution options, plan the actions and funding, and then execute projects.

  7. The INFN-CNAF Tier-1 GEMSS Mass Storage System and database facility activity

    Science.gov (United States)

    Ricci, Pier Paolo; Cavalli, Alessandro; Dell'Agnello, Luca; Favaro, Matteo; Gregori, Daniele; Prosperini, Andrea; Pezzi, Michele; Sapunenko, Vladimir; Zizzi, Giovanni; Vagnoni, Vincenzo

    2015-05-01

    The consolidation of Mass Storage services at the INFN-CNAF Tier1 Storage department over the last 5 years has resulted in a reliable, high-performance and moderately easy-to-manage facility that provides data access, archive, backup and database services to several different use cases. At present, the GEMSS Mass Storage System, developed and installed at CNAF and based upon an integration between the IBM GPFS parallel filesystem and the Tivoli Storage Manager (TSM) tape management software, is one of the largest hierarchical storage sites in Europe. It provides storage resources for about 12% of LHC data, as well as for data of other non-LHC experiments. Files are accessed using standard SRM Grid services provided by the Storage Resource Manager (StoRM), also developed at CNAF. Data access is also provided by XRootD and HTTP/WebDAV endpoints. Besides these services, an Oracle database facility is in production, characterized by an effective level of parallelism, redundancy and availability. This facility runs databases for storing and accessing relational data objects and for providing database services to the currently active use cases. It takes advantage of several Oracle technologies, like Real Application Cluster (RAC), Automatic Storage Manager (ASM) and the Enterprise Manager centralized management tools, together with other technologies for performance optimization, ease of management and downtime reduction. The aim of the present paper is to illustrate the state of the art of the INFN-CNAF Tier1 Storage department infrastructure and software services, and to give a brief outlook on forthcoming projects. A description of the administrative, monitoring and problem-tracking tools that play a primary role in managing the whole storage framework is also given.
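
    As an example of the HTTP/WebDAV access path mentioned above, the sketch below fetches a file from a hypothetical StoRM WebDAV endpoint using a grid user certificate; the URL, certificate paths and CA directory are placeholders.

        import requests

        # Placeholder endpoint and credentials; the user key must be unencrypted
        # (or a proxy certificate can be supplied for both cert and key).
        url = "https://storm-webdav.example.org:8443/atlas/datafile.root"
        resp = requests.get(
            url,
            cert=("/home/user/.globus/usercert.pem", "/home/user/.globus/userkey.pem"),
            verify="/etc/grid-security/certificates",
            stream=True,
        )
        resp.raise_for_status()

        with open("datafile.root", "wb") as out:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                out.write(chunk)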

  8. Readiness of the ATLAS Spanish Federated Tier-2 for the Physics Analysis of the early collision events at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Oliver, E; Amoros, G; Fassi, F; Fernandez, A; Gonzalez, S; Kaci, M; Lamas, A; Salt, J; Sanchez, J [Instituto de Fisica Corpuscular (IFIC) (centro mixto CSIC - University Valencia), E-46071 Valencia (Spain)]; Nadal, J; Borrego, C; Campos, M; Pacheco, A [Institut de Fisica d'Altes Energies (IFAE), Facultat de Ciencies UAB, E-08193 Bellaterra, Barcelona (Spain)]; Pardo, J; Del Cano, L; Peso, J Del; Fernandez, P; March, L; Munoz, L [Universidad Autonoma de Madrid (UAM), Dpto. de Fisica Teorica, 28049 Madrid (Spain)]; Espinal, X, E-mail: elena.oliver@ific.uv.e [Port d'Informacio Cientifica (PIC), Campus UAB Edifici D, E-08193 Bellaterra, Barcelona (Spain)]

    2010-04-01

    In this contribution an evaluation of the readiness parameters for the Spanish ATLAS Federated Tier-2 is presented, regarding the ATLAS data taking which is expected to start by the end of 2009. Special attention is paid to the physics analysis from different points of view: data management, simulated event production and distributed analysis tests. Several use cases of distributed analysis in GRID infrastructures and of local interactive analysis in non-Grid farms are provided, in order to evaluate the interoperability between both environments and to compare the different performances. The prototypes for local computing infrastructures for data analysis are described. Moreover, information about the local analysis facility, called Tier-3, is given.

  9. A retrospective tiered environmental assessment of the Mount Storm Wind Energy Facility, West Virginia,USA

    Energy Technology Data Exchange (ETDEWEB)

    Efroymson, Rebecca Ann [ORNL; Day, Robin [No Affiliation; Strickland, M. Dale [Western EcoSystems Technology

    2012-11-01

    Bird and bat fatalities from wind energy projects are an environmental and public concern, with post-construction fatalities sometimes differing from predictions. Siting facilities in this context can be a challenge. In March 2012 the U.S. Fish and Wildlife Service (USFWS) released Land-based Wind Energy Guidelines to assess collision fatalities and other potential impacts to species of concern and their habitats to aid in siting and management. The Guidelines recommend a tiered approach for assessing risk to wildlife, including a preliminary site evaluation that may evaluate alternative sites, a site characterization, field studies to document wildlife and habitat and to predict project impacts, post construction studies to estimate impacts, and other post construction studies. We applied the tiered assessment framework to a case study site, the Mount Storm Wind Energy Facility in Grant County, West Virginia, USA, to demonstrate the use of the USFWS assessment approach, to indicate how the use of a tiered assessment framework might have altered outputs of wildlife assessments previously undertaken for the case study site, and to assess benefits of a tiered ecological assessment framework for siting wind energy facilities. The conclusions of this tiered assessment for birds are similar to those of previous environmental assessments for Mount Storm. This assessment found risk to individual migratory tree-roosting bats that was not emphasized in previous preconstruction assessments. Differences compared to previous environmental assessments are more related to knowledge accrued in the past 10 years rather than to the tiered structure of the Guidelines. Benefits of the tiered assessment framework include good communication among stakeholders, clear decision points, a standard assessment trajectory, narrowing the list of species of concern, improving study protocols, promoting consideration of population-level effects, promoting adaptive management through post

  10. Evolution of the Building Management System in the INFN CNAF Tier-1 data center facility.

    Science.gov (United States)

    Ricci, Pier Paolo; Donatelli, Massimo; Falabella, Antonio; Mazza, Andrea; Onofri, Michele

    2017-10-01

    The INFN CNAF Tier-1 data center is composed of two main rooms containing IT resources and four additional locations hosting the technology infrastructure that provides electrical power and cooling to the facility. The power supply and continuity are ensured by a dedicated room with three 15,000 V to 400 V transformers in a separate part of the principal building and by two redundant 1.4 MW diesel rotary uninterruptible power supplies. The cooling is provided by six free-cooling chillers of 320 kW each in an N+2 redundancy configuration. Clearly, considering the complex physical distribution of the technical plants, a detailed Building Management System (BMS) was designed and implemented as part of the original project in order to monitor and collect all the necessary information and to provide alarms in case of malfunctions or major failures. After almost 10 years of service, a revision of the BMS had become necessary. In addition, the increasing cost of electrical power is nowadays a strong motivation for improving the energy efficiency of the infrastructure. Therefore the exact calculation of the power usage effectiveness (PUE) metric has become one of the most important factors when aiming for the optimization of a modern data center. For these reasons, an evolution of the BMS was designed using the Schneider StruxureWare infrastructure hardware and software products. This solution proves to be a natural and flexible development of the previous TAC Vista software, with advantages in ease of use and the possibility to customize the data collection and the graphical interface displays. Moreover, the addition of protocols like open-standard Web services makes it possible to communicate with the BMS from custom user applications and permits the exchange of data and information through the Web between different third-party systems. Specific Web services SOAP requests have been implemented in our Tier-1 monitoring system in
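
    For reference, PUE is the ratio of total facility power to the power delivered to the IT equipment; the toy calculation below uses invented figures purely to illustrate the metric and does not reflect measured CNAF values.

        # Power readings in kW; the numbers are illustrative only.
        it_load_kw = 900.0        # servers, storage, network
        cooling_kw = 310.0        # chillers, air handling
        distribution_kw = 60.0    # UPS/transformer losses, lighting, etc.

        total_facility_kw = it_load_kw + cooling_kw + distribution_kw
        pue = total_facility_kw / it_load_kw
        print(f"PUE = {pue:.2f}")  # about 1.41 for these example figures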

  11. ATLAS off-Grid sites (Tier 3) monitoring. From local fabric monitoring to global overview of the VO computing activities

    CERN Document Server

    PETROSYAN, A; The ATLAS collaboration; BELOV, S; ANDREEVA, J; KADOCHNIKOV, I

    2012-01-01

    The ATLAS Distributed Computing activities have so far concentrated on the "central" part of the experiment computing system, namely the first 3 tiers (the CERN Tier0, 10 Tier1 centers and over 60 Tier2 sites). Many ATLAS Institutes and National Communities have deployed (or intend to deploy) Tier-3 facilities. Tier-3 centers consist of non-pledged resources, which are usually dedicated to data analysis tasks by the geographically close or local scientific groups, and which usually comprise a range of architectures without Grid middleware. Therefore a substantial part of the ATLAS monitoring tools, which make use of Grid middleware, cannot be used for a large fraction of Tier3 sites. The presentation will describe the T3mon project, which aims to develop a software suite for monitoring the Tier3 sites, both from the perspective of the local site administrator and that of the ATLAS VO, thereby enabling a global view of the contribution from Tier3 sites to the ATLAS computing activities. Special attention in p...

  12. The Legnaro-Padova distributed Tier-2: challenges and results

    Science.gov (United States)

    Badoer, Simone; Biasotto, Massimo; Costa, Fulvia; Crescente, Alberto; Fantinel, Sergio; Ferrari, Roberto; Gulmini, Michele; Maron, Gaetano; Michelotto, Michele; Sgaravatto, Massimo; Toniolo, Nicola

    2014-06-01

    the Tier-2 operations team. Finally we discuss about the foreseen developments of the existing infrastructure. This includes in particular the evolution from a Grid-based resource towards a Cloud-based computing facility.

  13. The Legnaro-Padova distributed Tier-2: challenges and results

    International Nuclear Information System (INIS)

    Badoer, Simone; Biasotto, Massimo; Fantinel, Sergio

    2014-01-01

    the Tier-2 operations team. Finally we discuss about the foreseen developments of the existing infrastructure. This includes in particular the evolution from a Grid-based resource towards a Cloud-based computing facility.

  14. Report on Tier-0 Scaling Tests

    CERN Multimedia

    M. Branco; L. Goossens; A. Nairz

    To get prepared for handling the enormous data rates and volumes during LHC operation, ATLAS is currently running so-called Tier-0 Scaling Tests, which started at the beginning of November and will last until Christmas. These tests are carried out in the context of LCG (LHC Computing Grid) Service Challenge 3 (SC3), a joint exercise of CERN IT and the LHC experiments to test the infrastructure of computing, network, and data management, in particular for its architecture, scalability and readiness for LHC data taking. ATLAS has adopted a multi-Tier hierarchical model to organise the workflow, with dedicated tasks to be performed at the individual levels in the Tier hierarchy. The Tier-0 centre located at CERN will be responsible for performing a first-pass reconstruction of the data arriving from the Event Filter farm, thus producing Event Summary Data (ESDs), Analysis Object Data (AODs) and event Tags, for processing calibration and alignment information, for archiving both raw and reconstructed data, and for ...

  15. Considerations on Optimal Financial Investment into Infrastructural Facilities

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The enlargement of government investment in infrastructural construction is both a remedy for economic contraction and an effective measure to promote long-term economic growth. However, financial investment in infrastructure also poses a problem of optimization and reasonable selection. In view of market economic requirements, the policy direction of financial investment in infrastructural industries must favour some fields at the expense of others. In the process of adjusting and optimizing the economic structure, state financial investment in infrastructural facilities must first of all delimit the best fields and select the trades. For infrastructure facilities producing and selling pure public goods, development must be financed by public investment; for the production fields of quasi-public goods, finance should ensure a reasonable level of investment; for infrastructural facilities of purely private production, finance should in principle withdraw completely and let the market supply them. On this basis, the best capital sources and investment modes should be selected. The capital sources should come mainly from tax and regulatory income, and direct investment may be made; for the production fields of most quasi-public goods, the best capital sources are national debt income, and indirect investment may be made. In addition, optimizing financial investment in infrastructural facilities requires reforming the managerial system of infrastructural facilities and raising investment efficiency. Only by scientifically selecting and arranging the financing modes and the managerial system in investment fields can the maximum economic efficiency and social welfare be realized from financial investment in infrastructural facilities.

  16. Infrastructural challenges to better health in maternity facilities in rural Kenya: community and healthworker perceptions.

    Science.gov (United States)

    Essendi, Hildah; Johnson, Fiifi Amoako; Madise, Nyovani; Matthews, Zoe; Falkingham, Jane; Bahaj, Abubakr S; James, Patrick; Blunden, Luke

    2015-11-09

    The efforts and commitments to accelerate progress towards the Millennium Development Goals for maternal and newborn health (MDGs 4 and 5) in low- and middle-income countries have focused primarily on providing key medical interventions at maternity facilities to save the lives of women at the time of childbirth, as well as their babies. However, in most rural communities in sub-Saharan Africa, access to maternal and newborn care services is still limited, and even where services are available they often lack the infrastructural prerequisites to function at the very basic level in providing essential routine health care services, let alone emergency care. Lists of essential interventions for normal and complicated childbirth do not take into account these prerequisites; thus the needs of most health facilities in rural communities are ignored, although there is ample evidence that maternal and newborn deaths remain unacceptably high in these areas. This study uses data gathered through qualitative interviews in Kitonyoni and Mwania sub-locations of Makueni County in Eastern Kenya to understand community and provider perceptions of the obstacles faced in providing and accessing maternal and newborn care at health facilities in their localities. The study finds that the community perceives various challenges, most of which are infrastructural, including lack of electricity and water and poor roads, which adversely impact the provision of and access to essential life-saving maternal and newborn care services in the two sub-locations. The findings and recommendations from this study merit the attention of policy makers and programme managers in order to improve the state of lower-tier health facilities serving rural communities and to strengthen infrastructure with the aim of making basic routine and emergency obstetric and newborn care services more accessible.

  17. VM-based infrastructure for simulating different cluster and storage solutions in ATLAS

    CERN Document Server

    KUTOUSKI, M; The ATLAS collaboration; PETROSYAN, A; KADOCHNIKOV, I; BELOV, S; KORENKOV, V

    2012-01-01

    The current ATLAS Tier3 infrastructure consists of a variety of sites of different sizes and with a mix of local resource management systems (LRMS) and mass storage system (MSS) implementations. The Tier3 monitoring suite, having been developed in order to satisfy the needs of Tier3 site administrators and to aggregate Tier3 monitoring information on the global VO level, needs to be validated for various combinations of LRMS and MSS solutions along with the corresponding Ganglia and/or Nagios plugins. For this purpose the Testbed infrastructure, which allows simulation of various computational cluster and storage solutions, had been set up at JINR (Dubna). This infrastructure provides the ability to run testbeds with various LRMS and MSS implementations, and with the capability to quickly redeploy particular testbeds or their components. Performance of specific components is not a critical issue for development and validation, whereas easy management and deployment are crucial. Therefore virtual machines were...

  18. Computing infrastructure for ATLAS data analysis in the Italian Grid cloud

    International Nuclear Information System (INIS)

    Andreazza, A; Annovi, A; Martini, A; Barberis, D; Brunengo, A; Corosu, M; Campana, S; Girolamo, A Di; Carlino, G; Doria, A; Merola, L; Musto, E; Ciocca, C; Jha, M K; Cobal, M; Pascolo, F; Salvo, A De; Luminari, L; Sanctis, U De; Galeazzi, F

    2011-01-01

    ATLAS data are distributed centrally to Tier-1 and Tier-2 sites. The first stages of data selection and analysis take place mainly at Tier-2 centres, with the final, iterative and interactive, stages taking place mostly at Tier-3 clusters. The Italian ATLAS cloud consists of a Tier-1, four Tier-2s, and Tier-3 sites at each institute. Tier-3s that are grid-enabled are used to test code that will then be run on a larger scale at Tier-2s. All Tier-3s offer interactive data access to their users and the possibility to run PROOF. This paper describes the hardware and software infrastructure choices taken, the operational experience after 10 months of LHC data, and discusses site performances.

  19. A Distributed Tier-1

    DEFF Research Database (Denmark)

    Fischer, Lars; Grønager, Michael; Kleist, Josva

    2008-01-01

    The Tier-1 facility operated by the Nordic DataGrid Facility (NDGF) differs significantly from other Tier-1s in several aspects: firstly, it is not located at one or a few premises, but instead is distributed throughout the Nordic countries; secondly, it is not under the governance of a single organization but instead is a meta-center built of resources under the control of a number of different national organizations. We present some technical implications of these aspects as well as the high-level design of this distributed Tier-1. The focus will be on computing services, storage and monitoring.

  20. A distributed Tier-1

    Science.gov (United States)

    Fischer, L.; Grønager, M.; Kleist, J.; Smirnova, O.

    2008-07-01

    The Tier-1 facility operated by the Nordic DataGrid Facility (NDGF) differs significantly from other Tier-1s in several aspects: firstly, it is not located at one or a few premises, but instead is distributed throughout the Nordic countries; secondly, it is not under the governance of a single organization but instead is a meta-center built of resources under the control of a number of different national organizations. We present some technical implications of these aspects as well as the high-level design of this distributed Tier-1. The focus will be on computing services, storage and monitoring.

  1. Visits to Tier-1 Computing Centres

    CERN Multimedia

    Dario Barberis

    At the beginning of 2007 it became clear that an enhanced level of communication is needed between the ATLAS computing organisation and the Tier-1 centres. Most usual meetings are ATLAS-centric and cannot address the issues of each Tier-1; therefore we decided to organise a series of visits to the Tier-1 centres and focus on site issues. For us, ATLAS computing management, it is most useful to realize how each Tier-1 centre is organised, and its relation to the associated Tier-2s; indeed their presence at these visits is also very useful. We hope it is also useful for sites... at least, we are told so! The usual participation includes, from the ATLAS side: computing management, operations, data placement, resources, accounting and database deployment coordinators; and from the Tier-1 side: computer centre management, system managers, Grid infrastructure people, network, storage and database experts, local ATLAS liaison people and representatives of the associated Tier-2s. Visiting Tier-1 centres (1-4). ...

  2. 1990 Tier Two emergency and hazardous chemical inventory

    International Nuclear Information System (INIS)

    1991-03-01

    This document contains the 1990 Tier Two Emergency and Hazardous Chemical Inventory. Submission of this Tier Two form (when requested) is required by Title III of the Superfund Amendments and Reauthorization Act of 1986, Section 312, Public Law 99-499, codified at 42 U.S.C. Section 11022. The purpose of this Tier Two form is to provide State and local officials and the public with specific information on hazardous chemicals present at your facility during the past year.

  3. The CMS experiment workflows on StoRM based storage at Tier-1 and Tier-2 centers

    International Nuclear Information System (INIS)

    Bonacorsi, D; Bartolome, I Cabrillo; Matorras, F; Gonzalez Caballero, I; Sartirana, A

    2010-01-01

    Approaching LHC data taking, the CMS experiment is deploying, commissioning and operating the building tools of its grid-based computing infrastructure. The commissioning program includes testing, deployment and operation of various storage solutions to support the computing workflows of the experiment. Recently, some of the Tier-1 and Tier-2 centers supporting the collaboration have started to deploy StoRM based storage systems. These are POSIX-based disk storage systems on top of which StoRM implements the Storage Resource Manager (SRM) version 2 interface allowing for a standard-based access from the Grid. In this note we briefly describe the experience so far achieved at the CNAF Tier-1 center and at the IFCA Tier-2 center.
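    As an illustration of the standards-based access mentioned above, the following minimal sketch uses the gfal2 Python bindings to query file metadata through an SRM v2 endpoint such as the one StoRM exposes; the host name, port and storage path are hypothetical placeholders, and the exact binding calls should be checked against the installed gfal2-python version.

      # Minimal sketch (assumptions noted above): stat a file through an SRM v2
      # interface using the gfal2 Python bindings. Host, port and path are invented.
      import gfal2

      def srm_stat(surl):
          """Return (size, mode) of a storage URL via a GFAL2 context."""
          ctx = gfal2.creat_context()   # context picks the SRM plugin from the URL scheme
          info = ctx.stat(surl)         # metadata query against the SRM endpoint
          return info.st_size, oct(info.st_mode)

      if __name__ == "__main__":
          surl = ("srm://storm.example.org:8444/srm/managerv2?SFN="
                  "/cms/store/user/someuser/test.root")
          print(srm_stat(surl))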

  4. Large Scale Commissioning and Operational Experience with Tier-2 to Tier-2 Data Transfer Links in CMS

    CERN Document Server

    Letts, James

    2010-01-01

    Tier-2 to Tier-2 data transfers have been identified as a necessary extension of the CMS computing model. The Debugging Data Transfers (DDT) Task Force in CMS was charged with commissioning Tier-2 to Tier-2 PhEDEx transfer links beginning in late 2009, originally to serve the needs of physics analysis groups for the transfer of their results between the storage elements of the Tier-2 sites associated with the groups. PhEDEx is the data transfer middleware of the CMS experiment. For analysis jobs using CRAB, the CMS Remote Analysis Builder, the challenges of remote stage out of job output at the end of the analysis jobs led to the introduction of a local fallback stage out, and will eventually require the asynchronous transfer of user data over essentially all of the Tier-2 to Tier-2 network using the same PhEDEx infrastructure. In addition, direct file sharing of physics and Monte Carlo simulated data between Tier-2 sites can relieve the operational load of the Tier-1 sites in the original CMS Computing Model...

  5. Onsite and Electric Backup Capabilities at Critical Infrastructure Facilities in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, Julia A. [Argonne National Lab. (ANL), Argonne, IL (United States); Wallace, Kelly E. [Argonne National Lab. (ANL), Argonne, IL (United States); Kudo, Terence Y. [Argonne National Lab. (ANL), Argonne, IL (United States); Eto, Joseph H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-04-01

    The following analysis, conducted by Argonne National Laboratory’s (Argonne’s) Risk and Infrastructure Science Center (RISC), examines the electric power backup capabilities of national critical infrastructure as captured through the Department of Homeland Security’s (DHS’s) Enhanced Critical Infrastructure Program (ECIP) Initiative. Between January 1, 2011, and September 2014, 3,174 ECIP facility surveys were conducted. This study focused first on backup capabilities by infrastructure type and then expanded to infrastructure type by census region.
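    The two-step tabulation described above (backup capability by infrastructure type, then by census region) can be sketched as follows; the column names and rows are hypothetical, since the ECIP survey schema is not reproduced in this record.

      # Hypothetical tabulation mirroring the analysis described in the abstract.
      # Column names and data are invented placeholders.
      import pandas as pd

      surveys = pd.DataFrame([
          {"sector": "Water", "census_region": "Midwest", "has_backup_power": True},
          {"sector": "Water", "census_region": "South", "has_backup_power": False},
          {"sector": "Healthcare", "census_region": "Midwest", "has_backup_power": True},
          {"sector": "Energy", "census_region": "West", "has_backup_power": True},
      ])

      # Share of surveyed facilities with backup power, by infrastructure type ...
      by_sector = surveys.groupby("sector")["has_backup_power"].mean()
      # ... and then broken out additionally by census region.
      by_sector_region = surveys.groupby(["sector", "census_region"])["has_backup_power"].mean()
      print(by_sector, by_sector_region, sep="\n\n")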

  6. An integrated tiered service delivery model (ITSDM based on local CD4 testing demands can improve turn-around times and save costs whilst ensuring accessible and scalable CD4 services across a national programme.

    Directory of Open Access Journals (Sweden)

    Deborah K Glencross

    The South African National Health Laboratory Service (NHLS) responded to HIV treatment initiatives with two-tiered CD4 laboratory services in 2004. Increasing programmatic burden, as more patients access anti-retroviral therapy (ART), has demanded extending CD4 services to meet increasing clinical needs. The aim of this study was to review existing services and develop a service model that integrated laboratory-based and point-of-care testing (POCT) to extend national coverage, improve local turn-around times (TAT) and contain programmatic costs. NHLS Corporate Data Warehouse CD4 data from 60-70 laboratories and 4756 referring health facilities was reviewed for referral laboratory workload, respective referring facility volumes and related TAT, from 2009-2012. An integrated tiered service delivery model (ITSDM) is proposed. Tier-1/POCT delivers CD4 testing at single health clinics providing ART in hard-to-reach areas, while successively higher laboratory tiers serve larger testing volumes (up to 350-1500 tests/day, serving ≥ 200 health clinics); Tier-6 provides national support for standardisation, harmonisation and quality across the organisation. The ITSDM offers improved local TAT by extending CD4 services into rural/remote areas with new Tier-3 or Tier-2/POC-Hub services installed in existing community laboratories, most with developed infrastructure. The advantage of lower laboratory CD4 costs and the use of existing infrastructure enables subsidisation of the delivery of more expensive POC services into hard-to-reach districts without reasonable access to a local CD4 laboratory. Full ITSDM implementation across 5 service tiers (as opposed to widespread implementation of POC testing to extend services) can facilitate sustainable 'full service coverage' across South Africa, and save more than R125 million in HIV/AIDS programmatic costs. ITSDM hierarchical parental support also assures laboratory/POC management, equipment maintenance, quality control and on-going training between tiers.
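    Since tier placement in the ITSDM is driven by local CD4 testing demand, a toy illustration of such a volume-based assignment is sketched below; all thresholds are hypothetical except the 350-1500 tests/day band quoted in the abstract, and the published model should be consulted for the real cut-offs and tier definitions.

      # Toy, demand-driven tier assignment in the spirit of the ITSDM.
      # Thresholds are hypothetical; only the 350-1500 tests/day band is quoted above.
      def assign_service_tier(tests_per_day):
          if tests_per_day < 10:
              return "Tier-1/POCT (single remote ART clinic)"
          if tests_per_day < 50:
              return "Tier-2/POC-Hub (cluster of clinics)"
          if tests_per_day < 350:
              return "Tier-3/4 (community or district laboratory)"
          if tests_per_day <= 1500:
              return "Tier-5 (large referral laboratory, >=200 clinics served)"
          return "escalate to national programme (Tier-6 oversight)"

      for volume in (4, 30, 120, 800):
          print(volume, "tests/day ->", assign_service_tier(volume))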

  7. Searches for beyond the Standard Model physics with boosted topologies in the ATLAS experiment using the Grid-based Tier-3 facility at IFIC-Valencia

    CERN Document Server

    Villaplana Pérez, Miguel; Vos, Marcel

    Both the LHC and ATLAS have been performing well beyond expectation since the start of data taking at the end of 2009. Since then, several billion collision events have been recorded by the ATLAS experiment. With a data-taking efficiency higher than 95% and more than 99% of its channels working, ATLAS supplies data of unmatched quality. In order to analyse the data, the ATLAS Collaboration has designed a distributed computing model based on GRID technologies. The ATLAS computing model and its evolution since the start of the LHC is discussed in section 3.1. The ATLAS computing model groups the different types of computing centers of the ATLAS Collaboration in a tiered hierarchy that ranges from the Tier-0 at CERN, down to the 11 Tier-1 centers and the nearly 80 Tier-2 centres distributed worldwide. The Spanish Tier-2 activities during the first years of data taking are described in section 3.2. Tier-3s are institution-level centres, neither ATLAS funded nor controlled, that participate presuma...

  8. Large scale commissioning and operational experience with tier-2 to tier-2 data transfer links in CMS

    International Nuclear Information System (INIS)

    Letts, J; Magini, N

    2011-01-01

    Tier-2 to Tier-2 data transfers have been identified as a necessary extension of the CMS computing model. The Debugging Data Transfers (DDT) Task Force in CMS was charged with commissioning Tier-2 to Tier-2 PhEDEx transfer links beginning in late 2009, originally to serve the needs of physics analysis groups for the transfer of their results between the storage elements of the Tier-2 sites associated with the groups. PhEDEx is the data transfer middleware of the CMS experiment. For analysis jobs using CRAB, the CMS Remote Analysis Builder, the challenges of remote stage out of job output at the end of the analysis jobs led to the introduction of a local fallback stage out, and will eventually require the asynchronous transfer of user data over essentially all of the Tier-2 to Tier-2 network using the same PhEDEx infrastructure. In addition, direct file sharing of physics and Monte Carlo simulated data between Tier-2 sites can relieve the operational load of the Tier-1 sites in the original CMS Computing Model, and already represents an important component of CMS PhEDEx data transfer volume. The experience, challenges and methods used to debug and commission the thousands of data transfer links between CMS Tier-2 sites world-wide are explained and summarized. The resulting operational experience with Tier-2 to Tier-2 transfers is also presented.
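    For context, the replica placement produced by such transfers could be inspected through the PhEDEx data service; the sketch below queries its blockreplicas call with the requests library. The endpoint path, parameters and JSON field names are recalled from the historical service (since retired in favour of Rucio) and should be treated as assumptions.

      # Sketch: list the sites hosting replicas of a dataset via the (now retired)
      # PhEDEx data service. Endpoint path and JSON field names are assumptions.
      import requests

      PHEDEX_URL = "https://cmsweb.cern.ch/phedex/datasvc/json/prod/blockreplicas"

      def replica_sites(dataset):
          resp = requests.get(PHEDEX_URL, params={"dataset": dataset}, timeout=60)
          resp.raise_for_status()
          blocks = resp.json()["phedex"]["block"]
          # Each block carries a list of replicas, each tagged with the hosting node (site).
          return {replica["node"] for block in blocks for replica in block["replica"]}

      if __name__ == "__main__":
          # Hypothetical dataset name, for illustration only.
          print(replica_sites("/SomePrimaryDataset/SomeEra-SomeProcessing/AOD"))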

  9. Cryogenic infrastructure for Fermilab's ILC vertical cavity test facility

    International Nuclear Information System (INIS)

    Carcagno, R.; Ginsburg, C.; Huang, Y.; Norris, B.; Ozelis, J.; Peterson, T.; Poloubotko, V.; Rabehl, R.; Sylvester, C.; Wong, M.; Fermilab

    2006-01-01

    Fermilab is building a Vertical Cavity Test Facility (VCTF) to provide for R and D and pre-production testing of bare 9-cell, 1.3-GHz superconducting RF (SRF) cavities for the International Linear Collider (ILC) program. This facility is located in the existing Industrial Building 1 (IB1) where the Magnet Test Facility (MTF) also resides. Helium and nitrogen cryogenics are shared between the VCTF and MTF including the existing 1500-W at 4.5-K helium refrigerator with vacuum pumping for super-fluid operation (125-W capacity at 2-K). The VCTF is being constructed in multiple phases. The first phase is scheduled for completion in mid 2007, and includes modifications to the IB1 cryogenic infrastructure to allow helium cooling to be directed to either the VCTF or MTF as scheduling demands require. At this stage, the VCTF consists of one Vertical Test Stand (VTS) cryostat for the testing of one cavity in a 2-K helium bath. Planning is underway to provide a total of three Vertical Test Stands at VCTF, each capable of accommodating two cavities. Cryogenic infrastructure improvements necessary to support these additional VCTF test stands include a dedicated ambient temperature vacuum pump, a new helium purification skid, and the addition of helium gas storage. This paper describes the system design and initial cryogenic operation results for the first VCTF phase, and outlines future cryogenic infrastructure upgrade plans for expanding to three Vertical Test Stands

  10. CRYOGENIC INFRASTRUCTURE FOR FERMILAB'S ILC VERTICAL CAVITY TEST FACILITY

    International Nuclear Information System (INIS)

    Carcagno, R.; Ginsburg, C.; Huang, Y.; Norris, B.; Ozelis, J.; Peterson, T.; Poloubotko, V.; Rabehl, R.; Sylvester, C.; Wong, M.

    2008-01-01

    Fermilab is building a Vertical Cavity Test Facility (VCTF) to provide for R and D and pre-production testing of bare 9-cell, 1.3-GHz superconducting RF (SRF) cavities for the International Linear Collider (ILC) program. This facility is located in the existing Industrial Building 1 (IB1) where the Magnet Test Facility (MTF) also resides. Helium and nitrogen cryogenics are shared between the VCTF and MTF including the existing 1500-W at 4.5-K helium refrigerator with vacuum pumping for super-fluid operation (125-W capacity at 2-K). The VCTF is being constructed in multiple phases. The first phase is scheduled for completion in mid 2007, and includes modifications to the IB1 cryogenic infrastructure to allow helium cooling to be directed to either the VCTF or MTF as scheduling demands require. At this stage, the VCTF consists of one Vertical Test Stand (VTS) cryostat for the testing of one cavity in a 2-K helium bath. Planning is underway to provide a total of three Vertical Test Stands at VCTF, each capable of accommodating two cavities. Cryogenic infrastructure improvements necessary to support these additional VCTF test stands include a dedicated ambient temperature vacuum pump, a new helium purification skid, and the addition of helium gas storage. This paper describes the system design and initial cryogenic operation results for the first VCTF phase, and outlines future cryogenic infrastructure upgrade plans for expanding to three Vertical Test Stands

  11. Controlling Infrastructure Costs: Right-Sizing the Mission Control Facility

    Science.gov (United States)

    Martin, Keith; Sen-Roy, Michael; Heiman, Jennifer

    2009-01-01

    Johnson Space Center's Mission Control Center is a space vehicle, space program agnostic facility. The current operational design is essentially identical to the original facility architecture that was developed and deployed in the mid-1990s. In an effort to streamline the support costs of the mission critical facility, the Mission Operations Division (MOD) of Johnson Space Center (JSC) has sponsored an exploratory project to evaluate and inject current state-of-the-practice Information Technology (IT) tools, processes and technology into legacy operations. The IT industry has been trending towards data-centric computing infrastructure for the past several years. Organizations facing challenges with facility operations costs are turning to creative solutions combining hardware consolidation, virtualization and remote access to meet and exceed performance, security, and availability requirements. The Operations Technology Facility (OTF) organization at the Johnson Space Center has been chartered to build and evaluate a parallel Mission Control infrastructure, replacing the existing, thick-client distributed computing model and network architecture with a data center model utilizing virtualization to provide the MCC Infrastructure as a Service. The OTF will design a replacement architecture for the Mission Control Facility, leveraging hardware consolidation through the use of blade servers, increasing utilization rates for compute platforms through virtualization while expanding connectivity options through the deployment of secure remote access. The architecture demonstrates the maturity of the technologies generally available in industry today and the ability to successfully abstract the tightly coupled relationship between thick-client software and legacy hardware into a hardware agnostic "Infrastructure as a Service" capability that can scale to meet future requirements of new space programs and spacecraft. This paper discusses the benefits

  12. ICAT: Integrating data infrastructure for facilities based science

    International Nuclear Information System (INIS)

    Flannery, Damian; Matthews, Brian; Griffin, Tom; Bicarregui, Juan; Gleaves, Michael; Lerusse, Laurent; Downing, Roger; Ashton, Alun; Sufi, Shoaib; Drinkwater, Glen; Kleese van Dam, Kerstin

    2009-01-01

    Scientific facilities, in particular large-scale photon and neutron sources, have demanding requirements to manage the increasing quantities of experimental data they generate in a systematic and secure way. In this paper, we describe the ICAT infrastructure for cataloguing facility generated experimental data which has been in development within STFC and DLS for several years. We consider the factors which have influenced its design and describe its architecture and metadata model, a key tool in the management of data. We go on to give an outline of its current implementation and use, with plans for its future development.

  13. 76 FR 14590 - Defense Federal Acquisition Regulation Supplement; Safety of Facilities, Infrastructure, and...

    Science.gov (United States)

    2011-03-17

    ... makes it unlikely that a small business could afford to sustain the infrastructure required to perform...-AG73 Defense Federal Acquisition Regulation Supplement; Safety of Facilities, Infrastructure, and... facilities, infrastructure, and equipment that are intended for use by military or civilian personnel of the...

  14. Primary health care facility infrastructure and services and the ...

    African Journals Online (AJOL)

    ... Research Council ae Currently from Cape Peninsula University of Technology ... Keywords: primary health care facilities; nutritional status; children; caregivers' rural; South Africa ... underlying causes of malnutrition in children, while poor food quality, .... Information on PHC facility infrastructure and services was obtained.

  15. Analysis of the Conceptual Understanding Level of Grade XI Science Students of SMAN 3 Mataram Using One-Tier and Two-Tier Tests on the Solubility and Solubility Product Topic

    OpenAIRE

    Nabilah, Nabilah; Andayani, Yayuk; Laksmiwati, Dwi

    2013-01-01

    The objective of this research was to analyze the conceptual understanding level of grade XI science (IPA) students of SMAN 3 Mataram using one-tier and two-tier tests on the solubility and solubility product topic. The one-tier test was administered to grade XI IPA 4 students and the two-tier test to grade XI IPA 5 students. The level of conceptual understanding measured with the one-tier test (57.4%) was higher than with the two-tier test (21.03%). The one-tier test only showed the students' conceptual understanding, whereas ...

  16. Effects of a Tier 3 Self-Management Intervention Implemented with and without Treatment Integrity

    Science.gov (United States)

    Lower, Ashley; Young, K. Richard; Christensen, Lynnette; Caldarella, Paul; Williams, Leslie; Wills, Howard

    2016-01-01

    This study investigated the effects of a Tier 3 peer-matching self-management intervention on two elementary school students who had previously been less responsive to Tier 1 and Tier 2 interventions. The Tier 3 self-management intervention, which was implemented in the general education classrooms, included daily electronic communication between…

  17. Rapid assessment of infrastructure of primary health care facilities - a relevant instrument for health care systems management.

    Science.gov (United States)

    Scholz, Stefan; Ngoli, Baltazar; Flessa, Steffen

    2015-05-01

    Health care infrastructure constitutes a major component of the structural quality of a health system. Infrastructural deficiencies of health services are reported in literature and research. A number of instruments exist for the assessment of infrastructure. However, no easy-to-use instruments to assess health facility infrastructure in developing countries are available. Present tools are not applicable for a rapid assessment by health facility staff. Therefore, health information systems lack data on facility infrastructure. A rapid assessment tool for the infrastructure of primary health care facilities was developed by the authors and pilot-tested in Tanzania. The tool measures the quality of all infrastructural components comprehensively and with high standardization. Ratings use a 2-1-0 scheme which is frequently used in Tanzanian health care services. Infrastructural indicators and indices are obtained from the assessment and serve for reporting and tracing of interventions. The tool was pilot-tested in Tanga Region (Tanzania). The pilot test covered seven primary care facilities in the range between dispensary and district hospital. The assessment encompassed the facilities as entities as well as 42 facility buildings and 80 pieces of technical medical equipment. A full assessment of facility infrastructure was undertaken by health care professionals while the rapid assessment was performed by facility staff. Serious infrastructural deficiencies were revealed. The rapid assessment tool proved a reliable instrument of routine data collection by health facility staff. The authors recommend integrating the rapid assessment tool in the health information systems of developing countries. Health authorities in a decentralized health system are thus enabled to detect infrastructural deficiencies and trace the effects of interventions. The tool can lay the data foundation for district facility infrastructure management.
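    To make the 2-1-0 rating scheme concrete, the sketch below turns a set of item ratings into a simple facility infrastructure index expressed as a percentage of the maximum attainable score; this aggregation rule and the item names are illustrative assumptions, not the authors' published algorithm.

      # Illustrative infrastructure index from 2-1-0 ratings
      # (2 = good, 1 = needs repair, 0 = non-functional or absent).
      def infrastructure_index(ratings):
          """Return the facility score as a percentage of the maximum attainable score."""
          if not ratings:
              raise ValueError("no items rated")
          return 100.0 * sum(ratings.values()) / (2 * len(ratings))

      facility = {"water_supply": 2, "power_supply": 1, "roof": 2,
                  "sterilizer": 0, "cold_chain_fridge": 1}
      print(f"Infrastructure index: {infrastructure_index(facility):.0f}%")  # -> 60%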

  18. 40 CFR 79.54 - Tier 3.

    Science.gov (United States)

    2010-07-01

    ...) Historical and/or projected production volumes and market distributions; and (iv) Estimated population and/or... areas of concern. (f) General and Pulmonary Toxicity Testing. (1) A potential need for Tier 3 general and/or pulmonary toxicity testing may be indicated if, in comparison with appropriate controls, the...

  19. Cryogenic infrastructure for Fermilab's ILC vertical cavity test facility

    Energy Technology Data Exchange (ETDEWEB)

    Carcagno, R.; Ginsburg, C.; Huang, Y.; Norris, B.; Ozelis, J.; Peterson, T.; Poloubotko, V.; Rabehl, R.; Sylvester, C.; Wong, M.; /Fermilab

    2006-06-01

    Fermilab is building a Vertical Cavity Test Facility (VCTF) to provide for R&D and pre-production testing of bare 9-cell, 1.3-GHz superconducting RF (SRF) cavities for the International Linear Collider (ILC) program. This facility is located in the existing Industrial Building 1 (IB1) where the Magnet Test Facility (MTF) also resides. Helium and nitrogen cryogenics are shared between the VCTF and MTF including the existing 1500-W at 4.5-K helium refrigerator with vacuum pumping for super-fluid operation (125-W capacity at 2-K). The VCTF is being constructed in multiple phases. The first phase is scheduled for completion in mid 2007, and includes modifications to the IB1 cryogenic infrastructure to allow helium cooling to be directed to either the VCTF or MTF as scheduling demands require. At this stage, the VCTF consists of one Vertical Test Stand (VTS) cryostat for the testing of one cavity in a 2-K helium bath. Planning is underway to provide a total of three Vertical Test Stands at VCTF, each capable of accommodating two cavities. Cryogenic infrastructure improvements necessary to support these additional VCTF test stands include a dedicated ambient temperature vacuum pump, a new helium purification skid, and the addition of helium gas storage. This paper describes the system design and initial cryogenic operation results for the first VCTF phase, and outlines future cryogenic infrastructure upgrade plans for expanding to three Vertical Test Stands.

  20. Tier2 Submit Software

    Science.gov (United States)

    Download this tool for Windows or Mac, which helps facilities prepare a Tier II electronic chemical inventory report. The data can also be exported into the CAMEOfm (Computer-Aided Management of Emergency Operations) emergency planning software.

  1. Extending the farm on external sites: the INFN Tier-1 experience

    Science.gov (United States)

    Boccali, T.; Cavalli, A.; Chiarelli, L.; Chierici, A.; Cesini, D.; Ciaschini, V.; Dal Pra, S.; dell'Agnello, L.; De Girolamo, D.; Falabella, A.; Fattibene, E.; Maron, G.; Prosperini, A.; Sapunenko, V.; Virgilio, S.; Zani, S.

    2017-10-01

    The Tier-1 at CNAF is the main INFN computing facility, offering computing and storage resources to more than 30 different scientific collaborations, including the 4 experiments at the LHC. A huge increase in computing needs is also foreseen in the coming years, driven mainly by the experiments at the LHC (especially from the start of Run 3 in 2021) but also by other upcoming experiments such as CTA[1]. While we are considering the upgrade of the infrastructure of our data center, we are also evaluating the possibility of using CPU resources available in other data centres or even leased from commercial cloud providers. Hence, at the INFN Tier-1, besides participating in the EU project HNSciCloud, we have also pledged a small amount of computing resources (~2000 cores) located at the Bari ReCaS[2] for the WLCG experiments for 2016, and we are testing the use of resources provided by a commercial cloud provider. While the Bari ReCaS data center is directly connected to the GARR network[3], with the obvious advantage of a low-latency and high-bandwidth connection, in the case of the commercial provider we rely only on the General Purpose Network. In this paper we describe the set-up phase and the first results of these installations, started in the last quarter of 2015, focusing on the issues that we had to cope with and discussing the measured results in terms of efficiency.

  2. The ATLAS Tier-0 Overview and operational experience

    CERN Document Server

    Elsing, M; Nairz, A; Negri, G

    2010-01-01

    Within the ATLAS hierarchical, multi-tier computing infrastructure, the Tier-0 centre at CERN is mainly responsible for prompt processing of the raw data coming from the online DAQ system, to archive the raw and derived data on tape, to register the data with the relevant catalogues and to distribute them to the associated Tier-1 centers. The Tier-0 is already fully functional. It has been successfully participating in all cosmic and commissioning data taking since May 2007, and was ramped up to its foreseen full size, performance and throughput for the cosmic (and short single-beam) run periods between July and October 2008. Data and work flows for collision data taking were exercised in several "Full Dress Rehearsals" (FDRs) in the course of 2008. The transition from an expert to a shifter-based system was successfully established in July 2008. This article will give an overview of the Tier-0 system, its data and work flows, and operations model. It will review the operational experience gained in cosmic, c...

  3. Facility design philosophy: Tank Waste Remediation System Process support and infrastructure definition

    International Nuclear Information System (INIS)

    Leach, C.E.; Galbraith, J.D.; Grant, P.R.; Francuz, D.J.; Schroeder, P.J.

    1995-11-01

    This report documents the current facility design philosophy for the Tank Waste Remediation System (TWRS) process support and infrastructure definition. The Tank Waste Remediation System Facility Configuration Study (FCS) initially documented the identification and definition of support functions and infrastructure essential to the TWRS processing mission. Since the issuance of the FCS, the Westinghouse Hanford Company (WHC) has proceeded to develop information and requirements essential for the technical definition of the TWRS treatment processing programs

  4. Biosafety level 3 facility: essential infrastructure in biodefense strategy in the Republic of Croatia

    International Nuclear Information System (INIS)

    Cvetko Krajinovic, L.; Markotic, A.

    2009-01-01

    A wide spectrum of microorganisms nowadays presents serious health risks to humans and animals, and their potential for use as biological weapons has become an important concern for governments and responsible authorities. This has resulted in the implementation of measures (known as biodefense) directed toward containment of potentially harmful biological agents, with the purpose of reducing or eliminating hazards to laboratory workers, other persons, and the outside environment. Many of these are dangerous pathogens that require a biosafety level 3 (BSL-3) facility for research and management. Biosafety level 3 comprises the combinations of standard and special microbiological laboratory practices and techniques, safety equipment, and laboratory facilities recommended for work with indigenous or exotic agents that may cause serious or potentially lethal disease through inhalation route exposure. Croatia is endemic for many of these threatening pathogens/diseases (e.g. tularemia, pulmonary and non-pulmonary tuberculosis, brucellosis, Q fever, glanders, melioidosis, typhoid fever, viral hemorrhagic fevers, hepatitis B and C, HIV, etc.). Its strategic geographic position and the overall worldwide rise in international trade and travel open up the possibility of importing new microorganisms or even the occurrence of an outbreak of entirely unknown infectious origin. We also cannot exclude the possibility of so-called deliberately emerging microbes used for intentional bioterrorist purposes. It is therefore obvious that Croatia needs infrastructure and well-trained human capacity at biosafety level 3 to cope with incoming public health challenges and threats. The fundamental objective of a laboratory in which dangerous agents can safely be handled is surveillance and quick response, as key elements in controlling the scenarios referred to above. For that purpose, the first BSL-3 facility in Croatia is in the final phase of its reconstruction at the University

  5. Biosafety level 3 facility: essential infrastructure in biodefense strategy in the Republic of Croatia

    Energy Technology Data Exchange (ETDEWEB)

    Cvetko Krajinovic, L; Markotic, A [University Hospital for Infectious Diseases Dr Fran Mihaljevic, Zagreb (Croatia)

    2009-07-01

    A wide spectrum of microorganisms nowadays presents serious health risks to humans and animals, and their potential for use as biological weapons has become an important concern for governments and responsible authorities. This has resulted in the implementation of measures (known as biodefense) directed toward containment of potentially harmful biological agents, with the purpose of reducing or eliminating hazards to laboratory workers, other persons, and the outside environment. Many of these are dangerous pathogens that require a biosafety level 3 (BSL-3) facility for research and management. Biosafety level 3 comprises the combinations of standard and special microbiological laboratory practices and techniques, safety equipment, and laboratory facilities recommended for work with indigenous or exotic agents that may cause serious or potentially lethal disease through inhalation route exposure. Croatia is endemic for many of these threatening pathogens/diseases (e.g. tularemia, pulmonary and non-pulmonary tuberculosis, brucellosis, Q fever, glanders, melioidosis, typhoid fever, viral hemorrhagic fevers, hepatitis B and C, HIV, etc.). Its strategic geographic position and the overall worldwide rise in international trade and travel open up the possibility of importing new microorganisms or even the occurrence of an outbreak of entirely unknown infectious origin. We also cannot exclude the possibility of so-called deliberately emerging microbes used for intentional bioterrorist purposes. It is therefore obvious that Croatia needs infrastructure and well-trained human capacity at biosafety level 3 to cope with incoming public health challenges and threats. The fundamental objective of a laboratory in which dangerous agents can safely be handled is surveillance and quick response, as key elements in controlling the scenarios referred to above. For that purpose, the first BSL-3 facility in Croatia is in the final phase of its reconstruction at the University

  6. Multiple tier fuel cycle studies for waste transmutation

    International Nuclear Information System (INIS)

    Hill, R.N.; Taiwo, T.A.; Stillman, J.A.; Graziano, D.J.; Bennett, D.R.; Trellue, H.; Todosow, M.; Halsey, W.G.; Baxter, A.

    2002-01-01

    As part of the U.S. Department of Energy Advanced Accelerator Applications Program, a systems study was conducted to evaluate the transmutation performance of advanced fuel cycle strategies. Three primary fuel cycle strategies were evaluated: dual-tier systems with plutonium separation, dual-tier systems without plutonium separation, and single-tier systems without plutonium separation. For each case, the system mass flow and TRU consumption were evaluated in detail. Furthermore, the loss of materials in fuel processing was tracked including the generation of new waste streams. Based on these results, the system performance was evaluated with respect to several key transmutation parameters including TRU inventory reduction, radiotoxicity, and support ratio. The importance of clean fuel processing (∼0.1% losses) and inclusion of a final tier fast spectrum system are demonstrated. With these two features, all scenarios capably reduce the TRU and plutonium waste content, significantly reducing the radiotoxicity; however, a significant infrastructure (at least 1/10 the total nuclear capacity) is required for the dedicated transmutation system

  7. Contested environmental policy infrastructure: socio-political acceptance of renewable energy, water, and waste facilities

    NARCIS (Netherlands)

    Wolsink, M.

    2010-01-01

    The construction of new infrastructure is hotly contested. This paper presents a comparative study on three environmental policy domains in the Netherlands that all deal with legitimising building and locating infrastructure facilities. Such infrastructure is usually declared essential to

  8. Contested environmental policy infrastructure: Socio-political acceptance of renewable energy, water, and waste facilities

    International Nuclear Information System (INIS)

    Wolsink, Maarten

    2010-01-01

    The construction of new infrastructure is hotly contested. This paper presents a comparative study on three environmental policy domains in the Netherlands that all deal with legitimising building and locating infrastructure facilities. Such infrastructure is usually declared essential to environmental policy and claimed to serve sustainability goals. They are considered to serve (proclaimed) public interests, while the adverse impact or risk that mainly concerns environmental values as well is concentrated at a smaller scale, for example in local communities. The social acceptance of environmental policy infrastructure is institutionally determined. The institutional capacity for learning in infrastructure decision-making processes in the following three domains is compared: 1. The implementation of wind power as a renewable energy innovation; 2. The policy on space-water adaptation, with its claim to implement a new style of management replacing the current practice of focusing on control and 'hard' infrastructure; 3. Waste policy with a focus on sound waste management and disposal, claiming a preference for waste minimization (the 'waste management hierarchy'). All three cases show a large variety of social acceptance issues, where the appraisal of the impact of siting the facilities is confronted with the desirability of the policies. In dealing with environmental conflict, the environmental capacity of the Netherlands appears to be low. The policies are frequently hotly contested within the process of infrastructure decision-making. Decision-making on infrastructure is often framed as if consensus about the objectives of environmental policies exists. These claims are not justified, thereby stimulating the emergence of environmental conflicts that discourage social acceptance of the policies. Authorities are frequently involved in planning infrastructure that conflicts with their officially proclaimed policy objectives. In these circumstances, they are

  9. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    International Nuclear Information System (INIS)

    Bagnasco, S; Berzano, D; Brunetti, R; Lusso, S; Vallero, S

    2014-01-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It allows resources to be allocated dynamically and efficiently to any application, and the virtual machines to be tailored to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily while minimizing downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site and a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.
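    The abstract mentions exposing EC2-like APIs and contextualizing instances with CloudInit; a minimal sketch of how a virtual machine might be launched against such an endpoint with boto3 is shown below. The endpoint URL, credentials, image identifier and user-data are hypothetical, and whether a given OpenNebula deployment accepts exactly these calls is an assumption.

      # Minimal sketch: start a VM through an EC2-compatible endpoint and pass a
      # cloud-init user-data script for contextualization. All identifiers are invented.
      import boto3

      USER_DATA = "\n".join([
          "#cloud-config",
          "packages:",
          "  - cvmfs",
          "runcmd:",
          "  - [systemctl, start, autofs]",
      ])

      ec2 = boto3.client(
          "ec2",
          endpoint_url="https://cloud.example.org:4567",   # hypothetical EC2-compatible endpoint
          region_name="default",
          aws_access_key_id="ACCESS_KEY",
          aws_secret_access_key="SECRET_KEY",
      )

      response = ec2.run_instances(
          ImageId="ami-wlcg-worker",   # hypothetical image identifier
          InstanceType="m1.large",
          MinCount=1,
          MaxCount=1,
          UserData=USER_DATA,          # consumed by cloud-init at first boot
      )
      print(response["Instances"][0]["InstanceId"])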

  10. Tier 3 multidisciplinary medical weight management improves outcome of Roux-en-Y gastric bypass surgery.

    Science.gov (United States)

    Patel, P; Hartland, A; Hollis, A; Ali, R; Elshaw, A; Jain, S; Khan, A; Mirza, S

    2015-04-01

    In 2013 the Department of Health specified eligibility for bariatric surgery funded by the National Health Service. This included a mandatory specification that patients first complete a Tier 3 medical weight management programme. The clinical effectiveness of this recommendation has not been evaluated previously. Our bariatric centre has provided a Tier 3 programme six months prior to bariatric surgery since 2009. The aim of our retrospective study was to compare weight loss in two cohorts: Roux-en-Y gastric bypass only (RYGB only cohort) versus Tier 3 weight management followed by RYGB (Tier 3 cohort). A total of 110 patients were selected for the study: 66 in the RYGB only cohort and 44 in the Tier 3 cohort. Patients in both cohorts were matched for age, sex, preoperative body mass index and pre-existing co-morbidities. The principal variable was therefore whether they undertook the weight management programme prior to RYGB. Patients from both cohorts were followed up at 6 and 12 months to assess weight loss. The mean weight loss at 6 months for the Tier 3 cohort was 31% (range: 18-69%, standard deviation [SD]: 0.10 percentage points) compared with 23% (range: 4-93%, SD: 0.12 percentage points) for the RYGB only cohort (p=0.0002). The mean weight loss at 12 months for the Tier 3 cohort was 34% (range: 17-51%, SD: 0.09 percentage points) compared with 27% (range: 14-48%, SD: 0.87 percentage points) in the RYGB only cohort (p=0.0037). Our study revealed that in our matched cohorts, patients receiving Tier 3 specialist medical weight management input prior to RYGB lost significantly more weight at 6 and 12 months than RYGB only patients. This confirms the clinical efficacy of such a weight management programme prior to gastric bypass surgery and supports its inclusion in eligibility criteria for bariatric surgery.
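    As a rough plausibility check on the reported 6-month comparison, a two-sample t-test can be run directly from the published summary statistics; the sketch below does so with SciPy, under the assumption that the quoted standard deviations are fractions of body weight (0.10 = 10 percentage points), which is an interpretation rather than something stated in the abstract.

      # Re-derive an approximate p-value for the 6-month weight-loss comparison from
      # the summary statistics quoted above. Assumes the SDs are fractions of body
      # weight; the published analysis may differ in detail.
      from scipy import stats

      tier3 = dict(mean=0.31, std=0.10, nobs=44)   # Tier 3 + RYGB cohort
      rygb = dict(mean=0.23, std=0.12, nobs=66)    # RYGB-only cohort

      t, p = stats.ttest_ind_from_stats(
          tier3["mean"], tier3["std"], tier3["nobs"],
          rygb["mean"], rygb["std"], rygb["nobs"],
          equal_var=False,   # Welch's t-test
      )
      print(f"t = {t:.2f}, two-sided p = {p:.4f}")   # same order as the reported p = 0.0002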

  11. Multiple Tier Fuel Cycle Studies for Waste Transmutation

    International Nuclear Information System (INIS)

    Hill, R.N.; Taiwo, T.A.; Stillman, J.A.; Graziano, D.J.; Bennett, D.R.; Trellue, H.; Todosow, M.; Halsey, W.G.; Baxter, A.

    2002-01-01

    As part of the U.S. Department of Energy Advanced Accelerator Applications Program, a systems study was conducted to evaluate the transmutation performance of advanced fuel cycle strategies. Three primary fuel cycle strategies were evaluated: dual-tier systems with plutonium separation, dual-tier systems without plutonium separation, and single-tier systems without plutonium separation. For each case, the system mass flow and TRU consumption were evaluated in detail. Furthermore, the loss of materials in fuel processing was tracked including the generation of new waste streams. Based on these results, the system performance was evaluated with respect to several key transmutation parameters including TRU inventory reduction, radiotoxicity, and support ratio. The importance of clean fuel processing (∼0.1% losses) and inclusion of a final tier fast spectrum system are demonstrated. With these two features, all scenarios capably reduce the TRU and plutonium waste content, significantly reducing the radiotoxicity; however, a significant infrastructure (at least 1/10 the total nuclear capacity) is required for the dedicated transmutation system. (authors)

  12. Testing SLURM open source batch system for a Tier1/Tier2 HEP computing facility

    International Nuclear Information System (INIS)

    Donvito, Giacinto; Italiano, Alessandro; Salomoni, Davide

    2014-01-01

    This work presents the testing activities carried out to verify whether the SLURM batch system could be used as the production batch system of a typical Tier1/Tier2 HEP computing center. SLURM (Simple Linux Utility for Resource Management) is an Open Source batch system developed mainly by the Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull. Testing was focused both on verifying the functionalities of the batch system and the performance that SLURM is able to offer. We first describe our initial set of requirements. Functionally, we started configuring SLURM so that it replicates all the scheduling policies already used in production in the computing centers involved in the test, i.e. INFN-Bari and the INFN-Tier1 at CNAF, Bologna. Currently, the INFN-Tier1 is using IBM LSF (Load Sharing Facility), while INFN-Bari, an LHC Tier2 for both CMS and Alice, is using Torque as resource manager and MAUI as scheduler. We show how we configured SLURM in order to enable several scheduling functionalities such as Hierarchical FairShare, Quality of Service, user-based and group-based priority, limits on the number of jobs per user/group/queue, job age scheduling, job size scheduling, and scheduling of consumable resources. We then show how different job typologies, such as serial, MPI, multi-thread, whole-node and interactive jobs, can be managed. Tests on the use of ACLs on queues or in general other resources are then described. A peculiar SLURM feature we also verified is triggers on events, useful to configure specific actions on each possible event in the batch system. We also tested highly available configurations for the master node. This feature is of paramount importance since a mandatory requirement in our scenarios is to have a working farm cluster even in case of hardware failure of the server(s) hosting the batch system. Among our requirements there is also the possibility to deal with pre-execution and post
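    As a compressed illustration of the configuration surface behind the features listed above (multifactor priority with fairshare, age, size and QOS weights, consumable resources, prolog/epilog hooks, a backup controller), the sketch below renders a slurm.conf fragment from Python. Parameter names follow the SLURM documentation of that era, but the values and host names are hypothetical and are not taken from the INFN setup described in the paper.

      # Render a hypothetical slurm.conf fragment touching the features discussed above.
      # Values and host names are invented; parameter names follow the SLURM docs.
      SLURM_CONF_FRAGMENT = "\n".join([
          "ControlMachine=batch-master01",        # primary slurmctld
          "BackupController=batch-master02",      # high-availability master node
          "SelectType=select/cons_res",           # consumable resources (cores, memory)
          "SelectTypeParameters=CR_Core_Memory",
          "PriorityType=priority/multifactor",    # hierarchical fairshare + other factors
          "PriorityDecayHalfLife=7-0",
          "PriorityWeightFairshare=100000",
          "PriorityWeightAge=1000",               # job age scheduling
          "PriorityWeightJobSize=1000",           # job size scheduling
          "PriorityWeightQOS=10000",              # quality-of-service weighting
          "AccountingStorageEnforce=limits,qos",  # per-user/group/QOS limits
          "Prolog=/etc/slurm/prolog.sh",          # pre-execution hook
          "Epilog=/etc/slurm/epilog.sh",          # post-execution hook
      ])

      with open("slurm.conf.fragment", "w") as handle:
          handle.write(SLURM_CONF_FRAGMENT + "\n")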

  13. Testing SLURM open source batch system for a Tier1/Tier2 HEP computing facility

    Science.gov (United States)

    Donvito, Giacinto; Salomoni, Davide; Italiano, Alessandro

    2014-06-01

    This work presents the testing activities carried out to verify whether the SLURM batch system could be used as the production batch system of a typical Tier1/Tier2 HEP computing center. SLURM (Simple Linux Utility for Resource Management) is an Open Source batch system developed mainly by the Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull. Testing was focused both on verifying the functionalities of the batch system and the performance that SLURM is able to offer. We first describe our initial set of requirements. Functionally, we started configuring SLURM so that it replicates all the scheduling policies already used in production in the computing centers involved in the test, i.e. INFN-Bari and the INFN-Tier1 at CNAF, Bologna. Currently, the INFN-Tier1 is using IBM LSF (Load Sharing Facility), while INFN-Bari, an LHC Tier2 for both CMS and Alice, is using Torque as resource manager and MAUI as scheduler. We show how we configured SLURM in order to enable several scheduling functionalities such as Hierarchical FairShare, Quality of Service, user-based and group-based priority, limits on the number of jobs per user/group/queue, job age scheduling, job size scheduling, and scheduling of consumable resources. We then show how different job typologies, such as serial, MPI, multi-thread, whole-node and interactive jobs, can be managed. Tests on the use of ACLs on queues or in general other resources are then described. A peculiar SLURM feature we also verified is triggers on events, useful to configure specific actions on each possible event in the batch system. We also tested highly available configurations for the master node. This feature is of paramount importance since a mandatory requirement in our scenarios is to have a working farm cluster even in case of hardware failure of the server(s) hosting the batch system. Among our requirements there is also the possibility to deal with pre-execution and post

  14. The ATLAS Tier-0: Overview and operational experience

    International Nuclear Information System (INIS)

    Elsing, Markus; Goossens, Luc; Nairz, Armin; Negri, Guido

    2010-01-01

    Within the ATLAS hierarchical, multi-tier computing infrastructure, the Tier-0 centre at CERN is mainly responsible for prompt processing of the raw data coming from the online DAQ system, to archive the raw and derived data on tape, to register the data with the relevant catalogues and to distribute them to the associated Tier-1 centers. The Tier-0 is already fully functional. It has been successfully participating in all cosmic and commissioning data taking since May 2007, and was ramped up to its foreseen full size, performance and throughput for the cosmic (and short single-beam) run periods between July and October 2008. Data and work flows for collision data taking were exercised in several 'Full Dress Rehearsals' (FDRs) in the course of 2008. The transition from an expert to a shifter-based system was successfully established in July 2008. This article will give an overview of the Tier-0 system, its data and work flows, and operations model. It will review the operational experience gained in cosmic, commissioning, and FDR exercises during the past year. And it will give an outlook on planned developments and the evolution of the system towards first collision data taking expected now in late Autumn 2009.

  15. The ATLAS Tier-0: Overview and operational experience

    Science.gov (United States)

    Elsing, Markus; Goossens, Luc; Nairz, Armin; Negri, Guido

    2010-04-01

    Within the ATLAS hierarchical, multi-tier computing infrastructure, the Tier-0 centre at CERN is mainly responsible for prompt processing of the raw data coming from the online DAQ system, to archive the raw and derived data on tape, to register the data with the relevant catalogues and to distribute them to the associated Tier-1 centers. The Tier-0 is already fully functional. It has been successfully participating in all cosmic and commissioning data taking since May 2007, and was ramped up to its foreseen full size, performance and throughput for the cosmic (and short single-beam) run periods between July and October 2008. Data and work flows for collision data taking were exercised in several "Full Dress Rehearsals" (FDRs) in the course of 2008. The transition from an expert to a shifter-based system was successfully established in July 2008. This article will give an overview of the Tier-0 system, its data and work flows, and operations model. It will review the operational experience gained in cosmic, commissioning, and FDR exercises during the past year. And it will give an outlook on planned developments and the evolution of the system towards first collision data taking expected now in late Autumn 2009.

  16. Learning Method, Facilities And Infrastructure, And Learning Resources In Basic Networking For Vocational School

    OpenAIRE

    Pamungkas, Bian Dwi

    2017-01-01

    This study aims to examine the contribution of learning methods on learning output, the contribution of facilities and infrastructure on output learning, the contribution of learning resources on learning output, and the contribution of learning methods, the facilities and infrastructure, and learning resources on learning output. The research design is descriptive causative, using a goal-oriented assessment approach in which the assessment focuses on assessing the achievement of a goal. The ...

  17. Developing the Capacity to Implement Tier 2 and Tier 3 Supports: How Do We Support Our Faculty and Staff in Preparing for Sustainability?

    Science.gov (United States)

    Oakes, Wendy Peia; Lane, Kathleen Lynne; Germer, Kathryn A.

    2014-01-01

    School-site and district-level leadership teams rely on the existing knowledge base to select, implement, and evaluate evidence-based practices meeting students' multiple needs within the context of multitiered systems of support. The authors focus on the stages of implementation science as applied to Tier 2 and Tier 3 supports; the…

  18. CMS tier structure and operation of the experiment-specific tasks in Germany

    International Nuclear Information System (INIS)

    Nowack, A

    2008-01-01

    In Germany, several university institutes and research centres take part in the CMS experiment. For the data analysis, several computing centres at different Tier levels, ranging from Tier 1 to Tier 3, exist at these places. The German Tier 1 centre GridKa at the research centre at Karlsruhe serves all four LHC experiments as well as four non-LHC experiments. With respect to the CMS experiment, GridKa is mainly involved in central tasks. The Tier 2 centre in Germany consists of two sites, one at the research centre DESY at Hamburg and one at RWTH Aachen University, forming a federated Tier 2 centre. Both parts cover different aspects of a Tier 2 centre. The German Tier 3 centres are located at the research centre DESY at Hamburg, at RWTH Aachen University, and at the University of Karlsruhe. Furthermore, the construction of a German user analysis facility is planned. Since the CMS community in Germany is rather small, good cooperation between the different sites is essential. This cooperation includes physics topics as well as technical and operational issues. All available communication channels such as email, phone, monthly video conferences, and regular personal meetings are used. For example, the distribution of data sets is coordinated globally within Germany. Also the CMS-specific services such as the data transfer tool PhEDEx or the Monte Carlo production are operated by people from different sites in order to spread the knowledge widely and increase the redundancy in terms of operators.

  19. Multi-Dimensional Optimization for Cloud Based Multi-Tier Applications

    Science.gov (United States)

    Jung, Gueyoung

    2010-01-01

    Emerging trends toward cloud computing and virtualization have been opening new avenues to meet enormous demands of space, resource utilization, and energy efficiency in modern data centers. By being allowed to host many multi-tier applications in consolidated environments, cloud infrastructure providers enable resources to be shared among these…

  20. Administrative limits for tritium concentrations found in non-potable groundwater at nuclear power facilities

    International Nuclear Information System (INIS)

    Parker, R.; Hart, D.; Willert, C.

    2012-01-01

    Currently, there is a regulatory limit available for tritium in drinking water, but no such limit for non-potable groundwater. Voluntary administrative limits for site groundwater may be established at nuclear power facilities to ensure minimal risk to human health and the environment, and provide guidance for investigation or other actions intended to prevent exceedances of future regulatory or guideline limits. This work presents a streamlined approach for nuclear power facilities to develop three tiers of administrative limits for tritium in groundwater so that facilities can identify abnormal/uncontrolled releases of tritium at an early stage, and take appropriate actions to investigate, control, and protect groundwater. Tier 1 represents an upper limit of background, Tier 2 represents a level between background and Tier 3, and Tier 3 represents a risk-based concentration protective of down-gradient receptors. (author)
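    A minimal sketch of the three-tier screening logic described above is given below; the tier concentrations and the associated actions are hypothetical placeholders (the 20,000 pCi/L figure is shown only because it is the published U.S. drinking-water limit for tritium), and a real site would derive Tier 1 from its own background statistics and Tier 3 from a receptor-specific risk calculation.

      # Toy three-tier screening of a tritium result in non-potable groundwater.
      # Tier values and actions are hypothetical; 20,000 pCi/L is quoted only as the
      # U.S. drinking-water limit, for scale.
      TIER_1 = 2_000    # pCi/L, hypothetical upper limit of site background
      TIER_2 = 10_000   # pCi/L, hypothetical intermediate level
      TIER_3 = 20_000   # pCi/L, hypothetical risk-based level protective of receptors

      def screen_tritium(concentration_pci_per_l):
          if concentration_pci_per_l <= TIER_1:
              return "within background: routine monitoring"
          if concentration_pci_per_l <= TIER_2:
              return "Tier 1 exceeded: confirm the result and increase sampling frequency"
          if concentration_pci_per_l <= TIER_3:
              return "Tier 2 exceeded: investigate the source and evaluate controls"
          return "Tier 3 exceeded: take protective action and notify stakeholders"

      for value in (800, 5_000, 15_000, 45_000):
          print(value, "pCi/L ->", screen_tritium(value))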

  1. The JINR Tier1 Site Simulation for Research and Development Purposes

    Directory of Open Access Journals (Sweden)

    Korenkov V.

    2016-01-01

    A system for the simulation of grid and cloud services has been developed at LIT (JINR, Dubna). This simulation system is focused on improving the efficiency of grid/cloud structure development by using the work quality indicators of a real system. The development of such software is very important for building new grid/cloud infrastructures for big scientific experiments, such as the JINR Tier1 site for the WLCG. The simulation of some processes of the Tier1 site is considered as an example of our application approach.

  2. Towards sustainability: An interoperability outline for a Regional ARC based infrastructure in the WLCG and EGEE infrastructures

    International Nuclear Information System (INIS)

    Field, L; Gronager, M; Johansson, D; Kleist, J

    2010-01-01

    Interoperability of grid infrastructures is becoming increasingly important with the emergence of large-scale grid infrastructures based on national and regional initiatives. To achieve interoperability of grid infrastructures, adaptations and bridging of many different systems and services need to be tackled. A grid infrastructure offers services for authentication, authorization, accounting, monitoring and operation, besides the services for handling data and computations. This paper presents an outline of the work done to integrate the Nordic Tier-1 and Tier-2s, which for the compute part are based on the ARC middleware, into the WLCG grid infrastructure co-operated by the EGEE project. In particular, a thorough description of the integration of the compute services is presented.

  3. CLIC Test Facility 3

    CERN Multimedia

    Kossyvakis, I; Faus-golfe, A

    2007-01-01

    The design of CLIC is based on a two-beam scheme, where short pulses of high power 30 GHz RF are extracted from a drive beam running parallel to the main beam. The 3rd generation CLIC Test Facility (CTF3) will demonstrate the generation of the drive beam with the appropriate time structure, the extraction of 30 GHz RF power from this beam, as well as acceleration of a probe beam with 30 GHz RF cavities. The project makes maximum use of existing equipment and infrastructure of the LPI complex, which became available after the closure of LEP.

  4. Proposed Tier 2 Screening Criteria and Tier 3 Field Procedures for Evaluation of Vapor Intrusion (ESTCP Cost and Performance Report)

    Science.gov (United States)

    2012-08-01

    Only report front matter survives in this record excerpt (an acronym list and acknowledgements); the recoverable content indicates Tier 2 vapor intrusion demonstrations at the NIKE Battery Site PR-58 (N. Kingstown, RI) and at an industrial site in southeast Texas, with a note marking one Tier 2 demonstration as not completed.

  5. How Infrastructure Investments Support the U.S. Economy

    OpenAIRE

    Robert Pollin; James Heintz; Heidi Garrett-Peltier

    2009-01-01

    The U.S. system of public infrastructure has deteriorated badly over the past generation. The breaching of New Orleans’ water levees in 2005 and the collapse of the I-35W bridge in Minneapolis in 2007 offered tragic testimony to this long-acknowledged reality. The project of rebuilding our infrastructure now needs to be embraced as a first-tier economic policy priority, and not simply to prevent repetitions of the disasters in New Orleans and Minneapolis. Infrastructure investments—particular...

  6. MODEL OF FEES CALCULATION FOR ACCESS TO TRACK INFRASTRUCTURE FACILITIES

    Directory of Open Access Journals (Sweden)

    M. I. Mishchenko

    2014-12-01

    Purpose. The purpose of the article is to develop one- and two-element models for calculating fees for the use of the track infrastructure of Ukrainian railway transport. Methodology. When planning planned preventive track repair works (PPTRW) and the amount of depreciation charges, the guiding criterion is not the traffic volume but the operating life of the track infrastructure facilities. The cost of PPTRW is determined on the basis of the following: the classification of track repairs; typical technological processes for track repairs; technology-based time standards for PPTRW; the cost of the labour performing the PPTRW and the corresponding hourly wage rates according to Order 98-Ts; the operating cost of machinery; the regulated list; norms of expenditure and the costs of materials and products (which have the largest share of repair costs); railway rates; average distances for the transportation of materials used during repairs; and standards for general production and administrative costs. Findings. The models offered in the article allow an objective accounting of track facility expenses for the purpose of calculating a justified amount of compensation and the profit needed for the effective operation of the track infrastructure. Originality. The methodological bases for determining fees (payments) for the use of track infrastructure on a one- and two-element basis were grounded, taking into account the experience of railways in the EC countries and current transport legislation. Practical value. The article proposes one- and two-element models for calculating fees (payments) for the TIF use that account for the applicable requirements of European transport legislation, providing expense compensation and income formation sufficient to give economic incentives for the efficient operation of the TIE of Ukrainian railway transport.
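    To illustrate in generic terms what one- and two-element access charges look like (this is not the authors' specific parameterisation), the sketch below compares a single rate per gross-tonne-km with a two-part tariff made of a fixed train-path component plus a variable, wear-related component; all rates are invented.

      # Generic illustration of one- and two-element track access charges. All rates
      # are invented; the paper's cost components (PPTRW, depreciation, wages,
      # materials, overheads) would feed the real parameters.
      def one_element_fee(gross_tonne_km, rate_per_gtkm=0.020):
          """Single variable charge proportional to gross-tonne-km."""
          return rate_per_gtkm * gross_tonne_km

      def two_element_fee(train_km, gross_tonne_km,
                          fixed_rate_per_train_km=0.90, variable_rate_per_gtkm=0.012):
          """Fixed train-path component plus a wear-related variable component."""
          return fixed_rate_per_train_km * train_km + variable_rate_per_gtkm * gross_tonne_km

      # Example: a 500 km run with a 3000 t gross train weight (1.5 million gross-tonne-km).
      train_km, gross_tonne_km = 500.0, 500.0 * 3000.0
      print(f"one-element fee: {one_element_fee(gross_tonne_km):,.2f}")
      print(f"two-element fee: {two_element_fee(train_km, gross_tonne_km):,.2f}")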

  7. Tier 3 batch system data locality via managed caches

    Science.gov (United States)

    Fischer, Max; Giffels, Manuel; Jung, Christopher; Kühn, Eileen; Quast, Günter

    2015-05-01

    Modern data processing increasingly relies on data locality for performance and scalability, whereas the common HEP approaches aim for uniform resource pools with minimal locality, recently even across site boundaries. To combine the advantages of both, the High-Performance Data Analysis (HPDA) Tier 3 concept opportunistically establishes data locality via coordinated caches. In accordance with HEP Tier 3 activities, the design incorporates two major assumptions: first, only a fraction of the data is accessed regularly and is thus the deciding factor for overall throughput; second, data access may fall back to non-local sources, making permanent local data availability an inefficient use of resources. Based on this, the HPDA design generically extends the available storage hierarchy into the batch system. Using the batch system itself to schedule file locality, an array of independent caches on the worker nodes is dynamically populated with high-profile data. Cache state information is exposed to the batch system both for managing the caches and for scheduling jobs. As a result, users directly work with a regular, adequately sized storage system, while their automated batch processes are presented with local replicas of the data whenever possible.
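
    The scheduling idea described above (prefer worker nodes whose caches already hold a job's inputs, while keeping every node eligible because access can fall back to non-local storage) can be sketched roughly as follows; the data structures are illustrative, not the HPDA implementation:

        # Minimal sketch of cache-aware matchmaking: jobs prefer nodes that already
        # cache their inputs, but any node remains eligible (access can fall back
        # to non-local storage). Data structures are illustrative only.

        def cache_score(job_inputs, node_cache):
            """Fraction of the job's input files already present in a node's cache."""
            if not job_inputs:
                return 0.0
            return len(set(job_inputs) & node_cache) / len(job_inputs)

        def pick_node(job_inputs, free_nodes):
            """free_nodes: dict mapping node name -> set of cached file names."""
            # Highest cached fraction wins; ties fall back to an arbitrary free node.
            return max(free_nodes, key=lambda n: cache_score(job_inputs, free_nodes[n]))

        if __name__ == "__main__":
            nodes = {"wn01": {"/store/a.root", "/store/b.root"}, "wn02": {"/store/c.root"}}
            print(pick_node(["/store/a.root", "/store/b.root", "/store/d.root"], nodes))  # wn01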

  8. Europlanet Research Infrastructure: Planetary Simulation Facilities

    Science.gov (United States)

    Davies, G. R.; Mason, N. J.; Green, S.; Gómez, F.; Prieto, O.; Helbert, J.; Colangeli, L.; Srama, R.; Grande, M.; Merrison, J.

    2008-09-01

    EuroPlanet The Europlanet Research Infrastructure consortium, funded under FP7, aims to provide the EU planetary science community with greater access to research infrastructure. A series of networking and outreach initiatives will be complemented by joint research activities and the formation of three Trans-National Access distributed service laboratories (TNAs) to provide a unique and comprehensive set of analogue field sites, laboratory simulation facilities, and extraterrestrial sample analysis tools. Here we report on the infrastructure that comprises the second TNA: Planetary Simulation Facilities. Eleven laboratory-based facilities are able to recreate the conditions found in the atmospheres and on the surfaces of planetary systems, with specific emphasis on Martian, Titan and Europa analogues. The strategy has been to offer some overlap in capabilities to ensure access for the highest number of users and to allow for progressive and efficient development strategies, for example initial testing of mobility capability prior to step-wise development within planetary atmospheres that can be made progressively more hostile through the introduction of extreme temperatures, radiation, wind and dust. Europlanet Research Infrastructure Facilities: Mars atmosphere simulation chambers at VUA and OU. These relatively large chambers (up to 1 x 0.5 x 0.5 m) simulate Martian atmospheric conditions, and the dual cooling options at VUA allow stabilised instrument temperatures while the remainder of the sample chamber can be varied between 220 K and 350 K. Researchers can therefore assess analytical protocols for instruments operating on Mars, e.g. the effect of pCO2, temperature and material (e.g., ± ice) on spectroscopic and laser ablation techniques, while monitoring the performance of detection technologies such as CCDs at low temperature and variable pH2O and pCO2. Titan atmosphere and surface simulation chamber at OU. The chamber simulates Titan's atmospheric composition under a range of

  9. Experience building and operating the CMS Tier-1 computing centres

    Science.gov (United States)

    Albert, M.; Bakken, J.; Bonacorsi, D.; Brew, C.; Charlot, C.; Huang, Chih-Hao; Colling, D.; Dumitrescu, C.; Fagan, D.; Fassi, F.; Fisk, I.; Flix, J.; Giacchetti, L.; Gomez-Ceballos, G.; Gowdy, S.; Grandi, C.; Gutsche, O.; Hahn, K.; Holzman, B.; Jackson, J.; Kreuzer, P.; Kuo, C. M.; Mason, D.; Pukhaeva, N.; Qin, G.; Quast, G.; Rossman, P.; Sartirana, A.; Scheurer, A.; Schott, G.; Shih, J.; Tader, P.; Thompson, R.; Tiradani, A.; Trunov, A.

    2010-04-01

    The CMS Collaboration relies on 7 globally distributed Tier-1 computing centres located at large universities and national laboratories for a second custodial copy of the CMS RAW data and the primary copy of the simulated data, data serving capacity to Tier-2 centres for analysis, and the bulk of the reprocessing and event selection capacity in the experiment. The Tier-1 sites have a challenging role in CMS because they are expected to ingest and archive data from both CERN and regional Tier-2 centres, while they export data to a global mesh of Tier-2s at rates comparable to the raw export data rate from CERN. The combined capacity of the Tier-1 centres is more than twice the resources located at CERN, and efficiently utilizing these large, distributed resources represents a challenge. In this article we will discuss the experience building, operating, and utilizing the CMS Tier-1 computing centres. We will summarize the facility challenges at the Tier-1s, including the stable operation of CMS services, the ability to scale to large numbers of processing requests and large volumes of data, and the ability to provide custodial storage and high performance data serving. We will also present the operations experience utilizing the distributed Tier-1 centres from a distance: transferring data, submitting data serving requests, and submitting batch processing requests.

  10. Experience building and operating the CMS Tier-1 computing centres

    International Nuclear Information System (INIS)

    Albert, M; Bakken, J; Huang, Chih-Hao; Dumitrescu, C; Fagan, D; Fisk, I; Giacchetti, L; Gutsche, O; Holzman, B; Bonacorsi, D; Grandi, C; Brew, C; Jackson, J; Charlot, C; Colling, D; Fassi, F; Flix, J; Gomez-Ceballos, G; Hahn, K; Gowdy, S

    2010-01-01

    The CMS Collaboration relies on 7 globally distributed Tier-1 computing centres located at large universities and national laboratories for a second custodial copy of the CMS RAW data and the primary copy of the simulated data, data serving capacity to Tier-2 centres for analysis, and the bulk of the reprocessing and event selection capacity in the experiment. The Tier-1 sites have a challenging role in CMS because they are expected to ingest and archive data from both CERN and regional Tier-2 centres, while they export data to a global mesh of Tier-2s at rates comparable to the raw export data rate from CERN. The combined capacity of the Tier-1 centres is more than twice the resources located at CERN, and efficiently utilizing these large, distributed resources represents a challenge. In this article we will discuss the experience building, operating, and utilizing the CMS Tier-1 computing centres. We will summarize the facility challenges at the Tier-1s, including the stable operation of CMS services, the ability to scale to large numbers of processing requests and large volumes of data, and the ability to provide custodial storage and high performance data serving. We will also present the operations experience utilizing the distributed Tier-1 centres from a distance: transferring data, submitting data serving requests, and submitting batch processing requests.

  11. Spanish ATLAS Tier-1 &Tier-2 perspective on computing over the next years

    CERN Document Server

    Gonzalez de la Hoz, Santiago; The ATLAS collaboration

    2018-01-01

    Since the beginning of the WLCG Project, the Spanish ATLAS computing centres have contributed reliable and stable resources as well as personnel to the ATLAS Collaboration. Our contribution to the ATLAS Tier-2 and Tier-1 computing resources (disk and CPUs) over the last 10 years has been around 5%, even though the Spanish contribution to the ATLAS detector construction, as well as the number of authors, are both close to 3%. In 2015 an international advisory committee recommended revising our contribution in line with the participation in the ATLAS experiment. In this scenario, we are optimising the federation of three sites located in Barcelona, Madrid and Valencia, taking into account that the ATLAS collaboration has developed workflows and tools to use all the resources available to the collaboration flexibly, so that the tiered structure is gradually fading. In this contribution, we would like to show the evolution and technical updates in the ATLAS Spanish Federated Tier-2 and Tier-1. Some developments w...

  12. Regulatory Compliance in Multi-Tier Supplier Networks

    Science.gov (United States)

    Goossen, Emray R.; Buster, Duke A.

    2014-01-01

    Over the years, avionics systems have increased in complexity to the point where 1st tier suppliers to an aircraft OEM find it financially beneficial to outsource designs of subsystems to 2nd tier and at times to 3rd tier suppliers. Combined with challenging schedule and budgetary pressures, the environment in which safety-critical systems are being developed introduces new hurdles for regulatory agencies and industry. This new environment of both complex systems and tiered development has raised concerns about the ability of the designers to ensure safety considerations are fully addressed throughout the tier levels. This has also raised questions about the sufficiency of current regulatory guidance to ensure: proper flow down of safety awareness, avionics application understanding at the lower tiers, OEM and 1st tier oversight practices, and capabilities of lower tier suppliers. Therefore, NASA established a research project to address Regulatory Compliance in a Multi-tier Supplier Network. This research was divided into three major study efforts: (1) describe modern multi-tier avionics development; (2) identify current issues in achieving safety and regulatory compliance; and (3) develop short-term and long-term recommendations toward higher assurance confidence. This report presents our findings on the risks and weaknesses, and our recommendations. It also includes a collection of industry-identified risks, an assessment of guideline weaknesses related to multi-tier development of complex avionics systems, and a postulation of potential modifications to guidelines to close the identified risks and weaknesses.

  13. Development of Infrastructure Facilities for Superconducting RF Cavity Fabrication, Processing and 2 K Characterization at RRCAT

    Science.gov (United States)

    Joshi, S. C.; Raghavendra, S.; Jain, V. K.; Puntambekar, A.; Khare, P.; Dwivedi, J.; Mundra, G.; Kush, P. K.; Shrivastava, P.; Lad, M.; Gupta, P. D.

    2017-02-01

    An extensive infrastructure facility is being established at the Raja Ramanna Centre for Advanced Technology (RRCAT) for a proposed 1 GeV, high-intensity superconducting proton linac for the Indian Spallation Neutron Source. The proton linac will comprise a large number of superconducting radio frequency (SCRF) cavities, ranging from low-beta spoke resonators to medium- and high-beta multi-cell elliptical cavities at different RF frequencies. Infrastructure facilities for SCRF cavity fabrication, processing and performance characterization at 2 K are being set up to take up the manufacturing of the large number of cavities required for future projects of the Department of Atomic Energy (DAE). RRCAT is also participating in a DAE-approved mega project on "Physics and Advanced Technology for High Intensity Proton Accelerators" under the Indian Institutions-Fermilab Collaboration (IIFC). In the R&D phase of the IIFC program, a number of high-beta, fully dressed multi-cell elliptical SCRF cavities will be developed in collaboration with Fermilab. A dedicated facility for SCRF cavity fabrication, tuning and processing has been set up. The SCRF cavities developed will be characterized at 2 K using a vertical test stand facility, which has already been commissioned. A horizontal test stand facility has also been designed and is under development for testing a dressed multi-cell SCRF cavity at 2 K. The paper presents the infrastructure facilities set up at RRCAT for SCRF cavity fabrication, processing and testing at 2 K.

  14. Screening-level Biomonitoring Equivalents for tiered interpretation of urinary 3-phenoxybenzoic acid (3-PBA) in a risk assessment context.

    Science.gov (United States)

    Aylward, Lesa L; Irwin, Kim; St-Amand, Annie; Nong, Andy; Hays, Sean M

    2018-02-01

    3-Phenoxybenzoic acid (3-PBA) is a common metabolite of several pyrethroid pesticides of differing potency and also occurs as a residue in foods resulting from environmental degradation of parent pyrethroid compounds. Thus, 3-PBA in urine is not a specific biomarker of exposure to a particular pyrethroid. However, an approach derived from the use of Biomonitoring Equivalents (BEs) can be used to estimate a conservative initial screening value for a tiered assessment of population data on 3-PBA in urine. A conservative generic urinary excretion fraction for 3-PBA was estimated from data for five pyrethroid compounds with human data. Estimated steady-state urinary 3-PBA concentrations associated with reference doses and acceptable daily intakes for each of the nine compounds ranged from 1.7 μg/L for cyhalothrin and deltamethrin to 520 μg/L for permethrin. The lower value can be used as a highly conservative Tier 1 screening value for assessment of population urinary 3-PBA data. A second tier screening value of 87 μg/L was derived based on weighting by relative exposure estimates for the different pyrethroid compounds, to be applied as part of the data evaluation process if biomonitoring data exceed the Tier 1 value. These BE values are most appropriately used to evaluate the central tendency of population biomarker concentration data in a risk assessment context. The provisional BEs were compared to available national biomonitoring data from the US and Canada. Copyright © 2017. Published by Elsevier Inc.
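
    The screening arithmetic can be illustrated with the generic steady-state mass balance commonly used to derive Biomonitoring Equivalents; this is a sketch under stated assumptions, and the guidance doses, excretion fraction, body weight, urine volume and molecular weights below are placeholders rather than the study's inputs:

        # Rough sketch of a steady-state Biomonitoring Equivalent (BE) calculation:
        # the urinary metabolite concentration expected at an exposure equal to an
        # exposure guidance value. All parameter values here are placeholders.

        def be_urinary_conc(guidance_dose_ug_per_kg_d, body_weight_kg,
                            excretion_fraction_molar, mw_metabolite, mw_parent,
                            urine_volume_l_per_d):
            """Steady-state urinary concentration (ug/L) of the metabolite."""
            daily_metabolite_mass = (guidance_dose_ug_per_kg_d * body_weight_kg *
                                     excretion_fraction_molar * mw_metabolite / mw_parent)
            return daily_metabolite_mass / urine_volume_l_per_d

        if __name__ == "__main__":
            # Two hypothetical parent pyrethroids with guidance values of 0.5 and
            # 5.0 ug/kg-d; the Tier 1 screening value in the study is simply the
            # *lowest* BE across the candidate parent compounds.
            candidate_bes = [be_urinary_conc(d, 70.0, 0.4, 214.2, mw, 1.6)
                             for d, mw in [(0.5, 449.9), (5.0, 391.3)]]
            print(round(min(candidate_bes), 1))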

  15. Bayesian updating of reliability of civil infrastructure facilities based on condition-state data and fault-tree model

    International Nuclear Information System (INIS)

    Ching Jianye; Leu, S.-S.

    2009-01-01

    This paper considers a difficult but practical circumstance of civil infrastructure management: deterioration/failure data of the infrastructure system are absent, while only condition-state data of its components are available. The goal is to develop a framework for estimating the time-varying reliabilities of civil infrastructure facilities under such a circumstance. A novel method of analyzing time-varying condition-state data that only report the operational/non-operational status of the components is proposed to update the reliabilities of civil infrastructure facilities. The proposed method assumes that the degradation arrivals can be modeled as a Poisson process with unknown time-varying arrival rate and damage impact, and that the target system can be represented as a fault-tree model. To accommodate large uncertainties, a Bayesian algorithm is proposed, and the reliability of the infrastructure system can be quickly updated based on the condition-state data. Use of the new method is demonstrated with a real-world example of a hydraulic spillway gate system.
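
    A minimal sketch of this style of updating (not the paper's algorithm) is a conjugate Gamma-Poisson update of a component degradation rate from inspection counts, pushed through a toy fault tree to obtain an updated system reliability:

        # Minimal sketch: conjugate Gamma-Poisson updating of a component degradation
        # rate from condition-state inspections, followed by a toy fault-tree
        # evaluation of system reliability. Structures and priors are illustrative.

        import math

        def update_rate(prior_shape, prior_rate, n_degradations, exposure_time):
            """Gamma(shape, rate) posterior for a Poisson degradation arrival rate."""
            return prior_shape + n_degradations, prior_rate + exposure_time

        def p_nonoperational(shape, rate, horizon):
            """Posterior-mean probability that at least one degradation arrives in `horizon`."""
            lam = shape / rate                      # posterior mean arrival rate
            return 1.0 - math.exp(-lam * horizon)

        def system_failure(p_a, p_b, p_c):
            """Toy fault tree: system fails if A fails OR (B AND C) fail."""
            return 1.0 - (1.0 - p_a) * (1.0 - p_b * p_c)

        if __name__ == "__main__":
            shape, rate = update_rate(1.0, 10.0, n_degradations=3, exposure_time=24.0)  # months
            p = p_nonoperational(shape, rate, horizon=12.0)
            print(1.0 - system_failure(p, p, p))    # updated system reliability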

  16. Elastic extension of a local analysis facility on external clouds for the LHC experiments

    Science.gov (United States)

    Ciaschini, V.; Codispoti, G.; Rinaldi, L.; Aiftimiei, D. C.; Bonacorsi, D.; Calligola, P.; Dal Pra, S.; De Girolamo, D.; Di Maria, R.; Grandi, C.; Michelotto, D.; Panella, M.; Taneja, S.; Semeria, F.

    2017-10-01

    The computing infrastructures serving the LHC experiments have been designed to cope, at most, with the average amount of data recorded. The usage peaks, as already observed in Run-I, may however generate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, the LHC experiments are exploring the opportunity to access Cloud resources provided by external partners or commercial providers. In this work we present a proof of concept of the elastic extension of a local analysis facility, specifically the Bologna Tier-3 Grid site, onto an external OpenStack infrastructure, for the LHC experiments hosted at the site. We focus on the Cloud bursting of the Grid site using DynFarm, a newly designed tool that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and, at the same time, serve as an extension of the farm for local usage.
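
    The cloud-bursting behaviour described can be sketched as a simple control loop; launch_instance(), register_worker(), idle_workers() and pending_jobs() are hypothetical stand-ins for the site-specific OpenStack and batch-system tooling, not DynFarm's actual interface:

        # Illustrative cloud-bursting control loop (not DynFarm itself): when the
        # local batch queue backs up, start cloud instances and register them as
        # worker nodes; retire them when idle. The helper callables are hypothetical.

        import time

        BURST_THRESHOLD = 100      # pending jobs that trigger an expansion
        MAX_CLOUD_NODES = 20

        def burst_loop(pending_jobs, launch_instance, register_worker,
                       idle_workers, retire_worker):
            cloud_nodes = []
            while True:
                if pending_jobs() > BURST_THRESHOLD and len(cloud_nodes) < MAX_CLOUD_NODES:
                    node = launch_instance()        # boot a pre-configured WN image
                    register_worker(node)           # add it to the batch cluster
                    cloud_nodes.append(node)
                for node in list(idle_workers(cloud_nodes)):
                    retire_worker(node)             # drain, deregister, delete the VM
                    cloud_nodes.remove(node)
                time.sleep(60)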

  17. Site in a box: Improving the Tier 3 experience

    Science.gov (United States)

    Dost, J. M.; Fajardo, E. M.; Jones, T. R.; Martin, T.; Tadel, A.; Tadel, M.; Würthwein, F.

    2017-10-01

    The Pacific Research Platform is an initiative to interconnect Science DMZs between campuses across the West Coast of the United States over a 100 Gbps network. The LHC @ UC is a proof-of-concept pilot project that focuses on interconnecting 6 University of California campuses. It is spearheaded by computing specialists from the UCSD Tier 2 Center in collaboration with the San Diego Supercomputer Center. A machine has been shipped to each campus, extending the concept of the Data Transfer Node to a "cluster in a box" that is fully integrated into the local compute, storage, and networking infrastructure. The node contains a full HTCondor batch system as well as an XRootD proxy cache. User jobs routed to the DTN can run on 40 additional slots provided by the machine, and can also flock to a common GlideinWMS pilot pool, which sends jobs out to any of the participating UCs, as well as to Comet, the new supercomputer at SDSC. In addition, a common XRootD federation has been created to interconnect the UCs and provide the ability to arbitrarily export data from the home university, making it available wherever the jobs run. The UC-level federation also statically redirects to either the ATLAS FAX or the CMS AAA federation, depending on end-user VO membership credentials, to make globally published datasets available. XRootD read operations from the federation transfer through the nearest DTN proxy cache, located at the site where the jobs run. This reduces wide-area network overhead for subsequent accesses and improves overall read performance. Details on the technical implementation, challenges faced and overcome in setting up the infrastructure, and an analysis of usage patterns and system scalability will be presented.
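
    The read path described (jobs read through the nearest proxy cache rather than directly from the federation) can be illustrated as follows; the hostnames are placeholders, and the cache is simply assumed to be configured to fetch misses from the federation:

        # Illustrative only: direct a job's XRootD reads at the site-local proxy
        # cache instead of the global federation redirector. Hostnames are
        # placeholders, not the actual LHC @ UC endpoints.

        LOCAL_CACHE = "root://dtn-cache.campus.example.edu:1094/"   # placeholder endpoint
        FEDERATION  = "root://federation.example.org/"              # placeholder redirector

        def data_url(logical_file_name, use_cache=True):
            """Build the URL a job opens; cache hits avoid wide-area transfers."""
            prefix = LOCAL_CACHE if use_cache else FEDERATION
            return prefix + "/" + logical_file_name.lstrip("/")

        if __name__ == "__main__":
            print(data_url("/store/data/Run2016/example.root"))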

  18. The ITER neutral beam test facility: Designs of the general infrastructure, cryosystem and cooling plant

    International Nuclear Information System (INIS)

    Cordier, J.J.; Hemsworth, R.; Chantant, M.; Gravil, B.; Henry, D.; Sabathier, F.; Doceul, L.; Thomas, E.; Houtte, D. van; Zaccaria, P.; Antoni, V.; Bello, S. Dal; Marcuzzi, D.; Antipenkov, A.; Day, C.; Dremel, M.; Mondino, P.L.

    2005-01-01

    The CEA Association is involved, in close collaboration with ENEA, FZK, IPP and UKAEA European Associations, in the first ITER neutral beam (NB) injector and the ITER neutral beam test facility design (EFDA task ref. TW3-THHN-IITF1). A total power of about 50 MW will have to be removed in steady state on the neutral beam test facility (NBTF). The main purpose of this task is to make progress with the detailed design of the first ITER NB injector and to start the conceptual design of the ITER NBTF. The general infrastructure layout of a generic site for the NBTF includes the test facility itself equipped with a dedicated beamline vessel [P.L. Zaccaria, et al., Maintenance schemes for the ITER neutral beam test facility, this conference] and integration studies of associated auxiliaries such as cooling plant, cryoplant and forepumping system

  19. One-tiered vs. two-tiered forecasting of South African seasonal rainfall

    CSIR Research Space (South Africa)

    Landman, WA

    2010-09-01

    Full Text Available A system in which the ocean and atmosphere are modelled as fully interacting is called a fully coupled model system. Forecast performance by such systems predicting seasonal rainfall totals over South Africa is compared with forecasts produced by a computationally less demanding two-tiered system...

  20. Using Brief Experimental Analysis to Intensify Tier 3 Reading Interventions

    Science.gov (United States)

    Coolong-Chaffin, Melissa; Wagner, Dana

    2015-01-01

    As implementation of multi-tiered systems of support becomes common practice across the nation, practitioners continue to need strategies for intensifying interventions and supports for the subset of students who fail to make adequate progress despite strong programs at Tiers 1 and 2. Experts recommend making several changes to the structure and…

  1. Understanding the potential of facilities managers to be advocates for energy efficiency retrofits in mid-tier commercial office buildings

    International Nuclear Information System (INIS)

    Curtis, Jim; Walton, Andrea; Dodd, Michael

    2017-01-01

    Realising energy efficiency opportunities in new commercial office buildings is an easier task than retrofitting older, mid-tier building stock. As a result, a number of government programs aim to support retrofits by offering grants, upgrades, and energy audits to facilitate energy efficiency opportunities. This study reports on a state government program in Victoria, Australia, where the uptake of such offerings was lower than expected, prompting the program team to consider whether targeting facilities managers (FMs), rather than building owners, might be a better way of delivering the program. The influences and practices of FMs that impact on their ability to be advocates for energy efficiency were explored. The results revealed that complex building ownership arrangements, poor communication skills, isolation from key decision making processes, a lack of credible business cases and information, split incentives, and the prospect of business disruptions can all impact on FMs’ ability to drive organizational change. Future program efforts should continue to interrogate the social context of retrofits in mid-tier buildings, including other influences and influencers beyond FMs, and adapt accordingly. - Highlights: • Energy efficiency retrofits of older commercial buildings can be a challenge. • Government support for retrofits is not always taken up by building owners. • Targeting facilities managers (FMs) to encourage retrofits is proposed. • FMs’ ability to be advocates for energy efficiency is constrained. • Government offerings need to better fit with the realities of the problem.

  2. The IceCube Computing Infrastructure Model

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Besides the big LHC experiments, a number of mid-size experiments are coming online which need to define new computing models to meet their processing and storage requirements. We present the hybrid computing model of IceCube, which leverages Grid models with a more flexible direct user model, as an example of a possible solution. In IceCube, a central datacenter at UW-Madison serves as the Tier-0, with a single Tier-1 datacenter at DESY Zeuthen. We describe the setup of the IceCube computing infrastructure and report on our experience in successfully provisioning the IceCube computing needs.

  3. Management and Development of the RT Research Facilities and Infrastructures

    International Nuclear Information System (INIS)

    Kim, Won Ho; Nho, Young Chang; Kim, Jae Sung

    2009-01-01

    The purpose of this project is to operate the core facilities for radiation technology (RT) research in a stable manner and to efficiently support the research activities of industry, academia and research institutes. By developing the infrastructure of the national radiation technology industry, we can stimulate RT research and the related industries and secure primary and original technologies. The key to studying RT and to smoothly assisting industry, academia and research institutes in the RT area is the stable management of the various unique radiation facilities in our country. The Gamma Phytotron and Gene Bank are essential for agricultural biology because these facilities are used to preserve and utilize genes and to provide an experimental field for environmental science and biotechnology. The Radiation Fusion Technology research support facilities are the core support facilities and are used to develop high-tech fusion areas. In addition, the most advanced and very costly analytical instruments should be managed reliably and utilized in support work, and the experimental animal support laboratory and Gamma Cell also have to be maintained at a high level and managed reliably. ARTI developed a 30 MeV cyclotron during 2005-2006, aimed at producing radioisotopes and researching beam applications, as a result of the project 'Establishment of the Infrastructure for the Atomic Energy Research Expansion', carried out in collaboration with the Korea Institute of Radiological and Medical Sciences. In addition, ARTI is in the process of establishing an integrated cyclotron complex as a core research facility, using a proton beam to produce radioisotopes and to support various research areas. The measurement and evaluation of irradiation dose, and the irradiation support technology of Good Irradiation Practice (GIP), are essential in various research areas. One thing to remember is that the publicity

  4. The Impact Imperative: A Space Infrastructure Enabling a Multi-Tiered Earth Defense

    Science.gov (United States)

    Campbell, Jonathan W.; Phipps, Claude; Smalley, Larry; Reilly, James; Boccio, Dona

    2003-01-01

    Impacting at hypervelocity, an asteroid struck the Earth approximately 65 million years ago in the Yucatan Peninsula area. This triggered the extinction of almost 70% of the species of life on Earth, including the dinosaurs. Other impacts prior to this one have caused even greater extinctions. Preventing collisions with the Earth by hypervelocity asteroids, meteoroids, and comets is the most important immediate space challenge facing human civilization. This is the Impact Imperative. We now believe that while there are about 2000 Earth-orbit-crossing rocks greater than 1 kilometer in diameter, there may be as many as 200,000 or more objects in the 100 m size range. Can anything be done about this fundamental existential question facing our civilization? The answer is a resounding yes! By using an intelligent combination of Earth- and space-based sensors coupled with an infrastructure of high-energy laser stations and other secondary mitigation options, we can deflect inbound asteroids, meteoroids, and comets and prevent them from striking the Earth. This can be accomplished by irradiating the surface of an inbound rock with sufficiently intense pulses so that ablation occurs. This ablation acts as a small rocket, incrementally changing the shape of the rock's orbit around the Sun. One-kilometer-size rocks can be moved sufficiently in about a month, while smaller rocks may be moved in a shorter time span. We recommend that space objectives be immediately reprioritized to start us moving quickly towards an infrastructure that will support a multiple-option defense capability. Planning and development for a lunar laser facility should be initiated immediately, in parallel with other options. All mitigation options are greatly enhanced by robust early warning, detection, and tracking resources to find objects sufficiently prior to Earth orbit passage to allow significant intervention. Infrastructure options should include ground, LEO, GEO, Lunar, and libration point

  5. 78 FR 32223 - Control of Air Pollution From Motor Vehicles: Tier 3 Motor Vehicle Emission and Fuel Standards

    Science.gov (United States)

    2013-05-29

    ...-OAR-2011-0135; FRL-9818-5] RIN 2060-A0 Control of Air Pollution From Motor Vehicles: Tier 3 Motor... extension of the public comment period for the proposed rule ``Control of Air Pollution from Motor Vehicles: Tier 3 Motor Vehicle Emission and Fuel Standards'' (the proposed rule is hereinafter referred to as...

  6. Estimating Implementation and Operational Costs of an Integrated Tiered CD4 Service including Laboratory and Point of Care Testing in a Remote Health District in South Africa

    Science.gov (United States)

    Cassim, Naseem; Coetzee, Lindi M.; Schnippel, Kathryn; Glencross, Deborah K.

    2014-01-01

    Background An integrated tiered service delivery model (ITSDM) has been proposed to provide 'full coverage' of CD4 services throughout South Africa. Five tiers are described, defined by testing volumes and the number of referring health-facilities, ranging from (1) a Tier-1/decentralized point-of-care (POC) service at a single site and (2) a Tier-2/POC-hub serving a group of surrounding clinics, through progressively larger laboratory tiers, up to facilities processing >600 samples/day and serving >100 or >200 health-clinics, respectively. The objective of this study was to establish the costs of existing services and of ITSDM Tiers 1, 2 and 3 in a remote, under-serviced district in South Africa. Methods Historical health-facility workload volumes from the Pixley-ka-Seme district, the total volumes of CD4 tests performed by the adjacent district referral CD4 laboratories, the locations of all referring clinics, and the related laboratory-to-result turn-around time (LTR-TAT) data were extracted from the NHLS Corporate Data Warehouse for the period April 2012 to March 2013. Tiers were costed separately (as a cost-per-result), including equipment, staffing, reagent and test-consumable costs. A one-way sensitivity analysis provided for changes in reagent price, test volumes and personnel time. Results The lowest cost-per-result was noted for the existing laboratory-based Tiers 4 and 5 ($6.24 and $5.37, respectively), but with a related increased LTR-TAT of >24–48 hours. Full service coverage with a faster, more local TAT, using Tier-1/POC or Tier-2/POC-hubs, had a cost-per-result of $32.32 and $15.88, respectively. A single district Tier-3 laboratory also ensured 'full service coverage'. Implementing a single Tier-3/community laboratory to extend and improve delivery of services in Pixley-ka-Seme, with an estimated local ∼12–24-hour LTR-TAT, costs ∼$2 more per test than existing referred services, but is 2–4 fold cheaper than implementing eight Tier-2/POC-hubs or providing twenty-seven Tier-1/POCT CD4 services. PMID:25517412
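
    The cost-per-result comparison can be sketched as a simple annualised calculation with a one-way sensitivity step; all figures and cost categories below are placeholders, not the study's data:

        # Illustrative cost-per-result calculation for a CD4 testing tier. All values
        # are placeholders; the study costed equipment, staffing, reagents and test
        # consumables per tier and ran a one-way sensitivity analysis on key inputs.

        def cost_per_result(equipment_capital, equipment_life_years, annual_staff_cost,
                            reagent_cost_per_test, consumable_cost_per_test, annual_tests):
            annualised_equipment = equipment_capital / equipment_life_years
            fixed = annualised_equipment + annual_staff_cost
            variable = (reagent_cost_per_test + consumable_cost_per_test) * annual_tests
            return (fixed + variable) / annual_tests

        if __name__ == "__main__":
            base = cost_per_result(60_000, 5, 25_000, 4.0, 1.0, 12_000)
            # one-way sensitivity: vary the reagent price only
            low  = cost_per_result(60_000, 5, 25_000, 3.0, 1.0, 12_000)
            high = cost_per_result(60_000, 5, 25_000, 5.0, 1.0, 12_000)
            print(round(base, 2), round(low, 2), round(high, 2))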

  7. SNL Five-Year Facilities & Infrastructure Plan FY2015-2019

    Energy Technology Data Exchange (ETDEWEB)

    Cipriani, Ralph J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-12-01

    Sandia’s development vision is to provide an agile, flexible, safer, more secure, and efficient enterprise that leverages the scientific and technical capabilities of the workforce and supports national security requirements in multiple areas. Sandia’s Five-Year Facilities & Infrastructure Planning program represents a tool to budget and prioritize immediate and short-term actions from indirect funding sources in light of the bigger picture of proposed investments from direct-funded, Work for Others and other funding sources. As a complementary F&I investment program, Sandia’s indirect investment program supports incremental achievement of the development vision within a constrained resource environment.

  8. Importance for Municipalities of Infrastructure Information Systems in Turkey

    Directory of Open Access Journals (Sweden)

    Kamil KARATAS

    2017-08-01

    Full Text Available Technical infrastructure is an important indicator of a country's level of development; it is difficult to maintain and requires high investment costs. Information systems should be exploited for the better administration of technical infrastructure facilities, for planning, and for taking effective decisions. Hence, infrastructure information systems oriented to the technical infrastructure (TI) must be built. In this study, Kunduracilar Street in Trabzon was selected as a pilot area for urban TI studies. Graphic and attribute information for the pilot area were collected. Every TI facility was arranged in the same coordinate system as a separate layer. Maps showing the TI facilities in the pilot area and a 3D view of the site were prepared with ArcGIS software.

  9. Tier identification (TID) for tiered memory characteristics

    Science.gov (United States)

    Chang, Jichuan; Lim, Kevin T; Ranganathan, Parthasarathy

    2014-03-25

    A tier identification (TID) indicates a characteristic of a memory region associated with a virtual address in a tiered memory system. A thread may be serviced according to a first path when the TID indicates a first characteristic, and according to a second path when the TID indicates a second characteristic.
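
    The mechanism can be sketched as a lookup of the TID for the region containing a virtual address, followed by path selection; the tier names, region size and handlers below are illustrative, not taken from the patent:

        # Illustrative sketch of TID-based request routing in a tiered memory system.
        # Tier names, region granularity and handlers are made up for the example.

        from enum import Enum

        class Tier(Enum):
            FAST_LOCAL = 1      # e.g. on-package DRAM
            SLOW_REMOTE = 2     # e.g. disaggregated or non-volatile memory

        TID_TABLE = {}          # virtual-address region -> Tier

        def service_thread(vaddr, read_fast, read_slow):
            """Pick the service path from the TID of the region containing vaddr."""
            region = vaddr >> 21                      # assume 2 MiB regions
            tid = TID_TABLE.get(region, Tier.SLOW_REMOTE)
            return read_fast(vaddr) if tid is Tier.FAST_LOCAL else read_slow(vaddr)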

  10. Changing the batch system in a Tier 1 computing center: why and how

    Science.gov (United States)

    Chierici, Andrea; Dal Pra, Stefano

    2014-06-01

    At the Italian Tier-1 centre at CNAF we are evaluating the possibility of changing the current production batch system. This activity is motivated mainly by the search for a more flexible licensing model as well as the desire to avoid vendor lock-in. We performed a technology-tracking exercise and, among many possible solutions, we chose to evaluate Grid Engine as an alternative, because its adoption is increasing in the HEPiX community and because it is supported by the EMI middleware that we currently use on our computing farm. Another INFN site evaluated Slurm, and we will compare our results in order to understand the pros and cons of the two solutions. We will present the results of our evaluation of Grid Engine, in order to understand whether it can fit the requirements of a Tier-1 centre, compared to the solution we adopted long ago. We performed a survey and a critical re-evaluation of our farming infrastructure: many production software components (above all accounting and monitoring) rely on our current solution, and changing it required us to write new wrappers and adapt the infrastructure to the new system. We believe the results of this investigation can be very useful to other Tier-1 and Tier-2 centres in a similar situation, where the effort of switching may appear too hard to bear. We will provide guidelines in order to understand how difficult this operation can be and how long the change may take.

  11. Tiered Storage For LHC

    CERN Multimedia

    CERN. Geneva; Hanushevsky, Andrew

    2012-01-01

    For more than a year, the ATLAS Western Tier 2 (WT2) at the SLAC National Accelerator Laboratory has been successfully operating a two-tiered storage system based on Xrootd's flexible cross-cluster data placement framework, the File Residency Manager. The architecture allows WT2 to provide both high-performance storage at the higher tier for ATLAS analysis jobs and large, low-cost disk capacity at the lower tier. Data automatically move between the two storage tiers based on the needs of analysis jobs, and the movement is completely transparent to the jobs.

  12. Tiers of intervention in kindergarten through third grade.

    Science.gov (United States)

    O'Connor, Rollanda E; Harty, Kristin R; Fulmer, Deborah

    2005-01-01

    This study measured the effects of increasing levels of intervention in reading for a cohort of children in Grades K through 3 to determine whether the severity of reading disability (RD) could be significantly reduced in the catchment schools. Tier 1 consisted of professional development for teachers of reading. The focus of this study is on additional instruction that was provided as early as kindergarten for children whose achievement fell below average. Tier 2 intervention consisted of small-group reading instruction 3 times per week, and Tier 3 of daily instruction delivered individually or in groups of two. A comparison of the reading achievement of third-grade children who were at risk in kindergarten showed moderate to large differences favoring children in the tiered interventions in decoding, word identification, fluency, and reading comprehension.

  13. German contributions to the CMS computing infrastructure

    International Nuclear Information System (INIS)

    Scheurer, A

    2010-01-01

    The CMS computing model anticipates various hierarchically linked tier centres to counter the challenges provided by the enormous amounts of data which will be collected by the CMS detector at the Large Hadron Collider, LHC, at CERN. During the past years, various computing exercises were performed to test the readiness of the computing infrastructure, the Grid middleware and the experiment's software for the startup of the LHC which took place in September 2008. In Germany, several tier sites are set up to allow for an efficient and reliable way to simulate possible physics processes as well as to reprocess, analyse and interpret the numerous stored collision events of the experiment. It will be shown that the German computing sites played an important role during the experiment's preparation phase and during data-taking of CMS and, therefore, scientific groups in Germany will be ready to compete for discoveries in this new era of particle physics. This presentation focuses on the German Tier-1 centre GridKa, located at Forschungszentrum Karlsruhe, the German CMS Tier-2 federation DESY/RWTH with installations at the University of Aachen and the research centre DESY. In addition, various local computing resources in Aachen, Hamburg and Karlsruhe are briefly introduced as well. It will be shown that an excellent cooperation between the different German institutions and physicists led to well established computing sites which cover all parts of the CMS computing model. Therefore, the following topics are discussed and the achieved goals and the gained knowledge are depicted: data management and distribution among the different tier sites, Grid-based Monte Carlo production at the Tier-2 as well as Grid-based and locally submitted inhomogeneous user analyses at the Tier-3s. Another important task is to ensure a proper and reliable operation 24 hours a day, especially during the time of data-taking. For this purpose, the meta-monitoring tool 'HappyFace', which was

  14. Regulation of gas infrastructure expansion

    International Nuclear Information System (INIS)

    De Joode, J.

    2012-01-01

    The topic of this dissertation is the regulation of gas infrastructure expansion in the European Union (EU). While the gas market has been liberalised, the gas infrastructure has largely remained in the regulated domain. However, not necessarily all gas infrastructure facilities - such as gas storage facilities, LNG import terminals and certain gas transmission pipelines - need to be regulated, as there may be scope for competition. In practice, the choice of regulation of gas infrastructure expansion varies among different types of gas infrastructure facilities and across EU Member States. Based on a review of economic literature and on a series of in-depth case studies, this study explains these differences in choices of regulation from differences in policy objectives, differences in local circumstances and differences in the intrinsic characteristics of the infrastructure projects. An important conclusion is that there is potential for a larger role for competition in gas infrastructure expansion.

  15. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  16. SLA-based optimisation of virtualised resource for multi-tier web applications in cloud data centres

    Science.gov (United States)

    Bi, Jing; Yuan, Haitao; Tie, Ming; Tan, Wei

    2015-10-01

    Dynamic virtualised resource allocation is the key to quality-of-service assurance for multi-tier web application services in cloud data centres. In this paper, we develop a self-management architecture of cloud data centres with a virtualisation mechanism for multi-tier web application services. Based on this architecture, we establish a flexible hybrid queueing model to determine the number of virtual machines for each tier of the virtualised application service environments. Besides, we propose a non-linear constrained optimisation problem with restrictions defined in the service level agreement. Furthermore, we develop a heuristic mixed optimisation algorithm to maximise the profit of cloud infrastructure providers and to meet the performance requirements of different clients as well. Finally, we compare the effectiveness of our dynamic allocation strategy with two other allocation strategies. The simulation results show that the proposed resource allocation method is efficient in improving the overall performance and reducing the resource energy cost.
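
    A heavily simplified version of this kind of sizing problem (choose the number of virtual machines per tier to maximise profit subject to a response-time bound) is sketched below; the queueing approximation, prices and SLA target are assumptions for illustration, not the paper's hybrid queueing model:

        # Simplified per-tier VM sizing under an SLA response-time constraint.
        # Brute-force search over small VM counts; the queueing approximation,
        # prices and SLA target are illustrative assumptions.

        import itertools

        ARRIVAL = {"web": 80.0, "app": 60.0, "db": 40.0}     # requests/s per tier
        SERVICE = {"web": 30.0, "app": 25.0, "db": 20.0}     # requests/s per VM
        VM_COST = 0.10                                        # $ per VM per hour
        REVENUE = 25.0                                        # $ per hour if the SLA is met
        SLA_RESPONSE = 0.25                                   # seconds, end to end

        def tier_response(lam, mu, m):
            """Crude M/M/m-like response time; infinite if the tier is overloaded."""
            if lam >= m * mu:
                return float("inf")
            return 1.0 / (m * mu - lam)

        def best_allocation(max_vms=10):
            best = None
            for alloc in itertools.product(range(1, max_vms + 1), repeat=len(ARRIVAL)):
                plan = dict(zip(ARRIVAL, alloc))
                rt = sum(tier_response(ARRIVAL[t], SERVICE[t], plan[t]) for t in ARRIVAL)
                if rt > SLA_RESPONSE:
                    continue
                profit = REVENUE - VM_COST * sum(alloc)
                if best is None or profit > best[0]:
                    best = (profit, plan)
            return best

        if __name__ == "__main__":
            print(best_allocation())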

  17. Water, sanitation and hygiene infrastructure and quality in rural healthcare facilities in Rwanda.

    Science.gov (United States)

    Huttinger, Alexandra; Dreibelbis, Robert; Kayigamba, Felix; Ngabo, Fidel; Mfura, Leodomir; Merryweather, Brittney; Cardon, Amelie; Moe, Christine

    2017-08-03

    WHO and UNICEF have proposed an action plan to achieve universal water, sanitation and hygiene (WASH) coverage in healthcare facilities (HCFs) by 2030. The WASH targets and indicators for HCFs include: an improved water source on the premises accessible to all users, basic sanitation facilities, and a hand washing facility with soap and water at all sanitation facilities and patient care areas. To establish viable targets for WASH in HCFs, investigation beyond 'access' is needed to address the state of WASH infrastructure and service provision. Patient and caregiver use of WASH services is largely unaddressed in previous studies despite being critical for infection control. The state of WASH services used by staff, patients and caregivers was assessed in 17 rural HCFs in Rwanda. Site selection was non-random and predicated upon piped water and power supply. Direct observation and semi-structured interviews assessed drinking water treatment, the presence and condition of sanitation facilities, the provision of soap and water, and WASH-related maintenance and record keeping. Samples were collected from water sources and treated drinking water containers and analyzed for total coliforms, E. coli, and chlorine residual. Drinking water treatment was reported at 15 of 17 sites. Three of 18 drinking water samples collected met the WHO guideline for free chlorine residual of >0.2 mg/l, 6 of 16 drinking water samples analyzed for total coliforms met the WHO guideline, and not all sanitation facilities were in hygienic condition and accessible to patients. Regular maintenance of WASH infrastructure consisted of cleaning; no HCF had on-site capacity for performing repairs. Quarterly evaluations of HCFs for Rwanda's Performance Based Financing system included WASH indicators. All HCFs met national policies for water access, but WHO guidelines for environmental standards, including water quality, were not fully satisfied. Access to WASH services at the HCFs differed between staff and patients and caregivers.

  18. 3D spatial information infrastructure : The case of Port Rotterdam

    NARCIS (Netherlands)

    Zlatanova, S.; Beetz, J.

    2012-01-01

    The development and maintenance of the infrastructure, facilities, logistics and other assets of the Port of Rotterdam requires a broad spectrum of heterogeneous information. This information concerns features, which are spatially distributed above ground, underground, in the air and in the water.

  19. 3D Spatial Information Infrastructure for the Port of Rotterdam

    NARCIS (Netherlands)

    Zlatanova, S.; Beetz, J.; Boersma, A.J.; Mulder, A.; Goos, J.

    2013-01-01

    The maintenance of the complex infrastructure and facilities of Port of Rotterdam is based on large amounts of heterogeneous information. Almost all activities of the Port require spatial information about features above- and underground. Current information systems are department and data

  20. Regulatory measures of BARC Safety Council to control radiation exposure in BARC Facilities

    International Nuclear Information System (INIS)

    Rajdeep; Jolly, V.M.; Jayarajan, K.

    2018-01-01

    Bhabha Atomic Research Centre is involved in multidisciplinary research and developmental activities related to the peaceful use of nuclear energy, including societal benefits. BARC facilities in different parts of India include nuclear fuel fabrication facilities, research reactors, nuclear recycle facilities and various physics, chemistry and biology laboratories. The BARC Safety Council (BSC) is the regulatory body for BARC facilities and takes regulatory measures for radiation protection. BSC has many safety committees for radiation protection, including the Operating Plants Safety Review Committee (OPSRC), the Committee to Review Applications for Authorization of Safe Disposal of Radioactive Wastes (CRAASDRW) and the Design Safety Review Committees (DSRC) in the 2nd tier, and Unit Level Safety Committees (ULSCs) in the 3rd tier under OPSRC.

  1. Instructions for the Tier I Emergency and Hazardous Chemical Inventory Form

    Science.gov (United States)

    The purpose of the Emergency Planning and Community Right-to-Know Act Tier I form is to provide State and local officials and the public with information on the general chemical hazard types and locations at your facility, if above reporting thresholds.

  2. Explorations Around "Graceful Failure" in Transportation Infrastructure: Lessons Learned By the Infrastructure and Climate Network (ICNet)

    Science.gov (United States)

    Jacobs, J. M.; Thomas, N.; Mo, W.; Kirshen, P. H.; Douglas, E. M.; Daniel, J.; Bell, E.; Friess, L.; Mallick, R.; Kartez, J.; Hayhoe, K.; Croope, S.

    2014-12-01

    Recent events have demonstrated that the United States' transportation infrastructure is highly vulnerable to extreme weather events which will likely increase in the future. In light of the 60% shortfall of the $900 billion investment needed over the next five years to maintain this aging infrastructure, hardening of all infrastructures is unlikely. Alternative strategies are needed to ensure that critical aspects of the transportation network are maintained during climate extremes. Preliminary concepts around multi-tier service expectations of bridges and roads with reference to network capacity will be presented. Drawing from recent flooding events across the U.S., specific examples for roads/pavement will be used to illustrate impacts, disruptions, and trade-offs between performance during events and subsequent damage. This talk will also address policy and cultural norms within the civil engineering practice that will likely challenge the application of graceful failure pathways during extreme events.

  3. 78 FR 20881 - Control of Air Pollution From Motor Vehicles: Tier 3 Motor Vehicle Emission and Fuel Standards...

    Science.gov (United States)

    2013-04-08

    ...The EPA is announcing two public hearings to be held for the proposed rule ``Control of Air Pollution from Motor Vehicles: Tier 3 Motor Vehicle Emission and Fuel Standards'' (the proposed rule is hereinafter referred to as ``Tier 3''), which will be published separately in the Federal Register. The hearings will be held in Philadelphia, PA on April 24, 2013 and in Chicago, IL on April 29, 2013. The comment period for the proposed rulemaking will end on June 13, 2013.

  4. Building safeguards infrastructure

    International Nuclear Information System (INIS)

    McClelland-Kerr, J.; Stevens, J.

    2010-01-01

    Much has been written in recent years about the nuclear renaissance - the rebirth of nuclear power as a clean and safe source of electricity around the world. Those who question the nuclear renaissance often cite the risk of proliferation, accidents or an attack on a facility as concerns, all of which merit serious consideration. The integration of three areas - sometimes referred to as 3S, for safety, security and safeguards - is essential to supporting the clean and safe growth of nuclear power, and the infrastructure that supports these three areas should be robust. The focus of this paper will be on the development of the infrastructure necessary to support safeguards, and the integration of safeguards infrastructure with other elements critical to ensuring nuclear energy security

  5. Customer Satisfaction versus Infrastructural Facilities in the Realm of Higher Education--A Case Study of Sri Venkateswara University Tirupati

    Science.gov (United States)

    Janardhana, G.; Rajasekhar, Mamilla

    2012-01-01

    This article analyses the levels of students' satisfaction and how the institution provides infrastructure facilities in the field of higher education. Infrastructure is the fastest growing segment of the higher education scenario. Universities play a very vital role in a country in terms of their potential; they contribute to employment and growth.…

  6. Improving three-tier environmental assessment model by using a 3D scanning FLS-AM series hyperspectral lidar

    Science.gov (United States)

    Samberg, Andre; Babichenko, Sergei; Poryvkina, Larisa

    2005-05-01

    The delay between the time when a natural disaster, for example an oil accident in coastal waters, occurs and the time when environmental protection actions, for example water and shoreline clean-up, start is of significant importance. Remote sensing techniques are mostly considered (near) real-time and suitable for multiple tasks. These techniques, in combination with rapid environmental assessment methodologies, would form a multi-tier environmental assessment model, which allows creating (near) real-time datasets and optimizing sampling scenarios. This paper presents the idea of a three-tier environmental assessment model. Here all three tiers are briefly described to show the linkages between them, with a particular focus on the first tier. Furthermore, it is described how large-scale environmental assessment can be improved by using an airborne 3-D scanning FLS-AM series hyperspectral lidar. This new aircraft-based sensor is typically applied to mapping oil on the sea or ground surface and to extracting optical features of subjects. In general, a sampling network based on the three-tier environmental assessment model can include ships and aircraft. The airborne 3-D scanning FLS-AM series hyperspectral lidar helps to speed up the whole process of assessing the area of a natural disaster significantly, because it is a real-time remote sensing means. For instance, it can deliver such information as the georeferenced oil spill position in WGS-84, the estimated size of the whole oil spill, and the estimated amount of oil in seawater or on the ground. All information is produced in digital form and can thus be directly transferred into a customer's GIS (Geographical Information System).

  7. Genomic sequencing in cystic fibrosis newborn screening: what works best, two-tier predefined CFTR mutation panels or second-tier CFTR panel followed by third-tier sequencing?

    Science.gov (United States)

    Currier, Robert J; Sciortino, Stan; Liu, Ruiling; Bishop, Tracey; Alikhani Koupaei, Rasoul; Feuchtbaum, Lisa

    2017-10-01

    Purpose The purpose of this study was to model the performance of several known two-tier, predefined mutation panels and three-tier algorithms for cystic fibrosis (CF) screening utilizing the ethnically diverse California population. Methods The cystic fibrosis transmembrane conductance regulator (CFTR) mutations identified among the 317 CF cases in California screened between 12 August 2008 and 18 December 2012 were used to compare the expected CF detection rates for several two- and three-tier screening approaches, including the current California approach, which consists of a population-specific 40-mutation panel followed by third-tier sequencing when indicated. Results The data show that the strategy of using third-tier sequencing improves CF detection following an initial elevated immunoreactive trypsinogen and detection of only one mutation on a second-tier panel. Conclusion In a diverse population, the use of a second-tier panel followed by third-tier CFTR gene sequencing provides a better detection rate for CF, compared with the use of a second-tier approach alone, and is an effective way to minimize the referrals of CF carriers for sweat testing. Restricting screening to second-tier testing with predefined mutation panels, even broad ones, results in some missed CF cases and demonstrates the limited utility of this approach in states that have diverse multiethnic populations.
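
    The algorithms being compared can be sketched as a small decision procedure; the IRT cutoff and the sequencing callable are illustrative placeholders, not California's actual protocol parameters:

        # Rough sketch of the two- vs three-tier CF newborn screening logic compared
        # in the study. The IRT cutoff and helper callable are placeholders.

        IRT_CUTOFF = 62.0   # ng/mL, placeholder for the percentile-based IRT cutoff

        def screen(irt_ng_per_ml, panel_mutations_found, sequence_cftr=None):
            """Return a screening disposition for one newborn.

            sequence_cftr: optional callable returning the number of additional
            CFTR variants found by third-tier sequencing (None = two-tier algorithm).
            """
            if irt_ng_per_ml < IRT_CUTOFF:
                return "screen-negative"
            if panel_mutations_found >= 2:
                return "refer for sweat test"
            if panel_mutations_found == 1:
                if sequence_cftr is None:
                    # Two-tier: every one-mutation infant (mostly carriers) is referred.
                    return "refer for sweat test"
                # Three-tier: sequencing resolves most carriers without a referral.
                return "refer for sweat test" if sequence_cftr() >= 1 else "report as carrier"
            return "screen-negative"

        if __name__ == "__main__":
            print(screen(80.0, 1))                          # two-tier: referred
            print(screen(80.0, 1, sequence_cftr=lambda: 0)) # three-tier: carrier, not referred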

  8. Regulatory role and approach of BARC Safety Council in safety and occupational health in BARC facilities

    International Nuclear Information System (INIS)

    Rajdeep; Jayarajan, K.; Taly, Y.K.

    2016-01-01

    Bhabha Atomic Research Centre is involved in multidisciplinary research and developmental activities related to the peaceful use of nuclear energy and its societal benefits. In order to achieve a high level of performance of these facilities, the best efforts are made to maintain the good health of the plant personnel and good working conditions. The BARC Safety Council (BSC), which is the regulatory body for BARC facilities, regulates radiation safety, industrial safety and the surveillance of occupational health by implementing various rules and guidelines in BARC facilities. The BARC safety framework consists of various committees in a 3-tier system. The first tier is the BSC, which is the apex body authorized to issue directives, permissions, consents and authorizations. It has the responsibility of ensuring the protection and safety of the public, the environment, and the personnel and facilities of BARC through the enforcement of radiation protection and industrial safety programmes. Besides the 18 committees in the 2nd tier, there are 6 other expert committees which assist in the functioning of the BSC. (author)

  9. 40 CFR 141.204 - Tier 3 Public Notice-Form, manner, and frequency of notice.

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment, § 141.204, Tier 3 Public Notice: form, manner, and frequency of notice (Environmental Protection Agency). ... house renters, apartment dwellers, university students, nursing home patients, prison inmates, etc...

  10. a System Dynamics Model to Study the Importance of Infrastructure Facilities on Quality of Primary Education System in Developing Countries

    Science.gov (United States)

    Pedamallu, Chandra Sekhar; Ozdamar, Linet; Weber, Gerhard-Wilhelm; Kropat, Erik

    2010-06-01

    The system dynamics approach is a holistic way of solving problems in real-time scenarios. It is a powerful methodology and computer simulation modeling technique for framing, analyzing, and discussing complex issues and problems. System dynamics modeling and simulation is often the background of a systemic thinking approach and has become a management and organizational development paradigm. This paper proposes a system dynamics approach for studying the importance of infrastructure facilities for the quality of the primary education system in developing nations. The model is proposed to be built using the Cross Impact Analysis (CIA) method of relating entities and attributes relevant to the primary education system in any given community. We offer a survey to build the cross-impact correlation matrix and, hence, to better understand the primary education system and the importance of infrastructural facilities for the quality of primary education. The resulting model enables us to predict the effects of infrastructural facilities on the community's access to primary education. This may support policy makers in taking more effective actions in campaigns.
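
    A minimal sketch of a cross-impact style simulation (factor levels nudged each period by a survey-derived impact matrix) is shown below; the factor names, matrix values and update rule are illustrative assumptions, not the authors' calibrated model:

        # Toy cross-impact simulation: the level of each factor is nudged each period
        # by the weighted influence of the other factors. Matrix values are made up.

        import numpy as np

        factors = ["infrastructure", "enrollment", "teacher_retention", "learning_quality"]

        # Cross-impact matrix C[i, j]: influence of factor j on factor i (survey-derived
        # in the paper; arbitrary here), on a -1..+1 scale.
        C = np.array([
            [0.0, 0.1, 0.0, 0.0],
            [0.4, 0.0, 0.2, 0.3],
            [0.3, 0.1, 0.0, 0.2],
            [0.5, 0.2, 0.4, 0.0],
        ])

        def simulate(initial_levels, steps=20, gain=0.05):
            x = np.array(initial_levels, dtype=float)      # levels in [0, 1]
            history = [x.copy()]
            for _ in range(steps):
                x = np.clip(x + gain * C @ x, 0.0, 1.0)
                history.append(x.copy())
            return np.array(history)

        if __name__ == "__main__":
            print(dict(zip(factors, simulate([0.3, 0.5, 0.4, 0.4])[-1].round(2))))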

  11. Research infrastructures of pan-European interest: The EU and Global issues

    Energy Technology Data Exchange (ETDEWEB)

    Pero, Herve, E-mail: Herve.Pero@ec.europa.e [' Research Infrastructures' Unit, DG Research, European Commission, Brussels (Belgium)

    2011-01-21

    Research Infrastructures act as 'knowledge industries' for society and as a source of attraction for world scientists. At the European level, the long-term objective is to support an efficient and world-class eco-system of Research Infrastructures, encompassing not only the large single-site facilities but also distributed research infrastructures based on a network of 'regional partner facilities', with strong links to world-class universities and centres of excellence. The EC support activities help to promote the development of this fabric of research infrastructures of the highest quality and performance in Europe. Since 2002, ESFRI has also aimed to support a coherent approach to policy-making on research infrastructures. The European Roadmap for Research Infrastructures is ESFRI's most significant achievement to date, and KM3NeT is one of its identified projects. The current Community support to the Preparatory Phase of this project aims mainly at solving governance, financial, organisational and legal issues. How should KM3NeT contribute to an efficient Research Infrastructure eco-system? This is the question that the KM3NeT stakeholders will need to answer very soon.

  12. Research infrastructures of pan-European interest: The EU and Global issues

    International Nuclear Information System (INIS)

    Pero, Herve

    2011-01-01

    Research Infrastructures act as 'knowledge industries' for society and as a source of attraction for world scientists. At the European level, the long-term objective is to support an efficient and world-class eco-system of Research Infrastructures, encompassing not only the large single-site facilities but also distributed research infrastructures based on a network of 'regional partner facilities', with strong links to world-class universities and centres of excellence. The EC support activities help to promote the development of this fabric of research infrastructures of the highest quality and performance in Europe. Since 2002, ESFRI has also aimed to support a coherent approach to policy-making on research infrastructures. The European Roadmap for Research Infrastructures is ESFRI's most significant achievement to date, and KM3NeT is one of its identified projects. The current Community support to the Preparatory Phase of this project aims mainly at solving governance, financial, organisational and legal issues. How should KM3NeT contribute to an efficient Research Infrastructure eco-system? This is the question that the KM3NeT stakeholders will need to answer very soon.

  13. Tier-1 and Tier-2 real-time analysis experience in CMS Data Challenge 2004

    CERN Document Server

    De Filippis, N; Pierro, A; Silvestris, L; Fanfani, A; Grandi, C; Hernández, J M; Bonacorsi, D; Corvo, M; Fanzago, F

    2005-01-01

    During the CMS Data Challenge 2004 a real-time analysis was attempted at the INFN and PIC Tier-1 and Tier-2s in order to test the ability of the instrumented methods to quickly process the data. Several agents and automatic procedures were implemented to perform the analysis at the Tier-1/2 sites synchronously with the data transfer from the Tier-0 at CERN. The system was implemented in the LCG-2 Grid environment and allowed on-the-fly job preparation and subsequent submission to the Resource Broker as new data came along. Running jobs accessed data from the Storage Elements via remote file protocols whenever possible, or copied them locally with replica manager commands. Details of the procedures adopted to run the analysis jobs and the expected results are described. An evaluation of the ability of the system to maintain an analysis rate at the Tier-1 and Tier-2s comparable with the data transfer rate is also presented. The results on the analysis timeline, the statistics of submitted jobs, the overall efficiency of the GRID ...

  14. 43 CFR 404.9 - What types of infrastructure and facilities may be included in an eligible rural water supply...

    Science.gov (United States)

    2010-10-01

    ... facilities may be included in an eligible rural water supply project? 404.9 Section 404.9 Public Lands... RURAL WATER SUPPLY PROGRAM Overview § 404.9 What types of infrastructure and facilities may be included in an eligible rural water supply project? A rural water supply project may include, but is not...

  15. PROOF-based analysis on the ATLAS grid facilities: first experience with the PoD/PanDa plugin

    International Nuclear Information System (INIS)

    Vilucchi, E; Nardo, R Di; Mancini, G; Pineda, A R Sanchez; Salvo, A De; Donato, C Di; Doria, A; Ganis, G; Manafov, A; Mazza, S; Preltz, F; Rebatto, D; Salvucci, A

    2014-01-01

    In the ATLAS computing model Grid resources are managed by PanDA, the system designed for production and distributed analysis, and data are stored in various formats in ROOT files. End-user physicists can choose to use either the ATHENA framework or ROOT directly, which gives them the possibility to use PROOF to exploit the computing power of multi-core machines or to dynamically manage analysis facilities. Since analysis facilities are, in general, not dedicated to PROOF only, PROOF-on-Demand (PoD) is used to enable PROOF on top of an existing resource management system. In a previous work we investigated the usage of PoD to enable PROOF-based analysis on Tier-2 facilities using the PoD/gLite plug-in interface. In this paper we present the status of our investigations using the recently developed PoD/PanDA plug-in to enable PROOF, with a real end-user ATLAS physics analysis as payload. For this work, data were accessed using two different protocols: XRootD and the file protocol. The former was used at the site where the SRM interface is the Disk Pool Manager (DPM), and the latter where the SRM interface is StoRM on a GPFS file system. We will first describe the results of some benchmark tests we ran on the ATLAS Italian Tier-1 and Tier-2 sites and at CERN. Then, we will compare the results of different types of analysis, comparing data-access performance across the different types of SRM interfaces and with XRootD access in the LAN and in the WAN using the ATLAS XRootD storage federation infrastructure.
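
    A minimal sketch of the kind of data-access comparison described above, assuming uproot and an XRootD client library are installed, and using hypothetical URLs, tree and branch names:

        import time
        import uproot  # reads ROOT files; remote root:// access requires an XRootD client backend

        # Hypothetical paths and names, for illustration only.
        URLS = {
            "xrootd": "root://xrootd.example.org//atlas/user/analysis/sample.root",
            "file":   "/storage/gpfs/atlas/user/analysis/sample.root",
        }

        def timed_read(url, tree_name="events", branch="pt"):
            t0 = time.time()
            with uproot.open(url) as f:
                data = f[tree_name][branch].array()
            return len(data), time.time() - t0

        for label, url in URLS.items():
            n, dt = timed_read(url)
            print(f"{label}: read {n} entries in {dt:.1f} s")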

  16. Perspectives for photonuclear research at the Extreme Light Infrastructure - Nuclear Physics (ELI-NP) facility

    Energy Technology Data Exchange (ETDEWEB)

    Filipescu, D.; Balabanski, D.L.; Constantin, P.; Gales, S.; Tesileanu, O.; Ur, C.A.; Ursu, I.; Zamfir, N.V. [Horia Hulubei National Institute for R and D in Physics and Nuclear Engineering (IFIN-HH), Extreme Light Infrastructure - Nuclear Physics (ELI-NP), Bucharest-Magurele (Romania); Anzalone, A.; La Cognata, M.; Spitaleri, C. [INFN-LNS, Catania (Italy); Belyshev, S.S. [Lomonosov Moscow State University, Physics Faculty, Moscow (Russian Federation); Camera, F. [Departement of Physics, University of Milano, Milano (Italy); INFN section of Milano, Milano (Italy); Csige, L.; Krasznahorkay, A. [Hungarian Academy of Sciences (MTA Atomki), Institute of Nuclear Research, Post Office Box 51, Debrecen (Hungary); Cuong, P.V. [Vietnam Academy of Science and Technology, Centre of Nuclear Physics, Institute of Physics, Hanoi (Viet Nam); Cwiok, M.; Dominik, W.; Mazzocchi, C. [University of Warsaw, Warszawa (Poland); Derya, V.; Zilges, A. [University of Cologne, Institute for Nuclear Physics, Cologne (Germany); Gai, M. [University of Connecticut, LNS at Avery Point, Connecticut, Groton (United States); Gheorghe, I. [Horia Hulubei National Institute for R and D in Physics and Nuclear Engineering (IFIN-HH), Extreme Light Infrastructure - Nuclear Physics (ELI-NP), Bucharest-Magurele (Romania); University of Bucharest, Nuclear Physics Department, Post Office Box MG-11, Bucharest-Magurele (Romania); Ishkhanov, B.S. [Lomonosov Moscow State University, Physics Faculty, Moscow (Russian Federation); Lomonosov Moscow State University, Skobeltsyn Institute of Nuclear Physics, Moscow (Russian Federation); Kuznetsov, A.A.; Orlin, V.N.; Stopani, K.A.; Varlamov, V.V. [Lomonosov Moscow State University, Skobeltsyn Institute of Nuclear Physics, Moscow (Russian Federation); Pietralla, N. [Technische Universitat Darmstadt, Institut fur Kernphysik, Darmstadt (Germany); Sin, M. [University of Bucharest, Nuclear Physics Department, Post Office Box MG-11, Bucharest-Magurele (Romania); Utsunomiya, H. [Konan University, Department of Physics, Kobe (Japan); University of Tokyo, Center for Nuclear Study, Saitama (Japan); Weller, H.R. [Triangle Universities Nuclear Laboratory, North Carolina, Durham (United States); Duke University, Department of Physics, North Carolina, Durham (United States)

    2015-12-15

    The perspectives for photonuclear experiments at the new Extreme Light Infrastructure - Nuclear Physics (ELI-NP) facility are discussed in view of the need to accumulate novel and more precise nuclear data. The parameters of the ELI-NP gamma beam system are presented. The emerging experimental program, which will be realized at ELI-NP, is presented. Examples of day-one experiments with the nuclear resonance fluorescence technique, photonuclear reaction measurements, photofission experiments and studies of nuclear collective excitation modes and competition between various decay channels are discussed. The advantages which ELI-NP provides for all these experiments compared to the existing facilities are discussed. (orig.)

  17. The Two-Tier Fecal Occult Blood Test: Cost-Effective Screening

    Directory of Open Access Journals (Sweden)

    Andrew J Rae

    1994-01-01

    Full Text Available. The two-tier test represents a strategy combining HO Sensa and Hemeselect fecal occult blood tests (FOBTs) with the aim of greater specificity and consequent economic advantages. If patients register a positive result on any HO Sensa guaiac test, they are once again tested by a hemoglobin-specific Hemeselect test. This concept was applied to a multicentre study involving persons 40 years or older. One component of the study enrolled 573 high risk patients while the second arm recruited an additional 1301 patients (52% asymptomatic/48% symptomatic) stratified according to personal history and symptoms. The two-tier test produced fewer false positives than traditional tests in both groups evaluated in the study. In the high risk group, specificity (88.7% for two-tier versus 80.6% for Hemoccult and 69.5% for HO Sensa) was higher and false positive rates were lower (11.3% for two-tier versus 19.5% for Hemoccult and 30.5% for HO Sensa) for the two-tier test versus the Hemoccult and HO Sensa FOBTs (95% CI for all colorectal cancers [CRCs] and polyps greater than 1 cm, α=0.05). No significant differences in sensitivity were observed between tests in the same group. Also, in the high risk group, benefits of the two-tier test outweighed the costs. Due to the small number of cancers and polyps in the second arm of the study, presentation of data is meant to be descriptive and representative of trends in a 'normal' population. Nevertheless, specificity of the two-tier test was higher (96.8% for two-tier versus 87.2% for Hemoccult and 69.5% for HO Sensa) and the false positive rate lower (3.2% for two-tier versus 12.8% for Hemoccult and 22.3% for HO Sensa) than either the Hemoccult or HO Sensa FOBT (95% CI for all CRCs and polyps greater than 1 cm). This initial study, focusing on the cost-benefit relationship of increased specificity, represents a new way of economically evaluating existing FOBTs.
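
    For reference, specificity and the false positive rate are complementary on the disease-free group (false positive rate = 1 - specificity), which is why a specificity of 88.7% corresponds to an 11.3% false positive rate. A small check with hypothetical counts:

        # Specificity and false-positive rate from a 2x2 screening table
        # (the counts below are hypothetical, for illustration only).

        def specificity(true_negatives, false_positives):
            return true_negatives / (true_negatives + false_positives)

        tn, fp = 443, 57          # hypothetical disease-free patients: negative vs positive tests
        spec = specificity(tn, fp)
        print(f"specificity = {spec:.1%}, false positive rate = {1 - spec:.1%}")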

  18. The Effects of a Tier 3 Intervention on the Mathematics Performance of Second Grade Students With Severe Mathematics Difficulties.

    Science.gov (United States)

    Bryant, Brian R; Bryant, Diane Pedrotty; Porterfield, Jennifer; Dennis, Minyi Shih; Falcomata, Terry; Valentine, Courtney; Brewer, Chelsea; Bell, Kathy

    2016-01-01

    The purpose of this study was to determine the effectiveness of a systematic, explicit, intensive Tier 3 (tertiary) intervention on the mathematics performance of students in second grade with severe mathematics difficulties. A multiple-baseline design across groups of participants showed improved mathematics performance on number and operations concepts and procedures, which are the foundation for later mathematics success. In the previous year, 12 participants had experienced two doses (first and second semesters) of a Tier 2 intervention. In second grade, the participants continued to demonstrate low performance, falling below the 10th percentile on a researcher-designed universal screener and below the 16th percentile on a distal measure, thus qualifying for the intensive intervention. A project interventionist, who met with the students 5 days a week for 10 weeks (9 weeks for one group), conducted the intensive intervention. The intervention employed more intensive instructional design features than the previous Tier 2 secondary instruction, and also included weekly games to reinforce concepts and skills from the lessons. Spring results showed significantly improved mathematics performance (scoring at or above the 25th percentile) for most of the students, thus making them eligible to exit the Tier 3 intervention. © Hammill Institute on Disabilities 2014.

  19. Management of virtualized infrastructure for physics databases

    International Nuclear Information System (INIS)

    Topurov, Anton; Gallerani, Luigi; Chatal, Francois; Piorkowski, Mariusz

    2012-01-01

    Demands for information storage of physics metadata are rapidly increasing, together with the requirements for its high availability. Most HEP laboratories are struggling to squeeze more from their computer centers and thus focus on virtualizing available resources. CERN started investigating database virtualization in early 2006, first by testing database performance and stability on native Xen. Since then we have been closely evaluating the constantly evolving functionality of virtualisation solutions for the database and middle tier, together with the associated management applications – Oracle's Enterprise Manager and VM Manager. This session will detail our long experience in dealing with virtualized environments, focusing on the newest Oracle OVM 3.0 for x86 and the Oracle Enterprise Manager functionality for efficiently managing a virtualized database infrastructure.

  20. Testing and evaluating storage technology to build a distributed Tier1 for SuperB in Italy

    International Nuclear Information System (INIS)

    Pardi, S; Delprete, D; Russo, G; Fella, A; Corvo, M; Bianchi, F; Ciaschini, V; Giacomini, F; Simone, A Di; Donvito, G; Santeramo, B; Gianoli, A; Luppi, E; Manzali, M; Tomassetti, L; Longo, S; Stroili, R; Luitz, S; Perez, A; Rama, M

    2012-01-01

    The SuperB asymmetric-energy e+e- collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab^-1 and a luminosity target of 10^36 cm^-2 s^-1. This luminosity translates into the requirement of storing more than 50 PByte of additional data each year, making SuperB an interesting challenge for the data management infrastructure, both at the site level and at the Wide Area Network level. A new Tier1, distributed among 3 or 4 sites in the south of Italy, is planned as part of the SuperB computing infrastructure. Data storage is a relevant topic whose development affects how the storage infrastructure is configured and set up, both in a local computing cluster and in a distributed paradigm. In this work we report on tests of the software for data distribution and data replication, focusing on the experience gained with Hadoop and GlusterFS.
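
    A quick, hedged sketch of how the replication factors of files in an HDFS test instance could be summarised, assuming a configured Hadoop client on the PATH and a hypothetical data path:

        import subprocess
        from collections import Counter

        # List files recursively under a (hypothetical) test data path and tally
        # their HDFS replication factors (second column of the -ls output for files).
        out = subprocess.run(
            ["hdfs", "dfs", "-ls", "-R", "/superb/data"],
            capture_output=True, text=True, check=True,
        ).stdout

        replication = Counter()
        for line in out.splitlines():
            fields = line.split()
            if len(fields) >= 8 and fields[0].startswith("-"):   # plain files only
                replication[fields[1]] += 1

        print(dict(replication))   # e.g. {'3': 1042, '2': 17}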

  1. Building safeguards infrastructure

    International Nuclear Information System (INIS)

    Stevens, Rebecca S.; McClelland-Kerr, John

    2009-01-01

    Much has been written in recent years about the nuclear renaissance - the rebirth of nuclear power as a clean and safe source of electricity around the world. Those who question the nuclear renaissance often cite the risk of proliferation, accidents or an attack on a facility as concerns, all of which merit serious consideration. The integration of these three areas - sometimes referred to as 3S, for safety, security and safeguards - is essential to supporting the growth of nuclear power, and the infrastructure that supports them should be strengthened. The focus of this paper will be on the role safeguards plays in the 3S concept and how to support the development of the infrastructure necessary to support safeguards. The objective of this paper has been to provide a working definition of safeguards infrastructure, and to discuss examples of how building safeguards infrastructure is presented in several models. The guidelines outlined in the milestones document provide a clear path for establishing both the safeguards and the related infrastructures needed to support the development of nuclear power. The model employed by the INSEP program, of engaging with partner states on safeguards-related topics that are of current interest given the level of nuclear development in that state, provides another way of approaching the concept of building safeguards infrastructure. The Next Generation Safeguards Initiative is yet another approach, one that underscored five principal areas for growth and the United States' commitment to working with partners to promote this growth both at home and abroad.

  2. CEMS: Building a Cloud-Based Infrastructure to Support Climate and Environmental Data Services

    Science.gov (United States)

    Kershaw, P. J.; Curtis, M.; Pechorro, E.

    2012-04-01

    CEMS, the facility for Climate and Environmental Monitoring from Space, is a new joint collaboration between academia and industry to bring together their collective expertise to support research into climate change and provide a catalyst for growth in related Earth Observation (EO) technologies and services in the commercial sector. A recent major investment by the UK Space Agency has made possible the development of a dedicated facility at ISIC, the International Space Innovation Centre at Harwell in the UK. CEMS has a number of key elements: the provision of access to large-volume EO and climate datasets co-located with high performance computing facilities; a flexible infrastructure to support the needs of research projects in the academic community and new business opportunities for commercial companies. Expertise and tools for scientific data quality and integrity are another essential component, giving users confidence and transparency in its data, services and products. Central to the development of this infrastructure is the utilisation of cloud-based technology: multi-tenancy and the dynamic provision of resources are key characteristics to exploit in order to support the range of organisations using the facilities and the varied use cases. The hosting of processing services and applications next to the data within the CEMS facility is another important capability. With the expected exponential increase in data volumes within the climate science and EO domains, it is becoming increasingly impracticable for organisations to retrieve this data over networks and provide the necessary storage. Consider, for example, the factor of about 20 increase in data volumes expected for the ESA Sentinel missions over the equivalent Envisat instruments. We explore the options for the provision of a hybrid community/private cloud looking at offerings from the commercial sector and developments in the Open Source community. Building on this virtualisation layer, a further core

  3. Three-tier rough superhydrophobic surfaces

    International Nuclear Information System (INIS)

    Cao, Yuanzhi; Yuan, Longyan; Hu, Bin; Zhou, Jun

    2015-01-01

    A three-tier rough superhydrophobic surface was fabricated by growing hydrophobically modified (fluorinated silane) zinc oxide (ZnO)/copper oxide (CuO) hetero-hierarchical structures on silicon (Si) micro-pillar arrays. Compared with the other three control samples with one less rough tier, the three-tier surface exhibits the best water repellency, with the largest contact angle (161°) and the lowest sliding angle (0.5°). It also shows a robust Cassie state which enables water to flow over it at a speed above 2 m s^-1. In addition, it prevents itself from being wetted by a droplet with low surface tension (water and ethanol mixed 1:1 by volume) flowing at a speed of 0.6 m s^-1 (dropped from a height of 2 cm). All these features demonstrate that adding another rough tier to a two-tier rough surface can further improve its water-repellent properties. (paper)

  4. Regulatory measures for occupational health monitoring in BARC facilities

    International Nuclear Information System (INIS)

    Rajdeep; Chattopadhyay, S.

    2017-01-01

    Bhabha Atomic Research Centre (BARC) is the premier organization actively engaged in research and developmental activities related to nuclear science and technology for the benefit of society and the nation. BARC has various facilities such as nuclear fuel fabrication facilities, research reactors, spent fuel storage facilities, nuclear fuel re-cycling facilities, radioactive waste management facilities, machining workshops and various Physics, Chemistry and Biology laboratories. In BARC, aspects related to Occupational Safety and Health (OSH) are given paramount importance. Issues related to OSH are subjected to a multi-tier review process. The BARC Safety Council (BSC) is the apex committee in the three-tier safety and security review framework of BARC. BSC functions as the regulatory body for BARC facilities and is responsible for the occupational safety and health of employees in BARC facilities.

  5. Vulnerability Assessments and Resilience Planning at Federal Facilities. Preliminary Synthesis of Project

    Energy Technology Data Exchange (ETDEWEB)

    Moss, R. H. [Pacific Northwest National Lab. (PNNL)/Univ. of Maryland, College Park, MD (United States). Joint Global Change Research Inst.; Blohm, A. J. [Univ. of Maryland, College Park, MD (United States); Delgado, A. [Pacific Northwest National Lab. (PNNL)/Univ. of Maryland, College Park, MD (United States). Joint Global Change Research Inst.; Henriques, J. J. [James Madison Univ., Harrisonburg, VA (United States); Malone, E L. [Pacific Northwest National Lab. (PNNL)/Univ. of Maryland, College Park, MD (United States). Joint Global Change Research Inst.

    2015-08-15

    U.S. government agencies are now directed to assess the vulnerability of their operations and facilities to climate change and to develop adaptation plans to increase their resilience. Specific guidance on methods is still evolving based on the many different available frameworks. Agencies have been experimenting with these frameworks and approaches. This technical paper synthesizes lessons and insights from a series of research case studies conducted by the investigators at facilities of the U.S. Department of Energy and the Department of Defense. The purpose of the paper is to solicit comments and feedback from interested program managers and analysts before final conclusions are published. The paper describes the characteristics of a systematic process for prioritizing needs for adaptation planning at individual facilities and examines the requirements and methods needed. It then suggests a framework of steps for vulnerability assessments at Federal facilities and elaborates on three sets of methods required for assessments, regardless of the detailed framework used. In a concluding section, the paper suggests a roadmap to further develop methods to support agencies in preparing for climate change. The case studies point to several preliminary conclusions: (1) Vulnerability assessments are needed to translate potential changes in climate exposure into estimates of impacts and an evaluation of their significance for operations and mission attainment, in other words, into information that is related to and useful in ongoing planning, management, and decision-making processes; (2) To increase the relevance and utility of vulnerability assessments to site personnel, the assessment process needs to emphasize the characteristics of the site infrastructure, not just climate change; (3) A multi-tiered framework that includes screening, vulnerability assessments at the most vulnerable installations, and adaptation design will efficiently target high-risk sites and infrastructure.

  6. Association between infrastructure and observed quality of care in 4 healthcare services: A cross-sectional study of 4,300 facilities in 8 countries.

    Directory of Open Access Journals (Sweden)

    Hannah H Leslie

    2017-12-01

    Full Text Available. It is increasingly apparent that access to healthcare without adequate quality of care is insufficient to improve population health outcomes. We assess whether the most commonly measured attribute of health facilities in low- and middle-income countries (LMICs)-the structural inputs to care-predicts the clinical quality of care provided to patients. Service Provision Assessments are nationally representative health facility surveys conducted by the Demographic and Health Survey Program with support from the US Agency for International Development. These surveys assess health system capacity in LMICs. We drew data from assessments conducted in 8 countries between 2007 and 2015: Haiti, Kenya, Malawi, Namibia, Rwanda, Senegal, Tanzania, and Uganda. The surveys included an audit of facility infrastructure and direct observation of family planning, antenatal care (ANC), sick-child care, and (in 2 countries) labor and delivery. To measure structural inputs, we constructed indices that measured World Health Organization-recommended amenities, equipment, and medications in each service. For clinical quality, we used data from direct observations of care to calculate providers' adherence to evidence-based care guidelines. We assessed the correlation between these metrics and used spline models to test for the presence of a minimum input threshold associated with good clinical quality. Inclusion criteria were met by 32,531 observations of care in 4,354 facilities. Facilities demonstrated moderate levels of infrastructure, ranging from 0.63 of 1 in sick-child care to 0.75 of 1 for family planning on average. Adherence to evidence-based guidelines was low, with an average of 37% adherence in sick-child care, 46% in family planning, 60% in labor and delivery, and 61% in ANC. Correlation between infrastructure and evidence-based care was low (median 0.20, range from -0.03 for family planning in Senegal to 0.40 for ANC in Tanzania). Facilities with similar

  7. Association between infrastructure and observed quality of care in 4 healthcare services: A cross-sectional study of 4,300 facilities in 8 countries.

    Science.gov (United States)

    Leslie, Hannah H; Sun, Zeye; Kruk, Margaret E

    2017-12-01

    It is increasingly apparent that access to healthcare without adequate quality of care is insufficient to improve population health outcomes. We assess whether the most commonly measured attribute of health facilities in low- and middle-income countries (LMICs)-the structural inputs to care-predicts the clinical quality of care provided to patients. Service Provision Assessments are nationally representative health facility surveys conducted by the Demographic and Health Survey Program with support from the US Agency for International Development. These surveys assess health system capacity in LMICs. We drew data from assessments conducted in 8 countries between 2007 and 2015: Haiti, Kenya, Malawi, Namibia, Rwanda, Senegal, Tanzania, and Uganda. The surveys included an audit of facility infrastructure and direct observation of family planning, antenatal care (ANC), sick-child care, and (in 2 countries) labor and delivery. To measure structural inputs, we constructed indices that measured World Health Organization-recommended amenities, equipment, and medications in each service. For clinical quality, we used data from direct observations of care to calculate providers' adherence to evidence-based care guidelines. We assessed the correlation between these metrics and used spline models to test for the presence of a minimum input threshold associated with good clinical quality. Inclusion criteria were met by 32,531 observations of care in 4,354 facilities. Facilities demonstrated moderate levels of infrastructure, ranging from 0.63 of 1 in sick-child care to 0.75 of 1 for family planning on average. Adherence to evidence-based guidelines was low, with an average of 37% adherence in sick-child care, 46% in family planning, 60% in labor and delivery, and 61% in ANC. Correlation between infrastructure and evidence-based care was low (median 0.20, range from -0.03 for family planning in Senegal to 0.40 for ANC in Tanzania). Facilities with similar infrastructure scores
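
    A minimal sketch of the kind of analysis described (facility-level correlation plus a single-knot linear spline to probe for a minimum-input threshold), using simulated data rather than the Service Provision Assessment surveys; the knot location and all numbers are placeholders:

        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(0)

        # Simulated facility-level data (illustrative only): an infrastructure index in [0, 1]
        # and adherence to evidence-based guidelines in [0, 1], weakly related as in the study.
        infra = rng.uniform(0.3, 1.0, size=500)
        adherence = np.clip(0.35 + 0.1 * infra + rng.normal(0, 0.12, size=500), 0, 1)

        r, p = pearsonr(infra, adherence)
        print(f"correlation r = {r:.2f} (p = {p:.3f})")

        # Linear spline with one knot: does adherence improve only above a minimum-input threshold?
        knot = 0.7
        X = np.column_stack([np.ones_like(infra), infra, np.maximum(0.0, infra - knot)])
        coef, *_ = np.linalg.lstsq(X, adherence, rcond=None)
        print("intercept, slope below knot, extra slope above knot:", np.round(coef, 3))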

  8. Considérations sur l’infrastructure du Yémen1

    OpenAIRE

    Escher, Hermann A.

    2013-01-01

    1. THE TERM INFRASTRUCTURE. For roughly the last fifteen years, the term "infrastructure" has been used more and more frequently in political and economic discussions. It plays an important role in all considerations relating to the development problems of third-world countries and of industrialized countries. It follows that it is the subject of numerous treatises. It is not appropriate to retrace here the evolution of this term and to present its various definitions (see Frey, 1979)....

  9. Handling Worldwide LHC Computing Grid Critical Service Incidents : The infrastructure and experience behind nearly 5 years of GGUS ALARMs

    CERN Multimedia

    Dimou, M; Dulov, O; Grein, G

    2013-01-01

    In the Worldwide LHC Computing Grid (WLCG) project the Tier centres are of paramount importance for storing and accessing experiment data and for running the batch jobs necessary for experiment production activities. Although Tier2 sites provide a significant fraction of the resources, non-availability of resources at the Tier0 or the Tier1s can seriously harm not only WLCG Operations but also the experiments' workflows and the storage of LHC data, which are very expensive to reproduce. This is why availability requirements for these sites are high and are committed to in the WLCG Memorandum of Understanding (MoU). In this talk we describe the workflow of GGUS ALARMs, the only 24/7 mechanism available to LHC experiment experts for reporting problems with their Critical Services to the Tier0 or the Tier1s. Conclusions and experience gained from the detailed drills performed for each such ALARM over the last 4 years are explained, as is the shift over time in the types of problems encountered. The physical infrastructure put in place to ...

  10. Near-Site Transportation Infrastructure Project

    International Nuclear Information System (INIS)

    Viebrock, J.M.; Mote, N.

    1992-02-01

    There are 122 commercial nuclear facilities from which spent nuclear fuel will be accepted by the Federal Waste Management System (FWMS). Since some facilities share common sites and some facilities are on adjacent sites, 76 sites were identified for the Near-Site Transportation Infrastructure (NSTI) project. The objective of the NSTI project was to identify the options available for transportation of spent-fuel casks from each of these commercial nuclear facility sites to the main transportation routes -- interstate highways, commercial rail lines and navigable waterways available for commercial use. The near-site transportation infrastructure from each site was assessed, based on observation of technical features identified during a survey of the routes and facilities plus data collected from referenced information sources. The potential for refurbishment of transportation facilities which are not currently operational was also assessed, as was the potential for establishing new transportation facilities

  11. Environmental assessment: Solid waste retrieval complex, enhanced radioactive and mixed waste storage facility, infrastructure upgrades, and central waste support complex, Hanford Site, Richland, Washington

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-09-01

    The U.S. Department of Energy (DOE) needs to take action to: retrieve transuranic (TRU) waste, because interim storage waste containers have exceeded their 20-year design life and could fail, causing a radioactive release to the environment; provide storage capacity for retrieved and newly generated TRU, Greater-than-Category 3 (GTC3), and mixed waste before treatment and/or shipment to the Waste Isolation Pilot Project (WIPP); and upgrade the infrastructure network in the 200 West Area to enhance operational efficiencies and reduce the cost of operating the Solid Waste Operations Complex. This proposed action would initiate the retrieval activities (Retrieval) from Trench 4C-T04 in the 200 West Area, including the construction of support facilities necessary to carry out the retrieval operations. In addition, the proposed action includes the construction and operation of a facility (Enhanced Radioactive Mixed Waste Storage Facility) in the 200 West Area to store newly generated and retrieved waste while it awaits shipment to a final disposal site. Also, Infrastructure Upgrades and a Central Waste Support Complex are necessary to support the Hanford Site's centralized waste management area in the 200 West Area. The proposed action also includes mitigation for the loss of priority shrub-steppe habitat resulting from construction. The estimated total cost of the proposed action is $66 million.

  12. Environmental assessment: Solid waste retrieval complex, enhanced radioactive and mixed waste storage facility, infrastructure upgrades, and central waste support complex, Hanford Site, Richland, Washington

    International Nuclear Information System (INIS)

    1995-09-01

    The U.S. Department of Energy (DOE) needs to take action to: retrieve transuranic (TRU) waste, because interim storage waste containers have exceeded their 20-year design life and could fail, causing a radioactive release to the environment; provide storage capacity for retrieved and newly generated TRU, Greater-than-Category 3 (GTC3), and mixed waste before treatment and/or shipment to the Waste Isolation Pilot Project (WIPP); and upgrade the infrastructure network in the 200 West Area to enhance operational efficiencies and reduce the cost of operating the Solid Waste Operations Complex. This proposed action would initiate the retrieval activities (Retrieval) from Trench 4C-T04 in the 200 West Area, including the construction of support facilities necessary to carry out the retrieval operations. In addition, the proposed action includes the construction and operation of a facility (Enhanced Radioactive Mixed Waste Storage Facility) in the 200 West Area to store newly generated and retrieved waste while it awaits shipment to a final disposal site. Also, Infrastructure Upgrades and a Central Waste Support Complex are necessary to support the Hanford Site's centralized waste management area in the 200 West Area. The proposed action also includes mitigation for the loss of priority shrub-steppe habitat resulting from construction. The estimated total cost of the proposed action is $66 million.

  13. Changes of ticagrelor formulary tiers in the USA: targeting private insurance providers away from government-funded plans.

    Science.gov (United States)

    Serebruany, Victor L; Dinicolantonio, James J

    2013-01-01

    Ticagrelor (Brilinta®) is a new oral reversible antiplatelet agent approved by the FDA in July 2011 based on the results of the PLATO (Platelet Inhibition and Patient Outcomes) trial. However, despite very favorable and broad indications, the current clinical utilization of ticagrelor is woefully small. We aimed to compare ticagrelor formulary tiers for major private (n = 8) and government-funded (n = 4) insurance providers for 2012-2013. Over the last year, ticagrelor placement improved, becoming a preferred drug (from Tier 3 in 2012 to Tier 2 in 2013) for Medco, moving from Tier 4 (with a prior approval requirement) to Tier 3 (no prior approval) for the United Health Care Private Plan and achieving Tier 3 status for Apex in 2013. In contrast, ticagrelor placement did not improve for New York Medicaid, retaining Tier 3 status. In addition, many Medicare Part D formularies have significantly worse coverage than most private plans. For example, Humana Medicare Part D has Tier 3 status requiring step therapy and quantity limits, SilverScript (CVS Caremark) Part D is Tier 3 and the American Association of Retired Persons (United Health Care) Medicare Part D is Tier 4 requiring prior approval. Ticagrelor formulary placement is significantly better for most private providers than for government-funded plans, which may possibly be due to the selective targeting of private insurance providers and the simultaneous avoidance of government-funded plans. © 2013 S. Karger AG, Basel.

  14. Nuclear Energy Infrastructure Database Description and User’s Manual

    Energy Technology Data Exchange (ETDEWEB)

    Heidrich, Brenden [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-11-01

    In 2014, the Deputy Assistant Secretary for Science and Technology Innovation initiated the Nuclear Energy (NE)–Infrastructure Management Project by tasking the Nuclear Science User Facilities, formerly the Advanced Test Reactor National Scientific User Facility, to create a searchable and interactive database of all pertinent NE-supported and -related infrastructure. This database, known as the Nuclear Energy Infrastructure Database (NEID), is used for analyses to establish needs, redundancies, efficiencies, distributions, etc., to best understand the utility of NE’s infrastructure and inform the content of infrastructure calls. The Nuclear Science User Facilities developed the database by utilizing data and policy direction from a variety of reports from the U.S. Department of Energy, the National Research Council, the International Atomic Energy Agency, and various other federal and civilian resources. The NEID currently contains data on 802 research and development instruments housed in 377 facilities at 84 institutions in the United States and abroad. The effort to maintain and expand the database is ongoing. Detailed information on many facilities must be gathered from associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements. This document provides a short tutorial on the navigation of the NEID web portal at NSUF-Infrastructure.INL.gov.

  15. NEMO-SN1 observatory developments in view of the European Research Infrastructures EMSO and KM3NET

    Energy Technology Data Exchange (ETDEWEB)

    Favali, Paolo, E-mail: emsopp@ingv.i [Istituto Nazionale di Geofisica e Vulcanologia (INGV), Sect. Roma 2, Via di Vigna Murata 605, 00143 Roma (Italy); Beranzoli, Laura [Istituto Nazionale di Geofisica e Vulcanologia (INGV), Sect. Roma 2, Via di Vigna Murata 605, 00143 Roma (Italy); Italiano, Francesco [Istituto Nazionale di Geofisica e Vulcanologia (INGV), Sect. Palermo, Via Ugo La Malfa 153, 90146 Palermo (Italy); Migneco, Emilio; Musumeci, Mario; Papaleo, Riccardo [Istituto Nazionale di Fisica Nucleare (INFN), Laboratori Nazionali del Sud, Via di S. Sofia 62, 95125 Catania (Italy)

    2011-01-21

    NEMO-SN1 (Western Ionian Sea off Eastern Sicily), the first real-time multiparameter observatory operating in Europe since 2005, is one of the nodes of the upcoming European ESFRI large-scale research infrastructure EMSO (European Multidisciplinary Seafloor Observatory), a network of seafloor observatories placed at marine sites on the European Continental Margin. NEMO-SN1 also constitutes an important test site for the study of prototypes of the Kilometre Cube Neutrino Telescope (KM3NeT), another European ESFRI large-scale research infrastructure. Italian resources have been devoted to the development of NEMO-SN1 facilities and logistics, as with the PEGASO project, while the EC project ESONET-NoE is funding a demonstration mission and a technological test. EMSO and KM3NeT are presently in the Preparatory Phase as projects funded under EC-FP7.

  16. Development of Bioinformatics Infrastructure for Genomics Research.

    Science.gov (United States)

    Mulder, Nicola J; Adebiyi, Ezekiel; Adebiyi, Marion; Adeyemi, Seun; Ahmed, Azza; Ahmed, Rehab; Akanle, Bola; Alibi, Mohamed; Armstrong, Don L; Aron, Shaun; Ashano, Efejiro; Baichoo, Shakuntala; Benkahla, Alia; Brown, David K; Chimusa, Emile R; Fadlelmola, Faisal M; Falola, Dare; Fatumo, Segun; Ghedira, Kais; Ghouila, Amel; Hazelhurst, Scott; Isewon, Itunuoluwa; Jung, Segun; Kassim, Samar Kamal; Kayondo, Jonathan K; Mbiyavanga, Mamana; Meintjes, Ayton; Mohammed, Somia; Mosaku, Abayomi; Moussa, Ahmed; Muhammd, Mustafa; Mungloo-Dilmohamud, Zahra; Nashiru, Oyekanmi; Odia, Trust; Okafor, Adaobi; Oladipo, Olaleye; Osamor, Victor; Oyelade, Jellili; Sadki, Khalid; Salifu, Samson Pandam; Soyemi, Jumoke; Panji, Sumir; Radouani, Fouzia; Souiai, Oussama; Tastan Bishop, Özlem

    2017-06-01

    Although pockets of bioinformatics excellence have developed in Africa, generally, large-scale genomic data analysis has been limited by the availability of expertise and infrastructure. H3ABioNet, a pan-African bioinformatics network, was established to build capacity specifically to enable H3Africa (Human Heredity and Health in Africa) researchers to analyze their data in Africa. Since the inception of the H3Africa initiative, H3ABioNet's role has evolved in response to changing needs from the consortium and the African bioinformatics community. H3ABioNet set out to develop core bioinformatics infrastructure and capacity for genomics research in various aspects of data collection, transfer, storage, and analysis. Various resources have been developed to address genomic data management and analysis needs of H3Africa researchers and other scientific communities on the continent. NetMap was developed and used to build an accurate picture of network performance within Africa and between Africa and the rest of the world, and Globus Online has been rolled out to facilitate data transfer. A participant recruitment database was developed to monitor participant enrollment, and data is being harmonized through the use of ontologies and controlled vocabularies. The standardized metadata will be integrated to provide a search facility for H3Africa data and biospecimens. Because H3Africa projects are generating large-scale genomic data, facilities for analysis and interpretation are critical. H3ABioNet is implementing several data analysis platforms that provide a large range of bioinformatics tools or workflows, such as Galaxy, the Job Management System, and eBiokits. A set of reproducible, portable, and cloud-scalable pipelines to support the multiple H3Africa data types are also being developed and dockerized to enable execution on multiple computing infrastructures. In addition, new tools have been developed for analysis of the uniquely divergent African data and for
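
    A minimal sketch of running one step of a dockerized pipeline from Python, in the spirit of the portable, containerized workflows described above; the image name, mount point and command are hypothetical placeholders:

        import subprocess
        from pathlib import Path

        # Run one containerized analysis step with the Docker CLI, exposing a local
        # data directory inside the container. All names below are placeholders.
        data_dir = Path("/data/h3africa/project01").resolve()

        cmd = [
            "docker", "run", "--rm",
            "-v", f"{data_dir}:/work",        # mount input/output directory into the container
            "example/variant-caller:1.0",     # hypothetical pipeline image
            "call-variants", "--input", "/work/sample.bam", "--output", "/work/sample.vcf",
        ]
        subprocess.run(cmd, check=True)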

  17. Enhanced computational infrastructure for data analysis at the DIII-D National Fusion Facility

    International Nuclear Information System (INIS)

    Schissel, D.P.; Peng, Q.; Schachter, J.; Terpstra, T.B.; Casper, T.A.; Freeman, J.; Jong, R.; Keith, K.M.; McHarg, B.B.; Meyer, W.H.; Parker, C.T.

    2000-01-01

    Recently a number of enhancements to the computer hardware infrastructure have been implemented at the DIII-D National Fusion Facility. Utilizing these improvements to the hardware infrastructure, software enhancements are focusing on streamlined analysis, automation, and graphical user interface (GUI) systems to enlarge the user base. The adoption of the load balancing software package LSF Suite by Platform Computing has dramatically increased the availability of CPU cycles and the efficiency of their use. Streamlined analysis has been aided by the adoption of the MDSplus system to provide a unified interface to analyzed DIII-D data. The majority of MDSplus data is made available in between pulses giving the researcher critical information before setting up the next pulse. Work on data viewing and analysis tools focuses on efficient GUI design with object-oriented programming (OOP) for maximum code flexibility. Work to enhance the computational infrastructure at DIII-D has included a significant effort to aid the remote collaborator since the DIII-D National Team consists of scientists from nine national laboratories, 19 foreign laboratories, 16 universities, and five industrial partnerships. As a result of this work, DIII-D data is available on a 24x7 basis from a set of viewing and analysis tools that can be run on either the collaborators' or DIII-D's computer systems. Additionally, a web based data and code documentation system has been created to aid the novice and expert user alike
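
    A minimal sketch of remote data access in the style of the MDSplus thin-client interface, assuming the MDSplus Python package is installed and using placeholder server, tree and node names (not actual DIII-D signal paths):

        # Requires the MDSplus Python package; all names below are placeholders.
        from MDSplus import Connection

        conn = Connection("mdsplus.example.org")   # hypothetical data server
        shot = 123456                              # hypothetical shot number
        conn.openTree("analysis", shot)

        signal = conn.get(r"\analysis::top.results:density")            # hypothetical node path
        times = conn.get(r"dim_of(\analysis::top.results:density)")
        print(len(signal.data()), "samples between", times.data()[0], "and", times.data()[-1])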

  18. Enhanced Computational Infrastructure for Data Analysis at the DIII-D National Fusion Facility

    International Nuclear Information System (INIS)

    Schissel, D.P.; Peng, Q.; Schachter, J.; Terpstra, T.B.; Casper, T.A.; Freeman, J.; Jong, R.; Keith, K.M.; Meyer, W.H.; Parker, C.T.; McCharg, B.B.

    1999-01-01

    Recently a number of enhancements to the computer hardware infrastructure have been implemented at the DIII-D National Fusion Facility. Utilizing these improvements to the hardware infrastructure, software enhancements are focusing on streamlined analysis, automation, and graphical user interface (GUI) systems to enlarge the user base. The adoption of the load balancing software package LSF Suite by Platform Computing has dramatically increased the availability of CPU cycles and the efficiency of their use. Streamlined analysis has been aided by the adoption of the MDSplus system to provide a unified interface to analyzed DIII-D data. The majority of MDSplus data is made available in between pulses giving the researcher critical information before setting up the next pulse. Work on data viewing and analysis tools focuses on efficient GUI design with object-oriented programming (OOP) for maximum code flexibility. Work to enhance the computational infrastructure at DIII-D has included a significant effort to aid the remote collaborator since the DIII-D National Team consists of scientists from 9 national laboratories, 19 foreign laboratories, 16 universities, and 5 industrial partnerships. As a result of this work, DIII-D data is available on a 24 x 7 basis from a set of viewing and analysis tools that can be run either on the collaborators' or DIII-D's computer systems. Additionally, a Web based data and code documentation system has been created to aid the novice and expert user alike.

  19. Tiered Approach to Resilience Assessment.

    Science.gov (United States)

    Linkov, Igor; Fox-Lent, Cate; Read, Laura; Allen, Craig R; Arnott, James C; Bellini, Emanuele; Coaffee, Jon; Florin, Marie-Valentine; Hatfield, Kirk; Hyde, Iain; Hynes, William; Jovanovic, Aleksandar; Kasperson, Roger; Katzenberger, John; Keys, Patrick W; Lambert, James H; Moss, Richard; Murdoch, Peter S; Palma-Oliveira, Jose; Pulwarty, Roger S; Sands, Dale; Thomas, Edward A; Tye, Mari R; Woods, David

    2018-04-25

    Regulatory agencies have long adopted a three-tier framework for risk assessment. We build on this structure to propose a tiered approach for resilience assessment that can be integrated into the existing regulatory processes. Comprehensive approaches to assessing resilience at appropriate and operational scales, reconciling analytical complexity as needed with stakeholder needs and resources available, and ultimately creating actionable recommendations to enhance resilience are still lacking. Our proposed framework consists of tiers by which analysts can select resilience assessment and decision support tools to inform associated management actions relative to the scope and urgency of the risk and the capacity of resource managers to improve system resilience. The resilience management framework proposed is not intended to supplant either risk management or the many existing efforts of resilience quantification method development, but instead provide a guide to selecting tools that are appropriate for the given analytic need. The goal of this tiered approach is to intentionally parallel the tiered approach used in regulatory contexts so that resilience assessment might be more easily and quickly integrated into existing structures and with existing policies. Published 2018. This article is a U.S. government work and is in the public domain in the USA.

  20. Tiering Effects in Third-party Logistics: A First-tier Buyer Perspective

    OpenAIRE

    Vainionpää, Mikael M.

    2010-01-01

    This doctoral dissertation takes a buy-side perspective on third-party logistics (3PL) providers' service tiering by applying a linear serial dyadic view to transactions. It takes its point of departure not only from the unalterable focus on the dyad levels as units of analysis and how to manage them, but also from the characteristics both creating and determining purposeful conditions of a longer duration. A conceptual framework is proposed and evaluated on its ability to capture logistics se...

  1. Prospective Environmental Risk Assessment for Sediment-Bound Organic Chemicals: A Proposal for Tiered Effect Assessment.

    Science.gov (United States)

    Diepens, Noël J; Koelmans, Albert A; Baveco, Hans; van den Brink, Paul J; van den Heuvel-Greve, Martine J; Brock, Theo C M

    A broadly accepted framework for prospective environmental risk assessment (ERA) of sediment-bound organic chemicals is currently lacking. Such a framework requires clear protection goals, evidence-based concepts that link exposure to effects and a transparent tiered effect assessment. In this paper, we provide a tiered prospective sediment ERA procedure for organic chemicals in sediment, with a focus on the applicable European regulations and the underlying data requirements. Using the ecosystem services concept, we derived specific protection goals for ecosystem service providing units: microorganisms, benthic algae, sediment-rooted macrophytes, benthic invertebrates and benthic vertebrates. Triggers for sediment toxicity testing are discussed. We recommend a tiered approach (Tier 0 through Tier 3). Tier-0 is a cost-effective screening based on chronic water-exposure toxicity data for pelagic species and equilibrium partitioning. Tier-1 is based on spiked sediment laboratory toxicity tests with standard benthic test species and standardised test methods. If comparable chronic toxicity data for both standard and additional benthic test species are available, the Species Sensitivity Distribution (SSD) approach is a more viable Tier-2 option than the geometric mean approach. This paper includes criteria for accepting results of sediment-spiked single species toxicity tests in prospective ERA, and for the application of the SSD approach. We propose micro/mesocosm experiments with spiked sediment, to study colonisation success by benthic organisms, as a Tier-3 option. Ecological effect models can be used to supplement the experimental tiers. A strategy for unifying information from various tiers by experimental work and exposure- and effect modelling is provided.
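
    A minimal sketch of the SSD step mentioned for Tier-2, fitting a log-normal distribution to hypothetical chronic toxicity values for benthic species and deriving the HC5 (the concentration expected to protect 95% of species):

        import numpy as np
        from scipy import stats

        # Hypothetical chronic NOEC/EC10 values for benthic species (mg/kg dry sediment),
        # used only to illustrate the Species Sensitivity Distribution fit.
        noec = np.array([0.8, 1.2, 2.5, 3.1, 4.0, 6.3, 9.5, 15.0])

        shape, loc, scale = stats.lognorm.fit(noec, floc=0)   # fix the location at zero
        hc5 = stats.lognorm.ppf(0.05, shape, loc=loc, scale=scale)
        print(f"HC5 = {hc5:.2f} mg/kg dry sediment")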

  2. Nuclear Energy Infrastructure Database Description and User's Manual

    International Nuclear Information System (INIS)

    Heidrich, Brenden

    2015-01-01

    In 2014, the Deputy Assistant Secretary for Science and Technology Innovation initiated the Nuclear Energy (NE)–Infrastructure Management Project by tasking the Nuclear Science User Facilities, formerly the Advanced Test Reactor National Scientific User Facility, to create a searchable and interactive database of all pertinent NE-supported and -related infrastructure. This database, known as the Nuclear Energy Infrastructure Database (NEID), is used for analyses to establish needs, redundancies, efficiencies, distributions, etc., to best understand the utility of NE's infrastructure and inform the content of infrastructure calls. The Nuclear Science User Facilities developed the database by utilizing data and policy direction from a variety of reports from the U.S. Department of Energy, the National Research Council, the International Atomic Energy Agency, and various other federal and civilian resources. The NEID currently contains data on 802 research and development instruments housed in 377 facilities at 84 institutions in the United States and abroad. The effort to maintain and expand the database is ongoing. Detailed information on many facilities must be gathered from associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements. This document provides a short tutorial on the navigation of the NEID web portal at NSUF-Infrastructure.INL.gov.

  3. Concept of a spatial data infrastructure for web-mapping, processing and service provision for geo-hazards

    Science.gov (United States)

    Weinke, Elisabeth; Hölbling, Daniel; Albrecht, Florian; Friedl, Barbara

    2017-04-01

    for the possibility of rapid mapping. The server tier consists of Java-based web and GIS servers. Sub-services and main services are part of the service tier. Sub-services are, for example, map services, feature editing services, geometry services, geoprocessing services and metadata services. For (meta)data provision and to support data interoperability, web standards of the OGC and a REST interface are used. Four central main services are designed and developed: (1) a mapping service (including image segmentation and classification approaches), (2) a monitoring service to monitor changes over time, (3) a validation service to analyze landslide delineations from different sources and (4) an infrastructure service to identify affected landslides. The main services use and combine parts of the sub-services. Furthermore, a series of client applications based on new technology standards makes use of the data and services offered by the spatial data infrastructure. Next steps include extending the current spatial data infrastructure to other areas and geo-hazard types, to develop a spatial data infrastructure that can assist targeted mapping and monitoring of geo-hazards in a global context.
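
    A minimal sketch of querying such an OGC-standard map service, issuing a WMS GetCapabilities request against a hypothetical endpoint and listing the layers it advertises:

        import requests
        import xml.etree.ElementTree as ET

        # Placeholder endpoint for a landslide-mapping WMS; parameters follow the OGC WMS 1.3.0 spec.
        endpoint = "https://geohazards.example.org/wms"
        params = {"service": "WMS", "version": "1.3.0", "request": "GetCapabilities"}

        resp = requests.get(endpoint, params=params, timeout=30)
        resp.raise_for_status()

        root = ET.fromstring(resp.content)
        ns = {"wms": "http://www.opengis.net/wms"}
        for name in root.findall(".//wms:Layer/wms:Name", ns):
            print(name.text)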

  4. Large scale and low latency analysis facilities for the CMS experiment: development and operational aspects

    CERN Document Server

    Riahi, Hassen

    2010-01-01

    While a majority of CMS data analysis activities rely on the distributed computing infrastructure on the WLCG Grid, dedicated local computing facilities have been deployed to address particular requirements in terms of latency and scale. The CMS CERN Analysis Facility (CAF) was primarily designed to host a large variety of latency-critical workflows. These break down into alignment and calibration, detector commissioning and diagnosis, and high-interest physics analysis requiring fast turnaround. In order to reach the goal for fast turnaround tasks, the Workload Management group has designed a CRABServer-based system to fit two main needs: to provide a simple, familiar interface to the user (as used in the CRAB Analysis Tool[7]) and to allow an easy transition to the Tier-0 system. While the CRABServer component had been initially designed for Grid analysis by CMS end-users, with a few modifications it turned out to also be a very powerful service for managing and monitoring local submissions on the CAF. Tran...

  5. Storage management solutions and performance tests at the INFN Tier-1

    International Nuclear Information System (INIS)

    Bencivenni, M; Carbone, A; Chierici, A; D'Apice, A; Girolamo, D D; Dell'Agnello, L; Donatelli, M; Fella, A; Forti, A; Ghiselli, A; Italiano, A; Re, G L; Magnoni, L; Martelli, B; Mazzucato, M; Donvito, G; Furano, F; Marconi, U; Galli, D; Lanciotti, E

    2008-01-01

    Performance, reliability and scalability in data access are key issues in the context of HEP data processing and analysis applications. In this paper we present the results of a large scale performance measurement performed at the INFN-CNAF Tier-1, employing some storage solutions presently available for HEP computing, namely CASTOR, GPFS, Scalla/Xrootd and dCache. The storage infrastructure was based on Fibre Channel systems organized in a Storage Area Network, providing 260 TB of total disk space, and 24 disk servers connected to the computing farm (280 worker nodes) via Gigabit LAN. We also describe the deployment of a StoRM SRM instance at CNAF, configured to manage a GPFS file system, presenting and discussing its performances

  6. Towards Monitoring-as-a-service for Scientific Computing Cloud applications using the ElasticSearch ecosystem

    CERN Document Server

    Bagnasco, S; Guarise, A; Lusso, S; Masera, M; Vallero, S

    2015-01-01

    The INFN computing centre in Torino hosts a private Cloud, which is managed with the OpenNebula cloud controller. The infrastructure offers Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) services to different scientific computing applications. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at LHC, an interactive analysis facility for the same experiment and a grid Tier-2 site for the BESIII collaboration, plus an increasing number of other small tenants. The dynamic allocation of resources to tenants is partially automated. This feature requires detailed monitoring and accounting of the resource usage. We set up a monitoring framework to inspect the site activities both in terms of IaaS and applications running on the hosted virtual instances. For this purpose we used the ElasticSearch, Logstash and Kibana (ELK) stack. The infrastructure relies on a MySQL database back-end for data preservation and to ensure flexibility to choose a different monit...
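
    To illustrate the kind of query such an ELK-based monitoring framework supports, here is a minimal sketch that asks Elasticsearch for average per-tenant CPU usage over the last hour. The endpoint, index pattern and field names are assumptions made for illustration, not the Torino site's actual schema.

        # Illustrative only: query an Elasticsearch index (as in an ELK stack)
        # for recent per-tenant CPU usage. Index and field names are hypothetical.
        import json
        import requests

        ES_URL = "http://localhost:9200"       # assumed Elasticsearch endpoint
        INDEX = "cloud-monitoring-*"           # hypothetical index pattern

        query = {
            "size": 0,
            "query": {"range": {"@timestamp": {"gte": "now-1h"}}},
            "aggs": {
                "by_tenant": {
                    "terms": {"field": "tenant.keyword"},
                    "aggs": {"avg_cpu": {"avg": {"field": "cpu_usage"}}},
                }
            },
        }

        resp = requests.post(f"{ES_URL}/{INDEX}/_search",
                             headers={"Content-Type": "application/json"},
                             data=json.dumps(query), timeout=30)
        resp.raise_for_status()
        for bucket in resp.json()["aggregations"]["by_tenant"]["buckets"]:
            print(bucket["key"], round(bucket["avg_cpu"]["value"], 2))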

  7. The 3D Elevation Program and America's infrastructure

    Science.gov (United States)

    Lukas, Vicki; Carswell, Jr., William J.

    2016-11-07

    Infrastructure—the physical framework of transportation, energy, communications, water supply, and other systems—and construction management—the overall planning, coordination, and control of a project from beginning to end—are critical to the Nation’s prosperity. The American Society of Civil Engineers has warned that, despite the importance of the Nation’s infrastructure, it is in fair to poor condition and needs sizable and urgent investments to maintain and modernize it, and to ensure that it is sustainable and resilient. Three-dimensional (3D) light detection and ranging (lidar) elevation data provide valuable productivity, safety, and cost-saving benefits to infrastructure improvement projects and associated construction management. By providing data to users, the 3D Elevation Program (3DEP) of the U.S. Geological Survey reduces users’ costs and risks and allows them to concentrate on their mission objectives. 3DEP includes (1) data acquisition partnerships that leverage funding, (2) contracts with experienced private mapping firms, (3) technical expertise, lidar data standards, and specifications, and (4) most important, public access to high-quality 3D elevation data. The size and breadth of improvements for the Nation’s infrastructure and construction management needs call for an efficient, systematic approach to acquiring foundational 3D elevation data. The 3DEP approach to national data coverage will yield large cost savings over individual project-by-project acquisitions and will ensure that data are accessible for other critical applications.

  8. 1995 Tier Two emergency and hazardous chemical inventory. Emergency Planning and Community Right-To-Know Act, Section 312

    International Nuclear Information System (INIS)

    1996-03-01

    Tier Two reports are required as part of the Superfund compliance. The purpose is to provide state and local officials and the public with specific information on hazardous chemicals present at a facility during the past year. The facility is required to provide specific information on description, hazards, amounts, and locations of all hazardous materials. This report compiled such information for the Hanford Reservation

  9. Tiered gasoline pricing: A personal carbon trading perspective

    International Nuclear Information System (INIS)

    Li, Yao; Fan, Jin; Zhao, Dingtao; Wu, Yanrui; Li, Jun

    2016-01-01

    This paper proffers a tiered gasoline pricing method from a personal carbon trading perspective. An optimization model of personal carbon trading is proposed, and then, an equilibrium carbon price is derived according to the market clearing condition. Based on the derived equilibrium carbon price, this paper proposes a calculation method of tiered gasoline pricing. Then, sensitivity analyses and consumers' surplus analyses are conducted. It can be shown that a rise in gasoline price or a more generous allowance allocation would incur a decrease in the equilibrium carbon price, making the first tiered price higher, but the second tiered price lower. It is further verified that the proposed tiered pricing method is progressive because it would relieve the pressure of the low-income groups who consume less gasoline while imposing a greater burden on the high-income groups who consume more gasoline. Based on these results, implications, limitations and suggestions for future studies are provided. - Highlights: • Tiered gasoline pricing is calculated from the perspective of PCT. • Consumers would be burdened with different actual gasoline costs. • A specific example is provided to illustrate the calculation of TGP. • The tiered pricing mechanism is a progressive system.

  10. Four Tiers

    Science.gov (United States)

    Moodie, Gavin

    2009-01-01

    This paper posits a classification of tertiary education institutions into four tiers: world research universities, selecting universities, recruiting universities, and vocational institutes. The distinguishing characteristic of world research universities is their research strength, the distinguishing characteristic of selecting universities is…

  11. A prototype Infrastructure for Cloud-based distributed services in High Availability over WAN

    International Nuclear Information System (INIS)

    Bulfon, C.; De Salvo, A.; Graziosi, C.; Carlino, G.; Doria, A.; Pardi, S.; Sanchez, A.; Carboni, M.; Bolletta, P.; Puccio, L.; Capone, V.; Merola, L.

    2015-01-01

    In this work we present the architectural and performance studies concerning a prototype of a distributed Tier2 infrastructure for HEP, instantiated between the two Italian sites of INFN-Roma1 and INFN-Napoli. The network infrastructure is based on a Layer-2 geographical link, provided by the Italian NREN (GARR), directly connecting the two remote LANs of the named sites. By exploiting the possibilities offered by the new distributed file systems, a shared storage area with synchronous copy has been set up. The computing infrastructure, based on an OpenStack facility, uses a set of distributed hypervisors installed in both sites. The main parameter to be taken into account when managing two remote sites with a single framework is the effect of the latency, due to the distance and the end-to-end service overhead. In order to understand the capabilities and limits of our setup, the impact of latency has been investigated by means of a set of stress tests, including data I/O throughput, metadata access performance evaluation and network occupancy, during the life cycle of a Virtual Machine. A set of resilience tests has also been performed, in order to verify the stability of the system in the event of hardware or software faults. The results of this work show that the reliability and robustness of the chosen architecture are effective enough to build a production system and to provide common services. This prototype can also be extended to multiple sites with small changes of the network topology, thus creating a National Network of Cloud-based distributed services, in HA over WAN. (paper)

  12. A prototype Infrastructure for Cloud-based distributed services in High Availability over WAN

    Science.gov (United States)

    Bulfon, C.; Carlino, G.; De Salvo, A.; Doria, A.; Graziosi, C.; Pardi, S.; Sanchez, A.; Carboni, M.; Bolletta, P.; Puccio, L.; Capone, V.; Merola, L.

    2015-12-01

    In this work we present the architectural and performance studies concerning a prototype of a distributed Tier2 infrastructure for HEP, instantiated between the two Italian sites of INFN-Roma1 and INFN-Napoli. The network infrastructure is based on a Layer-2 geographical link, provided by the Italian NREN (GARR), directly connecting the two remote LANs of the named sites. By exploiting the possibilities offered by the new distributed file systems, a shared storage area with synchronous copy has been set up. The computing infrastructure, based on an OpenStack facility, uses a set of distributed hypervisors installed in both sites. The main parameter to be taken into account when managing two remote sites with a single framework is the effect of the latency, due to the distance and the end-to-end service overhead. In order to understand the capabilities and limits of our setup, the impact of latency has been investigated by means of a set of stress tests, including data I/O throughput, metadata access performance evaluation and network occupancy, during the life cycle of a Virtual Machine. A set of resilience tests has also been performed, in order to verify the stability of the system in the event of hardware or software faults. The results of this work show that the reliability and robustness of the chosen architecture are effective enough to build a production system and to provide common services. This prototype can also be extended to multiple sites with small changes of the network topology, thus creating a National Network of Cloud-based distributed services, in HA over WAN.
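
    As a rough illustration of the stress tests mentioned above, the sketch below times a sequential write and read on a shared storage mount and reports throughput. The mount path and sizes are placeholders; a real campaign would also exercise metadata rates, concurrency, cache dropping and VM life-cycle operations.

        # Simplified I/O probe: time a sequential write and read on a
        # (hypothetical) shared-filesystem mount. A real test would drop the
        # page cache between the write and read phases.
        import os
        import time

        MOUNT = "/shared-storage/test"     # hypothetical distributed-FS mount
        SIZE_MB = 256
        CHUNK = b"\0" * (1024 * 1024)

        def timed_io(path, size_mb):
            t0 = time.time()
            with open(path, "wb") as f:
                for _ in range(size_mb):
                    f.write(CHUNK)
                f.flush()
                os.fsync(f.fileno())
            write_s = time.time() - t0

            t0 = time.time()
            with open(path, "rb") as f:
                while f.read(1024 * 1024):
                    pass
            read_s = time.time() - t0
            return size_mb / write_s, size_mb / read_s

        if __name__ == "__main__":
            os.makedirs(MOUNT, exist_ok=True)
            w, r = timed_io(os.path.join(MOUNT, "probe.dat"), SIZE_MB)
            print(f"write {w:.1f} MB/s, read {r:.1f} MB/s")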

  13. Technology Tiers

    DEFF Research Database (Denmark)

    Karlsson, Christer

    2015-01-01

    A technology tier is a level in a product system: final product, system, subsystem, component, or part. As a concept, it contrasts traditional “vertical” special technologies (for example, mechanics and electronics) and focuses “horizontal” feature technologies such as product characteristics...

  14. Storage resources management at the INFN-CNAF Tier-1

    International Nuclear Information System (INIS)

    Ricci, P.P.; Lo Re, G.; Vagnoni, V.

    2006-01-01

    At present we have 2 main mass storage systems for archiving HEP experimental data at the INFN-CNAF Tier-1: a HSM software system (CASTOR) and about 250 TB of different storage devices over SAN. This paper briefly describes our hardware and software environment, and summarizes the technical solutions adopted in order to obtain better availability and high data throughput from the front-end disk servers. In fact, our computing resources, consisting of farms of dual processor nodes (currently about 1000 nodes providing 1300 KspecInt2000), need to access the data through a fast and reliable I/O infrastructure. A valid solution for achieving large I/O throughputs is nowadays provided by parallel file systems. In the last part of this paper some results of detailed tests we performed with GPFS and Lustre over SAN are reported

  15. Commissioning the CERN IT Agile Infrastructure with experiment workloads

    CERN Document Server

    Medrano Llamas, Ramón; Kucharczyk, Katarzyna; Denis, Marek Kamil; Cinquilli, Mattia

    2014-01-01

    In order to ease the management of their infrastructure, most of the WLCG sites are adopting cloud based strategies. CERN, the Tier-0 of the WLCG, is completely restructuring the resource and configuration management of its computing centre under the codename Agile Infrastructure. Its goal is to manage 15,000 Virtual Machines by means of an OpenStack middleware in order to unify all the resources in CERN's two datacenters: the one located in Meyrin and the new one in Wigner, Hungary. During the commissioning of this infrastructure, CERN IT is offering an attractive amount of computing resources to the experiments (800 cores for ATLAS and CMS) through a private cloud interface. ATLAS and CMS have joined forces to exploit them by running stress tests and simulation workloads since November 2012. This work will describe the experience of the first deployments of the current experiment workloads on the CERN private cloud testbed. The paper is organized as follows: the first section will explain th...
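
    For readers unfamiliar with OpenStack provisioning, the hedged sketch below shows how a small batch of worker VMs might be launched through the openstacksdk Python client. The cloud name, image, flavor and network names are hypothetical; the CERN deployment is of course driven by its own tooling rather than this snippet.

        # Sketch of launching worker VMs on an OpenStack private cloud with
        # openstacksdk. All resource names below are hypothetical placeholders.
        import openstack

        conn = openstack.connect(cloud="private-cloud")   # reads clouds.yaml

        image = conn.compute.find_image("SLC6-worker")         # hypothetical image
        flavor = conn.compute.find_flavor("m1.large")
        network = conn.network.find_network("experiment-net")  # hypothetical network

        for i in range(4):  # launch a small batch of identical workers
            server = conn.compute.create_server(
                name=f"sim-worker-{i:02d}",
                image_id=image.id,
                flavor_id=flavor.id,
                networks=[{"uuid": network.id}],
            )
            conn.compute.wait_for_server(server)
            print(server.name, "is", server.status)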

  16. Long-Baseline Neutrino Facility (LBNF) and Deep Underground Neutrino Experiment (DUNE): Conceptual Design Report. Volume 3: Long-Baseline Neutrino Facility for DUNE

    Energy Technology Data Exchange (ETDEWEB)

    Strait, James [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); McCluskey, Elaine [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Lundin, Tracy [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Willhite, Joshua [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Hamernik, Thomas [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Papadimitriou, Vaia [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Marchionni, Alberto [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Kim, Min Jeong [National Inst. of Nuclear Physics (INFN), Frascati (Italy). National Lab. of Frascati (INFN-LNF); Nessi, Marzio [Univ. of Geneva (Switzerland); Montanari, David [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Heavey, Anne [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States)

    2016-01-21

    This volume of the LBNF/DUNE Conceptual Design Report covers the Long-Baseline Neutrino Facility for DUNE and describes the LBNF Project, which includes design and construction of the beamline at Fermilab, the conventional facilities at both Fermilab and SURF, and the cryostat and cryogenics infrastructure required for the DUNE far detector.

  17. Space and Ground-Based Infrastructures

    Science.gov (United States)

    Weems, Jon; Zell, Martin

    This chapter deals first with the main characteristics of the space environment, outside and inside a spacecraft. Then the space and space-related (ground-based) infrastructures are described. The most important infrastructure is the International Space Station, which holds many European facilities (for instance the European Columbus Laboratory). Some of them, such as the Columbus External Payload Facility, are located outside the ISS to benefit from external space conditions. There is only one other example of orbital platforms, the Russian Foton/Bion Recoverable Orbital Capsule. In contrast, non-orbital weightless research platforms, although limited in experimental time, are more numerous: sounding rockets, parabolic flight aircraft, drop towers and high-altitude balloons. In addition to these facilities, there are a number of ground-based facilities and space simulators, for both life sciences (for instance: bed rest, clinostats) and physical sciences (for instance: magnetic compensation of gravity). Hypergravity can also be provided by human and non-human centrifuges.

  18. Identifying tier one key suppliers.

    Science.gov (United States)

    Wicks, Steve

    2013-01-01

    In today's global marketplace, businesses are becoming increasingly reliant on suppliers for the provision of key processes, activities, products and services in support of their strategic business goals. The result is that now, more than ever, the failure of a key supplier has potential to damage reputation, productivity, compliance and financial performance seriously. Yet despite this, there is no recognised standard or guidance for identifying a tier one key supplier base and, up to now, there has been little or no research on how to do so effectively. This paper outlines the key findings of a BCI-sponsored research project to investigate good practice in identifying tier one key suppliers, and suggests a scalable framework process model and risk matrix tool to help businesses effectively identify their tier one key supplier base.

  19. Research infrastructures of pan-European interest: The EU and Global issues

    Science.gov (United States)

    Pero, Hervé

    2011-01-01

    Research Infrastructures act as “knowledge industries” for the society and as a source of attraction for world scientists. At European level, the long-term objective is to support an efficient and world-class eco-system of Research Infrastructures, encompassing not only the large single-site facilities but also distributed research infrastructures, based on a network of “regional partner facilities”, with strong links with world-class universities and centres of excellence. The EC support activities help to promote the development of this fabric of research infrastructures of the highest quality and performance in Europe. Since 2002, ESFRI has also aimed at supporting a coherent approach to policy-making on research infrastructures. The European Roadmap for Research Infrastructures is ESFRI's most significant achievement to date, and KM3Net is one of its identified projects. The current Community support to the Preparatory Phase of this project aims at solving mainly governance, financial, organisational and legal issues. How should KM3Net help contribute to an efficient Research Infrastructure eco-system? This is the question the KM3Net stakeholders need to be able to answer very soon!

  20. Welcome to NNIN | National Nanotechnology Infrastructure Network

    Science.gov (United States)

    National Nanotechnology Infrastructure Network: Serving Nanoscale Science, Engineering & Technology. NNIN facilities feature over 1100 modern nanotechnology instruments, such as these Reactive Ion Etch systems at the

  1. 26 CFR 1.1446-5 - Tiered partnership structures.

    Science.gov (United States)

    2010-04-01

    ... defined in § 1.1446-4(b)(1)). (2) Lower-tier publicly traded partnership. The look through rules of... § 1.1446-5 Tiered partnership structures. (a) In general. The rules of this section...

  2. Exercising CMS dataflows and workflows in computing challenges at the Spanish Tier-1 and Tier-2 sites

    International Nuclear Information System (INIS)

    Caballero, J; Colino, N; Peris, A D; G-Abia, P; Hernandez, J M; R-Calonge, F J; Cabrillo, I; Caballero, I G; Marco, R; Matorras, F; Flix, J; Merino, G

    2008-01-01

    An overview of the data transfer, processing and analysis operations conducted at the Spanish Tier-1 (PIC, Barcelona) and Tier-2 (CIEMAT-Madrid and IFCA-Santander federation) centres during the past CMS CSA06 Computing, Software and Analysis challenge and in preparation for CSA07 is presented

  3. Data Centre Infrastructure & Data Storage @ Facebook

    CERN Multimedia

    CERN. Geneva; Garson, Matt; Kauffman, Mike

    2018-01-01

    Several speakers from the Facebook company will present their take on the infrastructure of their Data Center and Storage facilities, as follows: 10:00 - Facebook Data Center Infrastructure, by Delfina Eberly, Mike Kauffman and Veerendra Mulay Insight into how Facebook thinks about data center design, including electrical and cooling systems, and the technology and tooling used to manage data centers. 11:00 - Storage at Facebook, by Matt Garson An overview of Facebook infrastructure, focusing on different storage systems, in particular photo/video storage and storage for data analytics. About the speakers Mike Kauffman, Director, Data Center Site Engineering Delfina Eberly, Infrastructure, Site Services Matt Garson, Storage at Facebook Veerendra Mulay, Infrastructure

  4. Exercising CMS dataflows and workflows in computing challenges at the Spanish Tier-1 and Tier-2 sites

    Energy Technology Data Exchange (ETDEWEB)

    Caballero, J; Colino, N; Peris, A D; G-Abia, P; Hernandez, J M; R-Calonge, F J [CIEMAT, Madrid (Spain); Cabrillo, I; Caballero, I G; Marco, R; Matorras, F [IFCA, Santander (Spain); Flix, J; Merino, G [PIC, Barcelona (Spain)], E-mail: jose.hernandez@ciemat.es

    2008-07-15

    An overview of the data transfer, processing and analysis operations conducted at the Spanish Tier-1 (PIC, Barcelona) and Tier-2 (CIEMAT-Madrid and IFCA-Santander federation) centres during the past CMS CSA06 Computing, Software and Analysis challenge and in preparation for CSA07 is presented.

  5. US LHCNet: Transatlantic Networking for the LHC and the U.S. HEP Community

    Energy Technology Data Exchange (ETDEWEB)

    Newman, Harvey B; Barczyk, Artur J

    2013-04-05

    US LHCNet provides the transatlantic connectivity between the Tier1 computing facilities at the Fermilab and Brookhaven National Labs and the Tier0 and Tier1 facilities at CERN, as well as Tier1s elsewhere in Europe and Asia. Together with ESnet, Internet2, and other R&E Networks participating in the LHCONE initiative, US LHCNet also supports transatlantic connections between the Tier2 centers (where most of the data analysis is taking place) and the Tier1s as needed. Given the key roles of the US and European Tier1 centers as well as Tier2 centers on both continents, the largest data flows are across the Atlantic, where US LHCNet has the major role. US LHCNet manages and operates the transatlantic network infrastructure including four Points of Presence (PoPs) and currently six transatlantic OC-192 (10Gbps) leased links. Operating at the optical layer, the network provides a highly resilient fabric for data movement, with a target service availability level in excess of 99.95%. This level of resilience and seamless operation is achieved through careful design including path diversity on both submarine and terrestrial segments, use of carrier-grade equipment with built-in high-availability and redundancy features, deployment of robust failover mechanisms based on SONET protection schemes, as well as the design of facility-diverse paths between the LHC computing sites. The US LHCNet network provides services at Layer 1 (optical), Layer 2 (Ethernet) and Layer 3 (IPv4 and IPv6). The flexible design of the network, including modular equipment, a talented and agile team, and flexible circuit lease management, allows US LHCNet to react quickly to changing requirements from the LHC community. Network capacity is provisioned just-in-time to meet the needs, as demonstrated in the past years during the changing LHC start-up plans.

  6. Developing a systems framework for sustainable infrastructure technologies (SIT) in the built environment focussing on health facilities: A case for Cape Town

    CSIR Research Space (South Africa)

    Saidi, M

    2007-05-01

    The objective of the study is to develop a systems framework for the implementation and management of sustainable infrastructure technologies in the built environment, with a specific focus on health facilities. It looks at the global trends and drivers...

  7. Cost-Benefit Analysis of Green Infrastructures on Community Stormwater Reduction and Utilization: A Case of Beijing, China.

    Science.gov (United States)

    Liu, Wen; Chen, Weiping; Feng, Qi; Peng, Chi; Kang, Peng

    2016-12-01

    Cost-benefit analysis is needed to guide the planning, design and construction of green infrastructure practices in rapidly urbanized regions. We developed a framework to calculate the costs and benefits of different green infrastructures for stormwater reduction and utilization. A typical community of 54,783 m² in Beijing was selected for a case study. For the four designed green infrastructure scenarios (green space depression, porous brick pavement, storage pond, and their combination), the average annual costs of the green infrastructure facilities range from 40.54 to 110.31 thousand yuan, and the average cost per m³ of stormwater reduction and utilization is 4.61 yuan. The total average annual benefits of stormwater reduction and utilization by the community's green infrastructures range from 63.24 to 250.15 thousand yuan, and the benefit per m³ of stormwater reduction and utilization ranges from 5.78 to 11.14 yuan. The average ratio of average annual benefit to cost across the four green infrastructure facilities is 1.91. The integrated facilities had the highest economic feasibility, with a benefit-to-cost ratio of 2.27, followed by the storage pond with a ratio of 2.14. The results suggest that while stormwater reduction and utilization by green infrastructures carry higher construction and maintenance costs, their comprehensive benefits, including source water replacement benefits, environmental benefits and avoided cost benefits, are potentially significant. Green infrastructure practices should be promoted for sustainable management of urban stormwater.
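
    The benefit-to-cost figures quoted above follow from simple ratio arithmetic. The short sketch below reproduces the calculation with made-up value pairs lying within the reported ranges; these are not the paper's actual per-scenario data.

        # Illustrative arithmetic: benefit-to-cost ratio = average annual
        # benefit / average annual cost. The paired values are hypothetical
        # examples within the ranges quoted in the abstract.
        def benefit_cost_ratio(annual_benefit_kyuan, annual_cost_kyuan):
            return annual_benefit_kyuan / annual_cost_kyuan

        examples = {
            "scenario A (hypothetical)": (120.0, 55.0),
            "scenario B (hypothetical)": (250.15, 110.31),
        }
        for name, (benefit, cost) in examples.items():
            print(f"{name}: B/C = {benefit_cost_ratio(benefit, cost):.2f}")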

  8. Achieving Tier 4 Emissions in Biomass Cookstoves

    Energy Technology Data Exchange (ETDEWEB)

    Marchese, Anthony [Colorado State Univ., Fort Collins, CO (United States); DeFoort, Morgan [Colorado State Univ., Fort Collins, CO (United States); Gao, Xinfeng [Colorado State Univ., Fort Collins, CO (United States); Tryner, Jessica [Colorado State Univ., Fort Collins, CO (United States); Dryer, Frederick L. [Princeton Univ., Princeton, NJ (United States); Haas, Francis [Princeton Univ., Princeton, NJ (United States); Lorenz, Nathan [Envirofit International, Fort Collins, CO (United States)

    2018-03-13

    Previous literature on top-lit updraft (TLUD) gasifier cookstoves suggested that these stoves have the potential to be the lowest-emitting biomass cookstoves. However, the previous literature also demonstrated a high degree of variability in TLUD emissions and performance, and a lack of general understanding of the TLUD combustion process. The objective of this study was to improve understanding of the combustion process in TLUD cookstoves. In a TLUD, biomass is gasified and the resulting producer gas is burned in a secondary flame located just above the fuel bed. The goal of this project is to enable the design of a more robust TLUD that consistently meets Tier 4 performance targets through a better understanding of the underlying combustion physics. The project featured a combined modeling, experimental and product design/development effort comprised of four different activities: Development of a model of the gasification process in the biomass fuel bed; Development of a CFD model of the secondary combustion zone; Experiments with a modular TLUD test bed to provide information on how stove design, fuel properties, and operating mode influence performance and provide data needed to validate the fuel bed model; Planar laser-induced fluorescence (PLIF) experiments with a two-dimensional optical test bed to provide insight into the flame dynamics in the secondary combustion zone and data to validate the CFD model; Design, development and field testing of a market-ready TLUD prototype. Over 180 tests of 40 different configurations of the modular TLUD test bed were performed to demonstrate how stove design, fuel properties and operating mode influence performance, and the conditions under which Tier 4 emissions are obtainable. Images of OH and acetone PLIF were collected at 10 kHz with the optical test bed. The modeling and experimental results informed the design of a TLUD prototype that met Tier 3 to Tier 4 specifications in emissions and Tier 2 in efficiency. The

  9. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, J; Sartirana, A

    2001-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on thei...

  10. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, Jose

    2010-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on the...

  11. THE INFLUENCE OF TOURIST INFRASTRUCTURE ON THE TOURIST SATISFACTION IN OHRID

    Directory of Open Access Journals (Sweden)

    Daliborka Blazeska

    2018-06-01

    The purpose of the paper is to stress the importance of continuous improvement of tourism infrastructure in advancing tourist satisfaction at a destination. It is an empirical study of the influence of a destination's tourism infrastructure on tourist satisfaction in the city of Ohrid in the Republic of Macedonia. Tourism infrastructure is the range of facilities and institutions constituting the material and organizational basis for tourism development. It comprises four basic elements: accommodation facilities, gastronomy facilities, accompanying facilities and communication facilities. Policies are needed to improve infrastructure, promote the integration of tourist services, maintain visitor numbers and encourage guests to stay longer, visit additional locations and increase their spending. Ohrid is a famous tourist destination in the Republic of Macedonia. Although it holds many historical and cultural treasures, it is best known for Lake Ohrid. The city has strong attractive factors, namely natural and cultural monuments that draw tourists. The subject of this paper is the tourism infrastructure of Ohrid, its current status and its prospects for attracting more foreign and domestic tourists. The city of Ohrid, in cooperation with the government of the Republic of Macedonia, should continuously improve the destination's tourism infrastructure. The paper presents action research conducted on a sample of 200 foreign visitors in Ohrid in the period from 1 July to 1 August 2017. Tourist infrastructure has a strong influence on tourist satisfaction with a destination, and the local municipality of Ohrid, together with the government of the Republic of Macedonia, should continuously develop it.

  12. Transportation Infrastructure Robustness : Joint Engineering and Economic Analysis

    Science.gov (United States)

    2017-11-01

    The objectives of this study are to develop a methodology for assessing the robustness of transportation infrastructure facilities and to assess the effect of damage to such facilities on travel demand and on the welfare of the facilities' users. The robustness...

  13. The effect of a three-tier formulary on antidepressant utilization and expenditures.

    Science.gov (United States)

    Hodgkin, Dominic; Parks Thomas, Cindy; Simoni-Wastila, Linda; Ritter, Grant A; Lee, Sue

    2008-06-01

    Health plans in the United States are struggling to contain rapid growth in their spending on medications. They have responded by implementing multi-tiered formularies, which label certain brand medications 'non-preferred' and require higher patient copayments for those medications. This multi-tier policy relies on patients' willingness to switch medications in response to copayment differentials. The antidepressant class has certain characteristics that may pose problems for implementation of three-tier formularies, such as differences in which medication works for which patient, and high rates of medication discontinuation. To measure the effect of a three-tier formulary on antidepressant utilization and spending, including decomposing spending allocations between patient and plan. We use claims and eligibility files for a large, mature nonprofit managed care organization that started introducing its three-tier formulary on January 1, 2000, with a staggered implementation across employer groups. The sample includes 109,686 individuals who were continuously enrolled members during the study period. We use a pretest-posttest quasi-experimental design that includes a comparison group, comprising members whose employer had not adopted three-tier as of March 1, 2000. This permits some control for potentially confounding changes that could have coincided with three-tier implementation. For the antidepressants that became nonpreferred, prescriptions per enrollee decreased 11% in the three-tier group and increased 5% in the comparison group. The own-copay elasticity of demand for nonpreferred drugs can be approximated as -0.11. Difference-in-differences regression finds that the three-tier formulary slowed the growth in the probability of using antidepressants in the post-period, which was 0.3 percentage points lower than it would have been without three-tier. The three-tier formulary also increased out-of-pocket payments while reducing plan payments and total spending
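
    For reference, the own-copay elasticity quoted above follows the usual percentage-change definition. The expression below is the generic arc form, not the authors' difference-in-differences specification, which is not given in the abstract.

        % Generic own-price (own-copay) arc elasticity; -0.11 is the value
        % quoted above for nonpreferred antidepressants.
        \varepsilon = \frac{\%\Delta Q}{\%\Delta P}
                    = \frac{(Q_1 - Q_0)/Q_0}{(P_1 - P_0)/P_0} \approx -0.11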

  14. MFC Communications Infrastructure Study

    Energy Technology Data Exchange (ETDEWEB)

    Michael Cannon; Terry Barney; Gary Cook; George Danklefsen, Jr.; Paul Fairbourn; Susan Gihring; Lisa Stearns

    2012-01-01

    Unprecedented growth in required telecommunications services and applications is changing the way the INL does business today. High speed connectivity combined with a high demand for telephony and network services requires a robust communications infrastructure. The current state of the MFC communication infrastructure limits growth opportunities for current and future communication infrastructure services. This limitation is largely due to equipment capacity issues, an aging cabling infrastructure (external/internal fiber and copper cable) and inadequate space for telecommunication equipment. While some communication infrastructure improvements have been implemented through past projects, they were completed without a clear overall plan or technology standard. This document identifies critical deficiencies in the current state of the communication infrastructure in operation at the MFC facilities and provides an analysis of the needs and deficiencies to be addressed in order to achieve the target architectural standards defined in STD-170. The intent of STD-170 is to provide a robust, flexible, long-term solution that aligns communications capabilities with the INL mission and fits the various programmatic growth and expansion needs.

  15. 38 CFR 36.4318 - Servicer tier ranking-temporary procedures.

    Science.gov (United States)

    2010-07-01

    ... § 36.4318 Servicer tier ranking—temporary procedures. (a) The Secretary shall assign to each servicer a “Tier Ranking” based upon the servicer's performance in servicing guaranteed loans. There shall be four...

  16. INFN-Pisa scientific computation environment (GRID, HPC and Interactive Analysis)

    International Nuclear Information System (INIS)

    Arezzini, S; Carboni, A; Caruso, G; Ciampa, A; Coscetti, S; Mazzoni, E; Piras, S

    2014-01-01

    The INFN-Pisa Tier2 infrastructure is described, optimized not only for GRID CPU and Storage access, but also for a more interactive use of the resources in order to provide good solutions for the final data analysis step. The Data Center, equipped with about 6700 production cores, permits the use of modern analysis techniques realized via advanced statistical tools (like RooFit and RooStat) implemented on multicore systems. In particular, POSIX file storage access integrated with standard SRM access is provided. The unified storage infrastructure, based on GPFS and Xrootd and used both as the SRM data repository and for interactive POSIX access, is therefore described. Such a common infrastructure gives users transparent access to the Tier2 data for their interactive analysis. The organization of a specialized many-core CPU facility devoted to interactive analysis is also described, along with the login mechanism integrated with the INFN-AAI (National INFN Infrastructure) to extend site access and use to a geographically distributed community. This infrastructure is also used as a national computing facility for the INFN theoretical community, enabling a synergetic use of computing and storage resources. Our centre, initially developed for the HEP community, is now growing and also includes fully integrated HPC resources. In recent years a cluster facility (1000 cores, parallel use via InfiniBand connection) has been installed and managed, and we are now updating this facility to provide resources for all the intermediate-level HPC computing needs of the INFN theoretical national community.

  17. Operational experience with CMS Tier-2 sites

    International Nuclear Information System (INIS)

    Gonzalez Caballero, I

    2010-01-01

    In the CMS computing model, more than one third of the computing resources are located at Tier-2 sites, which are distributed across the countries in the collaboration. These sites are the primary platform for user analyses; they host datasets that are created at Tier-1 sites, and users from all CMS institutes submit analysis jobs that run on those data through grid interfaces. They are also the primary resource for the production of large simulation samples for general use in the experiment. As a result, Tier-2 sites have an interesting mix of organized experiment-controlled activities and chaotic user-controlled activities. CMS currently operates about 40 Tier-2 sites in 22 countries, making the sites a far-flung computational and social network. We describe our operational experience with the sites, touching on our achievements, the lessons learned, and the challenges for the future.

  18. Opportunity to Save Historical Railway Infrastructure - Adaptation and Functional Conversion of Facilities

    Science.gov (United States)

    Podwojewska, Magdalena

    2017-10-01

    After years of neglect and underinvestment, the Polish railways are now witnessing a rapid modernization of both their technical facilities and rolling stock. However, this is true only of the main railway lines connecting major urban complexes. It is worth pointing out that a great number of secondary lines, railway stations and halts still have not been covered by the transformation process. Railway facilities, warehouses and service features are in decay. Rapid technological developments have caused numerous architectural structures of historical interest and service features to fall out of use. There are historical railway facilities dating back to the late 19th or early 20th centuries, whose condition is constantly deteriorating. The only way to save these structures is to change the manner in which they are being used, and attract new investors and operators. The adaptation of buildings may be carried out in a number of ways by following different strategies. The process depends on the structure's current condition and significance for the railway network. The facilities which are disused as a result of technological changes in the rolling stock and infrastructure include workshops, steam locomotive bays, turntables and warehouses. Their size and location within a city make them a perfect place for commercial services, exhibitions, heritage sites, concerts and other events attracting great numbers of people. Other strategies may be used for constructions located next to railway lines, whose role has declined. Such constructions include small railway stations, warehouses, reloading and forwarding facilities, railway ramps, railway staff buildings as well as residences for railway employees. Railway stations located at large junctions can handle passenger traffic or freight loading operations. As well as acting as the only window to the world, railway stations in small towns housed all the services available in the place. At the same time, they served as

  19. Implementation of a Multi-Tier Architecture with DCOM in an Information System

    Directory of Open Access Journals (Sweden)

    Kartika Gunadi

    2001-01-01

    Information system implementation using a two-tier architecture suffers from several critical weaknesses: poor component reuse, scalability, maintenance, and data security. The multi-tiered client/server architecture provides a good resolution to these problems using DCOM technology. The software was built using Delphi 4 Client/Server Suite and Microsoft SQL Server 7.0 as the database server software. The multi-tiered application is partitioned into three parts: the client application, which provides presentation services; the application server, which provides application services; and the database server, which provides database services. This multi-tiered application software can be built in two models, a client/server Windows model and a client/server Web model using ActiveX Form technology. This research found that building a multi-tiered architecture with DCOM technology provides many benefits, such as centralized application logic in the middle tier, thin client applications, distribution of the data-processing load across several machines, increased security through the ability to hide data, and fast maintenance without installing database drivers on every client.
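
    As a language-neutral illustration of the same three-tier split (the original work uses Delphi, DCOM and SQL Server), the sketch below separates a thin client, an application server holding the business logic, and a database tier. The use of Python, HTTP and SQLite, and all names, are stand-ins for illustration only.

        # Three-tier illustration: thin client <-> application server <-> database.
        # This is not DCOM; it only mirrors the partitioning described above.
        import json
        import sqlite3
        from http.server import BaseHTTPRequestHandler, HTTPServer

        DB = "orders.db"                  # data tier (stand-in for SQL Server)

        def init_db():
            with sqlite3.connect(DB) as con:
                con.execute("CREATE TABLE IF NOT EXISTS orders(id INTEGER PRIMARY KEY, item TEXT)")
                con.execute("INSERT INTO orders(item) VALUES ('sample')")

        class AppServer(BaseHTTPRequestHandler):  # application tier: logic lives here
            def do_GET(self):
                with sqlite3.connect(DB) as con:
                    rows = con.execute("SELECT id, item FROM orders").fetchall()
                body = json.dumps([{"id": i, "item": it} for i, it in rows]).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)    # the thin client only renders this JSON

        if __name__ == "__main__":
            init_db()
            HTTPServer(("localhost", 8080), AppServer).serve_forever()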

  20. Unified storage systems for distributed Tier-2 centres

    International Nuclear Information System (INIS)

    Cowan, G A; Stewart, G A; Elwell, A

    2008-01-01

    The start of data taking at the Large Hadron Collider will herald a new era in data volumes and distributed processing in particle physics. Data volumes of hundreds of Terabytes will be shipped to Tier-2 centres for analysis by the LHC experiments using the Worldwide LHC Computing Grid (WLCG). In many countries Tier-2 centres are distributed between a number of institutes, e.g., the geographically spread Tier-2s of GridPP in the UK. This presents a number of challenges for experiments to utilise these centres efficaciously, as CPU and storage resources may be subdivided and exposed in smaller units than the experiment would ideally want to work with. In addition, unhelpful mismatches between storage and CPU at the individual centres may be seen, which make efficient exploitation of a Tier-2's resources difficult. One method of addressing this is to unify the storage across a distributed Tier-2, presenting the centres' aggregated storage as a single system. This greatly simplifies data management for the VO, which then can access a greater amount of data across the Tier-2. However, such an approach will lead to scenarios where analysis jobs on one site's batch system must access data hosted on another site. We investigate this situation using the Glasgow and Edinburgh clusters, which are part of the ScotGrid distributed Tier-2. In particular we look at how to mitigate the problems associated with 'distant' data access and discuss the security implications of having LAN access protocols traverse the WAN between centres

  1. Nuclear Energy Infrastructure Database Fitness and Suitability Review

    Energy Technology Data Exchange (ETDEWEB)

    Heidrich, Brenden [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-03-01

    In 2014, the Deputy Assistant Secretary for Science and Technology Innovation (NE-4) initiated the Nuclear Energy-Infrastructure Management Project by tasking the Nuclear Science User Facilities (NSUF) to create a searchable and interactive database of all pertinent NE-supported or -related infrastructure. This database will be used for analyses to establish needs, redundancies, efficiencies, distributions, etc., in order to best understand the utility of NE’s infrastructure and inform the content of the infrastructure calls. The NSUF developed the database by utilizing data and policy direction from a wide variety of reports from the Department of Energy, the National Research Council, the International Atomic Energy Agency and various other federal and civilian resources. The NEID contains data on 802 R&D instruments housed in 377 facilities at 84 institutions in the US and abroad. A Database Review Panel (DRP) was formed to review and provide advice on the development, implementation and utilization of the NEID. The panel is comprised of five members with expertise in nuclear energy-associated research. It was intended that they represent the major constituencies associated with nuclear energy research: academia, industry, research reactor, national laboratory, and Department of Energy program management. The Nuclear Energy Infrastructure Database Review Panel concludes that the NSUF has succeeded in creating a capability and infrastructure database that identifies and documents the major nuclear energy research and development capabilities across the DOE complex. The effort to maintain and expand the database will be ongoing. Detailed information on many facilities must be gathered from associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements.

  2. Design of multi-tiered database application based on CORBA component

    International Nuclear Information System (INIS)

    Sun Xiaoying; Dai Zhimin

    2003-01-01

    As computer technology develops rapidly, middleware technology has changed the traditional two-tier database system. The multi-tiered database system, consisting of client application programs, application servers and database servers, is now widely applied, and building multi-tiered database systems using CORBA components has become the mainstream technique. In this paper, the example of the DUV-FEL database system is presented, and the realization of a multi-tiered database based on CORBA components is then discussed. (authors)

  3. Breaks Are Better: A Tier II Social Behavior Intervention

    Science.gov (United States)

    Boyd, R. Justin; Anderson, Cynthia M.

    2013-01-01

    Multi-tiered systems of social behavioral support in schools provide varying levels of intervention matched to student need. Tier I (primary or universal) systems are for all students and are designed to promote pro-social behavior. Tier III (tertiary or intensive) supports are for students who engage in serious challenging behavior that has not…

  4. Importance for Municipalities of Infrastructure Information Systems in Turkey

    OpenAIRE

    Kamil KARATAS; Cemal BIYIK

    2017-01-01

    Technical infrastructures are important development-level indicators of countries; they are difficult to maintain and require high investment costs. Information systems should be exploited for better administration, planning and effective decision-making regarding technical infrastructure facilities. Hence, infrastructure information systems oriented to technical infrastructure (TI) must be built. In this study, Kunduracilar Street in Trabzon was selected as a pilot area oriented to ...

  5. Upgrade of the cryogenic infrastructure of SM18, CERN main test facility for superconducting magnets and RF cavities

    Science.gov (United States)

    Perin, A.; Dhalla, F.; Gayet, P.; Serio, L.

    2017-12-01

    SM18 is CERN main facility for testing superconducting accelerator magnets and superconducting RF cavities. Its cryogenic infrastructure will have to be significantly upgraded in the coming years, starting in 2019, to meet the testing requirements for the LHC High Luminosity project and for the R&D program for superconducting magnets and RF equipment until 2023 and beyond. This article presents the assessment of the cryogenic needs based on the foreseen test program and on past testing experience. The current configuration of the cryogenic infrastructure is presented and several possible upgrade scenarios are discussed. The chosen upgrade configuration is then described and the characteristics of the main newly required cryogenic equipment, in particular a new 35 g/s helium liquefier, are presented. The upgrade implementation strategy and plan to meet the required schedule are then described.

  6. Nuclear Data Center International Standard Towards TSO Initiative

    International Nuclear Information System (INIS)

    Raja Murzaferi Raja Moktar; Mohd Fauzi Haris; Siti Nurbahyah Hamdan

    2011-01-01

    The Nuclear Data Center is the main facility of the Nuclear Malaysia Agency's IT infrastructure, comprising the main critical servers, research and operational data storage, the HPC cluster system and vital core network equipment. In recent years, international bodies such as the TIA (Telecommunications Industry Association) and the Uptime Institute have issued international data center standards intended to ensure that data center operations achieve maximum uptime and minimal downtime. The standards rate data centers by tier level, ranging from Tier I up to Tier IV, differentiated by facility standard and uptime/downtime percentage. This paper discusses the Nuclear Data Center's adoption of international standards in support of the Nuclear Malaysia TSO initiative, thereby ensuring the availability of the critical core components of the agency's IT services as well as international standards recognition. (author)

  7. Prevalence of syphilis infection in different tiers of female sex workers in China: implications for surveillance and interventions

    Directory of Open Access Journals (Sweden)

    Chen Xiang-Sheng

    2012-04-01

    Background: Syphilis has made a dramatic resurgence in China during the past two decades and has become the third most prevalent notifiable infectious disease in China. Female sex workers (FSWs) have become one of the key populations for the epidemic. In order to investigate syphilis infection among different tiers of FSWs, a cross-sectional study was conducted at 8 sites in China. Methods: Serum specimens (n = 7,118) were collected to test for syphilis, and questionnaire interviews were conducted to obtain socio-demographic and behavioral information among FSWs recruited from different types of venues. FSWs were categorized into three tiers (high-, middle- and low-tier FSWs) based on the venues where they solicited clients. Serum specimens were screened with an enzyme-linked immunosorbent assay (ELISA) for treponemal antibody, followed by confirmation with the non-treponemal toluidine red unheated serum test (TRUST) for ELISA-positive specimens to determine syphilis infection. A logistic regression model was used to determine factors associated with syphilis infection. Results: Overall syphilis prevalence was 5.0% (95%CI, 4.5-5.5%). Low-tier FSWs had the highest prevalence (9.7%; 95%CI, 8.3-11.1%), followed by middle-tier (4.3%; 95%CI, 3.6-5.0%) and high-tier FSWs. Conclusions: This multi-site survey showed a high prevalence of syphilis infection among FSWs and substantial disparities in syphilis prevalence by tier of FSWs. The difference in syphilis prevalence is substantial between different tiers of FSWs, with the highest rate among low-tier FSWs. Thus, current surveillance and intervention activities, which have low coverage of low-tier FSWs in China, should be further examined.

  8. 2-tiered antibody testing for early and late Lyme disease using only an immunoglobulin G blot with the addition of a VlsE band as the second-tier test.

    Science.gov (United States)

    Branda, John A; Aguero-Rosenfeld, Maria E; Ferraro, Mary Jane; Johnson, Barbara J B; Wormser, Gary P; Steere, Allen C

    2010-01-01

    Standard 2-tiered immunoglobulin G (IgG) testing has performed well in late Lyme disease (LD), but IgM testing early in the illness has been problematic. IgG VlsE antibody testing, by itself, improves early sensitivity, but may lower specificity. We studied whether elements of the 2 approaches could be combined to produce a second-tier IgG blot that performs well throughout the infection. Separate serum sets from LD patients and control subjects were tested independently at 2 medical centers using whole-cell enzyme immunoassays and IgM and IgG immunoblots, with recombinant VlsE added to the IgG blots. The results from both centers were combined, and a new second-tier IgG algorithm was developed. With standard 2-tiered IgM and IgG testing, 31% of patients with active erythema migrans (stage 1), 63% of those with acute neuroborreliosis or carditis (stage 2), and 100% of those with arthritis or late neurologic involvement (stage 3) had positive results. Using new IgG criteria, in which only the VlsE band was scored as a second-tier test among patients with early LD (stage 1 or 2) and 5 of 11 IgG bands were required in those with stage 3 LD, 34% of patients with stage 1, 96% of those with stage 2, and 100% of those with stage 3 infection had positive responses. Both new and standard testing achieved 100% specificity. Compared with standard IgM and IgG testing, the new IgG algorithm (with VlsE band) eliminates the need for IgM testing; it provides comparable or better sensitivity, and it maintains high specificity.
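
    The modified second-tier rule described above can be written as a simple decision function. The sketch below illustrates the published scoring logic only (it assumes a positive first-tier whole-cell EIA has already been obtained and is in no way a clinical tool).

        # Sketch of the second-tier IgG scoring rule described in the abstract:
        # early LD (stage 1 or 2) scores only the VlsE band; stage 3 requires
        # at least 5 of the 11 IgG bands. Illustration only.
        def second_tier_igg_positive(stage, vlse_band_present, n_igg_bands):
            """Return True if the second-tier IgG blot is interpreted as positive."""
            if stage in (1, 2):             # early Lyme disease
                return vlse_band_present
            if stage == 3:                  # late Lyme disease
                return n_igg_bands >= 5     # out of the 11 scored IgG bands
            raise ValueError("stage must be 1, 2 or 3")

        # example: acute neuroborreliosis (stage 2) with a VlsE band -> positive
        print(second_tier_igg_positive(stage=2, vlse_band_present=True, n_igg_bands=3))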

  9. Integration of XRootD into the cloud infrastructure for ALICE data analysis

    Science.gov (United States)

    Kompaniets, Mikhail; Shadura, Oksana; Svirin, Pavlo; Yurchenko, Volodymyr; Zarochentsev, Andrey

    2015-12-01

    Cloud technologies allow easy load balancing between different tasks and projects. From the viewpoint of data analysis in the ALICE experiment, the cloud makes it possible to deploy software using the CERN Virtual Machine (CernVM) and the CernVM File System (CVMFS), to run different (including outdated) versions of software for long-term data preservation, and to dynamically allocate resources for different computing activities, e.g. a grid site, the ALICE Analysis Facility (AAF) and possible usage by local projects or other LHC experiments. We present a cloud solution for Tier-3 sites based on OpenStack and the Ceph distributed storage system with an integrated XRootD-based storage element (SE). One of the key features of the solution is that Ceph is used as a backend for the Cinder Block Storage service of OpenStack and, at the same time, as a storage backend for XRootD, with redundancy and availability of the data ensured by the Ceph settings. For faster and easier OpenStack deployment, the Packstack solution was applied, which is based on the Puppet configuration management system. The Ceph installation and configuration operations are structured, converted to Puppet manifests describing node configurations, and integrated into Packstack. This solution can be easily deployed, maintained and used even by small groups with limited computing resources and by small organizations, which usually lack IT support. The proposed infrastructure has been tested on two different clouds (SPbSU & BITP) and integrates successfully with the ALICE data analysis model.
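
    For a concrete sense of how such an XRootD storage element is accessed, the sketch below copies one file with the standard xrdcp client invoked from Python. The host name and file paths are hypothetical, and this is not the actual tooling used at the sites mentioned.

        # Minimal sketch: fetch a file from an XRootD storage element with the
        # standard xrdcp client. Host and paths are hypothetical placeholders.
        import subprocess

        SE_HOST = "xrootd.example-tier3.org"              # hypothetical SE endpoint
        remote = f"root://{SE_HOST}//alice/data/run123/file.root"
        local = "/tmp/file.root"

        subprocess.run(["xrdcp", "-f", remote, local], check=True)
        print("copied", remote, "to", local)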

  10. Scientific Infrastructure To Support Manned And Unmanned Aircraft, Tethered Balloons, And Related Aerial Activities At DOE ARM Facilities On The North Slope Of Alaska

    Science.gov (United States)

    Ivey, M.; Dexheimer, D.; Hardesty, J.; Lucero, D. A.; Helsel, F.

    2015-12-01

    The U.S. Department of Energy (DOE), through its scientific user facility, the Atmospheric Radiation Measurement (ARM) facilities, provides scientific infrastructure and data to the international Arctic research community via its research sites located on the North Slope of Alaska. DOE has recently invested in improvements to facilities and infrastructure to support operations of unmanned aerial systems for science missions in the Arctic and North Slope of Alaska. A new ground facility, the Third ARM Mobile Facility, was installed at Oliktok Point Alaska in 2013. Tethered instrumented balloons were used to make measurements of clouds in the boundary layer including mixed-phase clouds. A new Special Use Airspace was granted to DOE in 2015 to support science missions in international airspace in the Arctic. Warning Area W-220 is managed by Sandia National Laboratories for DOE Office of Science/BER. W-220 was successfully used for the first time in July 2015 in conjunction with Restricted Area R-2204 and a connecting Altitude Reservation Corridor (ALTRV) to permit unmanned aircraft to operate north of Oliktok Point. Small unmanned aircraft (DataHawks) and tethered balloons were flown at Oliktok during the summer and fall of 2015. This poster will discuss how principal investigators may apply for use of these Special Use Airspaces, acquire data from the Third ARM Mobile Facility, or bring their own instrumentation for deployment at Oliktok Point, Alaska. The printed poster will include the standard DOE funding statement.

  11. 47 CFR 76.1605 - New product tier.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false New product tier. 76.1605 Section 76.1605 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Notices § 76.1605 New product tier. (a) Within 30 days of the offering of an...

  12. 26 CFR 1.444-4 - Tiered structure.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 6 2010-04-01 2010-04-01 false Tiered structure. 1.444-4 Section 1.444-4 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Accounting Periods § 1.444-4 Tiered structure. (a) Electing small business trusts. For...

  13. Impact of multi-tiered pharmacy benefits on attitudes of plan members with chronic disease states.

    Science.gov (United States)

    Nair, Kavita V; Ganther, Julie M; Valuck, Robert J; McCollum, Marianne M; Lewis, Sonya J

    2002-01-01

    To evaluate the effects of 2- and 3-tiered pharmacy benefit plans on member attitudes regarding their pharmacy benefits. We performed a mail survey and cross-sectional comparison of the outcome variables in a large managed care population in the western United States. Participants were persons with chronic disease states who were in 2- or 3-tier copay drug plans. A random sample of 10,662 was selected from a total of 25,008 members who had received 2 or more prescriptions for a drug commonly used to treat one of 5 conditions: hypertension, diabetes, dyslipidemia, gastroesophageal reflux disease (GERD), or arthritis. Statistical analysis included bivariate comparisons and regression analysis of the factors affecting member attitudes, including satisfaction, loyalty, health plan choices, and willingness to pay a higher out-of-pocket cost for medications. A response rate of 35.8% was obtained from continuously enrolled plan members. Respondents were older, sicker, and consumed more prescriptions than nonrespondents. There were significant differences in age and health plan characteristics between 2- and 3-tier plan members: respondents aged 65 or older represented 11.7% of 2-tier plan members and 54.7% of 3-tier plan members, and 10.0% of 2-tier plan members were in Medicare+Choice plans versus 61.4% of 3-tier plan members. Although respondents generally preferred brand-name medications, they were not willing to pay more than 10 dollars (in addition to their copayment amount) for these medications. Older respondents and sicker individuals (those with higher scores on the Chronic Disease Indicator) appeared to have more positive attitudes toward their pharmacy benefit plans in general. Higher reported incomes were also associated with greater satisfaction with prescription drug coverage and increased loyalty toward the pharmacy benefit plan. Conversely, the more individuals spent for either their health care or prescription medications, the less satisfied

  14. Capturing value increase in urban redevelopment : a study of how the economic value increase in urban redevelopment can be used to finance the necessary public infrastructure and other facilities

    NARCIS (Netherlands)

    Muñoz Gielen, D.

    2010-01-01

    Everyone would agree that urban development, especially when involving the building of residential areas, should be accompanied by sufficient and good public infrastructure and facilities. We all want neighborhoods with the necessary roads, green areas, social facilities, affordable housing and

  15. Do knowledge infrastructure facilities support Evidence-Based Practice in occupational health? An exploratory study across countries among occupational physicians enrolled on Evidence-Based Medicine courses

    Directory of Open Access Journals (Sweden)

    van Dijk Frank JH

    2009-01-01

    Full Text Available Abstract Background Evidence-Based Medicine (EBM) is an important method used by occupational physicians (OPs) to deliver high-quality health care. The presence and quality of a knowledge infrastructure is thought to influence the practice of EBM in occupational health care. This study explores the facilities in the knowledge infrastructure being used by OPs in different countries, and their perceived importance for EBM practice. Methods Thirty-six OPs from ten countries, planning to attend an EBM course and to a large extent recruited via the European Association of Schools of Occupational Medicine (EASOM), participated in a cross-sectional study. Results Research and development institutes and knowledge products and tools are used by more than 72% and more than 80% of the OPs, respectively, and they are rated as important for EBM practice (more than 65 points, range 0–100). Conventional knowledge access facilities, like traditional libraries, are used often (69%) but are rated as less important (46.8 points, range 0–100) than more novel facilities, like question-and-answer facilities, which are used less often (25%) but are rated as more important (48.9 points, range 0–100). To solve cases, OPs mostly use non-evidence-based sources. However, they regard the evidence-based sources that are not often used, e.g. the Cochrane Library, as important enablers for practising EBM. The main barriers are lack of time, payment for full-text articles, the language barrier (most texts are in English), and lack of skills and support. Conclusion This first exploratory study shows that OPs use many knowledge infrastructure facilities and rate them as important for their EBM practice. However, they are not accustomed to using evidence-based sources in their practice and face many barriers, comparable to the barriers physicians face in primary health care.

  16. Using the National Information Infrastructure for social science, education, and informed decision making

    Energy Technology Data Exchange (ETDEWEB)

    Tonn, B.E.

    1994-01-07

    The United States has aggressively embarked on the challenging task of building a National Information Infrastructure (NII). This infrastructure will have many levels, extending from the building-block capital stock that composes the telecommunications system to the multitude of higher-tier applications hardware and software tied to this system. This "White Paper" presents a vision for a second- and third-tier national information infrastructure that focuses exclusively on the needs of social science, education, and decision making (NII-SSEDM). NII-SSEDM will provide the necessary data, information, and automated decision support and educational tools needed to help this nation solve its most pressing social problems. The proposed system has five components: data collection systems; databases; statistical analysis and modeling tools; policy analysis and decision support tools; and materials and software specially designed for education. This paper contains: a vision statement for each component; comments on progress made on each component as of the early 1990s; and specific recommendations on how to achieve the goals described in the vision statements. The white paper also discusses how the NII-SSEDM could be used to address four major social concerns: ensuring economic prosperity; health care; reducing crime and violence; and K-12 education. Examples of near-term and mid-term goals (e.g., pre- and post-Year 2000) are presented for consideration. Although the development of NII-SSEDM will require a concerted effort by government, the private sector, schools, and numerous other organizations, the success of NII-SSEDM is predicated upon the identification of an institutional "champion" to acquire and husband key resources and provide strong leadership and guidance.

  17. 75 FR 73166 - Publication of the Tier 2 Tax Rates

    Science.gov (United States)

    2010-11-29

    ... DEPARTMENT OF THE TREASURY Internal Revenue Service Publication of the Tier 2 Tax Rates AGENCY: Internal Revenue Service, Treasury. ACTION: Notice. SUMMARY: Publication of the tier 2 tax rates for...). Tier 2 taxes on railroad employees, employers, and employee representatives are one source of funding...

  18. Comparison of tiered formularies and reference pricing policies: a systematic review.

    Science.gov (United States)

    Morgan, Steve; Hanley, Gillian; Greyson, Devon

    2009-01-01

    To synthesize methodologically comparable evidence from the published literature regarding the outcomes of tiered formularies and therapeutic reference pricing of prescription drugs. We searched the following electronic databases: ABI/Inform, CINAHL, Clinical Evidence, Digital Dissertations & Theses, Evidence-Based Medicine Reviews (which incorporates ACP Journal Club, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Cochrane Methodology Register, Database of Abstracts of Reviews of Effectiveness, Health Technology Assessments and NHS Economic Evaluation Database), EconLit, EMBASE, International Pharmaceutical Abstracts, MEDLINE, PAIS International and PAIS Archive, and the Web of Science. We also searched the reference lists of relevant articles and several grey literature sources. We sought English-language studies published from 1986 to 2007 that examined the effects of either therapeutic reference pricing or tiered formularies, reported on outcomes relevant to patient care and cost-effectiveness, and employed quantitative study designs that included concurrent or historical comparison groups. We abstracted and assessed potentially appropriate articles using a modified version of the data abstraction form developed by the Cochrane Effective Practice and Organisation of Care Group. From an initial list of 2964 citations, 12 citations (representing 11 studies) were deemed eligible for inclusion in our review: 3 studies (reported in 4 articles) of reference pricing and 8 studies of tiered formularies. The introduction of reference pricing was associated with reduced plan spending, switching to preferred medicines, reduced overall drug utilization and short-term increases in the use of physician services. Reference pricing was not associated with adverse health impacts. The introduction of tiered formularies was associated with reduced plan expenditures, greater patient costs and increased rates of non-compliance with

  19. Irradiation facilities in JRR-3M

    International Nuclear Information System (INIS)

    Ohtomo, Akitoshi; Sigemoto, Masamitsu; Takahashi, Hidetake

    1992-01-01

    Irradiation facilities have been installed in the upgraded JRR-3 (JRR-3M) at the Japan Atomic Energy Research Institute (JAERI). There are hydraulic rabbit facilities (HR), pneumatic rabbit facilities (PN), a neutron activation analysis facility (PN3), a uniform irradiation facility (SI), a rotating irradiation facility and capsule irradiation facilities to carry out neutron irradiation in the JRR-3M. These facilities are operated using a process control computer system to centralize the process information. Some of the characteristics of the facilities were satisfactorily measured during the reactor performance test in 1990. During reactor operation, some tests are continued to confirm the basic characteristics of the facilities; for example, PN3 was confirmed to have sufficient performance for activation analysis. Measurement of neutron flux at all irradiation positions has been carried out for the equilibrium core. (author)

  20. 76 FR 71623 - Publication of the Tier 2 Tax Rates

    Science.gov (United States)

    2011-11-18

    ... DEPARTMENT OF THE TREASURY Internal Revenue Service Publication of the Tier 2 Tax Rates AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice. SUMMARY: Publication of the tier 2 tax rates for...). Tier 2 taxes on railroad employees, employers, and employee representatives are one source of funding...

  1. 78 FR 71039 - Publication of the Tier 2 Tax Rates

    Science.gov (United States)

    2013-11-27

    ... DEPARTMENT OF THE TREASURY Internal Revenue Service Publication of the Tier 2 Tax Rates AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice. SUMMARY: Publication of the tier 2 tax rates for...). Tier 2 taxes on railroad employees, employers, and employee representatives are one source of funding...

  2. 9 CFR 3.27 - Facilities, outdoor.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Facilities, outdoor. 3.27 Section 3.27... Pigs and Hamsters Facilities and Operating Standards § 3.27 Facilities, outdoor. (a) Hamsters shall not be housed in outdoor facilities. (b) Guinea pigs shall not be housed in outdoor facilities unless...

  3. JRR-3 neutron radiography facility

    International Nuclear Information System (INIS)

    Matsubayashi, M.; Tsuruno, A.

    1992-01-01

    JRR-3 neutron radiography facility consists of thermal neutron radiography facility (TNRF) and cold neutron radiography facility (CNRF). TNRF is installed in JRR-3 reactor building. CNRF is installed in the experimental beam hall adjacent to the reactor building. (author)

  4. Professional Development to Differentiate Kindergarten Tier 1 Instruction: Can Already Effective Teachers Improve Student Outcomes by Differentiating Tier 1 Instruction?

    Science.gov (United States)

    Al Otaiba, Stephanie; Folsom, Jessica S.; Wanzek, Jeanne; Greulich, Luana; Waesche, Jessica; Schatschneider, Christopher; Connor, Carol M.

    2016-01-01

    Two primary purposes guided this quasi-experimental within-teacher study: (a) to examine changes from baseline through 2 years of professional development (Individualizing Student Instruction) in kindergarten teachers' differentiation of Tier 1 literacy instruction; and (b) to examine changes in reading and vocabulary of 3 cohorts of the teachers'…

  5. Energy Aware Pricing in a Three-Tiered Cloud Service Market

    Directory of Open Access Journals (Sweden)

    Debdeep Paul

    2016-09-01

    Full Text Available We consider a three-tiered cloud service market and propose an energy-efficient pricing strategy in this market. Here, the end customers are served by Software-as-a-Service (SaaS) providers, who implement customized services for their customers. To host these services, the SaaS providers in turn lease infrastructure-related resources from Infrastructure-as-a-Service (IaaS) or Platform-as-a-Service (PaaS) providers. In this paper, we propose and evaluate a mechanism for pricing between SaaS providers and IaaS/PaaS providers and between SaaS providers and the end customers. The pricing scheme is designed so that the integration of renewable energy is promoted, which is a crucial aspect of energy efficiency. Thereafter, we propose a technique to strategically provide an improved Quality of Service (QoS) by deploying more resources than the optimization procedure computes. This technique is based on the square-root staffing law in queueing theory. We carry out numerical evaluations with real data traces on electricity price, renewable energy generation, workload, etc., in order to emulate the real dynamics of the cloud service market. We demonstrate that, under practical assumptions, the proposed technique can generate more profit for the service providers operating in the cloud service market.
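
    The square-root staffing rule mentioned above sizes a deployment with a safety margin proportional to the square root of the offered load. The following is a minimal Python sketch under assumed inputs (arrival rate, per-server service rate, and a QoS parameter beta); it is only an illustration of the rule, not the paper's actual provisioning procedure.

      import math

      def square_root_staffing(arrival_rate: float, service_rate: float, beta: float) -> int:
          """Number of servers suggested by the square-root staffing rule.

          arrival_rate -- mean request arrival rate (requests/s); assumed input
          service_rate -- mean service rate of one server (requests/s); assumed input
          beta         -- QoS parameter; larger beta means more spare capacity
          """
          offered_load = arrival_rate / service_rate            # R = lambda / mu
          return math.ceil(offered_load + beta * math.sqrt(offered_load))

      # Example: 900 req/s served at 10 req/s per VM with beta = 1.5
      # -> 90 + 1.5*sqrt(90) ~= 105 VMs instead of the bare optimum of 90.
      print(square_root_staffing(900, 10, 1.5))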

  6. SOCIAL INFRASTRUCTURE MODERNIZATION AS A PRIORITY REGARDING RURAL LIFE STANDARD IMPROVEMENT

    Directory of Open Access Journals (Sweden)

    L. A. Tretyakova

    2010-06-01

    Full Text Available At the present stage of socio-economic change in Russia, the conditions for economic activity in rural areas have changed, which has significantly aggravated the problem of the effective functioning of social facilities and engineering infrastructure. The status of rural social infrastructure has recently been deteriorating due to the lack of effective state support instruments and investments. In this paper, development trends of the Russian rural social sphere are considered, guidelines for government control of rural social sphere development are analyzed, and a methodology for the efficient functioning of social facilities and engineering infrastructure is suggested as a determining factor for the efficient development of the agricultural labor market. A conceptual model of strategic development of rural social infrastructure and a mechanism for management control organization and rural social infrastructure development, based on a comprehensive analysis, are suggested.

  7. 75 FR 39437 - Optimizing the Security of Biological Select Agents and Toxins in the United States

    Science.gov (United States)

    2010-07-08

    ... agents and toxins with the potential to pose a severe threat to public health and safety, animal and... mass casualties or devastating effects to the economy, critical infrastructure, or public confidence... establishment of appropriate practices for physical security and cyber security for facilities that possess Tier...

  8. Sustainable infrastructure system modeling under uncertainties and dynamics

    Science.gov (United States)

    Huang, Yongxi

    Infrastructure systems support human activities in transportation, communication, water use, and energy supply. The dissertation research focuses on critical transportation infrastructure and renewable energy infrastructure systems. The goal of the research efforts is to improve the sustainability of the infrastructure systems, with an emphasis on economic viability, system reliability and robustness, and environmental impacts. The research efforts in critical transportation infrastructure concern the development of strategic, robust resource allocation strategies in an uncertain decision-making environment, considering both uncertain service availability and accessibility. The study explores the performance of different modeling approaches (i.e., deterministic, stochastic programming, and robust optimization) to reflect various risk preferences. The models are evaluated in a case study of Singapore, and the results demonstrate that stochastic modeling methods in general offer more robust allocation strategies than deterministic approaches in achieving high coverage of critical infrastructures under risk. This general modeling framework can be applied to other emergency service applications, such as locating medical emergency services. The research on renewable energy infrastructure systems aims to answer the following key research questions: (1) is renewable energy an economically viable solution? (2) what are the energy distribution and infrastructure system requirements to support such energy supply systems in hedging against potential risks? (3) how does the energy system adapt to the dynamics of evolving technology and societal needs in the transition to a renewable-energy-based society? The study of Renewable Energy System Planning with Risk Management incorporates risk management into the strategic planning of the supply chains. The physical design and operational management are integrated as a whole in seeking mitigations against the

  9. A knowledge infrastructure for occupational safety and health.

    Science.gov (United States)

    van Dijk, Frank J H; Verbeek, Jos H; Hoving, Jan L; Hulshof, Carel T J

    2010-12-01

    Occupational Safety and Health (OSH) professionals should use scientific evidence to support their decisions in policy and practice. Although examples from practice show that progress has been made in evidence-based decision making, there is a challenge to improve and extend the facilities that support knowledge translation in practice. A knowledge infrastructure that supports OSH practice should include scientific research, systematic reviews, practice guidelines, and other tools for professionals such as readily accessible virtual libraries and databases providing knowledge, quality tools, and good learning materials. A good infrastructure connects facilities with each other and with practice. Training and education are needed for OSH professionals in the use of evidence to improve effectiveness and efficiency. New initiatives show that occupational health can profit from intensified international collaboration to establish a well-functioning knowledge infrastructure.

  10. 9 CFR 3.102 - Facilities, indoor.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Facilities, indoor. 3.102 Section 3... Marine Mammals Facilities and Operating Standards § 3.102 Facilities, indoor. (a) Ambient temperature. The air and water temperatures in indoor facilities shall be sufficiently regulated by heating or...

  11. National software infrastructure for lattice gauge theory

    International Nuclear Information System (INIS)

    Brower, Richard C

    2005-01-01

    The current status of the SciDAC software infrastructure project for lattice gauge theory is summarized. This includes the design of a QCD application programmers interface (API) that allows existing and future codes to be run efficiently on Terascale hardware facilities and to be rapidly ported to new dedicated or commercial platforms. The critical components of the API have been implemented and are in use on the US QCDOC hardware at BNL and on both the switched and mesh architecture Pentium 4 clusters at Fermi National Accelerator Laboratory (FNAL) and Thomas Jefferson National Accelerator Facility (JLab). Future software infrastructure requirements and research directions are also discussed

  12. The Impact of Payment System Design on Tiering Incentives

    OpenAIRE

    Robert Arculus; Jennifer Hancock; Greg Moran

    2012-01-01

    Tiering occurs when an institution does not participate directly in the central payment system but instead settles its payments through an agent. A high level of tiering can be a significant issue for payment system regulators because of the increased credit and concentration risk. This paper explores the impact of payment system design on institutions' incentives to tier using simulation analysis. Some evidence is found to support the hypothesis that the liquidity-saving mechanisms in Austra...

  13. NHERI: Advancing the Research Infrastructure of the Multi-Hazard Community

    Science.gov (United States)

    Blain, C. A.; Ramirez, J. A.; Bobet, A.; Browning, J.; Edge, B.; Holmes, W.; Johnson, D.; Robertson, I.; Smith, T.; Zuo, D.

    2017-12-01

    The Natural Hazards Engineering Research Infrastructure (NHERI), supported by the National Science Foundation (NSF), is a distributed, multi-user national facility that provides the natural hazards research community with access to an advanced research infrastructure. Components of NHERI comprise a Network Coordination Office (NCO), a cloud-based cyberinfrastructure (DesignSafe-CI), a computational modeling and simulation center (SimCenter), and eight Experimental Facilities (EFs), including a post-disaster, rapid response research facility (RAPID). Ultimately, NHERI enables researchers to explore and test ground-breaking concepts to protect homes, businesses and infrastructure lifelines from earthquakes, windstorms, tsunamis, and surge, enabling innovations that help prevent natural hazards from becoming societal disasters. When coupled with education and community outreach, NHERI will facilitate research and educational advances that contribute knowledge and innovation toward improving the resiliency of the nation's civil infrastructure to withstand natural hazards. The unique capabilities and coordinating activities over Year 1 between NHERI's DesignSafe-CI, the SimCenter, and individual EFs will be presented. Basic descriptions of each component are also found at https://www.designsafe-ci.org/facilities/. Additionally to be discussed are the various roles of the NCO in leading development of a 5-year multi-hazard science plan, coordinating facility scheduling and fostering the sharing of technical knowledge and best practices, leading education and outreach programs such as the recent Summer Institute and multi-facility REU program, ensuring a platform for technology transfer to practicing engineers, and developing strategic national and international partnerships to support a diverse multi-hazard research and user community.

  14. 9 CFR 3.25 - Facilities, general.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Facilities, general. 3.25 Section 3.25... Pigs and Hamsters Facilities and Operating Standards § 3.25 Facilities, general. (a) Structural strength. Indoor and outdoor housing facilities for guinea pigs or hamsters shall be structurally sound and...

  15. Acute tier-1 and tier-2 effect assessment approaches in the EFSA Aquatic Guidance Document: are they sufficiently protective for insecticides?

    NARCIS (Netherlands)

    Wijngaarden, van R.P.A.; Maltby, L.; Brock, T.C.M.

    2015-01-01

    BACKGROUND The objective of this paper is to evaluate whether the acute tier-1 and tier-2 methods as proposed by the Aquatic Guidance Document recently published by the European Food Safety Authority (EFSA) are appropriate for deriving regulatory acceptable concentrations (RACs) for insecticides.

  16. A four-tier problem-solving scaffold to teach pain management in dental school.

    Science.gov (United States)

    Ivanoff, Chris S; Hottel, Timothy L

    2013-06-01

    Pain constitutes a major reason patients pursue dental treatment. This article presents a novel curriculum to provide dental students comprehensive training in the management of pain. The curriculum's four-tier scaffold combines traditional and problem-based learning to improve students' diagnostic, pharmacotherapeutic, and assessment skills to optimize decision making when treating pain. Tier 1 provides underpinning knowledge of pain mechanisms with traditional and contextualized instruction by integrating clinical correlations and studying worked cases that stimulate clinical thinking. Tier 2 develops critical decision making skills through self-directed learning and actively solving problem-based cases. Tier 3 exposes students to management approaches taken in allied health fields and cultivates interdisciplinary communication skills. Tier 4 provides a "knowledge and experience synthesis" by rotating students through community pain clinics to practice their assessment skills. This combined teaching approach aims to increase critical thinking and problem-solving skills to assist dental graduates in better management of pain throughout their careers. Dental curricula that have moved to comprehensive care/private practice models are well-suited for this educational approach. The goal of this article is to encourage dental schools to integrate pain management into their curricula, to develop pain management curriculum resources for dental students, and to provide leadership for change in pain management education.

  17. Monitoring and optimization of ATLAS Tier 2 center GoeGrid

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219638; Quadt, Arnulf; Yahyapour, Ramin

    The demand on computational and storage resources is growing along with the amount of information that needs to be processed and preserved. In order to ease the provisioning of digital services to the growing number of consumers, more and more distributed computing systems and platforms are actively developed and employed. The building blocks of the distributed computing infrastructure are single computing centers, such as the Worldwide LHC Computing Grid Tier 2 centre GoeGrid. The main motivation of this thesis was the optimization of GoeGrid performance by efficient monitoring. The goal has been achieved by means of GoeGrid monitoring information analysis. The data analysis approach was based on the adaptive-network-based fuzzy inference system (ANFIS) and a machine learning algorithm, the linear Support Vector Machine (SVM). The main object of the research was the digital service, since availability, reliability and serviceability of the computing platform can be measured according to the const...
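
    As an illustration of the SVM part of such an analysis, the sketch below trains a linear SVM on a few hypothetical monitoring samples (CPU load, failed-job fraction, network error rate) labelled healthy or degraded. The feature names and toy data are assumptions for illustration, not the GoeGrid monitoring dataset.

      # Minimal sketch of classifying monitoring samples with a linear SVM,
      # in the spirit of the thesis described above. Features and data are toy values.
      import numpy as np
      from sklearn.svm import LinearSVC
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      # columns: [cpu_load, failed_job_fraction, network_error_rate]
      X = np.array([[0.40, 0.01, 0.00],
                    [0.90, 0.20, 0.05],
                    [0.50, 0.02, 0.01],
                    [0.95, 0.35, 0.10]])
      y = np.array([0, 1, 0, 1])  # 0 = service healthy, 1 = service degraded

      model = make_pipeline(StandardScaler(), LinearSVC())
      model.fit(X, y)
      print(model.predict([[0.80, 0.25, 0.04]]))  # expected: [1], flagged as degraded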

  18. Investigating 3S Synergies to Support Infrastructure Development and Risk-Informed Methodologies for 3S by Design

    International Nuclear Information System (INIS)

    Suzuki, M.; Izumi, Y.; Kimoto, T.; Naoi, Y.; Inoue, T.; Hoffheins, B.

    2010-01-01

    In 2008, Japan and other G8 countries pledged to support the Safeguards, Safety, and Security (3S) Initiative to raise awareness of 3S worldwide and to assist countries in setting up nuclear energy infrastructures that are essential cornerstones of a successful nuclear energy program. The goals of the 3S initiative are to ensure that countries already using nuclear energy or those planning to use nuclear energy are supported by strong national programs in safety, security, and safeguards not only for reliability and viability of the programs, but also to prove to the international audience that the programs are purely peaceful and that nuclear material is properly handled, accounted for, and protected. In support of this initiative, the Japan Atomic Energy Agency (JAEA) has been conducting detailed analyses of the R and D programs and cultures of each of the 'S' areas to identify overlaps where synergism and efficiencies might be realized, to determine where there are gaps in the development of a mature 3S culture, and to coordinate efforts with other Japanese and international organizations. As an initial outcome of this study, incoming JAEA employees are being introduced to 3S as part of their induction training and the idea of a President's Award program is being evaluated. Furthermore, some overlaps in 3S missions might be exploited to share facility instrumentation as with Joint-Use-Equipment (JUE), in which cameras and radiation detectors are shared by the State and IAEA. Lessons learned in these activities can be applied to developing more efficient and effective 3S infrastructures for incorporation into Safeguards by Design methodologies. They will also be useful in supporting human resources and technology development projects associated with Japan's planned nuclear security center for Asia, which was announced during the 2010 Nuclear Security Summit. In this presentation, a risk-informed approach regarding integration of 3S will be introduced. An initial

  19. Concerning problems of petroleum refining facilities. ; Promote international lateral work sharing, and strengthening of infrastructures in petroleum industry. Sekiyu seisei setsubi mondai ni tsuite. ; Kokusei suihei bungyo no suishin to sekiyu sangyo no kiban kyoka wo

    Energy Technology Data Exchange (ETDEWEB)

    1991-06-05

    This paper discusses how to promote international lateral work sharing and how to strengthen the infrastructure of the petroleum industry, as a problem prevailing over the petroleum refining facilities in Japan. Excess distillation facilities have been subject to a disposition policy. However, in view of the medium- to long-term supply and demand situation for petroleum products in the world, mainly in the pan-Pacific region, the situation applicable to Japan, and the experience of the Persian Gulf crisis, the excess facility disposition policy was revised, particularly for white kerosene, for which supply and demand tightness is a concern, so that production capacities may be increased as required. Japan, a large presence in the international economy, is required to work more positively on petroleum refining facilities located in the oil-producing countries and intermediate locations to promote international lateral work sharing. At the same time, in order to strengthen the infrastructure of the Japanese petroleum industry, it is necessary to promote rationalization and higher-efficiency use of the oil supply system, and convergence of the petroleum industry, including joint investments for projects exceeding the capabilities of individual enterprises. 3 tabs.

  20. "Atmospheric Radiation Measurement (ARM) Research Facility at Oliktok Point Alaska"

    Science.gov (United States)

    Helsel, F.; Ivey, M.; Hardesty, J.; Roesler, E. L.; Dexheimer, D.

    2017-12-01

    Scientific infrastructure to support atmospheric science, aerosol science and UASs for the Department of Energy's Atmospheric Radiation Measurement program at Mobile Facility 3, located at Oliktok Point, Alaska. The Atmospheric Radiation Measurement (ARM) Program's Mobile Facility 3 (AMF3), located at Oliktok Point, Alaska, is a U.S. Department of Energy (DOE) site designed to collect data and help determine the impact that clouds and aerosols have on solar radiation. AMF3 provides a scientific infrastructure to support instruments and collect arctic data for the international arctic research community. The infrastructure at AMF3/Oliktok is designed to be mobile and may be relocated in the future to support other ARM science missions. AMF3's present baseline instruments include scanning precipitation radars, cloud radar, Raman lidar, eddy correlation flux systems, a ceilometer, a balloon sounding system, an Atmospheric Emitted Radiance Interferometer (AERI) and a micro-pulse lidar (MPL), along with all the standard meteorological measurements. In addition, AMF3 provides aerosol measurements with a Mobile Aerosol Observing System (MAOS), and ground support for Unmanned Aerial Systems (UAS) and tethered balloon flights. Data from these instruments and systems are placed in the ARM data archives and are available to the international research community. This poster will discuss the instruments and systems at the ARM Research Facility at Oliktok Point, Alaska.

  1. Potential for sharing nuclear power infrastructure between countries

    International Nuclear Information System (INIS)

    2006-10-01

    The introduction or expansion of a nuclear power programme in a country and its successful execution is largely dependent on the network of national infrastructure, covering a wide range of activities and capabilities. The infrastructure areas include legal framework, safety and environmental regulatory bodies, international agreements, physical facilities, finance, education, training, human resources and public information and acceptance. The wide extent of infrastructure needs requires an investment that can be too large or onerous for the national economy. The burden of infrastructure can be reduced significantly if a country forms a sharing partnership with other countries. The sharing can be at regional or at multinational level. It can include physical facilities, common programmes and knowledge, which will be reflected in economic benefits. The sharing can also contribute in a significant manner to harmonization of codes and standards in general and regulatory framework in particular. The opportunities and potential of sharing nuclear power infrastructure are determined by the objectives, strategy and scenario of the national nuclear power programme. A review of individual infrastructure items shows that there are several opportunities for sharing of nuclear power infrastructure between countries if they cooperate with each other. International cooperation and sharing of nuclear power infrastructure are not new. This publication provides criteria and guidance for analyzing and identifying the potential for sharing of nuclear power infrastructure during the stages of nuclear power project life cycle. The target users are decision makers, advisers and senior managers in utilities, industrial organizations, regulatory bodies and governmental organizations in countries adopting or extending nuclear power programmes. This publication was produced within the IAEA programme directed to increase the capability of Member States to plan and implement nuclear power

  2. Advancing Methods for Estimating Soil Nitrous Oxide Emissions by Incorporating Freeze-Thaw Cycles into a Tier 3 Model-Based Assessment

    Science.gov (United States)

    Ogle, S. M.; DelGrosso, S.; Parton, W. J.

    2017-12-01

    Soil nitrous oxide emissions from agricultural management are a key source of greenhouse gas emissions in many countries due to the widespread use of nitrogen fertilizers, manure amendments from livestock production, planting legumes and other practices that affect N dynamics in soils. In the United States, soil nitrous oxide emissions have ranged from 250 to 280 Tg CO2 equivalent from 1990 to 2015, with uncertainties around 20-30 percent. A Tier 3 method has been used to estimate the emissions with the DayCent ecosystem model. While the Tier 3 approach is considerably more accurate than IPCC Tier 1 methods, there is still the possibility of biases in emission estimates if there are processes and drivers that are not represented in the modeling framework. Furthermore, a key principle of IPCC guidance is that inventory compilers estimate emissions as accurately as possible. Freeze-thaw cycles and associated hot moments of nitrous oxide emissions are one of the key drivers influencing emissions in colder climates, such as the cold temperate climates of the upper Midwest and New England regions of the United States. Freeze-thaw activity interacts with management practices that increase N availability in the plant-soil system, leading to greater nitrous oxide emissions during transition periods from winter to spring. Given the importance of this driver, the DayCent model has been revised to incorporate freeze-thaw cycles, and the results suggest that including this driver can significantly modify the emission estimates in cold temperate climate regions. Consequently, future methodological development to improve estimation of nitrous oxide emissions from soils would benefit from incorporating freeze-thaw cycles into the modeling framework for national territories with a cold climate.
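
    For contrast with the Tier 3 DayCent approach, the sketch below works through a Tier 1 style calculation of direct soil N2O emissions from applied nitrogen. The 1% default emission factor and the global warming potential of 298 are assumed default values used only for illustration, not the inventory's actual parameters.

      # Minimal sketch of a Tier 1 style direct soil N2O calculation; the emission
      # factor and GWP below are assumed defaults for illustration.
      def tier1_direct_n2o_co2e(n_applied_kg: float,
                                ef1: float = 0.01,
                                gwp_n2o: float = 298.0) -> float:
          """Direct soil N2O emissions in kg CO2-equivalent."""
          n2o_n = n_applied_kg * ef1      # kg N2O-N emitted (1% of applied N)
          n2o = n2o_n * 44.0 / 28.0       # convert N2O-N to N2O mass
          return n2o * gwp_n2o            # convert to CO2-equivalent

      # Example: 100 t of fertilizer N -> roughly 468 t CO2-eq
      print(tier1_direct_n2o_co2e(100_000) / 1000)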

  3. Which Tier? Effects of Linear Assessment and Student Characteristics on GCSE Entry Decisions

    Science.gov (United States)

    Vitello, Sylvia; Crawford, Cara

    2018-01-01

    In England, students obtain General Certificate of Secondary Education (GCSE) qualifications, typically at age 16. Certain GCSEs are tiered; students take either higher-level (higher tier) or lower-level (foundation tier) exams, which may have different educational, career and psychological consequences. In particular, foundation tier entry, if…

  4. Improving CMS data transfers among its distributed computing facilities

    International Nuclear Information System (INIS)

    Flix, J; Magini, N; Sartirana, A

    2011-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on their usage, customizing the topologies and improving their setup in order to keep CMS transferring data at the desired levels in a reliable and robust way.

  5. 9 CFR 3.117 - Terminal facilities.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Terminal facilities. 3.117 Section 3... Marine Mammals Transportation Standards § 3.117 Terminal facilities. Carriers and intermediate handlers... facility of any carrier or intermediate handler where marine mammal shipments are maintained must be...

  6. Tenet: An Architecture for Tiered Embedded Networks

    OpenAIRE

    Ramesh Govindan; Eddie Kohler; Deborah Estrin; Fang Bian; Krishna Chintalapudi; Om Gnawali; Sumit Rangwala; Ramakrishna Gummadi; Thanos Stathopoulos

    2005-01-01

    Future large-scale sensor network deployments will be tiered, with the motes providing dense sensing and a higher tier of 32-bit master nodes with more powerful radios providing increased overall network capacity. In this paper, we describe a functional architecture for wireless sensor networks that leverages this structure to simplify the overall system. Our Tenet architecture has the nice property that the mote-layer software is generic and reusable, and all application functionality reside...

  7. The Safety and Tritium Applied Research (STAR) Facility: Status-2004

    International Nuclear Information System (INIS)

    Anderl, R.A.; Longhurst, G.R.; Pawelko, R.J.; Sharpe, J.P.; Schuetz, S.T.; Petti, D.A.

    2005-01-01

    The Safety and Tritium Applied Research (STAR) Facility, a US DOE National User Facility at the Idaho National Engineering and Environmental Laboratory (INEEL), comprises capabilities and infrastructure to support both tritium and non-tritium research activities important to the development of safe and environmentally friendly fusion energy. Research thrusts include (1) interactions of tritium and deuterium with plasma-facing-component (PFC) materials, (2) fusion safety issues [PFC material chemical reactivity and dust/debris generation, activation product mobilization, tritium behavior in fusion systems], and (3) molten salts and fusion liquids for tritium breeder and coolant applications. This paper updates the status of STAR and the capabilities for ongoing research activities, with an emphasis on the development, testing and integration of the infrastructure to support tritium research activities. Key elements of this infrastructure include a tritium storage and assay system, a tritium cleanup system to process glovebox and experiment tritiated effluent gases, and facility tritium monitoring systems

  8. The Impacts of Thawing Permafrost and Climate Change on USAF Infrastructure Within Northern Tier Bases

    Science.gov (United States)

    Graboski, A. J.

    2016-12-01

    The Department of Defense (DoD) is planning over $600M in military construction on Eielson Air Force Base (AFB) within the next three fiscal years. Although many studies have been conducted on permafrost and climate change, the future of our climate, as well as any impacts on arctic infrastructure, remains unclear. This research focused on future climate predictions to determine likely scenarios for the United States Air Force's strategic planners to consider. This research also examined various construction methods being used by industry to glean best practices to incorporate into future construction and to determine cost factors to consider when permafrost soils may be encountered. The most recent (2013) Intergovernmental Panel on Climate Change (IPCC) report predicts a 2.2ºC to 7.8ºC temperature rise in Arctic regions by the end of the 21st Century under the Representative Concentration Pathway 4.5 (RCP4.5) emissions scenario. A regression model was created using archived surface observations from 1944 to 2016. Initial analysis using regression/forecast techniques shows a 1.17ºC temperature increase in the Arctic by the end of the 21st Century. Historical DoD construction data was then used to determine an appropriate cost factor. Applying statistical tests to the adjusted climate predictions supports continued usage of the current DoD cost factors of 2.13 at Eielson and 2.97 at Thule AFBs, as they should be sufficient when planning future construction projects in permafrost-rich areas. These cost factors should allow planners the necessary funds to plan foundation mitigation techniques and prevent further degradation of permafrost soils around airbase infrastructure. The current research focused on Central Alaska; further research is recommended on the Alaskan North Slope and in Greenland to determine climate change impacts on critical DoD infrastructure.
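
    The regression/forecast step can be illustrated with a short sketch that fits a linear trend to annual mean temperatures and extrapolates it to 2100. The synthetic series below is an assumption standing in for the archived 1944-2016 surface observations.

      # Minimal sketch of fitting a linear temperature trend and extrapolating it;
      # the synthetic data below stand in for real station observations.
      import numpy as np

      rng = np.random.default_rng(0)
      years = np.arange(1944, 2017)
      temps = -2.5 + 0.015 * (years - 1944) + rng.normal(0, 0.6, years.size)

      slope, intercept = np.polyfit(years, temps, 1)   # degrees C per year
      print(f"trend: {slope * 100:.2f} C/century, "
            f"projected change 2016-2100: {slope * (2100 - 2016):.2f} C")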

  9. Geographic Hotspots of Critical National Infrastructure.

    Science.gov (United States)

    Thacker, Scott; Barr, Stuart; Pant, Raghav; Hall, Jim W; Alderson, David

    2017-12-01

    Failure of critical national infrastructures can result in major disruptions to society and the economy. Understanding the criticality of individual assets and the geographic areas in which they are located is essential for targeting investments to reduce risks and enhance system resilience. Within this study we provide new insights into the criticality of real-life critical infrastructure networks by integrating high-resolution data on infrastructure location, connectivity, interdependence, and usage. We propose a metric of infrastructure criticality in terms of the number of users who may be directly or indirectly disrupted by the failure of physically interdependent infrastructures. Kernel density estimation is used to integrate spatially discrete criticality values associated with individual infrastructure assets, producing a continuous surface from which statistically significant infrastructure criticality hotspots are identified. We develop a comprehensive and unique national-scale demonstration for England and Wales that utilizes previously unavailable data from the energy, transport, water, waste, and digital communications sectors. The testing of 200,000 failure scenarios identifies that hotspots are typically located around the periphery of urban areas where there are large facilities upon which many users depend or where several critical infrastructures are concentrated in one location. © 2017 Society for Risk Analysis.
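
    The kernel density estimation step can be sketched as follows: discrete asset locations are weighted by their criticality scores and smoothed into a continuous surface whose peaks are candidate hotspots. The coordinates and criticality values below are synthetic assumptions, not the England and Wales data used in the study.

      # Minimal sketch of turning discrete asset-level criticality scores into a
      # continuous surface with weighted kernel density estimation.
      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(1)
      xy = rng.uniform(0, 100, size=(2, 200))                   # asset locations (x, y)
      criticality = rng.lognormal(mean=8, sigma=1, size=200)    # users disrupted per asset

      # weight each asset by its criticality so clusters of critical assets form hotspots
      kde = gaussian_kde(xy, weights=criticality)

      gx, gy = np.mgrid[0:100:200j, 0:100:200j]
      surface = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
      print("peak density cell:", np.unravel_index(surface.argmax(), surface.shape))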

  10. The architecture and operation of the CMS Tier-0

    International Nuclear Information System (INIS)

    Hufnagel, Dirk

    2011-01-01

    The Tier-0 processing system is the initial stage of the multi-tiered computing system of CMS. It takes care of the first processing steps of data at the LHC at CERN. The automated workflows running in the Tier-0 contain both low-latency processing chains for time-critical applications and bulk chains to archive the recorded data offsite the host laboratory. It is a mix between an online and offline system, because the data the CMS DAQ writes out initially is of a temporary nature. Most of the complexity in the design of this system comes from this unique combination of online and offline use cases and dependencies. In this talk, we want to present the software design of the CMS Tier-0 system and present an analysis of the 24/7 operation of the system in the 2009/2010 data taking periods.

  11. 25 CFR 542.20 - What is a Tier A gaming operation?

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false What is a Tier A gaming operation? 542.20 Section 542.20 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.20 What is a Tier A gaming operation? A Tier A gaming operation is one with annual...

  12. 25 CFR 542.30 - What is a Tier B gaming operation?

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false What is a Tier B gaming operation? 542.30 Section 542.30 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.30 What is a Tier B gaming operation? A Tier B gaming operation is one with gross...

  13. 25 CFR 542.40 - What is a Tier C gaming operation?

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false What is a Tier C gaming operation? 542.40 Section 542.40 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.40 What is a Tier C gaming operation? A Tier C gaming operation is one with annual...

  14. 9 CFR 3.91 - Terminal facilities.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Terminal facilities. 3.91 Section 3.91... Primates 2 Transportation Standards § 3.91 Terminal facilities. (a) Placement. Any persons subject to the... with inanimate cargo or with other animals in animal holding areas of terminal facilities. Nonhuman...

  15. 9 CFR 3.18 - Terminal facilities.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Terminal facilities. 3.18 Section 3.18... Cats 1 Transportation Standards § 3.18 Terminal facilities. (a) Placement. Any person subject to the... inanimate cargo in animal holding areas of terminal facilities. (b) Cleaning, sanitization, and pest control...

  16. Cleavage-Independent HIV-1 Trimers From CHO Cell Lines Elicit Robust Autologous Tier 2 Neutralizing Antibodies

    Directory of Open Access Journals (Sweden)

    Shridhar Bale

    2018-05-01

    Full Text Available Native flexibly linked (NFL) HIV-1 envelope glycoprotein (Env) trimers are cleavage-independent and display a native-like, well-folded conformation that preferentially displays broadly neutralizing determinants. The NFL platform simplifies large-scale production of Env by eliminating the need to co-transfect the precursor-cleaving protease, furin, that is required by the cleavage-dependent SOSIP trimers. Here, we report the development of a CHO-M cell line that expressed BG505 NFL trimers at a high level of homogeneity and yields of ~1.8 g/l. BG505 NFL trimers purified by single-step lectin-affinity chromatography displayed a native-like closed structure, efficient recognition by trimer-preferring bNAbs, no recognition by non-neutralizing CD4 binding site-directed and V3-directed antibodies, long-term stability, and proper N-glycan processing. Following negative-selection, formulation in ISCOMATRIX adjuvant and inoculation into rabbits, the trimers rapidly elicited potent autologous tier 2 neutralizing antibodies. These antibodies targeted the N-glycan “hole” naturally present on the BG505 Env proximal to residues at positions 230, 241, and 289. The BG505 NFL trimers, which did not expose V3 in vitro, elicited low-to-no tier 1 virus neutralization in vivo, indicating that they remained intact during the immunization process, not exposing V3. In addition, BG505 NFL and BG505 SOSIP trimers expressed from 293F cells, when formulated in Adjuplex adjuvant, elicited equivalent BG505 tier 2 autologous neutralizing titers. These titers were lower in potency when compared to the titers elicited by CHO-M cell derived trimers. In addition, increased neutralization of tier 1 viruses was detected. Taken together, these data indicate that both adjuvant and cell-type expression can affect the elicitation of tier 2 and tier 1 neutralizing responses in vivo.

  17. 9 CFR 3.40 - Terminal facilities.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Terminal facilities. 3.40 Section 3.40... Pigs and Hamsters Transportation Standards § 3.40 Terminal facilities. No person subject to the Animal... animal holding areas of a terminal facility where shipments of live guinea pigs or hamsters are...

  18. 9 CFR 3.65 - Terminal facilities.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Terminal facilities. 3.65 Section 3.65... Transportation Standards § 3.65 Terminal facilities. No person subject to the Animal Welfare regulations shall commingle shipments of live rabbits with inanimate cargo. All animal holding areas of a terminal facility...

  19. The EPOS e-Infrastructure

    Science.gov (United States)

    Jeffery, Keith; Bailo, Daniele

    2014-05-01

    The European Plate Observing System (EPOS) is integrating geoscientific information concerning earth movements in Europe. We are approaching the end of the PP (Preparatory Project) phase and in October 2014 expect to continue with the full project within ESFRI (European Strategic Framework for Research Infrastructures). The key aspects of EPOS concern providing services to allow homogeneous access by end-users over heterogeneous data, software, facilities, equipment and services. The e-infrastructure of EPOS is the heart of the project since it integrates the work on organisational, legal, economic and scientific aspects. Following the creation of an inventory of relevant organisations, persons, facilities, equipment, services, datasets and software (RIDE), the scale of integration required became apparent. The EPOS e-infrastructure architecture has been developed systematically based on recorded primary (user) requirements and secondary (interoperation with other systems) requirements through Strawman, Woodman and Ironman phases with the specification - and developed confirmatory prototypes - becoming more precise and progressively moving from paper to implemented system. The EPOS architecture is based on global core services (Integrated Core Services - ICS) which access thematic nodes (domain-specific Europe-wide collections, called Thematic Core Services - TCS), national nodes and specific institutional nodes. The key aspect is the metadata catalog. In one dimension this is described in 3 levels: (1) discovery metadata using well-known and commonly used standards such as DC (Dublin Core) to enable users (via an intelligent user interface) to search for objects within the EPOS environment relevant to their needs; (2) contextual metadata providing the context of the object described in the catalog to enable a user or the system to determine the relevance of the discovered object(s) to their requirement - the context includes projects, funding, organisations
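
    A level-1 discovery record of the kind described above can be sketched as a small set of Dublin Core elements. The values below are invented placeholders used only to illustrate the element names, not real EPOS catalog entries.

      # Minimal sketch of a discovery-level metadata record using Dublin Core
      # element names; the field values are invented placeholders.
      dc_record = {
          "dc:title":       "Broadband seismic waveform archive (example)",
          "dc:creator":     "Example Seismological Institute",
          "dc:subject":     "seismology; waveforms; Europe",
          "dc:description": "Continuous waveform data from a hypothetical network.",
          "dc:date":        "2014-01-01",
          "dc:type":        "Dataset",
          "dc:format":      "miniSEED",
          "dc:identifier":  "doi:10.0000/example",
          "dc:rights":      "CC-BY",
      }

      # a discovery search is then a simple match over these elements
      hits = [r for r in [dc_record] if "seismology" in r["dc:subject"]]
      print(len(hits))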

  20. Three-tiered integration of PACS and HIS toward next generation total hospital information system.

    Science.gov (United States)

    Kim, J H; Lee, D H; Choi, J W; Cho, H I; Kang, H S; Yeon, K M; Han, M C

    1998-01-01

    The Seoul National University Hospital (SNUH) started a project to innovate the hospital information facilities. This project includes installation of a high-speed hospital network and development of a new HIS, OCS (order communication system), RIS and PACS. The project aims at the implementation of the first total hospital information system by seamlessly integrating these systems. To achieve this goal, we took a three-tiered systems integration approach: network-level, database-level, and workstation-level integration. There are 3 loops of networks in SNUH: a proprietary star network for the host-computer-based HIS, an Ethernet-based hospital LAN for OCS and RIS, and an ATM-based network for PACS. They are linked together at the backbone level to allow high-speed communication between these systems. We have developed special communication modules for each system that allow data interchange between different databases and computer platforms. We have also developed an integrated workstation in which both the OCS and PACS application programs run on a single computer in an integrated manner, allowing the clinical users to access and display radiological images as well as textual clinical information within a single user environment. A study is in progress toward a total hospital information system in SNUH by seamlessly integrating the main hospital information resources such as HIS, OCS, and PACS. With the three-tiered systems integration approach, we could successfully integrate the systems from the network level to the user application level.

  1. A method for the efficient prioritization of infrastructure renewal projects

    NARCIS (Netherlands)

    Karydas, D.M.; Gifun, Joe

    2006-01-01

    The infrastructure renewal program at MIT consists of a large number of projects with an estimated budget that could approach $1 billion. Infrastructure renewal at the Massachusetts Institute of Technology (MIT) is the process of evaluating and investing in the maintenance of facility systems and

  2. A Three Tier Architecture Applied to LiDAR Processing and Monitoring

    Directory of Open Access Journals (Sweden)

    Efrat Jaeger-Frank

    2006-01-01

    Emerging Grid technologies enable solving scientific problems that involve large datasets and complex analyses, which in the past were often considered difficult to solve. Coordinating distributed Grid resources and computational processes requires adaptable interfaces and tools that provide modularized and configurable environments for accessing Grid clusters and executing high-performance computational tasks. Computationally intensive processes are also subject to a high risk of component failures and thus require close monitoring. In this paper we describe a scientific workflow approach to coordinate various resources via data analysis pipelines. We present a three-tier architecture for LiDAR interpolation and analysis, the high-performance processing of point-intensive datasets, utilizing a portal, a scientific workflow engine and Grid technologies. Our proposed solution is available to the community in a unified framework through a shared cyberinfrastructure, the GEON portal, enabling scientists to focus on their scientific work and not be concerned with the implementation of the underlying infrastructure.
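
    As a purely illustrative sketch of the three-tier pattern described above (portal request, workflow engine, compute step), the toy pipeline below chains a gridding stub in Python. The gridding function is a stand-in for the real interpolation codes run on Grid resources by the GEON system; all names and values are invented.

```python
# Toy sketch of the three-tier idea: a "portal" request, a workflow that
# chains processing steps, and a "compute" step doing the actual work.
# The gridding routine is only a stand-in for real LiDAR interpolation.

from collections import defaultdict
from statistics import mean

def grid_points(points, cell=1.0):
    """Average LiDAR point elevations onto a regular grid (stand-in for interpolation)."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell), int(y // cell))].append(z)
    return {key: mean(zs) for key, zs in cells.items()}

def workflow(request):
    """Middle tier: run the steps named in the portal request in order."""
    steps = {"grid": lambda data: grid_points(data, request.get("cell", 1.0))}
    data = request["points"]
    for step in request["steps"]:
        data = steps[step](data)
    return data

if __name__ == "__main__":
    portal_request = {"steps": ["grid"], "cell": 2.0,
                      "points": [(0.2, 0.3, 10.0), (0.8, 1.9, 12.0), (2.5, 0.1, 9.5)]}
    print(workflow(portal_request))
```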

  3. A Tiered Model for Linking Students to the Community

    Science.gov (United States)

    Meyer, Laura Landry; Gerard, Jean M.; Sturm, Michael R.; Wooldridge, Deborah G.

    2016-01-01

    A tiered practice model (introductory, pre-internship, and internship) embedded in the curriculum facilitates community engagement and creates relevance for students as they pursue a professional identity in Human Development and Family Studies. The tiered model integrates high-impact teaching practices (HIP) and student engagement pedagogies…

  4. TMX, a new facility

    International Nuclear Information System (INIS)

    Thomas, S.R. Jr.

    1977-01-01

    As a mirror fusion facility, the Tandem Mirror Experiment (TMX) at the Lawrence Livermore Laboratory (LLL) is both new and different. It utilizes over 23,000 ft² of work area in three buildings and consumes over 14 kWh of energy with each shot. As a systems design, the facility is broken into discrete functional regions. Among them are a mechanical vacuum pumping system, a liquid-nitrogen system, neutral-beam and magnet power supplies, tiered structures to support these supplies, a neutron-shielded vacuum vessel, a control area, and a diagnostics area. Constraints of space, time, and cost have all affected the design

  5. Site infrastructure as required during the construction and erection of nuclear power plants

    International Nuclear Information System (INIS)

    Haas, K.F.; Wagner, H.

    1978-01-01

    In general, in an exchange of experience on constructing nuclear power plants, priority is given to design and lay-out, financing, quality assurance etc., but in this paper an attempt has been made to describe the range and type of site infrastructure required during construction and erection. Site infrastructure will make considerable demands on planning, the supply of material and maintenance, demands that may result from the frequently very isolated location of power plant sites. Examples of specific values and experiences are given for a nuclear power plant with two units of the 1300-MW type at present under construction on the Persian Gulf in Iran. Data concerning the site infrastructure, including examples, are given and explained on the basis of graphs. The site infrastructure is split up into a technical and a social infrastructure. The main concern of the technical site infrastructure is the timely provision and continuous availability of electric energy, water, communication grids, workshops, warehouses, offices, transport and handling facilities, as well as the provision of heavy-load roads, harbour facilities, etc. The social site infrastructure in general comprises accommodation, food supplies and the care and welfare of all site personnel, which includes a hospital, school, self-service shop, and sport and recreation facilities. (author)

  6. The legal imperative to protect critical energy infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Shore, J.J.M.

    2008-03-15

    Canada's critical infrastructure is comprised of energy facilities, communications centres, finance, health care, food, government and transportation sectors. All sectors face a range of physical or cyber threats from terrorism and natural phenomena. Failures or disruptions in the sectors can cascade through other systems and disrupt essential services. The power outage in 2003 demonstrated gaps in North America's emergency preparedness. In 2006, al-Qaida called for terrorist attacks on North American oil fields and pipelines, specifically targeting Canada. Studies have confirmed that Canada is vulnerable to attacks on energy infrastructure. Government agencies and the private sector must work to ensure the safety of Canada's energy infrastructure, as the primary responsibility of government is the protection of its citizenry. The fulfilment of the government's commitment to national security cannot be achieved without protecting Canada's critical energy infrastructure. However, Canada has not yet provided a framework linking the federal government with critical infrastructures, despite the fact that a draft strategy has been under development for several years. It was concluded that governments and the private sector should work together to reduce risks, protect the public, and secure the economy. National security litigation against the government and legal imperatives for energy facility owners and operators were also reviewed. 98 refs., 20 figs.

  7. An optimization of the ALICE XRootD storage cluster at the Tier-2 site in Czech Republic

    International Nuclear Information System (INIS)

    Adamova, D; Horky, J

    2012-01-01

    ALICE, as well as the other experiments at the CERN LHC, has been building a distributed data management infrastructure since 2002. Experience gained during years of operations with different types of storage managers deployed over this infrastructure has shown that the most adequate storage solution for ALICE is the native XRootD manager developed within a CERN-SLAC collaboration. The XRootD storage clusters exhibit higher stability and availability in comparison with other storage solutions and demonstrate a number of other advantages, such as support of high-speed WAN data access and no need to maintain complex databases. Two of the operational characteristics of XRootD data servers are a relatively high number of open sockets and a high Unix load. In this article, we describe our experience with the tuning/optimization of machines hosting the XRootD servers, which are part of the ALICE storage cluster at the Tier-2 WLCG site in Prague, Czech Republic. The optimization procedure, in addition to boosting the read/write performance of the servers, also resulted in a reduction of the Unix load.
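
    The abstract mentions two symptoms observed on the XRootD data servers: many open sockets and a high Unix load. The snippet below is a generic Linux probe of those two quantities only; it is not the tuning procedure applied at the Prague Tier-2 and it assumes a Linux /proc filesystem.

```python
# Generic Linux probe for the two symptoms mentioned in the abstract:
# open TCP sockets and Unix load average. Illustrative only; this is not
# the optimization applied at the Prague Tier-2 site.

import os

def open_tcp_sockets():
    """Count entries in /proc/net/tcp and /proc/net/tcp6 (Linux only)."""
    count = 0
    for path in ("/proc/net/tcp", "/proc/net/tcp6"):
        try:
            with open(path) as f:
                count += max(0, sum(1 for _ in f) - 1)  # skip the header line
        except FileNotFoundError:
            pass
    return count

if __name__ == "__main__":
    load1, load5, load15 = os.getloadavg()
    print(f"open TCP sockets: {open_tcp_sockets()}")
    print(f"load average (1/5/15 min): {load1:.2f} {load5:.2f} {load15:.2f}")
```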

  8. The European Carbon dioxide Capture and Storage Laboratory Infrastructure (ECCSEL

    Directory of Open Access Journals (Sweden)

    Sverre Quale

    2016-10-01

    The transition to a non-emitting energy mix for power generation will take decades. This transition will need to be sustainable, e.g. economically affordable. Fossil fuels, which are abundant, have an important role to play in this respect, provided that Carbon Capture and Storage (CCS) is progressively implemented. CCS is the only way to reduce emissions from energy-intensive industries. Thus, the need for upgraded and new CCS research facilities is widely recognised among stakeholders across Europe, as emphasised by the Zero Emissions Platform (ZEP) [1] and the European Energy Research Alliance on CCS (EERA-CCS) [2]. The European Carbon Dioxide Capture and Storage Laboratory Infrastructure, ECCSEL, provides funders, operators and researchers with significant benefits by offering access to world-class research facilities that, in many cases, are unlikely for a single nation to support in isolation. This implies the creation of synergy and the avoidance of duplication, as well as the streamlining of funding for research facilities. ECCSEL offers open access to its advanced laboratories for talented scientists and visiting researchers to conduct cutting-edge research. In the planning of ECCSEL, gap analyses were performed and CCS technologies have been reviewed to underpin and envisage the future experimental setup: (1) making use of readily available facilities, (2) modifying existing facilities, and (3) planning and building entirely new advanced facilities. The investments required for the first ten years (2015–2025) are expected to be in the range of €80–120 million. These investments show the current level of ambition, as proposed during the preparatory phase (2011–2014). Entering the implementation phase in 2015, 9 European countries signed a Letter of Intent (LoI) to join an ECCSEL legal entity: France, United Kingdom, Netherlands, Italy, Spain, Poland, Greece, Norway and Switzerland (active observer). As the EU ERIC-regulation [3] would offer the most

  9. Use of Self-Monitoring to Maintain Program Fidelity of Multi-Tiered Interventions

    Science.gov (United States)

    Nelson, J. Ron; Oliver, Regina M.; Hebert, Michael A.; Bohaty, Janet

    2015-01-01

    Multi-tiered system of supports represents one of the most significant advancements in improving the outcomes of students for whom typical instruction is not effective. While many practices need to be in place to make multi-tiered systems of support effective, accurate implementation of evidence-based practices by individuals at all tiers is…

  10. Fuel Handling Facility Description Document

    International Nuclear Information System (INIS)

    M.A. LaFountain

    2005-01-01

    The purpose of the facility description document (FDD) is to establish the requirements and their bases that drive the design of the Fuel Handling Facility (FHF) to allow the design effort to proceed to license application. This FDD is a living document that will be revised at strategic points as the design matures. It identifies the requirements and describes the facility design as it currently exists, with emphasis on design attributes provided to meet the requirements. This FDD was developed as an engineering tool for design control. Accordingly, the primary audience and users are design engineers. It leads the design process with regard to the flow down of upper tier requirements onto the facility. Knowledge of these requirements is essential to performing the design process. It trails the design with regard to the description of the facility. This description is a reflection of the results of the design process to date

  11. National Ignition Facility system design requirements conventional facilities SDR001

    International Nuclear Information System (INIS)

    Hands, J.

    1996-01-01

    This System Design Requirements (SDR) document specifies the functions to be performed and the minimum design requirements for the National Ignition Facility (NIF) site infrastructure and conventional facilities. These consist of the physical site and buildings necessary to house the laser, target chamber, target preparation areas, optics support and ancillary functions

  12. WHALE, a management tool for Tier-2 LCG sites

    Science.gov (United States)

    Barone, L. M.; Organtini, G.; Talamo, I. G.

    2012-12-01

    The LCG (Worldwide LHC Computing Grid) is a grid-based, hierarchically distributed computing facility, composed of more than 140 computing centers, organized in 4 tiers by size and offer of services. Every site, although independent in many technical choices, has to provide services with a well-defined set of interfaces. For this reason, different LCG sites frequently need to manage very similar situations, like job behaviour on the batch system, dataset transfers between sites, operating system and experiment software installation and configuration, and monitoring of services. In this context we created WHALE (WHALE Handles Administration in an LCG Environment), a software tool actually used at the T2_IT_Rome site, an LCG Tier-2 for the CMS experiment. WHALE is a generic, site-independent tool written in Python: it allows administrators to interact in a uniform and coherent way with several subsystems using a high-level syntax which hides specific commands. The architecture of WHALE is based on the plugin concept and on the possibility of connecting the output of a plugin to the input of the next one, in a pipe-like system, giving the administrator the possibility of building complex functions by combining simpler ones. The core of WHALE just handles the plugin orchestration, while even the basic functions (e.g. the WHALE activity logging) are performed by plugins, giving the capability to tune and possibly modify every component of the system. WHALE already provides many plugins useful for an LCG site and some more for a Tier-2 of the CMS experiment, especially in the field of job management, dataset transfer and analysis of performance results and availability tests (e.g. Nagios tests, SAM tests). Thanks to its architecture and the provided plugins, WHALE makes it easy to perform tasks that, even if logically simple, are technically complex or tedious, like closing all the worker nodes with a job-failure rate greater than a given threshold. Finally, thanks to the
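
    WHALE's source is not reproduced here; the sketch below only illustrates the pipe-like plugin composition the abstract describes, using the worker-node example it mentions. Plugin names, data and the failure-rate threshold are all invented.

```python
# Minimal sketch of a pipe-like plugin chain in the spirit of the WHALE
# architecture described above; plugin names and data are invented.

def list_nodes(_):
    """Pretend to query the batch system for worker nodes and their job-failure rates."""
    return [{"node": "wn01", "failure_rate": 0.02},
            {"node": "wn02", "failure_rate": 0.35},
            {"node": "wn03", "failure_rate": 0.60}]

def filter_failing(nodes, threshold=0.3):
    """Keep only nodes whose failure rate exceeds the (invented) threshold."""
    return [n for n in nodes if n["failure_rate"] > threshold]

def close_nodes(nodes):
    """Stand-in action: at a real site this would drain or close the nodes."""
    return [f"closed {n['node']}" for n in nodes]

def run_pipeline(plugins, data=None):
    """Feed the output of each plugin into the input of the next, pipe-style."""
    for plugin in plugins:
        data = plugin(data)
    return data

if __name__ == "__main__":
    print(run_pipeline([list_nodes, filter_failing, close_nodes]))
```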

  13. Grid Computing at GSI for ALICE and FAIR - present and future

    International Nuclear Information System (INIS)

    Schwarz, Kilian; Uhlig, Florian; Karabowicz, Radoslaw; Montiel-Gonzalez, Almudena; Zynovyev, Mykhaylo; Preuss, Carsten

    2012-01-01

    The future FAIR experiments CBM and PANDA have computing requirements that fall in a category that could currently not be satisfied by one single computing centre. One needs a larger, distributed computing infrastructure to cope with the amount of data to be simulated and analysed. Since 2002, GSI has operated a Tier-2 centre for ALICE-CERN. The central component of the GSI computing facility, and hence the core of the ALICE Tier-2 centre, is an LSF/SGE batch farm, currently split into three subclusters with a total of 15000 CPU cores shared by the participating experiments, and accessible both locally and soon also completely via Grid. In terms of data storage, a 5.5 PB Lustre file system, directly accessible from all worker nodes, is maintained, as well as a 300 TB xrootd-based Grid storage element. Based on this existing expertise, and utilising ALICE's middleware ‘AliEn’, the Grid infrastructure for PANDA and CBM is being built. Besides a Tier-0 centre at GSI, the computing Grids of the two FAIR collaborations now encompass more than 17 sites in 11 countries and are constantly expanding. The operation of the distributed FAIR computing infrastructure benefits significantly from the experience gained with the ALICE Tier-2 centre. A close collaboration between ALICE Offline and FAIR provides mutual advantages. The employment of a common Grid middleware as well as compatible simulation and analysis software frameworks ensures significant synergy effects.

  14. Modernization of the USGS Hawaiian Volcano Observatory Seismic Processing Infrastructure

    Science.gov (United States)

    Antolik, L.; Shiro, B.; Friberg, P. A.

    2016-12-01

    The USGS Hawaiian Volcano Observatory (HVO) operates a Tier 1 Advanced National Seismic System (ANSS) seismic network to monitor, characterize, and report on volcanic and earthquake activity in the State of Hawaii. Upgrades at the observatory since 2009 have improved the digital telemetry network, computing resources, and seismic data processing with the adoption of the ANSS Quake Management System (AQMS). HVO aims to build on these efforts by further modernizing its seismic processing infrastructure and strengthening its ability to meet ANSS performance standards. Most notably, this will also allow HVO to support redundant systems, both onsite and offsite, in order to provide better continuity of operation during intermittent power and network outages. We are in the process of implementing a number of upgrades and improvements to HVO's seismic processing infrastructure, including: 1) Virtualization of AQMS physical servers; 2) Migration of server operating systems from Solaris to Linux; 3) Consolidation of AQMS real-time and post-processing services to a single server; 4) Upgrading the database from Oracle 10 to Oracle 12; and 5) Upgrading to the latest Earthworm and AQMS software. These improvements will make server administration more efficient, minimize the hardware resources required by AQMS, simplify the Oracle replication setup, and provide better integration with HVO's existing state-of-health monitoring tools and backup system. Ultimately, they will provide HVO with the latest and most secure software available while making the software easier to deploy and support.

  15. Development and utilization of USGS ShakeCast for rapid post-earthquake assessment of critical facilities and infrastructure

    Science.gov (United States)

    Wald, David J.; Lin, Kuo-wan; Kircher, C.A.; Jaiswal, Kishor; Luco, Nicolas; Turner, L.; Slosky, Daniel

    2017-01-01

    The ShakeCast system is an openly available, near real-time post-earthquake information management system. ShakeCast is widely used by public and private emergency planners and responders, lifeline utility operators and transportation engineers to automatically receive and process ShakeMap products for situational awareness, inspection priority, or damage assessment of their own infrastructure or building portfolios. The success of ShakeCast to date and its broad, critical user base mandate improved software usability and functionality, including improved engineering-based damage and loss functions. In order to make the software more accessible to novice users, while still utilizing advanced users' technical and engineering background, we have developed a "ShakeCast Workbook", a well-documented, Excel spreadsheet-based user interface that allows users to input notification and inventory data and export the XML files requisite for operating the ShakeCast system. Users will be able to select structures based on a minimum set of user-specified facility characteristics (building location, size, height, use, construction age, etc.). "Expert" users will be able to import user-modified structural response properties into the facility inventory associated with the HAZUS Advanced Engineering Building Modules (AEBM). The goal of the ShakeCast system is to provide simplified real-time potential impact and inspection metrics (i.e., green, yellow, orange and red priority ratings) to allow users to institute customized earthquake response protocols. Previously, fragilities were approximated using individual ShakeMap intensity measures (IMs, specifically PGA and 0.3 and 1s spectral accelerations) for each facility, but we are now performing capacity-spectrum damage state calculations using a more robust characterization of spectral demand. We are also developing methods for the direct import of ShakeMap's multi-period spectra in lieu of the assumed three-domain design spectrum (at 0.3s for

  16. WindScanner.eu - a new Remote Sensing Research Infrastructure for On- and Offshore Wind Energy

    DEFF Research Database (Denmark)

    Mikkelsen, Torben; Siggaard Knudsen, Søren; Sjöholm, Mikael

    2012-01-01

    A new remote-sensing-based research infrastructure for atmospheric boundary-layer wind and turbulence measurements, named WindScanner, has during the past three years been in its early phase of development at DTU Wind Energy in Denmark. During the forthcoming three years the technology will be disseminated throughout Europe to pilot European wind energy research centers. The new research infrastructure will become an open-source infrastructure that also invites collaboration with wind-energy-related atmospheric scientists and the wind energy industry overseas. Recent achievements with 3D WindScanners and spin-off innovation activity are described. The Danish WindScanner.dk research facility is built from new and fast-scanning remote sensing equipment spurred from achievements within fiber optics and telecommunication technologies. At the same time the wind energy society has demanded excessive 3D wind...

  17. Collaborative Multi-Scale 3d City and Infrastructure Modeling and Simulation

    Science.gov (United States)

    Breunig, M.; Borrmann, A.; Rank, E.; Hinz, S.; Kolbe, T.; Schilcher, M.; Mundani, R.-P.; Jubierre, J. R.; Flurl, M.; Thomsen, A.; Donaubauer, A.; Ji, Y.; Urban, S.; Laun, S.; Vilgertshofer, S.; Willenborg, B.; Menninghaus, M.; Steuer, H.; Wursthorn, S.; Leitloff, J.; Al-Doori, M.; Mazroobsemnani, N.

    2017-09-01

    Computer-aided collaborative and multi-scale 3D planning are challenges for complex railway and subway track infrastructure projects in the built environment. Many legal, economic, environmental, and structural requirements have to be taken into account. The stringent use of 3D models in the different phases of the planning process facilitates communication and collaboration between stakeholders such as civil engineers, geological engineers, and decision makers. This paper presents concepts, developments, and experiences gained by an interdisciplinary research group coming from civil engineering informatics and geo-informatics, banding together the skills of both the Building Information Modeling and the 3D GIS worlds. New approaches including the development of a collaborative platform and 3D multi-scale modelling are proposed for collaborative planning and simulation to improve the digital 3D planning of subway tracks and other infrastructures. Experiences during this research and lessons learned are presented as well as an outlook on future research focusing on Building Information Modeling and 3D GIS applications for cities of the future.

  18. COLLABORATIVE MULTI-SCALE 3D CITY AND INFRASTRUCTURE MODELING AND SIMULATION

    Directory of Open Access Journals (Sweden)

    M. Breunig

    2017-09-01

    Computer-aided collaborative and multi-scale 3D planning are challenges for complex railway and subway track infrastructure projects in the built environment. Many legal, economic, environmental, and structural requirements have to be taken into account. The stringent use of 3D models in the different phases of the planning process facilitates communication and collaboration between stakeholders such as civil engineers, geological engineers, and decision makers. This paper presents concepts, developments, and experiences gained by an interdisciplinary research group coming from civil engineering informatics and geo-informatics, banding together the skills of both the Building Information Modeling and the 3D GIS worlds. New approaches including the development of a collaborative platform and 3D multi-scale modelling are proposed for collaborative planning and simulation to improve the digital 3D planning of subway tracks and other infrastructures. Experiences during this research and lessons learned are presented as well as an outlook on future research focusing on Building Information Modeling and 3D GIS applications for cities of the future.

  19. Preprocessing in a Tiered Sensor Network for Habitat Monitoring

    Directory of Open Access Journals (Sweden)

    Hanbiao Wang

    2003-03-01

    We investigate task decomposition and collaboration in a two-tiered sensor network for habitat monitoring. The system recognizes and localizes a specified type of birdcalls. The system has a few powerful macronodes in the first tier, and many less powerful micronodes in the second tier. Each macronode combines data collected by multiple micronodes for target classification and localization. We describe two types of lightweight preprocessing which significantly reduce data transmission from micronodes to macronodes. Micronodes classify events according to their cross-zero rates and discard irrelevant events. Data about events of interest is reduced and compressed before being transmitted to macronodes for target localization. Preliminary experiments illustrate the effectiveness of event filtering and data reduction at micronodes.
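
    As an illustration of the micronode-side filtering idea, the sketch below classifies a signal window by its cross-zero (zero-crossing) rate and keeps only windows inside a band of interest. The band limits are invented and are not taken from the paper.

```python
# Sketch of micronode event filtering: compute the cross-zero (zero-crossing)
# rate of a sample window and discard windows outside a band of interest.
# The band limits below are invented for illustration.

def cross_zero_rate(samples):
    """Fraction of consecutive sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0))
    return crossings / max(1, len(samples) - 1)

def is_event_of_interest(samples, low=0.1, high=0.4):
    """Keep only windows whose zero-crossing rate falls inside the band."""
    return low <= cross_zero_rate(samples) <= high

if __name__ == "__main__":
    import math
    window = [math.sin(0.5 * n) for n in range(100)]   # slowly oscillating test signal
    print(cross_zero_rate(window), is_event_of_interest(window))
```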

  20. Cyber Threats to Nuclear Infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Robert S. Anderson; Paul Moskowitz; Mark Schanfein; Trond Bjornard; Curtis St. Michel

    2010-07-01

    Nuclear facility personnel expend considerable efforts to ensure that their facilities can maintain continuity of operations against both natural and man-made threats. Historically, most attention has been placed on physical security. Recently however, the threat of cyber-related attacks has become a recognized and growing world-wide concern. Much attention has focused on the vulnerability of the electric grid and chemical industries to cyber attacks, in part, because of their use of Supervisory Control and Data Acquisition (SCADA) systems. Lessons learned from work in these sectors indicate that the cyber threat may extend to other critical infrastructures including sites where nuclear and radiological materials are now stored. In this context, this white paper presents a hypothetical scenario by which a determined adversary launches a cyber attack that compromises the physical protection system and results in a reduced security posture at such a site. The compromised security posture might then be malevolently exploited in a variety of ways. The authors conclude that the cyber threat should be carefully considered for all nuclear infrastructures.

  1. Cyber Threats to Nuclear Infrastructures

    International Nuclear Information System (INIS)

    Anderson, Robert S.; Moskowitz, Paul; Schanfein, Mark; Bjornard, Trond; St. Michel, Curtis

    2010-01-01

    Nuclear facility personnel expend considerable efforts to ensure that their facilities can maintain continuity of operations against both natural and man-made threats. Historically, most attention has been placed on physical security. Recently however, the threat of cyber-related attacks has become a recognized and growing world-wide concern. Much attention has focused on the vulnerability of the electric grid and chemical industries to cyber attacks, in part, because of their use of Supervisory Control and Data Acquisition (SCADA) systems. Lessons learned from work in these sectors indicate that the cyber threat may extend to other critical infrastructures including sites where nuclear and radiological materials are now stored. In this context, this white paper presents a hypothetical scenario by which a determined adversary launches a cyber attack that compromises the physical protection system and results in a reduced security posture at such a site. The compromised security posture might then be malevolently exploited in a variety of ways. The authors conclude that the cyber threat should be carefully considered for all nuclear infrastructures.

  2. CANISTER HANDLING FACILITY DESCRIPTION DOCUMENT

    Energy Technology Data Exchange (ETDEWEB)

    J.F. Beesley

    2005-04-21

    The purpose of this facility description document (FDD) is to establish requirements and associated bases that drive the design of the Canister Handling Facility (CHF), which will allow the design effort to proceed to license application. This FDD will be revised at strategic points as the design matures. This FDD identifies the requirements and describes the facility design, as it currently exists, with emphasis on attributes of the design provided to meet the requirements. This FDD is an engineering tool for design control; accordingly, the primary audience and users are design engineers. This FDD is part of an iterative design process. It leads the design process with regard to the flowdown of upper tier requirements onto the facility. Knowledge of these requirements is essential in performing the design process. The FDD follows the design with regard to the description of the facility. The description provided in this FDD reflects the current results of the design process.

  3. CANISTER HANDLING FACILITY DESCRIPTION DOCUMENT

    International Nuclear Information System (INIS)

    Beesley, J.F.

    2005-01-01

    The purpose of this facility description document (FDD) is to establish requirements and associated bases that drive the design of the Canister Handling Facility (CHF), which will allow the design effort to proceed to license application. This FDD will be revised at strategic points as the design matures. This FDD identifies the requirements and describes the facility design, as it currently exists, with emphasis on attributes of the design provided to meet the requirements. This FDD is an engineering tool for design control; accordingly, the primary audience and users are design engineers. This FDD is part of an iterative design process. It leads the design process with regard to the flowdown of upper tier requirements onto the facility. Knowledge of these requirements is essential in performing the design process. The FDD follows the design with regard to the description of the facility. The description provided in this FDD reflects the current results of the design process

  4. Commissioning the CERN IT Agile Infrastructure with experiment workloads

    Science.gov (United States)

    Medrano Llamas, Ramón; Harald Barreiro Megino, Fernando; Kucharczyk, Katarzyna; Kamil Denis, Marek; Cinquilli, Mattia

    2014-06-01

    In order to ease the management of their infrastructure, most of the WLCG sites are adopting cloud-based strategies. CERN, the Tier 0 of the WLCG, is completely restructuring the resource and configuration management of its computing center under the codename Agile Infrastructure. Its goal is to manage 15,000 Virtual Machines by means of an OpenStack middleware in order to unify all the resources in CERN's two datacenters: the one placed in Meyrin and the new one in Wigner, Hungary. During the commissioning of this infrastructure, CERN IT is offering an attractive amount of computing resources to the experiments (800 cores for ATLAS and CMS) through a private cloud interface. ATLAS and CMS have joined forces to exploit them by running stress tests and simulation workloads since November 2012. This work will describe the experience of the first deployments of the current experiment workloads on the CERN private cloud testbed. The paper is organized as follows: the first section will explain the integration of the experiment workload management systems (WMS) with the cloud resources. The second section will revisit the performance and stress testing performed with HammerCloud in order to evaluate and compare the suitability for the experiment workloads. The third section will go deeper into the dynamic provisioning techniques, such as the use of the cloud APIs directly by the WMS. The paper finishes with a review of the conclusions and the challenges ahead.

  5. Commissioning the CERN IT Agile Infrastructure with experiment workloads

    International Nuclear Information System (INIS)

    Llamas, Ramón Medrano; Megino, Fernando Harald Barreiro; Cinquilli, Mattia; Kucharczyk, Katarzyna; Denis, Marek Kamil

    2014-01-01

    In order to ease the management of their infrastructure, most of the WLCG sites are adopting cloud-based strategies. CERN, the Tier 0 of the WLCG, is completely restructuring the resource and configuration management of its computing center under the codename Agile Infrastructure. Its goal is to manage 15,000 Virtual Machines by means of an OpenStack middleware in order to unify all the resources in CERN's two datacenters: the one placed in Meyrin and the new one in Wigner, Hungary. During the commissioning of this infrastructure, CERN IT is offering an attractive amount of computing resources to the experiments (800 cores for ATLAS and CMS) through a private cloud interface. ATLAS and CMS have joined forces to exploit them by running stress tests and simulation workloads since November 2012. This work will describe the experience of the first deployments of the current experiment workloads on the CERN private cloud testbed. The paper is organized as follows: the first section will explain the integration of the experiment workload management systems (WMS) with the cloud resources. The second section will revisit the performance and stress testing performed with HammerCloud in order to evaluate and compare the suitability for the experiment workloads. The third section will go deeper into the dynamic provisioning techniques, such as the use of the cloud APIs directly by the WMS. The paper finishes with a review of the conclusions and the challenges ahead.

  6. Sectoral Plan 'Deep Geological Disposal', Stage 2. Proposed site areas for the surface facilities of the deep geological repositories as well as for their access infrastructure. Annexes

    International Nuclear Information System (INIS)

    2011-12-01

    In line with the provisions of the nuclear energy legislation, the sites for deep geological disposal of Swiss radioactive waste are selected in a three-stage Sectoral Plan process (Sectoral Plan for Deep Geological Disposal). The disposal sites are specified in Stage 3 of the selection process with the granting of a general licence in accordance with the Nuclear Energy Act. The first stage of the process was completed on 30th November 2011, with the decision of the Federal Council to incorporate the six geological siting regions proposed by the National Cooperative for the Disposal of Radioactive Waste (NAGRA) into the Sectoral Plan for Deep Geological Disposal, for further evaluation in Stage 2. The decision also specifies the planning perimeters within which the surface facilities and shaft locations for the repositories will be constructed. In the second stage of the process, at least two geological siting regions each will be specified for the repository for low- and intermediate-level waste (L/ILW) and for the high-level waste (HLW) repository and these will undergo detailed geological investigation in Stage 3. For each of these potential siting regions, at least one location for the surface facility and a corridor for the access infrastructure will also be specified. NAGRA is responsible, at the beginning of Stage 2, for submitting proposals for potential locations for the surface facilities and their access infrastructure to the Federal Office of Energy (SFOE); these are then considered by the regional participation bodies in the siting regions. The general report and the present annexes volume document these proposals. In Stage 2, under the lead of the SFOE, socio-economic-ecological studies will also be carried out to investigate the impact of a repository project on the environment, economy and society. The present reports also contain the input data to be provided by NAGRA for the generic (site-independent) part of these impact studies. A meaningful

  7. NEW ATTRACTION MECHANISM OF INVESTMENT RESOURCES FOR FINANCING INFRASTRUCTURE PROJECTS

    Directory of Open Access Journals (Sweden)

    A. S. Popkova

    2013-01-01

    The paper analyzes revenue-yielding bonds as an efficient tool of governmental and municipal management. Conditions required for the issue of such security papers are considered in the paper. The paper describes the main stages of the infrastructure bonded loan implementation. The global experience in financing the construction and upgrading of infrastructure facilities through bond issues has been investigated in the paper. The paper contains an analysis of risks in executing infrastructure projects and proposes methods for their minimization.

  8. Assessing the nutritional quality of diets of Canadian children and adolescents using the 2014 Health Canada Surveillance Tool Tier System.

    Science.gov (United States)

    Jessri, Mahsa; Nishi, Stephanie K; L'Abbe, Mary R

    2016-05-10

    Health Canada's Surveillance Tool (HCST) Tier System was developed in 2014 with the aim of assessing the adherence of dietary intakes to Eating Well with Canada's Food Guide (EWCFG). HCST uses a Tier system to categorize all foods into one of four Tiers based on thresholds for total fat, saturated fat, sodium, and sugar, with Tier 4 reflecting the unhealthiest and Tier 1 the healthiest foods. This study presents the first application of the HCST to examine (i) the dietary patterns of Canadian children, and (ii) the applicability and relevance of the HCST as a measure of diet quality. Data were from the nationally representative, cross-sectional Canadian Community Health Survey 2.2. A total of 13,749 participants aged 2-18 years who had complete lifestyle and 24-hour dietary recall data were examined. Dietary patterns of Canadian children and adolescents demonstrated a high prevalence of Tier 4 foods within the sub-groups of processed meats and potatoes. On average, 23-31 % of daily calories were derived from "other" foods and beverages not recommended in EWCFG. However, the majority of food choices fell within the Tier 2 and 3 classifications due to the lenient criteria used by the HCST for classifying foods. Adherence to the recommendations presented in the HCST was associated with closer compliance with nutrient Dietary Reference Intake recommendations; however, it did not relate to reduced obesity as assessed by body mass index (p > 0.05). EWCFG recommendations are currently not being met by most children and adolescents. Future nutrient profiling systems need to incorporate both positive and negative nutrients and an overall score. In addition, a wider range of nutrient thresholds should be considered for the HCST to better capture product differences, prevent categorization of most foods as Tiers 2-3 and provide incentives for product reformulation.
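
    To make the Tier idea concrete, the sketch below assigns a food to Tier 1-4 by comparing nutrient amounts against per-nutrient cutoffs, with the worst nutrient deciding the Tier. The cutoff values are placeholders and are not the Health Canada criteria, which vary by food category.

```python
# Illustrative tier assignment in the spirit of the HCST: a food is placed in
# Tier 1-4 by comparing nutrient amounts against cutoffs.
# The threshold values below are placeholders, NOT the Health Canada criteria.

PLACEHOLDER_THRESHOLDS = {          # per reference amount; invented values
    "sodium_mg":      (140, 350, 600),
    "sat_fat_g":      (1.0, 2.0, 4.0),
    "total_sugars_g": (5.0, 10.0, 18.0),
}

def assign_tier(food: dict) -> int:
    """Return 1 (best) to 4 (worst): the worst rating over all nutrients wins."""
    worst = 1
    for nutrient, cutoffs in PLACEHOLDER_THRESHOLDS.items():
        value = food.get(nutrient, 0.0)
        tier = 1 + sum(value > c for c in cutoffs)   # 1..4 depending on cutoffs exceeded
        worst = max(worst, tier)
    return worst

if __name__ == "__main__":
    processed_meat = {"sodium_mg": 800, "sat_fat_g": 3.5, "total_sugars_g": 1.0}
    plain_yogurt = {"sodium_mg": 60, "sat_fat_g": 1.5, "total_sugars_g": 6.0}
    print(assign_tier(processed_meat), assign_tier(plain_yogurt))  # prints: 4 2
```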

  9. National waste management infrastructure in Ghana

    International Nuclear Information System (INIS)

    Darko, E.O.; Fletcher, J.J.

    1998-01-01

    Radioactive materials have been used in Ghana for more than four decades. Radioactive waste generated from their applications in various fields has been managed without adequate infrastructure or any legal framework for its control and regulation. The expanded use of nuclear facilities and radiation sources in Ghana, with the concomitant exposure of the human population, necessitates an effective infrastructure to deal with the increasing problems of waste. The Ghana Atomic Energy Act 204 (1963) and the Radiation Protection Instrument LI 1559 (1993) made inadequate provision for the management of waste. With the amendment of the Atomic Energy Act, PNDCL 308, a radioactive waste management centre has been established to take care of all waste in the country. To achieve the set objectives for an effective waste management regime, a waste management regulation has been drafted and relevant codes of practice are being developed to guide generators of waste, operators of waste management facilities and the regulatory authority. (author)

  10. The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model

    International Nuclear Information System (INIS)

    González de la Hoz, S

    2012-01-01

    Originally the ATLAS Computing and Data Distribution model assumed that the Tier-2s should collectively keep on disk at least one copy of all “active” AOD and DPD datasets. The evolution of the ATLAS Computing and Data model requires changes in the ATLAS Tier-2 policy for data replication, dynamic data caching and remote data access. Tier-2 operations take place completely asynchronously with respect to data taking. Tier-2s do simulation and user analysis. Large-scale reprocessing jobs on real data at first take place mostly at Tier-1s but will progressively be shared with Tier-2s as well. The availability of disk space at Tier-2s is extremely important in the ATLAS Computing model as it allows more data to be readily accessible for analysis jobs by all users, independently of their geographical location. The Tier-2 disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for the input and output data of simulation jobs. Tier-2s are going to be used more efficiently. In this way Tier-1s and Tier-2s are becoming more equivalent for the network, and the Tier-1/Tier-2 hierarchy is becoming less strict. This paper presents the usage of Tier-2 resources in different Grid activities, the caching of data at Tier-2s, and their role in analysis in the new ATLAS Computing and Data model.

  11. Unknown means unloved: the one-tier board at Royal Dutch Shell - lessons learned (Onbekend maakt onbemind: De One-Tier Board bij Royal Dutch Shell - Geleerde lessen)

    NARCIS (Netherlands)

    dr. Stefan Peij; Michiel Brandjes

    2012-01-01

    The Dutch Management and Supervision Act (Wet Bestuur en Toezicht) is expected to come into force on 1 January 2013. Once this act is in force, companies will be able to choose more easily between the one-tier board and the two-tier board as their governance model. Shell introduced the one-tier model in 2005 and can therefore already take stock of the first lessons learned.

  12. Towards a theory of tiered testing.

    Science.gov (United States)

    Hansson, Sven Ove; Rudén, Christina

    2007-06-01

    Tiered testing is an essential part of any resource-efficient strategy for the toxicity testing of a large number of chemicals, which is required for instance in the risk management of general (industrial) chemicals. In spite of this, no general theory seems to be available for the combination of single tests into efficient tiered testing systems. A first outline of such a theory is developed. It is argued that chemical, toxicological, and decision-theoretical knowledge should be combined in the construction of such a theory. A decision-theoretical approach for the optimization of test systems is introduced. It is based on expected utility maximization with simplified assumptions covering factual and value-related information that is usually missing in the development of test systems.
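
    As a schematic illustration of the decision-theoretic framing, the sketch below compares two candidate test sequences by a crude expected-utility score. The probabilities, detection rates, costs and utilities are invented, and the model (every test in a sequence is paid for) is deliberately simplified; it is not the theory proposed in the paper.

```python
# Schematic expected-utility comparison of two candidate test sequences.
# All numbers are invented; the cost model is deliberately simplified
# (every test in the sequence is assumed to be run and paid for).

def expected_utility(sequence, p_toxic, detection, costs, u_catch=100.0, u_miss=-500.0):
    """Utility = value of detected toxic substances minus test costs and missed toxics."""
    eu = 0.0
    p_undetected = p_toxic          # probability substance is toxic and still undetected
    for test in sequence:
        eu -= costs[test]           # pay for the test
        caught = p_undetected * detection[test]
        eu += caught * u_catch
        p_undetected -= caught
    eu += p_undetected * u_miss     # toxic substances that slip through all tiers
    return eu

if __name__ == "__main__":
    detection = {"in_vitro": 0.6, "in_vivo": 0.9}    # hypothetical sensitivities
    costs = {"in_vitro": 1.0, "in_vivo": 20.0}       # hypothetical costs
    for seq in (["in_vitro", "in_vivo"], ["in_vivo"]):
        print(seq, round(expected_utility(seq, p_toxic=0.05,
                                          detection=detection, costs=costs), 2))
```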

  13. Using Gamification to Raise Awareness of Cyber Threats to Critical National Infrastructure

    OpenAIRE

    Cook, Allan; Smith, Richard; Maglaras, Leandros; Janicke, Helge

    2016-01-01

    Linked to the SCIPS tabletop game. Senior executives of critical national infrastructure facilities face competing requirements for investment budgets. Whilst the impact of a cyber attack upon such utilities is potentially catastrophic, the risks to continued operations from failing to upgrade ageing infrastructure, or from not meeting mandated regulatory regimes, are considered higher given the demonstrable impact of such circumstances. As cyber attacks on critical national infrastructure remai...

  14. 77 FR 71481 - Publication of the Tier 2 Tax Rates

    Science.gov (United States)

    2012-11-30

    ... DEPARTMENT OF THE TREASURY Internal Revenue Service Publication of the Tier 2 Tax Rates AGENCY... tax rates for calendar year 2013 as required by section 3241(d) of the Internal Revenue Code (26 U.S.C. 3241). Tier 2 taxes on railroad employees, employers, and employee representatives are one source of...

  15. Petascale Computational Systems

    OpenAIRE

    Bell, Gordon; Gray, Jim; Szalay, Alex

    2007-01-01

    Computational science is changing to be data intensive. Super-Computers must be balanced systems; not just CPU farms but also petascale IO and networking arrays. Anyone building CyberInfrastructure should allocate resources to support a balanced Tier-1 through Tier-3 design.

  16. Distribution of green infrastructure along walkable roads

    Science.gov (United States)

    Low-income and minority neighborhoods frequently lack healthful resources to which wealthier communities have access. Though important, the addition of facilities such as recreation centers can be costly and take time to implement. Urban green infrastructure, such as street trees...

  17. Romanian contribution to research infrastructure database for EPOS

    Science.gov (United States)

    Ionescu, Constantin; Craiu, Andreea; Tataru, Dragos; Balan, Stefan; Muntean, Alexandra; Nastase, Eduard; Oaie, Gheorghe; Asimopolos, Laurentiu; Panaiotu, Cristian

    2014-05-01

    European Plate Observation System - EPOS is a long-term plan to facilitate the integrated use of data, models and facilities from mainly existing, but also new, distributed research infrastructures for solid Earth science. In the EPOS Preparatory Phase, the national research infrastructures were integrated at the pan-European level in order to create the EPOS distributed research infrastructure, a structure in which, at the present time, Romania participates by means of the earth science research infrastructures of national interest declared on the National Roadmap. The mission of EPOS is to build an efficient and comprehensive multidisciplinary research platform for the solid Earth sciences in Europe and to allow the scientific community to study the same phenomena from different points of view, in different time periods and at different spatial scales (laboratory and field experiments). At the national scale, research and monitoring infrastructures have gathered a vast amount of geological and geophysical data, which have been used by research networks to underpin our understanding of the Earth. EPOS promotes the creation of comprehensive national and regional consortia, as well as the organization of collective actions. To serve the EPOS goals, a group of Romanian national research institutes, together with their infrastructures, gathered in an EPOS National Consortium, as follows: 1. National Institute for Earth Physics - seismic, strong-motion, GPS and geomagnetic networks and Experimental Laboratory; 2. National Institute of Marine Geology and Geoecology - marine research infrastructure and the Euxinus integrated regional Black Sea observation and early-warning system; 3. Geological Institute of Romania - Surlari National Geomagnetic Observatory and the National lithoteque (the latter as part of the National Museum of Geology); 4. University of Bucharest - Paleomagnetic Laboratory. After national dissemination of the EPOS initiative, other research institutes and companies from the potential

  18. ABOUT GENERAL INFRASTRUCTURE AND ACCOMMODATION SYSTEM IN ROMANIAN BALNEOLOGY

    Directory of Open Access Journals (Sweden)

    ILIE ROTARIU

    2013-12-01

    A strong infrastructure is a precondition for the development of balneology. On this base, new tourism might build the modern services that supply the experiences. The key factor is the labor force: an EU project on the labor force in balneology in Romania and Bulgaria allows us to present preliminary findings focusing on the general infrastructure and accommodation that allow the development of balneology, as well as on additional conditions such as the existence of a social pact, easy-access facilities, etc. Our paper gives more details about the accommodation facilities in Romania, focusing on the results of the transition and privatization of the former socialist facilities, the transfer of the property into private hands, and the consequences of this. It also presents the capacity of the new accommodation units built after 1990 and how they might compete in an international competition. The findings force us to conclude that the existing facilities do not allow the balneology resorts to compete internationally and can serve only poor and low-demanding tourists

  19. A Two-Tier Multiple Choice Questions to Diagnose Thermodynamic Misconception of Thai and Laos Students

    Science.gov (United States)

    Kamcharean, Chanwit; Wattanakasiwich, Pornrat

    The objective of this study was to diagnose misconceptions of Thai and Lao students in thermodynamics by using a two-tier multiple-choice test. Two-tier multiple-choice questions consist of a first tier, a content-based question, and a second tier, a reasoning-based question. Data on student understanding were collected by using 10 two-tier multiple-choice questions. Thai participants were first-year students (N = 57) taking a fundamental physics course at Chiang Mai University in 2012. Lao participants were high school students in Grade 11 (N = 57) and Grade 12 (N = 83) at Muengnern high school in Xayaboury province, Lao PDR. As a result, most students answered the content-tier questions correctly but chose incorrect answers for the reason-tier questions. When further investigating their incorrect reasons, we found misconceptions similar to those reported in previous studies, such as incorrectly relating pressure with temperature when presented with multiple variables.
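
    A common way to score a two-tier item, sketched below with an invented answer key, is to credit sound understanding only when both the content tier and the reason tier are correct, and to flag a correct content answer with a wrong reason as a possible misconception.

```python
# Sketch of scoring a two-tier item: an answer counts as sound understanding
# only when both the content tier and the reason tier are correct; a correct
# content answer with a wrong reason is flagged as a possible misconception.
# The answer key and the example responses are invented.

ANSWER_KEY = {"Q1": ("B", "iii"), "Q2": ("A", "ii")}   # (content, reason) per item

def score_response(item, content, reason):
    right_content = content == ANSWER_KEY[item][0]
    right_reason = reason == ANSWER_KEY[item][1]
    if right_content and right_reason:
        return "sound understanding"
    if right_content and not right_reason:
        return "possible misconception"
    return "incorrect"

if __name__ == "__main__":
    print(score_response("Q1", "B", "i"))    # correct content, wrong reason
    print(score_response("Q2", "A", "ii"))   # both tiers correct
```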

  20. Experience running a distributed Tier-2 in Spain for the ATLAS experiment

    International Nuclear Information System (INIS)

    March, L; Hoz, S Gonzales de la; Kaci, M; Fassi, F; Fernandez, A; Lamas, A; Salt, J; Sanchez, J; Peso, J del; Fernandez, P; Munoz, L; Pardo, J; Espinal, X; Garitaonandia, H; Mir, M L; Nadal, J; Pacheco, A; Shuskov, S

    2008-01-01

    The main role of the Tier-2s is to provide computing resources for the production of simulated physics events and for distributed data analysis. The Spanish ATLAS Tier-2 is geographically distributed among three HEP institutes: IFAE (Barcelona), IFIC (Valencia) and UAM (Madrid). Currently it has a computing power of 430 kSI2K, a disk storage capacity of 87 TB and a network bandwidth, connecting the three sites and the nearest Tier-1 (PIC), of 1 Gb/s. These resources will be increased over time according to the ATLAS Computing Model, in parallel with those of all ATLAS Tier-2s. Since 2002, it has been participating in the different Data Challenge exercises. Currently, it is delivering around 1.5% of the whole ATLAS collaboration production in the framework of the Computing System Commissioning exercise. Distributed data management is also arising as an important issue in the daily activities of the Tier-2. The distribution over three sites has proven useful due to increased service redundancy, faster solution of problems, and the sharing of computing expertise and know-how. Experience gained running the distributed Tier-2 in order to be ready at the LHC start-up will be presented

  1. Sectoral Plan 'Deep Geological Disposal', Stage 2. Proposed site areas for the surface facilities of the deep geological repositories as well as for their access infrastructure. General report

    International Nuclear Information System (INIS)

    2011-12-01

    In line with the provisions of the nuclear energy legislation, the sites for deep geological disposal of Swiss radioactive waste are selected in a three-stage Sectoral Plan process (Sectoral Plan for Deep Geological Disposal). The disposal sites are specified in Stage 3 of the selection process with the granting of a general licence in accordance with the Nuclear Energy Act. The first stage of the process was completed on 30th November 2011, with the decision of the Federal Council to incorporate the six geological siting regions proposed by the National Cooperative for the Disposal of Radioactive Waste (NAGRA) into the Sectoral Plan for Deep Geological Disposal, for further evaluation in Stage 2. The decision also specifies the planning perimeters within which the surface facilities and shaft locations for the repositories will be constructed. In the second stage of the process, at least two geological siting regions each will be specified for the repository for low- and intermediate-level waste (L/ILW) and for the high-level waste (HLW) repository and these will undergo detailed geological investigation in Stage 3. For each of these potential siting regions, at least one location for the surface facility and a corridor for the access infrastructure will also be specified. NAGRA is responsible, at the beginning of Stage 2, for submitting proposals for potential locations for the surface facilities and their access infrastructure to the Federal Office of Energy (SFOE); these are then considered by the regional participation bodies in the siting regions. The present report and its annexes volume document these proposals. In Stage 2, under the lead of the SFOE, socio-economic-ecological studies will also be carried out to investigate the impact of a repository project on the environment, economy and society. The present reports also contain the input data to be provided by NAGRA for the generic (site-independent) part of these impact studies. A meaningful discussion

  2. Predecommissioning radiological survey of BR3 infrastructures

    International Nuclear Information System (INIS)

    Cantrel, E.

    2006-01-01

    The decommissioning of the BR3 (Belgian Reactor 3) is approaching its final phase, in which the building infrastructures are being decontaminated, targeting either reuse or conventional demolition after denuclearisation. In a PWR with a significant operating lifetime, such as the BR3, maintenance operations, failures and/or leakages, and incidents occurring in the different circuits of the plant result in the contamination of the building infrastructures at various activity levels, with contaminants penetrating/migrating up to several cm into the bulk of the material. Moreover, the BR3 bioshield has been exposed to rather high neutron leakage fluxes during reactor operation and is therefore activated. The different radiological situations faced require the implementation of different characterization methodologies based on the use of an adequate combination of measurement and/or sampling devices. The non-destructive assay of activation depth using ISOCS (In Situ Object Counting System) and a specific spectrum analysis protocol was tested in 2004. The first results obtained were encouraging and the qualification program for activated material is running. We are now investigating the possibilities of extending the methodology to building materials contaminated in depth with 137Cs. The overall process of dismantling/denuclearization of the BR3 building infrastructure consists of: (1) a preliminary characterization and determination of the contamination or activation depth; (2) the determination of the decontamination method; (3) the effective decontamination and clean-up; (4) a possible intermediate characterization followed by an additional decontamination step; and (5) the characterization for clearance. The more accurately the preliminary survey is performed, the fewer additional control/decontamination cycles are needed to reach clearance levels. The pre-decommissioning characterization process includes a preliminary categorisation (see picture

  3. Hanford Site Infrastructure Plan

    International Nuclear Information System (INIS)

    1990-01-01

    The Hanford Site Infrastructure Plan (HIP) has been prepared as an overview of the facilities, utilities, systems, and services that support all activities on the Hanford Site. Its purpose is three-fold: to examine in detail the existing condition of the Hanford Site's aging utility systems, transportation systems, Site services and general-purpose facilities; to evaluate the ability of these systems to meet present and forecasted Site missions; to identify maintenance and upgrade projects necessary to ensure continued safe and cost-effective support to Hanford Site programs well into the twenty-first century. The HIP is intended to be a dynamic document that will be updated accordingly as Site activities, conditions, and requirements change. 35 figs., 25 tabs

  4. Infrastructure data in the Internet: network statements and infrastructure register; Infrastrukturdaten im INTERNET: Schienennetz-Benutzungsbedingungen und Infrastrukturregister

    Energy Technology Data Exchange (ETDEWEB)

    Schmitt, A.; Kuntze, P. [DB Netz AG, Frankfurt am Main (Germany)]; Hoefler, A. [Fichtner Consulting und IT AG, Stuttgart (Germany)]

    2007-09-15

    Faced with European Community directives on the liberalization of railway traffic and their transposition into national legal requirements, DB Netz AG, the German infrastructure manager, considers that it has no alternative to using the Internet to publish numerous items of information as regards its railway network and infrastructure facilities. The company took the decision to seek synergies in combining these requirements with achieving internal benefits and so it chose a modern technical architecture that is going to be able to keep pace with the growing demands as regards data complexity, range of functions and number of users. The architecture that first came into being for publishing network statements now also forms the basis for DB Netz' infrastructure register and, in parallel with that, for around twenty intranet solutions, used by more than 5000 users. (orig.)

  5. Eco-logical : an ecosystem approach to developing transportation infrastructure projects in a changing environment

    Science.gov (United States)

    2009-09-13

    The development of infrastructure facilities can negatively impact critical habitat and essential ecosystems. There are a variety of techniques available to avoid, minimize, and mitigate negative impacts of existing infrastructure as well as future i...

  6. Energy infrastructure of the United States and projected siting needs: Scoping ideas, identifying issues and options. Draft report of the Department of Energy Working Group on Energy Facility Siting to the Secretary

    Energy Technology Data Exchange (ETDEWEB)

    1993-12-01

    A Department of Energy (DOE) Working Group on Energy Facility Siting, chaired by the Policy Office with membership from the major program and staff offices of the Department, reviewed data regarding energy service needs, infrastructure requirements, and constraints to siting. The Working Group found that the expeditious siting of energy facilities has important economic, energy, and environmental implications for key Administration priorities.

  7. 3rd International Civil and Infrastructure Engineering Conference

    CERN Document Server

    Hamid, Nor; Arshad, Mohd; Arshad, Ahmad; Ridzuan, Ahmad; Awang, Haryati

    2016-01-01

    The special focus of these proceedings is on the areas of infrastructure engineering and sustainability management. They provide detailed information on innovative research developments in construction materials and structures, in addition to a compilation of interdisciplinary findings combining nano-materials and engineering. The coverage of cutting-edge infrastructure and sustainability issues in engineering includes earthquakes, bioremediation, synergistic management, timber engineering, flood management and intelligent transport systems.

  8. Critical infrastructure – content, structure and problems of its protection

    Directory of Open Access Journals (Sweden)

    Ladislav Hofreiter

    2014-06-01

    Full Text Available The security, economic and social stability of a country, its functioning, and the protection of the lives and property of its citizens depend on the proper functioning of many state infrastructure systems. The disruption, failure or destruction of such systems, institutions, facilities and other services could undermine social stability and national security, provoke a crisis situation or seriously affect the operation of state and local governments in crisis situations. Such systems are known as critical infrastructure, and it is in the interest of the State that this critical infrastructure be effectively protected.

  9. PUBLIC-PRIVATE PARTNERSHIP AS EFFECTIVE MECHANISM OF SPORTS INFRASTRUCTURE

    Directory of Open Access Journals (Sweden)

    D. P. Moskvin

    2012-01-01

    Full Text Available The article discusses the current state of sports infrastructure in Russia and also explores the experience of using public-private partnerships in the construction of Olympic facilities in Sochi.

  10. Assessing the Nutritional Quality of Diets of Canadian Adults Using the 2014 Health Canada Surveillance Tool Tier System

    Directory of Open Access Journals (Sweden)

    Mahsa Jessri

    2015-12-01

    Full Text Available The 2014 Health Canada Surveillance Tool (HCST) was developed to assess adherence of dietary intakes with Canada’s Food Guide. The HCST classifies foods into one of four Tiers based on thresholds for sodium, total fat, saturated fat and sugar, with Tier 1 representing the healthiest and Tier 4 the unhealthiest foods. This study presents the first application of the HCST to assess (a) dietary patterns of Canadians; and (b) the applicability of this tool as a measure of diet quality among 19,912 adult participants of the Canadian Community Health Survey 2.2. Findings indicated that even though most processed meats and potatoes were Tier 4, the majority of reported foods in general were categorized as Tiers 2 and 3 due to the adjustable, lenient criteria used in the HCST. Moving from the 1st to the 4th quartile of Tier 4 and “other” foods/beverages, there was a significant trend towards increased calories (1876 kcal vs. 2290 kcal) and “harmful” nutrients (e.g., sodium) as well as decreased “beneficial” nutrients. Compliance with the HCST was not associated with lower body mass index. Future nutrient profiling systems need to incorporate both “positive” and “negative” nutrients, an overall score and a wider range of nutrient thresholds to better capture food product differences.
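    The record above describes the HCST's threshold-based Tier assignment but does not reproduce the thresholds themselves. The sketch below illustrates the general idea only, with hypothetical per-nutrient cut-offs and the assumption that a food is assigned the worst Tier triggered by any single nutrient.

```python
# Illustrative sketch only: the real HCST thresholds vary by food category and
# are not reproduced here. Hypothetical per-100 g cut-offs are used to show the
# idea of assigning a food to the worst (highest) Tier triggered by any nutrient.

# hypothetical cut-offs per nutrient: (tier2_limit, tier3_limit, tier4_limit)
THRESHOLDS = {
    "sodium_mg":       (140, 350, 700),
    "total_fat_g":     (3, 10, 20),
    "saturated_fat_g": (1, 2, 5),
    "sugar_g":         (5, 10, 20),
}

def assign_tier(food: dict) -> int:
    """Return 1 (healthiest) to 4 (least healthy) based on the worst nutrient."""
    tier = 1
    for nutrient, (t2, t3, t4) in THRESHOLDS.items():
        value = food.get(nutrient, 0.0)
        if value > t4:
            nutrient_tier = 4
        elif value > t3:
            nutrient_tier = 3
        elif value > t2:
            nutrient_tier = 2
        else:
            nutrient_tier = 1
        tier = max(tier, nutrient_tier)
    return tier

print(assign_tier({"sodium_mg": 800, "total_fat_g": 12, "saturated_fat_g": 4, "sugar_g": 2}))  # -> 4
```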

  11. The socio-demographic aspects of building social infrastructure in the city of Moscow

    Directory of Open Access Journals (Sweden)

    Strashnova Yuliya Gennad’evna

    2018-03-01

    Full Text Available Subject: the influence of the socio-demographic factor on the development of the network of social infrastructure facilities of the city (on the example of Moscow) is explored. The interrelation between socio-demographic development and the formation of consumer demand for services and various types of facilities is revealed. The main socio-demographic concepts and measures determining the need to develop and site facilities throughout the city are considered. Thus, the social, age and family structure of the resident population determines the typology and functional structure of facilities. The “daytime” population, its structure and concentration areas determine the volume and the new construction sites of residential buildings. The “temporary” population (including tourists, transit passengers, business travelers and other population categories staying in the city for more than 24 hours) specifies the need for the construction of hotels, hostels and other collective accommodation facilities. The economically active population creates demand for jobs, including those created on the basis of social infrastructure. Objectives: to explain the need to take into account modern and prospective trends in population development during the preparation of territorial and urban planning documents; to consider the particularities of the socio-demographic characteristics to be included when forecasting the need to develop social facilities and create workplaces, taking into account the transition to the economy of services and information technologies, in designing a citywide system, including transport hubs. Materials and methods: the research was conducted on the basis of official statistics (Rosstat, Mosgorstat) and of line departments and offices of the city of Moscow. Statistical, analytical and sociological methods of research, expert assessments, analogies, field survey and mathematical modeling are used. Results: modern and prospective

  12. The CMS Computing Model

    International Nuclear Information System (INIS)

    Bonacorsi, D.

    2007-01-01

    The CMS experiment at the LHC has developed a baseline Computing Model addressing the needs of a computing system capable of operating in the first years of LHC running. It is focused on a data model with heavy streaming at the raw data level based on trigger, and on achieving maximum flexibility in the use of distributed computing resources. The CMS distributed Computing Model includes a Tier-0 centre at CERN, a CMS Analysis Facility at CERN, several Tier-1 centres located at large regional computing centres, and many Tier-2 centres worldwide. The workflows have been identified, along with a baseline architecture for the data management infrastructure. This model is also being tested in Grid Service Challenges of increasing complexity, coordinated with the Worldwide LHC Computing Grid community
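    As a minimal, non-authoritative illustration of the tiered topology described above (a Tier-0 at CERN fanning out to Tier-1 and Tier-2 centres), the sketch below models the hierarchy as a simple tree; the Tier-1 site names are examples only and the structure is not taken from CMS software.

```python
# A minimal sketch (not CMS software) of the tiered topology described above,
# used only to illustrate the Tier-0 -> Tier-1 -> Tier-2 fan-out.
from dataclasses import dataclass, field

@dataclass
class Site:
    name: str
    tier: int
    children: list = field(default_factory=list)

def build_topology() -> Site:
    tier0 = Site("CERN Tier-0", 0)
    for t1_name in ("FNAL", "CNAF", "GridKa"):   # example Tier-1 names
        t1 = Site(t1_name, 1)
        t1.children = [Site(f"{t1_name}-T2-{i}", 2) for i in range(2)]  # hypothetical Tier-2s
        tier0.children.append(t1)
    return tier0

def print_topology(site: Site, indent: int = 0) -> None:
    print(" " * indent + f"Tier-{site.tier}: {site.name}")
    for child in site.children:
        print_topology(child, indent + 2)

print_topology(build_topology())
```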

  13. 9 CFR 3.103 - Facilities, outdoor.

    Science.gov (United States)

    2010-01-01

    ... Administrator. The fence must be constructed so that it protects marine mammals by restricting animals and... effective natural barrier that restricts the marine mammals to the facility and restricts entry by animals... (9 CFR Animals and Animal Products, 2010-01-01, § 3.103, Facilities, outdoor)

  14. AUTOMATIC GENERATION OF ROAD INFRASTRUCTURE IN 3D FOR VEHICLE SIMULATORS

    Directory of Open Access Journals (Sweden)

    Adam Orlický

    2017-12-01

    Full Text Available One of the modern methods of testing new systems and interfaces in vehicles is testing in a vehicle simulator. Providing quality models of virtual scenes is one of the tasks of driver-car interaction interface simulation. Nowadays, many programs exist for creating 3D models of road infrastructure, but most of them are very expensive or cannot export models for further use. Therefore, a plug-in has been developed at the Faculty of Transportation Sciences in Prague. It can generate road infrastructure according to the Czech standard for road design (CSN 73 6101). The uniqueness of this plug-in is that it is the first tool for generating road infrastructure in NURBS representation. This type of representation yields more exact models and allows the export to be optimized for creating quality models for vehicle simulators. The scenes created by this plug-in were tested on vehicle simulators. The results have shown that drivers had a much better experience with the newly created scenes than with the previous ones.

  15. Assessing the nutritional quality of diets of Canadian children and adolescents using the 2014 Health Canada Surveillance Tool Tier System

    Directory of Open Access Journals (Sweden)

    Mahsa Jessri

    2016-05-01

    Full Text Available Abstract Background Health Canada’s Surveillance Tool (HCST) Tier System was developed in 2014 with the aim of assessing the adherence of dietary intakes with Eating Well with Canada’s Food Guide (EWCFG). HCST uses a Tier system to categorize all foods into one of four Tiers based on thresholds for total fat, saturated fat, sodium, and sugar, with Tier 4 reflecting the unhealthiest and Tier 1 the healthiest foods. This study presents the first application of the HCST to examine (i) the dietary patterns of Canadian children, and (ii) the applicability and relevance of HCST as a measure of diet quality. Methods Data were from the nationally-representative, cross-sectional Canadian Community Health Survey 2.2. A total of 13,749 participants aged 2–18 years who had complete lifestyle and 24-hour dietary recall data were examined. Results Dietary patterns of Canadian children and adolescents demonstrated a high prevalence of Tier 4 foods within the sub-groups of processed meats and potatoes. On average, 23–31 % of daily calories were derived from “other” foods and beverages not recommended in EWCFG. However, the majority of food choices fell within the Tier 2 and 3 classifications due to lenient criteria used by the HCST. Adherence to the recommendations presented in the HCST was associated with closer compliance to meeting nutrient Dietary Reference Intake recommendations, however it did not relate to reduced obesity as assessed by body mass index (p > 0.05). Conclusions EWCFG recommendations are currently not being met by most children and adolescents. Future nutrient profiling systems need to incorporate both positive and negative nutrients and an overall score. In addition, a wider range of nutrient thresholds should be considered for HCST to better capture product differences, prevent categorization of most foods as Tiers 2–3 and provide incentives for product reformulation.

  16. Investigating Safety, Safeguards and Security (3S) Synergies to Support Infrastructure Development and Risk-Informed Methodologies for 3S by Design

    International Nuclear Information System (INIS)

    Suzuki, M.; Izumi, Y.; Kimoto, T.; Naoi, Y.; Inoue, T.; Hoffheins, B.

    2010-01-01

    In 2008, Japan and other G8 countries pledged to support the Safeguards, Safety, and Security (3S) Initiative to raise awareness of 3S worldwide and to assist countries in setting up nuclear energy infrastructures that are essential cornerstones of a successful nuclear energy program. The goals of the 3S initiative are to ensure that countries already using nuclear energy or those planning to use nuclear energy are supported by strong national programs in safety, security, and safeguards not only for reliability and viability of the programs, but also to prove to the international audience that the programs are purely peaceful and that nuclear material is properly handled, accounted for, and protected. In support of this initiative, Japan Atomic Energy Agency (JAEA) has been conducting detailed analyses of the R and D programs and cultures of each of the 'S' areas to identify overlaps where synergism and efficiencies might be realized, to determine where there are gaps in the development of a mature 3S culture, and to coordinate efforts with other Japanese and international organizations. As an initial outcome of this study, incoming JAEA employees are being introduced to 3S as part of their induction training and the idea of a President's Award program is being evaluated. Furthermore, some overlaps in 3S missions might be exploited to share facility instrumentation as with Joint-Use-Equipment (JUE), in which cameras and radiation detectors, are shared by the State and IAEA. Lessons learned in these activities can be applied to developing more efficient and effective 3S infrastructures for incorporating into Safeguards by Design methodologies. They will also be useful in supporting human resources and technology development projects associated with Japan's planned nuclear security center for Asia, which was announced during the 2010 Nuclear Security Summit. In this presentation, a risk-informed approach regarding integration of 3S will be introduced. An initial

  17. WHALE, a management tool for Tier-2 LCG sites

    International Nuclear Information System (INIS)

    Barone, L M; Organtini, G; Talamo, I G

    2012-01-01

    The LCG (Worldwide LHC Computing Grid) is a grid-based hierarchical distributed computing facility, composed of more than 140 computing centres, organized in 4 tiers by size and offer of services. Every site, although independent in many technical choices, has to provide services with a well-defined set of interfaces. For this reason, different LCG sites frequently need to manage very similar situations, such as job behaviour on the batch system, dataset transfers between sites, operating system and experiment software installation and configuration, and monitoring of services. In this context we created WHALE (WHALE Handles Administration in an LCG Environment), a software tool currently used at the T2_IT_Rome site, an LCG Tier-2 for the CMS experiment. WHALE is a generic, site-independent tool written in Python: it allows the administrator to interact in a uniform and coherent way with several subsystems using a high-level syntax which hides the specific commands. The architecture of WHALE is based on the plugin concept and on the possibility of connecting the output of one plugin to the input of the next, in a pipe-like system, giving the administrator the possibility of building complex functions by combining simpler ones. The core of WHALE just handles the plugin orchestration, while even the basic functions (e.g. the WHALE activity logging) are performed by plugins, giving the capability to tune and possibly modify every component of the system. WHALE already provides many plugins useful for an LCG site and some more for a Tier-2 of the CMS experiment, especially in the fields of job management, dataset transfer and analysis of performance results and availability tests (e.g. Nagios tests, SAM tests). Thanks to its architecture and the provided plugins, WHALE makes it easy to perform tasks that, even if logically simple, are technically complex or tedious, such as closing all the worker nodes with a job-failure rate greater than a given threshold. Finally, thanks to the
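    The WHALE code itself is not reproduced in the abstract; the following is a minimal sketch of the pipe-like plugin architecture it describes, using the abstract's own example of closing worker nodes whose job-failure rate exceeds a threshold. All class names and node data are hypothetical.

```python
# Not the actual WHALE code: a minimal sketch of the plugin-pipeline idea it
# describes, where the output of one plugin is fed as input to the next and the
# core only orchestrates the chain.

class Plugin:
    """Base class: a plugin transforms an input value into an output value."""
    def run(self, data):
        raise NotImplementedError

class ListWorkerNodes(Plugin):
    def run(self, data):
        # Hypothetical static data standing in for a batch-system query.
        return [{"node": "wn01", "failure_rate": 0.05},
                {"node": "wn02", "failure_rate": 0.30}]

class FilterByFailureRate(Plugin):
    def __init__(self, threshold):
        self.threshold = threshold
    def run(self, data):
        return [n for n in data if n["failure_rate"] > self.threshold]

class CloseNodes(Plugin):
    def run(self, data):
        return [f"closed {n['node']}" for n in data]

def run_pipeline(plugins, data=None):
    """Pipe the output of each plugin into the next one."""
    for plugin in plugins:
        data = plugin.run(data)
    return data

print(run_pipeline([ListWorkerNodes(), FilterByFailureRate(0.2), CloseNodes()]))
# -> ['closed wn02']
```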

  18. LANL: Weapons Infrastructure Briefing to Naval Reactors, July 18, 2017

    Energy Technology Data Exchange (ETDEWEB)

    Chadwick, Frances [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-07-18

    Presentation slides address: The Laboratory infrastructure supports hundreds of high hazard, complex operations daily; LANL’s unique science and engineering infrastructure is critical to delivering on our mission; LANL FY17 Budget & Workforce; Direct-Funded Infrastructure Accounts; LANL Org Chart; Weapons Infrastructure Program Office; The Laboratory’s infrastructure relies on both Direct and Indirect funding; NA-50’s Operating, Maintenance & Recapitalization funding is critical to the execution of the mission; Los Alamos is currently executing several concurrent Line Item projects; Maintenance @ LANL; NA-50 is helping us to address D&D needs; We are executing a CHAMP Pilot Project at LANL; G2 = Main Tool for Program Management; MDI: Future Investments are centered on facilities with a high Mission Dependency Index; Los Alamos hosted first “Deep Dive” in November 2016; Safety, Infrastructure & Operations is one of the most important programs at LANL, and is foundational for our mission success.

  19. Uplink Interference Analysis for Two-tier Cellular Networks with Diverse Users under Random Spatial Patterns

    OpenAIRE

    Bao, Wei; Liang, Ben

    2013-01-01

    Multi-tier architecture improves the spatial reuse of radio spectrum in cellular networks, but it introduces complicated heterogeneity in the spatial distribution of transmitters, which brings new challenges in interference analysis. In this work, we present a stochastic geometric model to evaluate the uplink interference in a two-tier network considering multi-type users and base stations. Each type of tier-1 users and tier-2 base stations are modeled as independent homogeneous Poisson point...
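    As a rough illustration of the kind of model described above (transmitters dropped as homogeneous Poisson point processes), the Monte Carlo sketch below estimates the aggregate interference at a reference receiver under a simple power-law path loss; the densities, transmit power and path-loss exponent are invented and the sketch is not the paper's analytical model.

```python
# A toy Monte Carlo sketch (an editor's illustration, not the paper's model) of
# interference at a reference receiver from transmitters dropped as a
# homogeneous Poisson point process, with a simple power-law path loss.
import numpy as np

rng = np.random.default_rng(0)

def ppp_interference(density_per_km2, region_km=5.0, tx_power=1.0, alpha=3.5):
    """Aggregate interference at the origin from one PPP tier."""
    area = (2 * region_km) ** 2
    n = rng.poisson(density_per_km2 * area)
    xy = rng.uniform(-region_km, region_km, size=(n, 2))
    d = np.maximum(np.hypot(xy[:, 0], xy[:, 1]), 0.01)   # avoid the singularity at 0
    return np.sum(tx_power * d ** (-alpha))

# Two tiers with different (hypothetical) densities, e.g. tier-1 users vs. tier-2 base stations.
samples = [ppp_interference(2.0) + ppp_interference(10.0) for _ in range(1000)]
print(f"mean aggregate interference: {np.mean(samples):.2f}")
```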

  20. HNF - Helmholtz Nano Facility

    Directory of Open Access Journals (Sweden)

    Wolfgang Albrecht

    2017-05-01

    Full Text Available The Helmholtz Nano Facility (HNF is a state-of-the-art cleanroom facility. The cleanroom has ~1100 m2 with cleanroom classes of DIN ISO 1-3. HNF operates according to VDI DIN 2083, Good Manufacturing Practice (GMP and aquivalent to Semiconductor Industry Association (SIA standards. HNF is a user facility of Forschungszentrum Jülich and comprises a network of facilities, processes and systems for research, production and characterization of micro- and nanostructures. HNF meets the basic supply of micro- and nanostructures for nanoelectronics, fluidics. micromechanics, biology, neutron and energy science, etc.. The task of HNF is rapid progress in nanostructures and their technology, offering efficient access to infrastructure and equipment. HNF gives access to expertise and provides resources in production, synthesis, characterization and integration of structures, devices and circuits. HNF covers the range from basic research to application oriented research facilitating a broad variety of different materials and different sample sizes.

  1. Risk Assessment Methodology for Protecting Our Critical Physical Infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    BIRINGER,BETTY E.; DANNEELS,JEFFREY J.

    2000-12-13

    Critical infrastructures are central to our national defense and our economic well-being, but many are taken for granted. Presidential Decision Directive (PDD) 63 highlights the importance of eight of our critical infrastructures and outlines a plan for action. Greatly enhanced physical security systems will be required to protect these national assets from new and emerging threats. Sandia National Laboratories has been the lead laboratory for the Department of Energy (DOE) in developing and deploying physical security systems for the past twenty-five years. Many of the tools, processes, and systems employed in the protection of high consequence facilities can be adapted to the civilian infrastructure.

  2. Three Tiers of CSR

    DEFF Research Database (Denmark)

    Aggerholm, Helle Kryger; Trapp, Leila

    2014-01-01

    for understanding corporate approaches to CSR by examining how several companies position themselves thematically in CEO introductions to sustainability reports. On the basis of this, we then evaluate the practical value of this typology for assisting those who work with CSR strategy. The analysis revealed...... of the identified strengths and weaknesses of the typology, we develop a practitioner-focused, three-tiered model that can strategically guide the development of CSR programs....

  3. Late-life depression in Rural China: do village infrastructure and availability of community resources matter?

    Science.gov (United States)

    Li, Lydia W; Liu, Jinyu; Zhang, Zhenmei; Xu, Hongwei

    2015-07-01

    This study aimed to examine whether physical infrastructure and availability of three types of community resources (old-age income support, healthcare facilities, and elder activity centers) in rural villages are associated with depressive symptoms among older adults in rural China. Data were from the 2011 baseline survey of the Chinese Health and Retirement Longitudinal Study (CHARLS). The sample included 3824 older adults aged 60 years or older residing in 301 rural villages across China. A score of 12 on the 10-item Center for Epidemiologic Studies Depression Scale was used as the cutoff for depressed versus not depressed. Village infrastructure was indicated by an index summing deficiency in six areas: drinking water, fuel, road, sewage, waste management, and toilet facilities. Three dichotomous variables indicated whether income support, healthcare facility, and elder activity center were available in the village. Respondents' demographic characteristics (age, gender, marital status, and living arrangements), health status (chronic conditions and physical disability), and socioeconomic status (education, support from children, health insurance, household luxury items, and housing quality) were covariates. Multilevel logistic regression was conducted. Controlling for individuals' socioeconomic status, health status, and demographic characteristics, village infrastructure deficiency was positively associated with the odds of being depressed among rural older Chinese, whereas the provision of income support and healthcare facilities in rural villages was associated with lower odds. Village infrastructure and availability of community resources matter for depressive symptoms in rural older adults. Improving infrastructure, providing old-age income support, and establishing healthcare facilities in villages could be effective strategies to prevent late-life depression in rural China. Copyright © 2014 John Wiley & Sons, Ltd.
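    The following sketch, on made-up data, illustrates the variable construction described above: a six-item infrastructure deficiency index, a depression indicator based on the CES-D cut-off of 12, and a single-level logistic regression used here as a simplified stand-in for the study's multilevel (three-level) model.

```python
# Simplified sketch of the variable construction described above, on synthetic
# data; a plain logistic regression stands in for the multilevel model used in
# the study (which adds village- and district-level random effects).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
deficiency_items = rng.integers(0, 2, size=(n, 6))   # water, fuel, road, sewage, waste, toilet
df = pd.DataFrame({
    "deficiency_index": deficiency_items.sum(axis=1),  # 0 (no deficiency) to 6 (all deficient)
    "income_support": rng.integers(0, 2, size=n),      # village offers old-age income support
    "cesd_score": rng.integers(0, 31, size=n),          # 10-item CES-D range 0-30
})
df["depressed"] = (df["cesd_score"] >= 12).astype(int)  # cut-off of 12

model = smf.logit("depressed ~ deficiency_index + income_support", data=df).fit(disp=False)
print(model.params)
```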

  4. Second-Tier Database for Ecosystem Focus, 2000-2001 Annual Report.

    Energy Technology Data Exchange (ETDEWEB)

    Van Holmes, Chris; Muongchanh, Christine; Anderson, James J. (University of Washington, School of Aquatic and Fishery Sciences, Seattle, WA)

    2001-11-01

    The Second-Tier Database for Ecosystem Focus (Contract 00004124) provides direct and timely public access to Columbia Basin environmental, operational, fishery and riverine data resources for federal, state, public and private entities. The Second-Tier Database known as Data Access in Realtime (DART) does not duplicate services provided by other government entities in the region. Rather, it integrates public data for effective access, consideration and application.

  5. Manufacturing Demonstration Facility (MDF)

    Data.gov (United States)

    Federal Laboratory Consortium — The U.S. Department of Energy Manufacturing Demonstration Facility (MDF) at Oak Ridge National Laboratory (ORNL) provides a collaborative, shared infrastructure to...

  6. Otter trawls in Greece: Landing profiles and potential métiers

    Directory of Open Access Journals (Sweden)

    S. KATSANEVAKIS

    2010-02-01

    Full Text Available A fleet of 326 bottom trawlers operates in Greek Seas and their landings represent approximately 30% of the total fish production in Greece. In this study, otter trawl landings data were analyzed in order to identify potential métiers. Landings data between 2002 and 2006 were used, collected from 42 ports in the Aegean and East Ionian Sea. A three-step procedure was applied to identify potential métiers: the first step involved a factorial analysis of the log-transformed landings profiles, the second step a classification of the factorial coordinates, and the third step a further aggregation of clusters based on expert knowledge. In all, six potential métiers were identified in the Aegean Sea, and five in the Ionian Sea. The most important target species were European hake (Merluccius merluccius), deepwater pink shrimp (Parapenaeus longirostris), red mullet (Mullus barbatus), caramote prawn (Melicertus kerathurus), picarel (Spicara smaris), cephalopods, bogue (Boops boops), anglers (Lophius spp.), and Norway lobster (Nephrops norvegicus). Otter trawls in Greece use more or less the same gear with minor modifications, and métier selection is basically reflected as a choice of geographical sub-area and hauling depth. The limitations of using landings profiles to identify métiers and the need for further verification are discussed.
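    The first two steps of the procedure described above can be sketched with standard tools, using PCA as a stand-in for the factorial analysis and k-means for the classification of the factorial coordinates; the landings matrix below is simulated, and the expert-based third step is not automated.

```python
# A rough sketch of the first two steps described above (log-transform,
# factorial analysis, classification of the factorial coordinates) on
# made-up landings profiles; not the study's actual analysis.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# rows = trips, columns = landed weight per species (hake, shrimp, red mullet, ...)
landings = rng.gamma(shape=2.0, scale=50.0, size=(200, 8))

log_profiles = np.log1p(landings)                          # step 1: log-transform
coords = PCA(n_components=3).fit_transform(log_profiles)   # step 1: factorial analysis
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(coords)  # step 2

print(np.bincount(labels))   # trips per candidate metier
```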

  7. Integration of research infrastructures and ecosystem models toward development of predictive ecology

    Science.gov (United States)

    Luo, Y.; Huang, Y.; Jiang, J.; MA, S.; Saruta, V.; Liang, G.; Hanson, P. J.; Ricciuto, D. M.; Milcu, A.; Roy, J.

    2017-12-01

    The past two decades have witnessed rapid development in sensor technology. Built upon this sensor development, large research infrastructure facilities, such as the National Ecological Observatory Network (NEON) and FLUXNET, have been established. By networking different kinds of sensors and other data collections at many locations all over the world, those facilities generate large volumes of ecological data every day. The big data from those facilities offer an unprecedented opportunity for advancing our understanding of ecological processes, educating teachers and students, supporting decision-making, and testing ecological theory. The big data from the major research infrastructure facilities also provide a foundation for developing predictive ecology. Indeed, the capability to predict future changes in our living environment and natural resources is critical to decision making in a world where the past is no longer a clear guide to the future. We are living in a period marked by rapid climate change, profound alteration of biogeochemical cycles, unsustainable depletion of natural resources, and deterioration of air and water quality. Projecting changes in future ecosystem services to society becomes essential not only for science but also for policy making. We will use this panel format to outline major opportunities and challenges in integrating research infrastructures and ecosystem models toward developing predictive ecology. Meanwhile, we will also show results from an interactive model-experiment system, the Ecological Platform for Assimilating Data into models (EcoPAD), that has been implemented at the Spruce and Peatland Responses Under Climatic and Environmental change (SPRUCE) experiment in Northern Minnesota and the Montpellier Ecotron, France. EcoPAD was developed by integrating web technology, eco-informatics, data assimilation techniques, and ecosystem modeling. EcoPAD is designed to streamline data transfer seamlessly from research infrastructure

  8. Central Region Green Infrastructure

    Data.gov (United States)

    Minnesota Department of Natural Resources — This Green Infrastructure data is comprised of 3 similar ecological corridor data layers: Metro Conservation Corridors, green infrastructure analysis in counties...

  9. The Evolving role of Tier2s in ATLAS with the new Computing and Data Distribution Model

    CERN Document Server

    Gonzalez de la Hoz, S; The ATLAS collaboration

    2012-01-01

    Originally the ATLAS computing model assumed that the Tier2s of each of the 10 clouds should keep on disk collectively at least one copy of all "active" AOD and DPD datasets. Evolution of ATLAS computing and data models requires changes in ATLAS Tier2s policy for the data replication, dynamic data caching and remote data access. Tier2 operations take place completely asynchronously with respect to data taking. Tier2s do simulation and user analysis. Large-scale reprocessing jobs on real data are at first taking place mostly at Tier1s but will progressively move to Tier2s as well. The availability of disk space at Tier2s is extremely important in the ATLAS computing model as it allows more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier2s disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulations jobs. Tier2s are going to be used mo...

  10. The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model

    CERN Document Server

    Gonzalez de la Hoz, S

    2012-01-01

    Originally the ATLAS computing model assumed that the Tier2s of each of the 10 clouds should keep on disk collectively at least one copy of all "active" AOD and DPD datasets. Evolution of ATLAS computing and data models requires changes in ATLAS Tier2s policy for the data replication, dynamic data caching and remote data access. Tier2 operations take place completely asynchronously with respect to data taking. Tier2s do simulation and user analysis. Large-scale reprocessing jobs on real data are at first taking place mostly at Tier1s but will progressively move to Tier2s as well. The availability of disk space at Tier2s is extremely important in the ATLAS computing model as it allows more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier2s disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulations jobs. Tier2s are going to be used mo...

  11. Implications of generator siting for CO2 pipeline infrastructure

    International Nuclear Information System (INIS)

    Newcomer, Adam; Apt, Jay

    2008-01-01

    The location of a new electric power generation system with carbon capture and sequestration (CCS) affects the profitability of the facility and determines the amount of infrastructure required to connect the plant to the larger world. Using a probabilistic analysis, we examine where a profit-maximizing power producer would locate a new generator with carbon capture in relation to a fuel source, electric load, and CO 2 sequestration site. Based on models of costs for transmission lines, CO 2 pipelines, and fuel transportation, we find that it is always preferable to locate a CCS power facility nearest the electric load, reducing the losses and costs of bulk electricity transmission. This result suggests that a power system with significant amounts of CCS requires a very large CO 2 pipeline infrastructure
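    A toy version of the siting comparison described above is sketched below: each candidate location implies a different set of links (fuel transport, electricity transmission, CO2 pipeline) that must be built, and the cheapest total wins. All per-km costs and distances are hypothetical, not the paper's data.

```python
# Hypothetical cost-per-km figures (not from the paper) used to illustrate the
# siting trade-off described above: placing the plant at the load, the fuel
# source, or the sequestration site changes which links must be built.
COST_PER_KM = {"electric_transmission": 1.5, "co2_pipeline": 1.0, "fuel_transport": 0.4}  # M$/km

# distances in km between the three fixed points (illustrative)
D_FUEL_LOAD, D_FUEL_SEQ, D_LOAD_SEQ = 300, 250, 200

def total_cost(site: str) -> float:
    if site == "load":          # fuel must travel to the plant, CO2 to the sink
        return D_FUEL_LOAD * COST_PER_KM["fuel_transport"] + D_LOAD_SEQ * COST_PER_KM["co2_pipeline"]
    if site == "fuel_source":   # electricity to the load, CO2 to the sink
        return D_FUEL_LOAD * COST_PER_KM["electric_transmission"] + D_FUEL_SEQ * COST_PER_KM["co2_pipeline"]
    if site == "sequestration": # fuel in, electricity out
        return D_FUEL_SEQ * COST_PER_KM["fuel_transport"] + D_LOAD_SEQ * COST_PER_KM["electric_transmission"]
    raise ValueError(site)

for s in ("load", "fuel_source", "sequestration"):
    print(s, total_cost(s))   # with these numbers the "load" siting is cheapest
```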

  12. A Novel Architectural Concept for Enhanced 5G Network Facilities

    Directory of Open Access Journals (Sweden)

    Chochliouros Ioannis P.

    2017-01-01

    Full Text Available The 5G ESSENCE project’s context is based on the concepts of Edge Cloud Computing and Small Cell-as-a-Service (SCaaS), as both have been previously identified in the SESAME 5G-PPP project of phase 1, and further “promotes” their role and/or influence within the related 5G vertical markets. 5G ESSENCE’s core innovation is focused upon the development/provision of a highly flexible and scalable platform, offering benefits to the involved market actors. The present work identifies a variety of challenges to be fulfilled by the 5G ESSENCE, in the scope of an enhanced architectural framework. The proposed technical approach exploits the benefits of the centralization of Small Cell functions as scale grows through an edge cloud environment, based on a two-tier architecture: the first, distributed tier offers low-latency services, while the second, centralized tier provides high processing power for computing-intensive network applications. This permits decoupling the control and user planes of the Radio Access Network (RAN) and achieving the advantages of Cloud-RAN without the enormous fronthaul latency restrictions. The use of end-to-end network slicing mechanisms allows the related infrastructure to be shared among multiple operators/vertical industries and its capabilities to be customized on a per-tenant basis, creating a neutral host market and reducing operational costs.

  13. Vehicle speed guidance strategy at signalized intersection based on cooperative vehicle infrastructure system

    Directory of Open Access Journals (Sweden)

    Fengyuan JIA

    2017-10-01

    Full Text Available In order to reduce the stopping time of vehicles at signalized intersections, and given that it is sometimes difficult or even impossible to obtain the real-time queue length at intersections in third- and fourth-tier cities in China, a speed guidance strategy based on a cooperative vehicle infrastructure system is put forward and studied. To validate the strategy, traffic signal timing data for the intersection of Hengshan Road and North Fengming Lake Road in Wuhu were collected by a purpose-built vehicular traffic signal reminder system. Simulation experiments using the acquired data were carried out in the VISSIM software. The simulation results demonstrate that the strategy can effectively decrease link travel time under both high and low traffic flow, with average reductions of 9.2 % and 13.0 %, respectively, and that the effect under low traffic flow is better than that under high traffic flow. The strategy improves the efficiency of traffic at signalized intersections and provides an approach for applying vehicle speed guidance based on cooperative vehicle infrastructure systems.
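    The speed-guidance idea can be illustrated with a small calculation (an editor's sketch, not the algorithm from the paper): given the distance to the stop line and the signal timing broadcast by the roadside unit, pick a speed within the legal range that lets the vehicle arrive during a green window.

```python
# A simple sketch (not the paper's algorithm) of the speed-guidance idea: given
# the distance to the stop line and the broadcast signal timing, pick a speed
# inside the legal range that makes the vehicle arrive on green.

def advisory_speed(distance_m, time_in_cycle_s, green_start_s, green_end_s,
                   cycle_s, v_min=5.0, v_max=16.7):
    """Return a speed in m/s that hits a green window, or None to advise stopping."""
    for k in range(3):  # consider the current and next two cycles
        window_start = green_start_s + k * cycle_s - time_in_cycle_s
        window_end = green_end_s + k * cycle_s - time_in_cycle_s
        if window_end <= 0:
            continue
        earliest_arrival = distance_m / v_max
        latest_arrival = distance_m / v_min
        lo = max(window_start, earliest_arrival)
        hi = min(window_end, latest_arrival)
        if lo <= hi:
            return distance_m / lo if lo > 0 else v_max
    return None  # no feasible green window at a legal speed

# 200 m from the stop line, 10 s into a 60 s cycle with green from 30 s to 55 s
print(round(advisory_speed(200, 10, 30, 55, 60), 1))   # -> 10.0 m/s
```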

  14. Guarding America: Security Guards and U.S. Critical Infrastructure Protection

    National Research Council Canada - National Science Library

    Parfomak, Paul W

    2004-01-01

    The Bush Administration's 2003 National Strategy for the Physical Protection of Critical Infrastructures and Key Assets indicates that security guards are an important source of protection for critical facilities...

  15. Autonomous rendezvous and capture development infrastructure

    Science.gov (United States)

    Bryan, Thomas C.; Roe, Fred; Coker, Cindy; Nelson, Pam; Johnson, B.

    1991-01-01

    In the development of the technology for autonomous rendezvous and docking, key infrastructure capabilities must be used for effective and economical development. This involves facility capabilities, both equipment and personnel, to devise, develop, qualify, and integrate ARD elements and subsystems into flight programs. One effective way of reducing technical risks in developing ARD technology is the use of the ultimate test facility, using a Shuttle-based reusable free-flying testbed to perform a Technology Demonstration Test Flight which can be structured to include a variety of additional sensors, control schemes, and operational approaches. This conceptual testbed and flight demonstration will be used to illustrate how technologies and facilities at MSFC can be used to develop and prove an ARD system.

  16. The Long Shadow of Port Infrastructure in Germany

    DEFF Research Database (Denmark)

    Mitze, Timo Friedel; Breidenbach, Philipp

    2016-01-01

    . Since it is very likely that results from least square estimations suffer from endogeneity problems, we base the identification on exogenous long-run instruments. In particular, port facilities built before the industrial revolution provide an adequate instrument for current port infrastructure since...

  17. Knowledge Management Systems and Open Innovation in Second Tier UK Universities

    Science.gov (United States)

    Chaston, Ian

    2012-01-01

    The purpose of this paper is to examine the performance of second tier UK universities in relation to the effectiveness of their knowledge management systems and involvement in open innovation. Data were acquired using a mail survey of academic staff in social science and business faculties in second tier institutions. The results indicate that…

  18. A Step-by-Step Guide to Tier 2 Behavioral Progress Monitoring

    Science.gov (United States)

    Bruhn, Allison L.; McDaniel, Sara C.; Rila, Ashley; Estrapala, Sara

    2018-01-01

    Students who are at risk for or show low-intensity behavioral problems may need targeted, Tier 2 interventions. Often, Tier 2 problem-solving teams are charged with monitoring student responsiveness to intervention. This process may be difficult for those who are not trained in data collection and analysis procedures. To aid practitioners in these…

  19. UO3 plant turnover - facility description document

    International Nuclear Information System (INIS)

    Clapp, D.A.

    1995-01-01

    This document was developed to provide a facility description for those portions of the UO3 Facility being transferred to Bechtel Hanford Company, Inc. (BHI) following completion of facility deactivation. The description of the facility and its deactivated-state condition is intended only to serve as an overview of the plant as it is being transferred to BHI

  20. Extending access to essential services against constraints: the three-tier health service delivery system in rural China (1949-1980).

    Science.gov (United States)

    Feng, Xing Lin; Martinez-Alvarez, Melisa; Zhong, Jun; Xu, Jin; Yuan, Beibei; Meng, Qingyue; Balabanova, Dina

    2017-05-23

    China has made remarkable progress in scaling up essential services during the last six decades, making health care increasingly available in rural areas. This was partly achieved through the building of a three-tier health system in the 1950s, established as a linked network with health service facilities at county, township and village level, to extend services to the whole population. We developed a Theory of Change to chart the policy context, contents and mechanisms that may have facilitated the establishment of the three-tier health service delivery system in rural China. We systematically synthesized the best available evidence on how China achieved universal access to essential services in resource-scarce rural settings, with a particular emphasis on the lessons learned before the 1980s, when the country suffered a particularly acute lack of resources. The search identified only three peer-reviewed articles that fit our criteria for scientific rigor. We therefore drew extensively on government policy documents, and triangulated them with other publications and key informant interviews. We found that China's three-tier health service delivery system was established in response to acute health challenges, including high fertility and mortality rates. Health system resources were extremely low in view of the needs and insufficient to extend access to even basic care. With strong political commitment to rural health and a "health-for-all" policy vision underlying implementation, a three-tier health service delivery model connecting villages, townships and counties was quickly established. We identified several factors that contributed to the success of the three-tier system in China: a realistic health human resource development strategy, use of mass campaigns as a vehicle to increase demand, innovative financing mechanisms, public-private partnership models in the early stages of scale up, and an integrated approach to service delivery. An

  1. Spatial planning, infrastructure and implementation: Implications for ...

    African Journals Online (AJOL)

    Infrastructure plays key roles in shaping the spatial form of the city at a macro- ... matrix of knowledge and skills is produced, and the way these fields of study have been ... the case in African cities. ... ternational urban development circles, .... facilities and associated bulk network ..... Urban and Regional Planning Course.

  2. A method for the efficient prioritization of infrastructure renewal projects

    International Nuclear Information System (INIS)

    Karydas, D.M.; Gifun, J.F.

    2006-01-01

    The infrastructure renewal program at MIT consists of a large number of projects with an estimated budget that could approach $1 billion. Infrastructure renewal at the Massachusetts Institute of Technology (MIT) is the process of evaluating and investing in the maintenance of facility systems and basic structure to preserve existing campus buildings. The selection and prioritization of projects must be addressed with a systematic method for the optimal allocation of funds and other resources. This paper presents a case study of a prioritization method utilizing multi-attribute utility theory. This method was developed at MIT's Department of Nuclear Engineering and was deployed by the Department of Facilities after appropriate modifications were implemented to address the idiosyncrasies of infrastructure renewal projects and the competing criteria and constraints that influence the judgment of the decision-makers. Such criteria include minimization of risk, optimization of economic impact, and coordination with academic policies, programs, and operations of the Institute. A brief overview of the method is presented, as well as the results of its application to the prioritization of infrastructure renewal projects. Results of workshops held at MIT with the participation of stakeholders demonstrate the feasibility of the prioritization method and the usefulness of this approach
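    A minimal additive multi-attribute utility sketch in the spirit of the method described above is shown next; the criteria, weights and project scores are invented for illustration and do not reproduce the MIT model.

```python
# A minimal multi-attribute utility sketch in the spirit of the method described
# above; criteria, weights and project scores are invented for illustration only.
WEIGHTS = {"risk_reduction": 0.5, "economic_impact": 0.3, "academic_alignment": 0.2}

projects = {
    "Steam line renewal": {"risk_reduction": 0.9, "economic_impact": 0.6, "academic_alignment": 0.4},
    "Roof replacement":   {"risk_reduction": 0.5, "economic_impact": 0.4, "academic_alignment": 0.3},
    "Lab HVAC upgrade":   {"risk_reduction": 0.6, "economic_impact": 0.7, "academic_alignment": 0.9},
}

def utility(scores: dict) -> float:
    """Additive multi-attribute utility: weighted sum of normalized criterion scores."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

ranked = sorted(projects.items(), key=lambda kv: utility(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{utility(scores):.2f}  {name}")
```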

  3. Public Private Partnerships: A possible alternative for delivery of infrastructure projects in Africa

    OpenAIRE

    Salim Bwanali; Pantaleo Rwelamila

    2017-01-01

    It is estimated that Africa needs $93 billion annually until 2020 in order to bridge its infrastructure deficit. It is through significant investment in infrastructure development that economic growth and poverty alleviation can be enhanced. However, central to all construction projects is an effective and sustainable procurement system. There is a notable shift by some African governments to turn to the private sector to design, build, finance and operate infrastructure facilities previously ...

  4. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    International Nuclear Information System (INIS)

    Capone, V; Esposito, R; Pardi, S; Taurino, F; Tortone, G

    2012-01-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  5. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    Science.gov (United States)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
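    The two records above describe the same system; as a hedged illustration of the "adaptive scheduling of virtual machines on hypervisor hosts" they mention, the sketch below places each new VM on the hypervisor with the most free memory. Host names and sizes are hypothetical, and the real INFN-Naples management scripts are not reproduced here.

```python
# Not the INFN-Naples scripts themselves: a minimal sketch of adaptive VM
# placement, assigning each new VM to the host with the most free memory.
from dataclasses import dataclass

@dataclass
class Hypervisor:
    name: str
    total_mem_gb: int
    used_mem_gb: int = 0

    @property
    def free_mem_gb(self) -> int:
        return self.total_mem_gb - self.used_mem_gb

def place_vm(hosts, vm_mem_gb):
    """Pick the host with the largest amount of free memory that still fits the VM."""
    candidates = [h for h in hosts if h.free_mem_gb >= vm_mem_gb]
    if not candidates:
        raise RuntimeError("no hypervisor can host this VM")
    best = max(candidates, key=lambda h: h.free_mem_gb)
    best.used_mem_gb += vm_mem_gb
    return best.name

hosts = [Hypervisor("hv01", 128, 100), Hypervisor("hv02", 128, 40), Hypervisor("hv03", 64, 10)]
for vm in (16, 48, 32):
    print(place_vm(hosts, vm))   # -> hv02, hv02, hv03
```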

  6. Identification student’s misconception of heat and temperature using three-tier diagnostic test

    Science.gov (United States)

    Suliyanah; Putri, H. N. P. A.; Rohmawati, L.

    2018-03-01

    The objective of this research is to develop a Three-Tier Diagnostic Test (TTDT) to identify students' misconceptions of heat and temperature. The stages of development include: analysis, planning, design, development, evaluation and revision. The results of this study show that (1) the quality of the three-tier diagnostic test instrument developed is good, with the following details: (a) internal validity of 88.19%, which belongs to the valid category; (b) for external validity, the empirical construct validity test using the Pearson Product Moment correlation obtained 0.43, and the empirical test yielded 6.1% false positives and 5.9% false negatives, so the instrument is valid; (c) test reliability, estimated with Cronbach’s Alpha, of 0.98, which is acceptable; (d) the difficulty-level result of 80% indicates that the test is quite difficult. (2) Based on the second test, student misconceptions on the heat, temperature and displacement material were at the highest 84%, at the lowest 21%, with 7% of students free of misconceptions. (3) The most frequent cause of misconception among students is associative thinking (22%) and the least frequent is incomplete reasoning (11%). The Three-Tier Diagnostic Test (TTDT) could thus identify students' misconceptions of heat and temperature.
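    The record above reports a Cronbach's Alpha of 0.98 for the instrument; as a small worked illustration of how such a coefficient is computed, the sketch below applies the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score) to a simulated item-score matrix.

```python
# Worked illustration of Cronbach's alpha on a made-up item-score matrix
# (rows = students, columns = test items); not the study's data.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(7)
ability = rng.normal(size=(100, 1))
# 10 correlated 0/1 items driven by a common "ability" factor plus noise
items = (ability + 0.5 * rng.normal(size=(100, 10)) > 0).astype(float)
print(round(cronbach_alpha(items), 2))
```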

  7. Do Physical Proximity and Availability of Adequate Infrastructure at Public Health Facility Increase Institutional Delivery? A Three Level Hierarchical Model Approach.

    Science.gov (United States)

    Patel, Rachana; Ladusingh, Laishram

    2015-01-01

    This study aims to examine the inter-district and inter-village variation in the utilization of health services for institutional births in EAG states in the presence of a rural health program and availability of infrastructure. District Level Household Survey-III (2007-08) data on delivery care and facility information were used for this purpose. Bivariate results examined the utilization pattern by state in the presence of women-related correlates, while a three-level hierarchical multilevel model illustrates the effect of accessibility, availability of health facilities and community health program variables on the utilization of health services for institutional births. The study found a satisfactory improvement in the states of Rajasthan, Madhya Pradesh and Orissa and, importantly, in Bihar and Uttaranchal. The study showed that increasing distance from a health facility discouraged institutional births, and there was a rapid decline of more than 50% in institutional delivery as the distance to the public health facility exceeded 10 km. Additionally, a skilled female health worker (ANM) and an observably improved public health facility significantly increased the probability of utilization as compared to a non-skilled ANM and non-improved health centers. Adequacy of the essential equipment/laboratory services required for maternal care significantly encouraged deliveries at public health facilities. Among district/village variables, neighborhood poverty was negatively related to institutional delivery, while higher education levels in the village and women's residence in more urbanized districts increased utilization. "Inter-district" variation was 14 percent whereas "between-villages" variation in utilization was 11 percent once all the three-level variables were controlled for in the model. This study suggests that the mere availability of health facilities is a necessary but not sufficient condition to promote utilization when the quality of service is inadequate and inaccessible considering

  8. Do Physical Proximity and Availability of Adequate Infrastructure at Public Health Facility Increase Institutional Delivery? A Three Level Hierarchical Model Approach.

    Directory of Open Access Journals (Sweden)

    Rachana Patel

    Full Text Available This study aims to examine the inter-district and inter-village variation in the utilization of health services for institutional births in EAG states in the presence of a rural health program and availability of infrastructure. District Level Household Survey-III (2007-08) data on delivery care and facility information were used for this purpose. Bivariate results examined the utilization pattern by state in the presence of women-related correlates, while a three-level hierarchical multilevel model illustrates the effect of accessibility, availability of health facilities and community health program variables on the utilization of health services for institutional births. The study found a satisfactory improvement in the states of Rajasthan, Madhya Pradesh and Orissa and, importantly, in Bihar and Uttaranchal. The study showed that increasing distance from a health facility discouraged institutional births, and there was a rapid decline of more than 50% in institutional delivery as the distance to the public health facility exceeded 10 km. Additionally, a skilled female health worker (ANM) and an observably improved public health facility significantly increased the probability of utilization as compared to a non-skilled ANM and non-improved health centers. Adequacy of the essential equipment/laboratory services required for maternal care significantly encouraged deliveries at public health facilities. Among district/village variables, neighborhood poverty was negatively related to institutional delivery, while higher education levels in the village and women's residence in more urbanized districts increased utilization. "Inter-district" variation was 14 percent whereas "between-villages" variation in utilization was 11 percent once all the three-level variables were controlled for in the model. This study suggests that the mere availability of health facilities is a necessary but not sufficient condition to promote utilization when the quality of service is inadequate and

  9. CERN’s job diversity on display at the Cité des Métiers

    CERN Multimedia

    Corinne Pralavorio

    2015-01-01

    From 3 to 8 November, CERN took part in the Cité des Métiers careers fair in Geneva. Almost 10,000 people stopped by the Organization’s stand, where they were introduced to the wide range of professions practised at CERN.   Stefano Agosta, a telecommunications expert from the IT department, performs a geolocalisation demonstration with digital radio receivers for visitors to the CERN stand. Network engineering, computer graphics, geomatics, translation, video production, fire and rescue, law, computer-aided design… People often don’t realise how varied the job opportunities are at CERN. More than one hundred professions are present at the Laboratory. This was the message conveyed by representatives of various departments, including human resources and the visits service, at the CERN stand at the Cité des Métiers careers fair, from 3 to 8 November. CERN’s stand was part of the International Geneva section of the ...

  10. Multi-tiered sports arbitrations in the Republic of Serbia

    Directory of Open Access Journals (Sweden)

    Galantić Miloš B.

    2015-01-01

    Full Text Available Contrary to the popular perception of the legal profession, multi-tier arbitrations are neither a new nor an uncommon phenomenon. As the community's need grows for arbitration to become a real, not just theoretical, alternative to the judicial resolution of disputes, arbitration takes on more judicial characteristics, among which one of the most important, and at the same time most controversial, is multi-tiered dispute resolution. Multi-tiered arbitration proceedings are traditionally present in commercial and investment arbitration. However, in recent decades, significant international arbitration institutions have introduced the option of consensual review of arbitration awards. Sports law is an area where, by the end of the twentieth century, the phenomenon was present but largely unnoticed. The international sports community, as a precondition for the survival of autonomous dispute settlement, chose dispute settlement by arbitration, but with a number of significant modifications. One of the most distinctive is multi-tiered arbitration, especially in the most important cases. The main reason for this is the aspiration of the international sports community, following the example of national courts, to organize an efficient, high-quality and final way of resolving disputes within its jurisdiction. The Permanent Court of Arbitration of the Olympic Committee of Serbia follows this logic and, thanks to the provisions of the Sports Act and contrary to the Arbitration Act, introduces the possibility of its decisions being reviewed before the Court of Arbitration for Sport based in Lausanne.

  11. Retrospective on the Seniors' Council Tier 1 LDRD portfolio.

    Energy Technology Data Exchange (ETDEWEB)

    Ballard, William Parker

    2012-04-01

    This report describes the Tier 1 LDRD portfolio, administered by the Seniors Council between 2003 and 2011. 73 projects were sponsored over the 9 years of the portfolio at a cost of $10.5 million which includes $1.9M of a special effort in directed innovation targeted at climate change and cyber security. Two of these Tier 1 efforts were the seeds for the Grand Challenge LDRDs in Quantum Computing and Next Generation Photovoltaic conversion. A few LDRDs were terminated early when it appeared clear that the research was not going to succeed. A great many more were successful and led to full Tier 2 LDRDs or direct customer sponsorship. Over a dozen patents are in various stages of prosecution from this work, and one project is being submitted for an R and D 100 award.

  12. 12 CFR 404.3 - Public reference facilities.

    Science.gov (United States)

    2010-01-01

    12 CFR 404.3 (Banks and Banking): EXPORT-IMPORT BANK OF THE UNITED STATES; INFORMATION DISCLOSURE; Procedures for Disclosure of Records Under the Freedom of Information Act. § 404.3 Public reference facilities. Ex-Im Bank...

  13. Nordic research infrastructures for plant phenotyping

    Directory of Open Access Journals (Sweden)

    Kristiina Himanen

    2018-03-01

    Full Text Available Plant phenomics refers to the systematic study of plant phenotypes. Together with closely monitored, controlled climates, it provides an essential component for the integrated analysis of genotype-phenotype-environment interactions. Currently, several plant growth and phenotyping facilities are under establishment globally, and numerous facilities are already in use. Alongside the development of the research infrastructures, several national and international networks have been established to support shared use of the new methodology. In this review, an overview is given of the Nordic plant phenotyping and climate control facilities. Since many areas of phenomics such as sensor-based phenotyping, image analysis and data standards are still developing, promotion of educational and networking activities is especially important. These facilities and networks will be instrumental in tackling plant breeding and plant protection challenges. They will also provide possibilities to study wild species and their ecological interactions under changing Nordic climate conditions.

  14. Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Anisenkov, A; Belov, S; Kaplin, V; Korol, A; Skovpen, K; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2012-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies, and the Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, several computing facilities are currently hosted by NSC institutes, each optimized for a particular set of tasks; the largest of these are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. A dedicated optical network with an initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on the XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects of the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.

  15. Statistics of the uplink co-tier interference in closed access heterogeneous networks

    KAUST Repository

    Tabassum, Hina

    2013-09-01

    In this paper, we derive a statistical model of the co-tier interference in closed access two tier heterogeneous wireless cellular networks with femtocell deployments. The derived model captures the impact of bounded path loss model, wall penetration loss, user distributions, random locations, and density of the femtocells. Firstly, we derive the analytical expressions for the probability density function (PDF) and moment generating function (MGF) of the co-tier interference considering a single femtocell interferer by exploiting the random disc line picking theory from geometric probability. We then derive the MGF of the cumulative interference from all femtocell interferers considering full spectral reuse in each femtocell. Orthogonal spectrum partitioning is assumed between the macrocell and femtocell networks to avoid any cross-tier interference. Finally, the accuracy of the derived expressions is validated through Monte-Carlo simulations and the expressions are shown to be useful in quantifying important network performance metrics such as ergodic capacity. © 2013 IEEE.
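    A Monte Carlo cross-check of such an interference model can be sketched in a few lines. The sketch below draws femtocell interferer positions uniformly on a disc, applies a bounded path-loss model with a fixed wall penetration loss, and estimates the distribution and an empirical MGF of the aggregate co-tier interference. All numerical parameters (cell radius, transmit power, path-loss exponent, wall loss) are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not taken from the paper)
R = 500.0           # macrocell radius in metres
n_femto = 30        # number of interfering femtocells
p_tx = 0.1          # femtocell user transmit power in watts
alpha = 3.5         # path-loss exponent
d0 = 1.0            # reference distance for the bounded path-loss model
wall_loss_db = 10   # wall penetration loss in dB
n_trials = 10000

def aggregate_interference():
    """One Monte Carlo draw of the cumulative co-tier interference
    at a femtocell of interest placed at the origin."""
    # Interferers uniformly distributed over a disc of radius R
    d = np.maximum(R * np.sqrt(rng.random(n_femto)), d0)
    # Bounded path loss min(1, d^-alpha), plus wall penetration loss
    path_gain = np.minimum(1.0, d ** (-alpha)) * 10 ** (-wall_loss_db / 10)
    return np.sum(p_tx * path_gain)

samples = np.array([aggregate_interference() for _ in range(n_trials)])
print(f"mean interference : {samples.mean():.3e} W")
print(f"95th percentile   : {np.percentile(samples, 95):.3e} W")
# Empirical MGF estimate E[exp(s*I)], useful for cross-checking analytical MGFs
s = -1e6
print(f"empirical MGF at s={s:g}: {np.exp(s * samples).mean():.4f}")
```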

  16. ShadowNet: An Active Defense Infrastructure for Insider Cyber Attack Prevention

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Beaver, Justin M [ORNL; Treadwell, Jim N [ORNL

    2012-01-01

    The ShadowNet infrastructure for insider cyber attack prevention comprises a tiered server system that can dynamically redirect dangerous or suspicious network traffic away from production servers that provide web, ftp, database and other vital services, towards cloned virtual machines in a quarantined environment. This is done transparently from the point of view of both the attacker and normal users. Existing connections, such as SSH sessions, are not interrupted. Any malicious activity performed by the attacker on a quarantined server is not reflected on the production server. The attacker is provided services from the quarantined server, which creates the impression that the attacks performed are successful. The activities of the attacker on the quarantined system can be recorded, much like in a honeypot system, for forensic analysis.
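    The core routing decision of such a tiered defence can be illustrated with a short sketch: sessions whose suspicion score crosses a threshold are transparently handed a quarantined clone instead of the production backend. The scoring rule, threshold and backend addresses below are invented for illustration and are not ShadowNet's actual logic.

```python
# Illustrative routing decision for a tiered insider-defence setup.
# Scores, addresses and the scoring rule are placeholders.
PRODUCTION = {"web": "10.0.0.10", "ftp": "10.0.0.11", "db": "10.0.0.12"}
QUARANTINE = {"web": "10.9.0.10", "ftp": "10.9.0.11", "db": "10.9.0.12"}
SUSPICION_THRESHOLD = 0.8

def suspicion_score(session):
    """Toy scoring rule combining a few behavioural indicators."""
    score = 0.0
    score += 0.5 if session.get("failed_logins", 0) > 3 else 0.0
    score += 0.4 if session.get("off_hours") else 0.0
    score += 0.3 if session.get("touched_sensitive_paths") else 0.0
    return min(score, 1.0)

def backend_for(session, service):
    """Pick the backend a session should be (re)directed to."""
    if suspicion_score(session) >= SUSPICION_THRESHOLD:
        return QUARANTINE[service]   # attacker keeps "working", on a clone
    return PRODUCTION[service]

print(backend_for({"failed_logins": 5, "off_hours": True}, "web"))  # quarantined
print(backend_for({"failed_logins": 0}, "db"))                      # production
```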

  17. CERIF-CRIS for the European e-Infrastructure

    Directory of Open Access Journals (Sweden)

    K Jeffery

    2010-04-01

    Full Text Available The European e-infrastructure is the ICT support for research, although the infrastructure will be extended for commercial/business use. It supports the research process across funding agencies, research institutions and innovation. It supports experimental facilities, modelling and simulation, communication between researchers, and the workflow of research processes and research management. We propose that the core should be CERIF: an EU recommendation to member states for exchanging research information and for homogeneous access to heterogeneous information. CERIF can also integrate associated systems (such as finance, human resources, project management, and library services) and provides interoperation among research institutions, research funders, and innovators.

  18. CERN Infrastructure Evolution

    CERN Document Server

    Bell, Tim

    2012-01-01

    The CERN Computer Centre is reviewing strategies for optimizing the use of the existing infrastructure in the future, and in the likely scenario that any extension will be remote from CERN, and in the light of the way other large facilities are today being operated. Over the past six months, CERN has been investigating modern and widely-used tools and procedures used for virtualisation, clouds and fabric management in order to reduce operational effort, increase agility and support unattended remote computer centres. This presentation will give the details on the project’s motivations, current status and areas for future investigation.

  19. Guidance for the application of an assessment methodology for innovative nuclear energy systems. INPRO manual - Infrastructure. Vol. 3 of the final report of phase 1 of the International Project on Innovative Nuclear Reactors and Fuel Cycles (INPRO)

    International Nuclear Information System (INIS)

    2008-11-01

    The International Project on Innovative Nuclear Reactors and Fuel Cycles (INPRO) was initiated in the year 2000, based on a resolution of the IAEA General Conference (GC(44)/RES/21). The main objectives of INPRO are (1) to help to ensure that nuclear energy is available to contribute in fulfilling energy needs in the 21st century in a sustainable manner, (2) to bring together both technology holders and technology users to consider jointly the international and national actions required to achieve desired innovations in nuclear reactors and fuel cycles; and (3) to create a forum to involve all relevant stakeholders that will have an impact on, draw from, and complement the activities of existing institutions, as well as ongoing initiatives at the national and international level. The INPRO manual comprises an overview volume and eight additional volumes covering the areas of economics (Volume 2), infrastructure (Volume 3, outlined here), waste management (Volume 4), proliferation resistance (Volume 5), physical protection (Volume 6), environment (Volume 7), safety of reactors (Volume 8), and safety of nuclear fuel cycle facilities (Volume 9). Within INPRO, the term infrastructure can be defined as the collection of capabilities of institutions involved in a nuclear power program in a given country that are necessary for the successful deployment (or enlargement) and operation of an INS, including legal and institutional, industrial and economic, and socio-political features. Within INPRO, the definition of an INS includes activities and facilities (i.e. components) at both the front end of the fuel cycle (e.g., mining, enrichment, fuel fabrication) and the back end (e.g., reprocessing, storage, and repository) (Section 4.2.1 of Volume 1 of the INPRO manual). Consequently, within INPRO, such facilities are not considered to be a part of the INPRO area of infrastructure, albeit that they influence the size of the necessary infrastructure required in a given

  20. Search-based Tier Assignment for Optimising Offline Availability in Multi-tier Web Applications

    OpenAIRE

    Philips, Laure; De Koster, Joeri; De Meuter, Wolfgang; De Roover, Coen

    2017-01-01

    Web programmers are often faced with several challenges in the development process of modern, rich internet applications. Technologies for the different tiers of the application have to be selected: a server-side language, a combination of JavaScript, HTML and CSS for the client, and a database technology. Meeting the expectations of contemporary web applications requires even more effort from the developer: many state of the art libraries must be mastered and glued together. This leads to an...

  1. Development of two tier test to assess conceptual understanding in heat and temperature

    Science.gov (United States)

    Winarti; Cari; Suparmi; Sunarno, Widha; Istiyono, Edi

    2017-01-01

    Heat and temperature are concepts that are taught from primary school to undergraduate level. One problem with heat and temperature is that they are presented as abstract, theoretical concepts, whereas students' conceptual frameworks develop from their daily experiences. The purpose of this research was to develop a two-tier test on the concepts of heat and temperature and to measure students' conceptual understanding of them. The study combined qualitative and quantitative methods. The two-tier test was developed using the procedures defined by Borg and Gall. It consisted of 20 questions and was administered to 137 students to collect data. The results showed that the two-tier test was effective in determining the students' conceptual understanding and that it may also be used as an alternative instrument for the assessment and evaluation of students' achievement.
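    The scoring of such a two-tier instrument lends itself to a short illustration. The sketch below classifies a response as scientific, partial or alternative understanding depending on which of the two tiers are answered correctly; the answer key and the response tuples are entirely hypothetical and not taken from the instrument described above.

```python
# Minimal sketch of two-tier scoring: a response counts as scientific
# understanding only when both the answer tier and the reason tier are
# correct. The answer key below is hypothetical.
ANSWER_KEY = {1: ("B", "C"), 2: ("A", "A"), 3: ("D", "B")}  # item -> (answer, reason)

def classify(item, answer, reason):
    key_answer, key_reason = ANSWER_KEY[item]
    if answer == key_answer and reason == key_reason:
        return "scientific understanding"
    if answer == key_answer or reason == key_reason:
        return "partial understanding"
    return "alternative conception"

responses = [(1, "B", "C"), (2, "A", "D"), (3, "C", "B")]  # hypothetical answers
for item, ans, rsn in responses:
    print(f"item {item}: {classify(item, ans, rsn)}")
```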

  2. Cross-validation and refinement of the Stoffenmanager as a first tier exposure assessment tool for REACH

    NARCIS (Netherlands)

    Schinkel, J.; Fransman, W.; Heussen, H.; Kromhout, H.; Marquart, H.; Tielemans, E.

    2010-01-01

    Objectives: For regulatory risk assessment under REACH a tiered approach is proposed in which the first tier models should provide a conservative exposure estimate that can discriminate between scenarios which are of concern and those which are not. The Stoffenmanager is mentioned as a first tier

  3. Cross-validation and refinement of the Stoffenmanager as a first tier exposure assessment tool for REACH.

    NARCIS (Netherlands)

    Schinkel, J.; Fransman, W.; Heussen, H.; Kromhout, H.; Marquart, H.; Tielemans, E.

    2010-01-01

    OBJECTIVES: For regulatory risk assessment under REACH a tiered approach is proposed in which the first tier models should provide a conservative exposure estimate that can discriminate between scenarios which are of concern and those which are not. The Stoffenmanager is mentioned as a first tier

  4. Effect of cage tier and age on performance, egg quality and stress ...

    African Journals Online (AJOL)

    This study was conducted to investigate the effects of cage tier and age on performance characteristics of layer hybrids, egg quality and some stress parameters. Ninety laying hens (hybrid ATAK-S) of similar bodyweights were used in the experiment. They were housed in three-tier conventional battery cages (bottom, ...

  5. 33 CFR 105.205 - Facility Security Officer (FSO).

    Science.gov (United States)

    2010-07-01

    ... training in the following, as appropriate: (i) Relevant international laws and codes, and recommendations... well as any plans to change the facility or facility infrastructure prior to amending the FSP; and (18...

  6. Improving water, sanitation and hygiene in health-care facilities, Liberia.

    Science.gov (United States)

    Abrampah, Nana Mensah; Montgomery, Maggie; Baller, April; Ndivo, Francis; Gasasira, Alex; Cooper, Catherine; Frescas, Ruben; Gordon, Bruce; Syed, Shamsuzzoha Babar

    2017-07-01

    The lack of proper water and sanitation infrastructures and poor hygiene practices in health-care facilities reduces facilities' preparedness and response to disease outbreaks and decreases the communities' trust in the health services provided. To improve water and sanitation infrastructures and hygiene practices, the Liberian health ministry held multistakeholder meetings to develop a national water, sanitation and hygiene and environmental health package. A national train-the-trainer course was held for county environmental health technicians, which included infection prevention and control focal persons; the focal persons acted as change agents. In Liberia, only 45% of 701 surveyed health-care facilities had an improved water source in 2015, and only 27% of these health-care facilities had proper disposal for infectious waste. Local ownership, through engagement of local health workers, was introduced to ensure development and refinement of the package. In-county collaborations between health-care facilities, along with multisectoral collaboration, informed national level direction, which led to increased focus on water and sanitation infrastructures and uptake of hygiene practices to improve the overall quality of service delivery. National level leadership was important to identify a vision and create an enabling environment for changing the perception of water, sanitation and hygiene in health-care provision. The involvement of health workers was central to address basic infrastructure and hygiene practices in health-care facilities and they also worked as stimulators for sustainable change. Further, developing a long-term implementation plan for national level initiatives is important to ensure sustainability.

  7. TESLA Test Facility. Status

    International Nuclear Information System (INIS)

    Aune, B.

    1996-01-01

    The TESLA Test Facility (TTF), under construction at DESY by an international collaboration, is an R and D test bed for the superconducting option for future linear e+/e-colliders. It consists of an infrastructure to process and test the cavities and of a 500 MeV linac. The infrastructure has been installed and is fully operational. It includes a complex of clean rooms, an ultra-clean water plant, a chemical etching installation and an ultra-high vacuum furnace. The linac will consist of four cryo-modules, each containing eight 1 meter long nine-cell cavities operated at 1.3 GHz. The base accelerating field is 15 MV/m. A first injector will deliver a low charge per bunch beam, with the full average current (8 mA in pulses of 800 μs). A more powerful injector based on RF gun technology will ultimately deliver a beam with high charge and low emittance to allow measurements necessary to qualify the TESLA option and to demonstrate the possibility of operating a free electron laser based on the Self-Amplified-Spontaneous-Emission principle. Overview and status of the facility will be given. Plans for the future use of the linac are presented. (R.P.)

  8. A multi-tiered architecture for content retrieval in mobile peer-to-peer networks.

    Science.gov (United States)

    2012-01-01

    In this paper, we address content retrieval in Mobile Peer-to-Peer (P2P) Networks. We design a multi-tiered architecture for content retrieval, where at Tier 1, we design a protocol for content similarity governed by a parameter that trades accu...

  9. 41 CFR 105-68.455 - What may I do if a lower tier participant fails to disclose the information required under § 105...

    Science.gov (United States)

    2010-07-01

    ... Regulations System (Continued); GENERAL SERVICES ADMINISTRATION; Regional Offices-General Services... 41 CFR (Public Contracts and Property Management): What may I do if a lower tier participant fails to disclose the information required under § 105-68.355 to the next higher tier...

  10. New off-road engines for TIER 4 final; Neue Offroad-Motoren fuer Tier 4 final

    Energy Technology Data Exchange (ETDEWEB)

    Adt, Hans-Ulrich; Lehmann, Henrik [MTU Friedrichshafen GmbH, Friedrichshafen (Germany); Herter, Yvonne; Weidler, Alexander [Daimler AG, Stuttgart (Germany). Baureihe 1000

    2013-03-15

    To meet the off-road emission standards EU IV and EPA Tier 4 final, as of 2014 the Tognum Group will be offering newly developed engines of the Series 1000 to 1500. These MTU brand diesel engines deliver outputs ranging from 100 to 460 kW and are designed to power agricultural and forestry machinery and construction as well as special-purpose machinery. (orig.)

  11. Wasted? Managing Decline and Marketing Difference in Third Tier Cities

    Directory of Open Access Journals (Sweden)

    Tara BRABAZON

    2012-06-01

    Full Text Available Third-tier cities are neglected in the research literature. Global and second-tier cities provide the positive, proactive applications of city imaging and creative industries strategies. However, small cities – particularly those who reached their height and notoriety through the industrial revolution – reveal few strategies for stability, let alone growth. This study investigates an unusual third-tier city: Oshawa in Ontario Canada. Known as the home of General Motors, its recent economic and social development has been tethered to the arrival of a new institution of higher education: the University of Ontario Institute of Technology. Yet this article confirms that simply opening a university is not enough to commence regeneration or renewal, particularly if an institution is imposed on unwilling residents. Therefore, an alternative strategy – involving geosocial networking – offers a way for local businesses and organizations to attract customers and provide a digital medication to analogue injustice and decay.

  12. A Two-Tiered Model for Analyzing Library Web Site Usage Statistics, Part 1: Web Server Logs.

    Science.gov (United States)

    Cohen, Laura B.

    2003-01-01

    Proposes a two-tiered model for analyzing web site usage statistics for academic libraries: one tier for library administrators that analyzes measures indicating library use, and a second tier for web site managers that analyzes measures aiding in server maintenance and site design. Discusses the technology of web site usage statistics, and…

  13. Measuring Spatiality in Infrastructure and Development of High School Education in Hooghly District of West Bengal, India

    Directory of Open Access Journals (Sweden)

    Shovan Ghosh

    2018-06-01

    Full Text Available Increasing access and enrolment do not necessarily ensure school effectiveness or educational progress; they are parameters of the expansion of education rather than measures of the quality of education. The present paper examines whether infrastructural development in schools in fact ensures good educational development. To accomplish this, an Education Infrastructure Index was prepared from Access, Facility and Teacher indices, while a combination of an Enrollment Index and a Literacy Index gave rise to an Educational Development Index. The study reveals that the accessibility factor creates a division within rural spaces, in the form of backward rural, rural and prosperous rural areas, that manifests itself through the availability of teachers and facilities. In urban areas, where accessibility is not a matter of concern, facilities and teachers make the difference between less developed and developed urban areas. The higher Educational Development Index in non-rural areas indicates the town-centric nature of the development of the educational system. Superimposing the infrastructural and developmental parameters revealed that good infrastructure does not always ensure good educational achievement. Against this backdrop, the key purpose of this article is to measure spatiality in the infrastructure and development of high school education in Hooghly District of West Bengal, India.
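    The construction of such composite indices can be illustrated with a short sketch. The code below combines min-max normalised sub-indices with equal weights into an infrastructure index and a development index; the block-level values and the equal weighting are illustrative assumptions, not the authors' actual data or weighting scheme.

```python
import numpy as np

# Hypothetical block-level indicator values (rows = administrative blocks)
access   = np.array([0.40, 0.75, 0.90])   # Access Index
facility = np.array([0.35, 0.60, 0.95])   # Facility Index
teacher  = np.array([0.50, 0.55, 0.85])   # Teacher Index
enrol    = np.array([0.65, 0.70, 0.80])   # Enrollment Index
literacy = np.array([0.60, 0.72, 0.88])   # Literacy Index

def minmax(x):
    """Rescale an indicator to the [0, 1] range."""
    return (x - x.min()) / (x.max() - x.min())

# Equal-weight composites (illustrative weighting, not the authors' scheme)
eii = (minmax(access) + minmax(facility) + minmax(teacher)) / 3   # infrastructure
edi = (minmax(enrol) + minmax(literacy)) / 2                      # development

for i, (a, b) in enumerate(zip(eii, edi)):
    print(f"block {i}: EII={a:.2f}  EDI={b:.2f}")
```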

  14. Concepts and procedures for mapping food and health research infrastructure

    DEFF Research Database (Denmark)

    Brown, Kerry A.; Timotijević, Lada; Geurts, Marjolein

    2017-01-01

    …be achieved in the area of food and health has, to date, been unclear. Scope and approach: This commentary paper presents examples of the types of food and health research facilities, resources and services available in Europe. Insights are provided on the challenge of identifying and classifying research infrastructure. In addition, suggestions are made for the future direction of food and health research infrastructure in Europe. These views are informed by the EuroDISH project, which mapped research infrastructure in four areas of food and health research: Determinants of dietary behaviour; Intake of foods/nutrients; Status and functional markers of nutritional health; Health and disease risk of foods/nutrients. Key findings and conclusion: There is no objective measure to identify or classify research infrastructure. It is therefore difficult to operationalise this term. EuroDISH demonstrated specific challenges

  15. Strengthening of Organizational Infrastructure for Meeting IAEA Nuclear Safeguards Obligations: Bangladesh Perspective

    International Nuclear Information System (INIS)

    Mollah, A.S.

    2010-01-01

    Safeguards are arrangements to account for and control the use of nuclear materials. This verification is a key element of the international system which ensures that uranium in particular is used only for peaceful purposes. The only nuclear reactor in Bangladesh achieved criticality on September 14, 1986. The Reactor Operation and Maintenance Unit routinely carries out certain international obligations that Bangladesh has undertaken as a signatory of various treaties, agreements and protocols in the international safeguards regime. Pursuant to the relevant articles of these agreements/protocols, the reactor and associated facilities of Bangladesh (facility codes BDA- and BDZ-) are physically inspected by designated IAEA safeguards inspectors. The Bangladesh Atomic Energy Commission (BAEC) has recently created a new division called the 'Nuclear Safeguards and Security Division' to enhance safeguards activities in line with international obligations. This division plays a leading role in the planning, implementation, and evaluation of the BAEC's nuclear safeguards and nuclear security activities. It is actively working with the USDOE, the IAEA and the EU to enhance nuclear safeguards and security activities in the following areas: - Analysis of nuclear safeguards related reports of the 3 MW TRIGA Mark-II research reactor; - Upgrading of the physical protection system of the 3 MW TRIGA Mark-II research reactor, gamma irradiation facilities, the central radioactive storage and processing facility and different radiation oncology facilities of Bangladesh under the GTRI programme; - Supervision of the installation of the radiation monitoring system at Chittagong port under the USDOE Megaports Initiative for the detection of illicit trafficking of nuclear and radioactive materials; - Development of laboratory capabilities for the analysis of nuclear safeguards related samples; - Planning for the development of organizational infrastructure to carry out safeguards related activities under IAEA different

  16. Energy Systems Integration Facility (ESIF) Facility Stewardship Plan: Revision 2.1

    Energy Technology Data Exchange (ETDEWEB)

    Torres, Juan [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Anderson, Art [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-01-02

    The U.S. Department of Energy (DOE), Office of Energy Efficiency and Renewable Energy (EERE), has established the Energy Systems Integration Facility (ESIF) on the campus of the National Renewable Energy Laboratory (NREL) and has designated it as a DOE user facility. This 182,500-ft² research facility provides state-of-the-art laboratory and support infrastructure to optimize the design and performance of electrical, thermal, fuel, and information technologies and systems at scale. This Facility Stewardship Plan provides DOE and other decision makers with information about the existing and expected capabilities of the ESIF and the expected performance metrics to be applied to ESIF operations. This plan is a living document that will be updated and refined throughout the lifetime of the facility.

  17. Rôle et limites des tiers-lieux dans la fabrique des villes contemporaines

    OpenAIRE

    Besson, Raphaël

    2017-01-01

    The notion of third places (tiers-lieux) has developed in an essentially empirical manner. It covers multiple realities, such as coworking spaces, living labs and fab labs. Some third places take a particular interest in the city and in the new conditions of urban production. Drawing on open innovation methods and the potential of digital tools, these third places advocate the idea of an urbanism that is no longer the exclusive preserve of experts, but...

  18. Assessment of socio-economic potential of regions for placement of the logistic infrastructure objects

    Directory of Open Access Journals (Sweden)

    Aleksandr Nelevich Rakhmangulov

    2014-06-01

    Full Text Available Currently, in regional markets there is a disproportion between the growing demand for transportation and logistics services and the availability of the facilities needed to provide them, which results in high logistics costs and does not meet the country's strategic objective of creating a common economic space. The article describes the system of market factors that have the most significant influence on the distribution of logistics facilities. The study and evaluation of a region's potential for locating logistics facilities are proposed to be performed using simulation techniques and statistical data analysis. The article presents multivariate statistical models that capture the kind and strength of correlation between the socio-economic development factors of regions, as well as a simulation model which allows the dynamics of these factors to be assessed and the demand for logistics infrastructure facilities to be predicted. The choice of the region (subject) in which to locate a logistics centre is proposed to be made using the developed technique, based on the calculation of an integrated index that takes into account differences in the level of socio-economic and infrastructural development of the regions. This technique, in conjunction with the simulation model, is applicable at various administrative and territorial levels (region, city) and makes it possible to take into account both the current demand for logistics infrastructure and its dynamics. The technique given in the article can be used to assess the attractiveness of regions of the Russian Federation in the development of public and private investment projects for the development of logistics infrastructure.

  19. The Use of Proxy Caches for File Access in a Multi-Tier Grid Environment

    International Nuclear Information System (INIS)

    Brun, R; Duellmann, D; Ganis, G; Janyst, L; Peters, A J; Rademakers, F; Sindrilaru, E; Hanushevsky, A

    2011-01-01

    The use of proxy caches has been extensively studied in the HEP environment for efficient access to database data and has shown significant performance gains with only very moderate operational effort at the higher grid tiers (T2, T3). In this contribution we propose to apply the same concept to the area of file access and analyse the possible performance gains, the operational impact on site services and the applicability to different HEP use cases. Based on proof-of-concept studies with a modified XROOT proxy server, we review the cache efficiency and overheads for the access patterns of typical ROOT based analysis programs. We conclude with a discussion of the potential role of this new component at the different tiers of a distributed computing grid.

  20. Novel two-tiered approach of ecological risk assessment for pesticide mixtures based on joint effects.

    Science.gov (United States)

    Tian, Dayong; Mao, Haichen; Lv, Huichao; Zheng, Yong; Peng, Conghu; Hou, Shaogang

    2018-02-01

    Ecological risk assessment for mixtures has attracted considerable attention. In this study, 38 pesticides found in the real environment were taken as the study objects, and their toxicities to organisms from three trophic levels were used to assess the ecological risk of the mixture. The first tier assessment was based on the concentration addition (CA) effect, and the obtained sums of risk quotients (SRQ_species-CA) were 3.06-9.22. The second tier assessment was based on non-CA effects, and the calculated SRQ_species-TU values were 5.37-9.29, using the joint effects (TU_sum) as modifying coefficients; these are higher than SRQ_species-CA, indicating that ignoring joint effects might run the risk of underestimating the actual impact of pesticide mixtures. Due to the influence of synergistic and antagonistic effects, the contributions of components to mixture risks based on non-CA effects differ from those based on the CA effect. Moreover, it was found that the top 8 dominating components explained 95.5%-99.8% of the mixture risks in this study. The dominating components are similar in the two tiers for a given species. Accordingly, a novel two-tiered approach was proposed to assess the ecological risks of mixtures based on joint effects. This study provides new insights for ecological risk assessments that consider the joint effects of components in pesticide mixtures. Copyright © 2017 Elsevier Ltd. All rights reserved.
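    The two-tier calculation can be sketched in a few lines. The code below computes a tier-1 sum of risk quotients under concentration addition and a tier-2 sum in which each quotient is scaled by a joint-effect coefficient; all concentrations, toxicity values and coefficients are invented for illustration, and the exact tier-2 formulation used by the authors may differ.

```python
# Sketch of a two-tier mixture risk calculation.
# Tier 1 (concentration addition): SRQ = sum_i MEC_i / PNEC_i
# Tier 2: scale each risk quotient by a joint-effect coefficient derived
# from toxic units; all numbers below are illustrative placeholders.
pesticides = {
    #        measured conc.  no-effect conc.  joint-effect coefficient
    "A": dict(mec=0.12, pnec=0.05, k=1.3),   # synergistic  -> k > 1
    "B": dict(mec=0.30, pnec=0.40, k=0.8),   # antagonistic -> k < 1
    "C": dict(mec=0.05, pnec=0.02, k=1.0),   # additive
}

srq_ca = sum(p["mec"] / p["pnec"] for p in pesticides.values())
srq_tu = sum(p["k"] * p["mec"] / p["pnec"] for p in pesticides.values())

print(f"Tier 1 SRQ (CA)           : {srq_ca:.2f}")
print(f"Tier 2 SRQ (joint effects): {srq_tu:.2f}")
if srq_tu > srq_ca:
    print("Ignoring joint effects would underestimate the mixture risk.")
```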

  1. Using a Two-Tier Test to Assess Students' Understanding and Alternative Conceptions of Cyber Copyright Laws

    Science.gov (United States)

    Chou, Chien; Chan, Pei-Shan; Wu, Huan-Chueh

    2007-01-01

    The purpose of this study is to explore students' understanding of cyber copyright laws. This study developed a two-tier test with 10 two-level multiple-choice questions. The first tier presented a real-case scenario and asked whether the conduct was acceptable, whereas the second tier provided reasons to justify the conduct. Students in Taiwan…

  2. A Voyage to Arcturus: A model for automated management of a WLCG Tier-2 facility

    International Nuclear Information System (INIS)

    Roy, Gareth; Crooks, David; Mertens, Lena; Mitchell, Mark; Skipsey, Samuel Cadellin; Britton, David; Purdie, Stuart

    2014-01-01

    With the current trend towards 'On Demand Computing' in big data environments, it is crucial that the deployment of services and resources becomes increasingly automated. Deployment based on cloud platforms is available for large scale data centre environments, but these solutions can be too complex and heavyweight for smaller, resource constrained WLCG Tier-2 sites. Along with a greater desire for bespoke monitoring and collection of Grid related metrics, a more lightweight and modular approach is desired. In this paper we present a model for a lightweight automated framework which can be used to build WLCG grid sites, based on 'off the shelf' software components. As part of the research into an automation framework, the use of both IPMI and SNMP for physical device management will be included, as well as the use of SNMP as a monitoring/data sampling layer, such that more comprehensive decision making can take place and potentially be automated. This could lead to reduced downtimes and better performance, as services are recognised to be in a non-functional state by autonomous systems.
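    As an illustration of the SNMP sampling layer described above, the sketch below polls a node's load average over SNMP (by shelling out to the net-snmp snmpget tool) and maps the result to a simple automated action. The host names, community string, load threshold and the IPMI fallback policy are placeholder assumptions and are not part of the paper's framework.

```python
"""Minimal sketch of SNMP-based sampling feeding an automated decision.
Assumes the net-snmp command-line tools are installed on the monitoring host."""
import subprocess

LOAD_OID = ".1.3.6.1.4.1.2021.10.1.3.1"   # UCD-SNMP-MIB 1-minute load average

def snmp_get(host, oid, community="public"):
    """Fetch a single OID value as a string via the snmpget CLI."""
    result = subprocess.run(
        ["snmpget", "-v2c", "-c", community, "-Oqv", host, oid],
        capture_output=True, text=True, timeout=5,
    )
    result.check_returncode()
    return result.stdout.strip().strip('"')

def check_worker(host, max_load=20.0):
    """Return an action for a worker node based on its sampled load."""
    try:
        load = float(snmp_get(host, LOAD_OID))
    except Exception:
        return "power-cycle via IPMI (node unreachable)"   # hypothetical policy
    return "drain from batch system" if load > max_load else "ok"

for node in ["node001.example.org", "node002.example.org"]:   # placeholder hosts
    print(node, "->", check_worker(node))
```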

  3. Resolving the organization of the third tier visual cortex in primates: a hypothesis-based approach.

    Science.gov (United States)

    Angelucci, Alessandra; Rosa, Marcello G P

    2015-01-01

    As highlighted by several contributions to this special issue, there is still ongoing debate about the number, exact location, and boundaries of the visual areas located in cortex immediately rostral to the second visual area (V2), i.e., the "third tier" visual cortex, in primates. In this review, we provide a historical overview of the main ideas that have led to four models of third tier cortex organization, which are at the center of today's debate. We formulate specific predictions of these models, and compare these predictions with experimental evidence obtained primarily in New World primates. From this analysis, we conclude that only one of these models (the "multiple-areas" model) can accommodate the breadth of available experimental evidence. According to this model, most of the third tier cortex in New World primates is occupied by two distinct areas, both representing the full contralateral visual quadrant: the dorsomedial area (DM), restricted to the dorsal half of the third visual complex, and the ventrolateral posterior area (VLP), occupying its ventral half and a substantial fraction of its dorsal half. DM belongs to the dorsal stream of visual processing, and overlaps with macaque parietooccipital (PO) area (or V6), whereas VLP belongs to the ventral stream and overlaps considerably with area V3 proposed by others. In contrast, there is substantial evidence that is inconsistent with the concept of a single elongated area V3 lining much of V2. We also review the experimental evidence from macaque monkey and humans, and propose that, once the data are interpreted within an evolutionary-developmental context, these species share a homologous (but not necessarily identical) organization of the third tier cortex as that observed in New World monkeys. Finally, we identify outstanding issues, and propose experiments to resolve them, highlighting in particular the need for more extensive, hypothesis-driven investigations in macaque and humans.

  4. ATLAS Tier-2 monitoring system for the German cloud

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Joerg; Quadt, Arnulf; Weber, Pavel [II. Physikalisches Institut, Georg-August-Universitaet, Goettingen (Germany)

    2011-07-01

    The ATLAS tier centers in Germany provide their computing resources for the ATLAS experiment. The stable and sustainable operation of this so-called DE-cloud relies heavily on effective monitoring of the Tier-1 center GridKa and its associated Tier-2 centers. Central and local grid information services constantly collect and publish status information from many computing resources and sites. The cloud monitoring system discussed in this presentation evaluates the information related to different cloud resources and provides a coherent and comprehensive view of the cloud. The main monitoring areas covered by the tool are data transfers, cloud software installation, site batch systems and Service Availability Monitoring (SAM). The cloud monitoring system consists of an Apache-based Python application which retrieves the information and publishes it on a generated HTML web page. This results in an easy-to-use web interface for the limited number of sites in the cloud, with fast and efficient access to the required information, from a high-level summary for the whole cloud down to detailed diagnostics for individual site services. This approach provides efficient identification of correlated site problems and simplifies administration at both cloud and site level.
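    A minimal sketch of the kind of page such an Apache-based Python application might generate is shown below: per-site status dictionaries are rendered into a single HTML summary table. The site names, status values and column set are placeholders and do not reflect the actual DE-cloud monitoring code.

```python
"""Illustrative rendering of per-site status into one HTML summary page."""
from datetime import datetime, timezone

# Placeholder status data; a real system would fetch this from grid services.
sites = {
    "SiteA": {"transfers": "OK",   "software": "OK",   "batch": "OK",   "SAM": "OK"},
    "SiteB": {"transfers": "OK",   "software": "WARN", "batch": "OK",   "SAM": "OK"},
    "SiteC": {"transfers": "FAIL", "software": "OK",   "batch": "WARN", "SAM": "OK"},
}

def render(sites):
    columns = ["transfers", "software", "batch", "SAM"]
    header = "".join(f"<th>{c}</th>" for c in columns)
    rows = []
    for name, status in sorted(sites.items()):
        cells = "".join(f"<td class='{status[c].lower()}'>{status[c]}</td>" for c in columns)
        rows.append(f"<tr><td>{name}</td>{cells}</tr>")
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return (f"<html><body><h1>Cloud summary ({stamp})</h1>"
            f"<table><tr><th>site</th>{header}</tr>{''.join(rows)}</table>"
            "</body></html>")

with open("cloud_summary.html", "w") as f:
    f.write(render(sites))
```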

  5. Technical Meeting on Existing and Proposed Experimental Facilities for Fast Neutron Systems. Working Material

    International Nuclear Information System (INIS)

    2013-01-01

    The objective of the TM on “Existing and proposed experimental facilities for fast neutron systems” was threefold: 1) to present and exchange information about existing and planned experimental facilities in support of the development of innovative fast neutron systems; 2) to create a catalogue of existing and planned experimental facilities currently operated or developed within national or international fast reactor programmes; and 3) once a clear picture of the existing experimental infrastructure is established, to discuss and propose new experimental facilities on the basis of the identified R&D needs

  6. Energy Transmission and Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Mathison, Jane

    2012-12-31

    The objective of Energy Transmission and Infrastructure Northern Ohio (OH) was to lay the conceptual and analytical foundation for an energy economy in northern Ohio that will: • improve the efficiency with which energy is used in the residential, commercial, industrial, agricultural, and transportation sectors for Oberlin, Ohio as a district-wide model for Congressional District OH-09; • identify the potential to deploy wind and solar technologies and the most effective configuration for the regional energy system (i.e., the ratio of distributed or centralized power generation); • analyze the potential within the district to utilize farm wastes to produce biofuels; • enhance long-term energy security by identifying ways to deploy local resources and building Ohio-based enterprises; • identify the policy, regulatory, and financial barriers impeding development of a new energy system; and • improve energy infrastructure within Congressional District OH-09. This objective of laying the foundation for a renewable energy system in Ohio was achieved through four primary areas of activity: 1. district-wide energy infrastructure assessments and alternative-energy transmission studies; 2. energy infrastructure improvement projects undertaken by American Municipal Power (AMP) affiliates in the northern Ohio communities of Elmore, Oak Harbor, and Wellington; 3. Oberlin, OH-area energy assessment initiatives; and 4. a district-wide conference held in September 2011 to disseminate year-one findings. The grant supported 17 research studies by leading energy, policy, and financial specialists, including studies on: current energy use in the district and the Oberlin area; regional potential for energy generation from renewable sources such as solar power, wind, and farm-waste; energy and transportation strategies for transitioning the City of Oberlin entirely to renewable resources and considering pedestrians, bicyclists, and public transportation as well as drivers

  7. 9 CFR 305.3 - Sanitation and adequate facilities.

    Science.gov (United States)

    2010-01-01

    9 CFR 305.3 (Animals and Animal Products): FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF... OF VIOLATION. § 305.3 Sanitation and adequate facilities. Inspection shall not be inaugurated if an...

  8. 75 FR 33389 - TierOne Bank Lincoln, Nebraska; Notice of Appointment of Receiver

    Science.gov (United States)

    2010-06-11

    ... DEPARTMENT OF THE TREASURY Office of Thrift Supervision TierOne Bank Lincoln, Nebraska; Notice of... the Home Owners' Loan Act, the Office of Thrift Supervision has duly appointed the Federal Deposit Insurance Corporation as sole Receiver for TierOne Bank, Lincoln, Nebraska, (OTS No. 03309), on June 4, 2010...

  9. 20 CFR 209.14 - Report of separation allowances subject to tier II taxation.

    Science.gov (United States)

    2010-04-01

    20 CFR 209.14 (Employees' Benefits): RAILROAD RETIREMENT BOARD REGULATIONS UNDER... § 209.14 Report of separation allowances subject to tier II taxation. For any employee who is paid a separation payment, the...

  10. Scaling analysis for the OSU AP600 test facility (APEX)

    International Nuclear Information System (INIS)

    Reyes, J.N.

    1998-01-01

    In this paper, the authors summarize the key aspects of a state-of-the-art scaling analysis (Reyes et al. (1995)) performed to establish the facility design and test conditions for the advanced plant experiment (APEX) at Oregon State University (OSU). This scaling analysis represents the first, and most comprehensive, application of the hierarchical two-tiered scaling (H2TS) methodology (Zuber (1991)) in the design of an integral system test facility. The APEX test facility, designed and constructed on the basis of this scaling analysis, is the most accurate geometric representation of a Westinghouse AP600 nuclear steam supply system. The OSU APEX test facility has served to develop an essential component of the integral system database used to assess the AP600 thermal hydraulic safety analysis computer codes. (orig.)

  11. Research Note on the Energy Infrastructure Attack Database (EIAD)

    Directory of Open Access Journals (Sweden)

    Jennifer Giroux

    2013-12-01

    Full Text Available The January 2013 attack on the In Amenas natural gas facility drew international attention. However, this attack is part of a wider picture of energy infrastructure targeting by non-state actors that spans the globe. Data drawn from the Energy Infrastructure Attack Database (EIAD) show that in the last decade there were, on average, nearly 400 annual attacks carried out by armed non-state actors on energy infrastructure worldwide, a figure that was well under 200 prior to 1999. These data reveal a global picture whereby violent non-state actors target energy infrastructure to air grievances, communicate with governments, impact state economic interests, or capture revenue in the form of hijackings, kidnapping ransoms or theft. For politically motivated groups, such as those engaged in insurgencies, attacking industry assets also garners media coverage, serving as a facilitator for international attention. This research note introduces EIAD and positions its utility within the various research areas where the targeting of energy infrastructure, or more broadly energy infrastructure vulnerability, has been addressed, either directly or indirectly. We also provide a snapshot of an initial analysis of the data for 1980-2011, noting specific temporal and spatial trends, and then conclude with a brief discussion of the contribution of EIAD, highlighting future research trajectories.

  12. Epidemiology program at the Savannah River Plant: a tiered approach to research

    International Nuclear Information System (INIS)

    Fayerweather, W.E.

    1984-01-01

    The epidemiology program at the Savannah River Plant (SRP) uses a tiered approach to research. As research progresses from lower through higher tiers, there is a corresponding increase in study complexity, cost, and time commitment. The approach provides a useful strategy for directing research efforts towards those employee subgroups and health endpoints that can benefit most from more in-depth studies. A variety of potential exposures, health endpoints, and employee subgroups have been and continue to be studied by research groups such as Oak Ridge Associated Universities, Los Alamos National Laboratories, Centers for Disease Control, SRP's Occupational Health Technology, and the Du Pont Company's corporate Epidemiology Section. These studies are discussed in the context of a tiered approach to research.

  13. Brandenburg 3D - a comprehensive 3D Subsurface Model, Conception of an Infrastructure Node and a Web Application

    Science.gov (United States)

    Kerschke, Dorit; Schilling, Maik; Simon, Andreas; Wächter, Joachim

    2014-05-01

    The Energiewende and the increasing scarcity of raw materials will lead to an intensified utilization of the subsurface in Germany. Within this context, geological 3D modeling is a fundamental approach for integrated decision and planning processes. Initiated by the development of the European Geospatial Infrastructure INSPIRE, the German State Geological Offices started digitizing their predominantly analog archive inventory. Until now, a comprehensive 3D subsurface model of Brandenburg did not exist. Therefore the project B3D strived to develop a new 3D model as well as a subsequent infrastructure node to integrate all geological and spatial data within the Geodaten-Infrastruktur Brandenburg (Geospatial Infrastructure, GDI-BB) and provide it to the public through an interactive 2D/3D web application. The functionality of the web application is based on a client-server architecture. Server-sided, all available spatial data is published through GeoServer. GeoServer is designed for interoperability and acts as the reference implementation of the Open Geospatial Consortium (OGC) Web Feature Service (WFS) standard that provides the interface that allows requests for geographical features. In addition, GeoServer implements, among others, the high performance certified compliant Web Map Service (WMS) that serves geo-referenced map images. For publishing 3D data, the OGC Web 3D Service (W3DS), a portrayal service for three-dimensional geo-data, is used. The W3DS displays elements representing the geometry, appearance, and behavior of geographic objects. On the client side, the web application is solely based on Free and Open Source Software and leans on the JavaScript API WebGL that allows the interactive rendering of 2D and 3D graphics by means of GPU accelerated usage of physics and image processing as part of the web page canvas without the use of plug-ins. WebGL is supported by most web browsers (e.g., Google Chrome, Mozilla Firefox, Safari, and Opera). The web
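    As an illustration of how a client consumes the map services mentioned above, the sketch below assembles an OGC WMS 1.3.0 GetMap request against a GeoServer endpoint. The endpoint URL and layer name are hypothetical; the request parameters follow the WMS 1.3.0 standard, with a coordinate reference system (EPSG:25833, ETRS89 / UTM zone 33N) commonly used for Brandenburg.

```python
"""Hedged sketch of a WMS GetMap request against a GeoServer endpoint."""
from urllib.parse import urlencode

WMS_ENDPOINT = "https://example.org/geoserver/b3d/wms"   # placeholder endpoint

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "b3d:geological_units",    # hypothetical layer name
    "STYLES": "",
    "CRS": "EPSG:25833",                 # ETRS89 / UTM zone 33N
    "BBOX": "250000,5700000,500000,5950000",
    "WIDTH": "800",
    "HEIGHT": "800",
    "FORMAT": "image/png",
}

url = f"{WMS_ENDPOINT}?{urlencode(params)}"
print(url)
# urllib.request.urlopen(url) would fetch the rendered PNG from a live server.
```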

  14. Studies Related to the Oregon State University High Temperature Test Facility: Scaling, the Validation Matrix, and Similarities to the Modular High Temperature Gas-Cooled Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Richard R. Schultz; Paul D. Bayless; Richard W. Johnson; William T. Taitano; James R. Wolf; Glenn E. McCreery

    2010-09-01

    The Oregon State University (OSU) High Temperature Test Facility (HTTF) is an integral experimental facility that will be constructed on the OSU campus in Corvallis, Oregon. The HTTF project was initiated, by the U.S. Nuclear Regulatory Commission (NRC), on September 5, 2008 as Task 4 of the 5 year High Temperature Gas Reactor Cooperative Agreement via NRC Contract 04-08-138. Until August, 2010, when a DOE contract was initiated to fund additional capabilities for the HTTF project, all of the funding support for the HTTF was provided by the NRC via their cooperative agreement. The U.S. Department of Energy (DOE) began their involvement with the HTTF project in late 2009 via the Next Generation Nuclear Plant project. Because the NRC interests in HTTF experiments were only centered on the depressurized conduction cooldown (DCC) scenario, NGNP involvement focused on expanding the experimental envelope of the HTTF to include steady-state operations and also the pressurized conduction cooldown (PCC). Since DOE has incorporated the HTTF as an ingredient in the NGNP thermal-fluids validation program, several important outcomes should be noted: 1. The reference prismatic reactor design, that serves as the basis for scaling the HTTF, became the modular high temperature gas-cooled reactor (MHTGR). The MHTGR has also been chosen as the reference design for all of the other NGNP thermal-fluid experiments. 2. The NGNP validation matrix is being planned using the same scaling strategy that has been implemented to design the HTTF, i.e., the hierarchical two-tiered scaling methodology developed by Zuber in 1991. Using this approach a preliminary validation matrix has been designed that integrates the HTTF experiments with the other experiments planned for the NGNP thermal-fluids verification and validation project. 3. Initial analyses showed that the inherent power capability of the OSU infrastructure, which only allowed a total operational facility power capability of 0.6 MW, is

  15. 41 CFR 105-68.330 - What requirements must I pass down to persons at lower tiers with whom I intend to do business?

    Science.gov (United States)

    2010-07-01

    41 CFR 105-68.330 (Public Contracts and Property Management)... Business with Other Persons. § 105-68.330 What requirements must I pass down to persons at lower tiers with whom I intend to do business? ...

  16. Understanding the infrastructure of European Research Infrastructures

    DEFF Research Database (Denmark)

    Lindstrøm, Maria Duclos; Kropp, Kristoffer

    2017-01-01

    European Research Infrastructure Consortia (ERIC) are a new form of legal and financial framework for the establishment and operation of research infrastructures in Europe. Despite their scope, ambition, and novelty, the topic has received limited scholarly attention. This article analyses how one ERIC ... became an ERIC, using Bowker and Star's sociology of infrastructures. We conclude that focusing on ERICs as a European standard for organising and funding research collaboration gives new insights into the problems of membership, durability, and standardisation faced by research infrastructures. It is also a promising theoretical framework for addressing the relationship between the ERIC construct and the large diversity of European Research Infrastructures.

  17. 3D STREAMING PROTOCOLS FOR SPATIAL DATA INFRASTRUCTURE: A BRIEF REVIEW

    Directory of Open Access Journals (Sweden)

    C. B. Siew

    2016-09-01

    Full Text Available Web service use in Spatial Data Infrastructures (SDI) has been well established and standardized by the Open Geospatial Consortium (OGC). 3D graphics rendering has been a topic of interest in the scientific domain, in both computer science and geospatial science. Different methods have been proposed and discussed in this research for different domains and applications. Each method provides advantages and trade-offs; some methods, for example, proposed image-based rendering of 3D graphics. This paper discusses several techniques from past research and proposes another method, inspired by these techniques and customized for 3D SDI and its data workflow use cases.

  18. A Worldwide Production Grid Service Built on EGEE and OSG Infrastructures – Lessons Learnt and Long-term Requirements

    CERN Document Server

    Shiers, J; Dimou, M; CERN. Geneva. IT Department

    2007-01-01

    Using the Grid Infrastructures provided by EGEE, OSG and others, a worldwide production service has been built that provides the computing and storage needs for the 4 main physics collaborations at CERN's Large Hadron Collider (LHC). The large number of users, their geographical distribution and the very high service availability requirements make this experience of Grid usage worth studying for the sake of a solid and scalable future operation. This service must cater for the needs of thousands of physicists in hundreds of institutes in tens of countries. A 24x7 service with availability of up to 99% is required with major service responsibilities at each of some ten "Tier1" and of the order of one hundred "Tier2" sites. Such a service - which has been operating for some 2 years and will be required for at least an additional decade - has required significant manpower and resource investments from all concerned and is considered a major achievement in the field of Grid computing. We describe the main lessons...

  19. Analysis of Operational Data: A Proof of Concept for Assessing Electrical Infrastructure Impact

    Science.gov (United States)

    2015-11-01

    Project Number 30, “Analysis of Operational Data for Tactical Situational Understanding” (ERDC TR-15-10). Infrastructure variables required for a community or society to function include basic facilities, services, and installations, and these variables can impact many aspects of daily life. The structure and functionality of the electrical grid in an operating area can affect multiple operational variables. Other infrastructure

  20. Detection and Identification of People at a Critical Infrastructure Facilities of Trafic Buildings

    Directory of Open Access Journals (Sweden)

    Rastislav PIRNÍK

    2014-12-01

    Full Text Available This paper focuses on the identification of persons entering critical infrastructure objects and the subsequent detection of their movement within parts of those objects. It explains some of the technologies and approaches to processing specific image information within existing building systems. The article describes the proposed algorithm for the detection of persons. It brings a fresh approach to the detection of moving objects (groups of persons) in enclosed areas, focusing on securing freely accessible places in buildings. Based on the designed identification algorithm, with the presupposed use of a 3D application, the motion trajectories of persons in a delimited space can be automatically identified. The application was created with an open-source software tool using the OpenCV library.
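    A minimal motion-detection loop in the spirit of the described approach can be sketched with OpenCV's background subtraction, as below. The video source and the blob-area threshold are placeholders, and this is not the authors' implementation.

```python
"""Illustrative motion detection with OpenCV background subtraction."""
import cv2

cap = cv2.VideoCapture("entrance_camera.mp4")   # placeholder video source
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
MIN_AREA = 1500                                  # ignore small blobs (tuning assumption)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels
    mask = cv2.dilate(mask, None, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < MIN_AREA:
            continue
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(30) & 0xFF == 27:   # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```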

  1. When Money Matters: School Infrastructure Funding and Student Achievement

    Science.gov (United States)

    Crampton, Faith E.; Thompson, David C.

    2011-01-01

    Today's school business officials are more aware than ever of the importance of making every dollar count. As they scour their budgets for possible savings, they may be tempted to reduce investment in school infrastructure, perhaps by deferring maintenance, renovations, and replacement of outdated facilities. However, school business officials…

  2. Infrastructure package. Draft position statement

    International Nuclear Information System (INIS)

    Mascarin, Guillaume

    2011-01-01

    The European Commission published on 17 November 2010 the communication entitled 'COM(2010)0677 - Energy infrastructure priorities for 2020 and beyond - A Blueprint for an integrated European energy network'. It aims at ensuring that strategic energy networks and storage facilities are completed by 2020. To this end, the EC has identified 12 priority corridors and areas covering electricity, gas, oil and carbon dioxide transport networks. It proposes a regime of 'common interest' for projects contributing to implementing these priorities and having obtained this label. The UFE, the professional association for the electricity sector, has analyzed the EC communication and presents its remarks in this document. UFE focuses its analysis on 5 key points: 1. Towards a European 'strategic planning' tool for future investment; 2. The correlation between networks and security of supply (production capacities, energy mix); 3. Financing; 4. Acceptability of projects; 5. Acceleration of authorisation procedures

  3. Green infrastructure in high-rise residential development on steep slopes in city of Vladivostok

    Science.gov (United States)

    Kopeva, Alla; Ivanova, Olga; Khrapko, Olga

    2018-03-01

    The purpose of this study is to identify the facilities of green infrastructure that are able to improve living conditions in an urban environment in high-rise residential apartments buildings on steep slopes in the city of Vladivostok. Based on the analysis of theoretical sources and practices that can be observed in the world, green infrastructure facilities have been identified. These facilities meet the criteria of the sustainable development concept, and can be used in the city of Vladivostok. They include green roofs, green walls, and greening of disturbed slopes. All the existing high-rise apartments buildings situated on steep slopes in the city of Vladivostok, have been studied. It is concluded that green infrastructure is necessary to be used in new projects connected with designing and constructing of residential apartments buildings on steep slopes, as well as when upgrading the projects that have already been implemented. That will help to regulate the ecological characteristics of the sites. The results of the research can become a basis for increasing the sustainability of the habitat, and will facilitate the adoption of decisions in the field of urban design and planning.

  4. Green infrastructure in high-rise residential development on steep slopes in city of Vladivostok

    Directory of Open Access Journals (Sweden)

    Kopeva Alla

    2018-01-01

    Full Text Available The purpose of this study is to identify the facilities of green infrastructure that are able to improve living conditions in an urban environment in high-rise residential apartments buildings on steep slopes in the city of Vladivostok. Based on the analysis of theoretical sources and practices that can be observed in the world, green infrastructure facilities have been identified. These facilities meet the criteria of the sustainable development concept, and can be used in the city of Vladivostok. They include green roofs, green walls, and greening of disturbed slopes. All the existing high-rise apartments buildings situated on steep slopes in the city of Vladivostok, have been studied. It is concluded that green infrastructure is necessary to be used in new projects connected with designing and constructing of residential apartments buildings on steep slopes, as well as when upgrading the projects that have already been implemented. That will help to regulate the ecological characteristics of the sites. The results of the research can become a basis for increasing the sustainability of the habitat, and will facilitate the adoption of decisions in the field of urban design and planning.

  5. An Energy-Efficient Multi-Tier Architecture for Fall Detection Using Smartphones.

    Science.gov (United States)

    Guvensan, M Amac; Kansiz, A Oguz; Camgoz, N Cihan; Turkmen, H Irem; Yavuz, A Gokhan; Karsligil, M Elif

    2017-06-23

    Automatic detection of fall events is vital to providing fast medical assistance to the casualty, particularly when the injury causes loss of consciousness. Optimization of the energy consumption of mobile applications, especially those which run 24/7 in the background, is essential for longer use of smartphones. In order to improve energy-efficiency without compromising on the fall detection performance, we propose a novel 3-tier architecture that combines simple thresholding methods with machine learning algorithms. The proposed method is implemented on a mobile application, called uSurvive, for Android smartphones. It runs as a background service and monitors the activities of a person in daily life and automatically sends a notification to the appropriate authorities and/or user defined contacts when it detects a fall. The performance of the proposed method was evaluated in terms of fall detection performance and energy consumption. Real life performance tests conducted on two different models of smartphone demonstrate that our 3-tier architecture with feature reduction could save up to 62% of energy compared to machine learning only solutions. In addition to this energy saving, the hybrid method has an accuracy of 93%, which is superior to thresholding methods and better than machine learning only solutions.
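
    To make the tiered idea above concrete, the following minimal Python sketch places a cheap threshold gate in front of a machine-learning classifier so that the more expensive model only runs on candidate events. The acceleration threshold, feature set and random-forest classifier are illustrative assumptions of this sketch, not the uSurvive implementation.

        # Illustrative 3-tier fall-detection pipeline: a cheap threshold gate runs first,
        # and the (more expensive) machine-learning classifier is evaluated only when the
        # gate fires. Thresholds, features and the classifier are assumptions of the sketch.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        ACCEL_THRESHOLD_G = 2.5   # tier 1: total-acceleration spike (assumed value)

        def tier1_gate(accel_window_g):
            """Cheap check: did the acceleration magnitude exceed the spike threshold?"""
            magnitude = np.linalg.norm(accel_window_g, axis=1)
            return bool(magnitude.max() > ACCEL_THRESHOLD_G)

        def extract_features(accel_window_g):
            """Tier 2: a reduced feature vector (mean, std, min, max of the magnitude)."""
            m = np.linalg.norm(accel_window_g, axis=1)
            return np.array([m.mean(), m.std(), m.min(), m.max()])

        def detect_fall(accel_window_g, clf):
            """Tier 3: run the classifier only if the threshold gate fired."""
            if not tier1_gate(accel_window_g):
                return False                      # most windows stop here -> energy saved
            return bool(clf.predict([extract_features(accel_window_g)])[0])

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            # Synthetic training data: label 1 = fall-like spike, 0 = normal activity.
            windows = [rng.normal(0, 0.3, (50, 3)) + (3.0 if y else 1.0) * np.eye(3)[0]
                       for y in (0, 1) * 50]
            labels = [0, 1] * 50
            clf = RandomForestClassifier(n_estimators=20, random_state=0)
            clf.fit([extract_features(w) for w in windows], labels)
            print(detect_fall(windows[1], clf))   # fall-like window -> True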

  6. 76 FR 18260 - Announcement Regarding Pennsylvania Triggering “Off” Tier Four of Emergency Unemployment...

    Science.gov (United States)

    2011-04-01

    ... Triggering ``Off'' Tier Four of Emergency Unemployment Compensation 2008 (EUC08). AGENCY: Employment and... ``off'' Tier Four of Emergency Unemployment Compensation 2008 (EUC08). Public Law 111-312 extended... the EUC08 program for qualified unemployed workers claiming benefits in high unemployment states. The...

  7. First use of LHC Run 3 Conditions Database infrastructure for auxiliary data files in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081940; The ATLAS collaboration; Barberis, Dario; Gallas, Elizabeth; Rybkin, Grigori; Rinaldi, Lorenzo; Aperio Bella, Ludovica; Buttinger, William

    2017-01-01

    Processing of the large amount of data produced by the ATLAS experiment requires fast and reliable access to what we call Auxiliary Data Files (ADF). These files, produced by Combined Performance, Trigger and Physics groups, contain conditions, calibrations, and other derived data used by the ATLAS software. In ATLAS this data has, thus far for historical reasons, been collected and accessed outside the ATLAS Conditions Database infrastructure and related software. For this reason, along with the fact that ADF are effectively read by the software as binary objects, this class of data appears ideal for testing the proposed Run 3 conditions data infrastructure now in development. This paper describes this implementation as well as the lessons learned in exploring and refining the new infrastructure with the potential for deployment during Run 2.

  8. First Use of LHC Run 3 Conditions Database Infrastructure for Auxiliary Data Files in ATLAS

    CERN Document Server

    Aperio Bella, Ludovica; The ATLAS collaboration

    2016-01-01

    Processing of the large amount of data produced by the ATLAS experiment requires fast and reliable access to what we call Auxiliary Data Files (ADF). These files, produced by Combined Performance, Trigger and Physics groups, contain conditions, calibrations, and other derived data used by the ATLAS software. In ATLAS this data has, thus far for historical reasons, been collected and accessed outside the ATLAS Conditions Database infrastructure and related software. This, along with the fact that ADF data are effectively read by the software as binary objects, makes this class of data ideal for testing the proposed Run 3 conditions data infrastructure now in development. This paper describes this implementation as well as the lessons learned in exploring and refining the new infrastructure with the potential for deployment during Run 2.

  9. Coalbed methane : evaluating pipeline and infrastructure requirements to get gas to market

    International Nuclear Information System (INIS)

    Murray, B.

    2005-01-01

    This Power Point presentation evaluated pipeline and infrastructure requirements for the economic production of coalbed methane (CBM) gas. Reports have suggested that capital costs for CBM production can be minimized by leveraging existing oil and gas infrastructure. By using existing plant facilities, CBM producers can then tie in to existing gathering systems and negotiate third party fees, which are less costly than building new pipelines. Many CBM wells can be spaced at an equal distance to third party gathering systems and regulated transmission meter stations and pipelines. Facility cost sharing and contracts with pipeline companies for compression can also lower initial infrastructure costs. However, transmission pressures and direct connect options for local distribution should always be considered during negotiations. The use of carbon dioxide (CO2) commingling services was also recommended. A map of the North American gas network was provided, as well as details of Alberta gas transmission and coal pipeline overlays. Maps of various coal zones in Alberta were provided, as well as a map of North American pipelines. refs., tabs., figs

  10. A systems relations model for Tier 2 early intervention child mental health services with schools: an exploratory study.

    Science.gov (United States)

    van Roosmalen, Marc; Gardner-Elahi, Catherine; Day, Crispin

    2013-01-01

    Over the last 15 years, policy initiatives have aimed at the provision of more comprehensive Child and Adolescent Mental Health care. These presented a series of new challenges in organising and delivering Tier 2 child mental health services, particularly in schools. This exploratory study aimed to examine and clarify the service model underpinning a Tier 2 child mental health service offering school-based mental health work. Using semi-structured interviews, clinician descriptions of operational experiences were gathered. These were analysed using grounded theory methods. Analysis was validated by respondents at two stages. A pathway for casework emerged that included a systemic consultative function, as part of an overall three-function service model, which required: (1) activity as a member of the multi-agency system; (2) activity to improve the system working around a particular child; and (3) activity to universally develop a Tier 1 workforce confident in supporting children at risk of or experiencing mental health problems. The study challenged the perception of such a service serving solely a Tier 2 function, the requisite workforce to deliver the service model, and could give service providers a rationale for negotiating service models that include an explicit focus on improving the children's environments.

  11. Spanish ATLAS Tier-2: facing up to LHC Run 2

    CERN Document Server

    Gonzalez de la Hoz, Santiago; Fassi, Farida; Fernandez Casani, Alvaro; Kaci, Mohammed; Lacort Pellicer, Victor Ruben; Montiel Gonzalez, Almudena Del Rocio; Oliver Garcia, Elena; Pacheco Pages, Andres; Sánchez, Javier; Sanchez Martinez, Victoria; Salt, José; Villaplana Perez, Miguel

    2015-01-01

    The goal of this work is to describe how the Spanish ATLAS Tier-2 is addressing the main challenges of Run-2. The considerable increase of energy and luminosity for the upcoming Run-2 with respect to Run-1 has led to a revision of the ATLAS computing model as well as of some of the main ATLAS computing tools. The adaptation to these changes will be shown, with the peculiarity that this is a distributed Tier-2 composed of three sites whose members are involved in ATLAS computing tasks within a hub of research, innovation and education.

  12. Hydrogen infrastructure development in The Netherlands

    International Nuclear Information System (INIS)

    Smit, R.; Weeda, M.; De Groot, A.

    2007-08-01

    Increasingly, people consider what a hydrogen energy supply system would look like, and how to build up to such a system. This paper presents work on modelling and simulation of current ideas among Dutch hydrogen stakeholders for a transition towards the widespread use of hydrogen energy. Based mainly on economic considerations, the ideas about a transition seem viable. It appears that, following the introduction of hydrogen in niche applications, the use of locally produced hydrogen from natural gas in stationary and mobile applications can yield an economic advantage when compared to the conventional system, and can thus generate a demand for hydrogen. The demand for hydrogen can develop to such an extent that the construction of a large-scale hydrogen pipeline infrastructure for the transport and distribution of hydrogen produced in large-scale production facilities becomes economically viable. In 2050, the economic viability of a large-scale hydrogen pipeline infrastructure extends to 20-25 of the 40 regions into which The Netherlands is divided for modelling purposes. Investments in hydrogen pipelines for a fully developed hydrogen infrastructure are estimated to be in the range of 12,000-20,000 million euros

  13. Evaluating Commercial and Private Cloud Services for Facility-Scale Geodetic Data Access, Analysis, and Services

    Science.gov (United States)

    Meertens, C. M.; Boler, F. M.; Ertz, D. J.; Mencin, D.; Phillips, D.; Baker, S.

    2017-12-01

    UNAVCO, in its role as an NSF facility for geodetic infrastructure and data, has succeeded for over two decades using on-premises infrastructure, and while the promise of cloud-based infrastructure is well-established, significant questions about the suitability of such infrastructure for facility-scale services remain. Primarily through the GeoSciCloud award from NSF EarthCube, UNAVCO is investigating the costs, advantages, and disadvantages of providing its geodetic data and services in the cloud versus using UNAVCO's on-premises infrastructure. (IRIS is a collaborator on the project and is performing its own suite of investigations). In contrast to the 2-3 year time scale for the research cycle, the time scale of operation and planning for NSF facilities is a minimum of five years and for some services extends to a decade or more. Planning for on-premises infrastructure is deliberate, and migrations typically take months to years to fully implement. Migrations to a cloud environment can only go forward with similar deliberate planning and understanding of all costs and benefits. The EarthCube GeoSciCloud project is intended to address the uncertainties of facility-level operations in the cloud. Investigations are being performed in a commercial cloud environment (Amazon AWS) during the first year of the project and in a private cloud environment (NSF XSEDE resource at the Texas Advanced Computing Center) during the second year. These investigations are expected to illuminate the potential as well as the limitations of running facility-scale production services in the cloud. The work includes running cloud-based services in parallel with equivalent on-premises services and includes: data serving via ftp from a large data store, operation of a metadata database, production-scale processing of multiple months of geodetic data, web services delivery of quality-checked data and products, large-scale compute services for event post-processing, and serving real time data

  14. Three Tier Unified Process Model for Requirement Negotiations and Stakeholder Collaborations

    Science.gov (United States)

    Niazi, Muhammad Ashraf Khan; Abbas, Muhammad; Shahzad, Muhammad

    2012-11-01

    This research paper is focused towards carrying out a pragmatic qualitative analysis of various models and approaches of requirements negotiations (a sub-process of the requirements management plan, which is an output of scope management's collect requirements process) and studies stakeholder collaboration methodologies (i.e. from within the communication management knowledge area). The experiential analysis encompasses two tiers; the first tier refers to the weighted scoring model, while the second tier focuses on development of SWOT matrices on the basis of the findings of the weighted scoring model for selecting an appropriate requirements negotiation model. Finally the results are simulated with the help of statistical pie charts. On the basis of the simulated results of prevalent models and approaches of negotiations, a unified approach for requirements negotiations and stakeholder collaborations is proposed, where the collaboration methodologies are embedded into the selected requirements negotiation model as internal parameters of the proposed process alongside some external required parameters like MBTI, opportunity analysis etc.
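
    As an illustration of the first tier described above, the short Python sketch below computes a generic weighted scoring model and ranks candidate negotiation models by their weighted totals. The criteria, weights and scores are invented placeholders, not the authors' data.

        # Generic weighted scoring model (tier 1 of the analysis), sketched with
        # made-up criteria, weights and candidate scores purely for illustration.

        criteria_weights = {          # weights should sum to 1.0
            "stakeholder_coverage": 0.40,
            "negotiation_support":  0.35,
            "tool_support":         0.25,
        }

        # Candidate requirements-negotiation models scored 1-5 per criterion (invented).
        candidates = {
            "Model A": {"stakeholder_coverage": 4, "negotiation_support": 3, "tool_support": 5},
            "Model B": {"stakeholder_coverage": 5, "negotiation_support": 4, "tool_support": 2},
        }

        def weighted_score(scores, weights):
            return sum(weights[c] * scores[c] for c in weights)

        if __name__ == "__main__":
            ranked = sorted(candidates,
                            key=lambda m: weighted_score(candidates[m], criteria_weights),
                            reverse=True)
            for name in ranked:
                print(f"{name}: {weighted_score(candidates[name], criteria_weights):.2f}")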

  15. Assessing the nutritional quality of diets of Canadian children and adolescents using the 2014 Health Canada Surveillance Tool Tier System

    OpenAIRE

    Jessri, Mahsa; Nishi, Stephanie K.; L'Abbé, Mary R.

    2016-01-01

    Background Health Canada's Surveillance Tool (HCST) Tier System was developed in 2014 with the aim of assessing the adherence of dietary intakes with Eating Well with Canada's Food Guide (EWCFG). HCST uses a Tier system to categorize all foods into one of four Tiers based on thresholds for total fat, saturated fat, sodium, and sugar, with Tier 4 reflecting the unhealthiest and Tier 1 the healthiest foods. This study presents the first application of the HCST to examine (i) the dietary pattern...
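
    The following minimal Python sketch illustrates how a threshold-based tier assignment of this kind can be expressed in code: a food is pushed to a worse tier for each nutrient that exceeds successive cut-offs. The numeric thresholds are placeholders assumed for the sketch, not the published HCST cut-offs.

        # Toy tier classifier in the spirit of the HCST: a food is pushed to a worse tier
        # for every nutrient threshold it exceeds. The limits below are invented
        # placeholders per 100 g, NOT Health Canada's published cut-offs.

        THRESHOLDS = {          # nutrient: (tier-2 limit, tier-3 limit, tier-4 limit)
            "total_fat_g": (3.0, 10.0, 20.0),
            "sat_fat_g":   (1.5,  5.0, 10.0),
            "sodium_mg":   (120,  400,  800),
            "sugars_g":    (5.0, 15.0, 30.0),
        }

        def assign_tier(food):
            """Return 1 (best) .. 4 (worst) based on the worst offending nutrient."""
            tier = 1
            for nutrient, limits in THRESHOLDS.items():
                value = food.get(nutrient, 0.0)
                exceeded = sum(value > limit for limit in limits)   # 0..3
                tier = max(tier, 1 + exceeded)
            return tier

        if __name__ == "__main__":
            yogurt = {"total_fat_g": 2.0, "sat_fat_g": 1.2, "sodium_mg": 60, "sugars_g": 6.0}
            chips  = {"total_fat_g": 30.0, "sat_fat_g": 12.0, "sodium_mg": 900, "sugars_g": 1.0}
            print(assign_tier(yogurt), assign_tier(chips))   # e.g. 2 and 4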

  16. Iberian ATLAS Cloud response during the first LHC collisions

    CERN Document Server

    Villaplana, M; The ATLAS collaboration; Borges, G; Borrego, C; Carvalho, J; David, M; Espinal, X; Fernández, A; Gomes, J; González de la Hoz, S; Kaci, M; Lamas, A; Nadal, J; Oliveira, M; Oliver, E; Osuna, C; Pacheco, A; Pardo, JJ; del Peso, J; Salt, J; Sánchez, J; Wolters, H

    2011-01-01

    The computing model of the ATLAS experiment at the LHC (Large Hadron Collider) is based on a tiered hierarchy that ranges from Tier0 (CERN) down to end-user's own resources (Tier3). According to the same computing model, the role of the Tier2s is to provide computing resources for event simulation processing and distributed data analysis. Tier3 centers, on the other hand, are the responsibility of individual institutions to define, fund, deploy and support. In this contribution we report on the operations of the ATLAS Iberian Cloud centers facing data taking and we describe some of the Tier3 facilities currently deployed at the Cloud.

  17. LeaRN: A Collaborative Learning-Research Network for a WLCG Tier-3 Centre

    Science.gov (United States)

    Pérez Calle, Elio

    2011-12-01

    The Department of Modern Physics of the University of Science and Technology of China is hosting a Tier-3 centre for the ATLAS experiment. An interdisciplinary team of researchers, engineers and students is devoted to the task of receiving, storing and analysing the scientific data produced by the LHC. In order to achieve the highest performance and to develop a knowledge base shared by all members of the team, the research activities and their coordination are being supported by an array of computing systems. These systems have been designed to foster communication, collaboration and coordination among the members of the team, both face-to-face and remotely, and both in synchronous and asynchronous ways. The result is a collaborative learning-research network whose main objectives are awareness (to get shared knowledge about others' activities and therefore obtain synergies), articulation (to allow a project to be divided, work units to be assigned and then reintegrated) and adaptation (to adapt information technologies to the needs of the group). The main technologies involved are Communication Tools such as web publishing, revision control and wikis, Conferencing Tools such as forums, instant messaging and video conferencing and Coordination Tools, such as time management, project management and social networks. The software toolkit has been deployed by the members of the team and it is based on free and open source software.

  18. LeaRN: A Collaborative Learning-Research Network for a WLCG Tier-3 Centre

    International Nuclear Information System (INIS)

    Calle, Elio Pérez

    2011-01-01

    The Department of Modern Physics of the University of Science and Technology of China is hosting a Tier-3 centre for the ATLAS experiment. An interdisciplinary team of researchers, engineers and students is devoted to the task of receiving, storing and analysing the scientific data produced by the LHC. In order to achieve the highest performance and to develop a knowledge base shared by all members of the team, the research activities and their coordination are being supported by an array of computing systems. These systems have been designed to foster communication, collaboration and coordination among the members of the team, both face-to-face and remotely, and both in synchronous and asynchronous ways. The result is a collaborative learning-research network whose main objectives are awareness (to get shared knowledge about others' activities and therefore obtain synergies), articulation (to allow a project to be divided, work units to be assigned and then reintegrated) and adaptation (to adapt information technologies to the needs of the group). The main technologies involved are Communication Tools such as web publishing, revision control and wikis, Conferencing Tools such as forums, instant messaging and video conferencing and Coordination Tools, such as time management, project management and social networks. The software toolkit has been deployed by the members of the team and it is based on free and open source software.

  19. Development of safety related technology and infrastructure for safety assessment

    International Nuclear Information System (INIS)

    Venkat Raj, V.

    1997-01-01

    Development and optimum utilisation of any technology calls for the building up of the necessary infrastructure and backup facilities. This is particularly true for a developing country like India and more so for an advanced technology like nuclear technology. Right from the inception of its nuclear power programme, the Indian approach has been to develop adequate infrastructure in various areas such as design, construction, manufacture, installation, commissioning and safety assessment of nuclear plants. This paper deals with the development of safety related technology and the relevant infrastructure for safety assessment. A number of computer codes for safety assessment have been developed or adapted in the areas of thermal hydraulics, structural dynamics etc. These codes have undergone extensive validation through data generated in the experimental facilities set up in India as well as participation in international standard problem exercises. Side by side with the development of the tools for safety assessment, the development of safety related technology was also given equal importance. Many of the technologies required for the inspection, ageing assessment and estimation of the residual life of various components and equipment, particularly those having a bearing on safety, were developed. This paper highlights, briefly, the work carried out in some of the areas mentioned above. (author)

  20. Three-tiered risk stratification model to predict progression in Barrett's esophagus using epigenetic and clinical features.

    Directory of Open Access Journals (Sweden)

    Fumiaki Sato

    2008-04-01

    Full Text Available Barrett's esophagus predisposes to esophageal adenocarcinoma. However, the value of endoscopic surveillance in Barrett's esophagus has been debated because of the low incidence of esophageal adenocarcinoma in Barrett's esophagus. Moreover, high inter-observer and sampling-dependent variation in the histologic staging of dysplasia make clinical risk assessment problematic. In this study, we developed a 3-tiered risk stratification strategy, based on systematically selected epigenetic and clinical parameters, to improve Barrett's esophagus surveillance efficiency. We defined high-grade dysplasia as the endpoint of progression, and Barrett's esophagus progressor patients as Barrett's esophagus patients with either no dysplasia or low-grade dysplasia who later developed high-grade dysplasia or esophageal adenocarcinoma. We analyzed 4 epigenetic and 3 clinical parameters in 118 Barrett's esophagus tissues obtained from 35 progressor and 27 non-progressor Barrett's esophagus patients from Baltimore Veterans Affairs Maryland Health Care Systems and Mayo Clinic. Based on 2-year and 4-year prediction models using linear discriminant analysis (area under the receiver-operator characteristic (ROC) curve: 0.8386 and 0.7910, respectively), Barrett's esophagus specimens were stratified into high-risk (HR), intermediate-risk (IR), or low-risk (LR) groups. This 3-tiered stratification method retained both the high specificity of the 2-year model and the high sensitivity of the 4-year model. Progression-free survivals differed significantly among the 3 risk groups, with p = 0.0022 (HR vs. IR) and p < 0.0001 (HR or IR vs. LR). Incremental value analyses demonstrated that the number of methylated genes contributed most influentially to prediction accuracy. This 3-tiered risk stratification strategy has the potential to exert a profound impact on Barrett's esophagus surveillance accuracy and efficiency.
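
    As a schematic illustration of the stratification step, the Python sketch below fits a linear discriminant model on synthetic data, converts its output into a progression score and cuts that score into HR/IR/LR groups. The features, labels and cut points are assumptions of this sketch, not the study's methylation panel or thresholds.

        # Sketch: linear discriminant analysis producing a progression score that is then
        # cut into high- / intermediate- / low-risk tiers. Synthetic data stands in for the
        # paper's methylation and clinical features; the cut points are arbitrary.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(42)
        n = 120
        X = np.column_stack([
            rng.integers(0, 5, n),        # number of methylated genes (0-4)
            rng.normal(60, 10, n),        # age
            rng.normal(3, 1, n),          # Barrett's segment length (cm)
        ])
        # Synthetic labels: progression made more likely by more methylated genes.
        y = (rng.random(n) < 0.15 + 0.15 * X[:, 0]).astype(int)

        lda = LinearDiscriminantAnalysis().fit(X, y)
        score = lda.predict_proba(X)[:, 1]           # probability of progression

        # Three-tier stratification on the discriminant score (arbitrary cut points).
        tier = np.select([score >= 0.6, score >= 0.3], ["HR", "IR"], default="LR")
        for t in ("HR", "IR", "LR"):
            print(t, (tier == t).sum(), "samples")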

  1. 9 CFR 3.79 - Mobile or traveling housing facilities.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Mobile or traveling housing facilities... Transportation of Nonhuman Primates 2 Facilities and Operating Standards § 3.79 Mobile or traveling housing facilities. (a) Heating, cooling, and temperature. Mobile or traveling housing facilities must be...

  2. Current status of irradiation facilities in JRR-3 and JRR-4

    International Nuclear Information System (INIS)

    Hori, Naohiko; Wada, Shigeru; Sasajima, Fumio; Kusunoki, Tsuyoshi

    2006-01-01

    The Department of Research Reactor has operated two research reactors, JRR-3 and JRR-4. These reactors were constructed in the Tokai Research Establishment. Many researchers and engineers use these joint-use facilities. JRR-3 is a light water moderated and cooled, pool type research reactor using low-enriched silicide fuel. JRR-3's maximum thermal power is 20MW. JRR-3 has nine vertical irradiation holes for RI production and nuclear fuels and materials irradiation in the reactor core area. JRR-3 also has many kinds of irradiation holes in a heavy water tank around the reactor core: two hydraulic rabbit irradiation facilities, two pneumatic rabbit irradiation facilities, one activation analysis irradiation facility, one uniform irradiation facility, one rotating irradiation facility and one capsule irradiation facility. JRR-3 has nine horizontal experimental holes, which are used by many kinds of neutron beam experimental facilities. JRR-4 is a light water moderated and cooled, swimming pool type research reactor using low-enriched silicide fuel. JRR-4's maximum thermal power is 3.5MW. JRR-4 has five vertical irradiation tubes in the reactor core area, three capsule irradiation facilities, one hydraulic rabbit irradiation facility, and one pneumatic rabbit irradiation facility. JRR-4 has a neutron beam hole, which has been used for neutron beam experiments, irradiations for activation analysis and medical neutron irradiations. (author)

  3. FY16 ISCP Nuclear Counting Facility Hardware Expansion Summary

    Energy Technology Data Exchange (ETDEWEB)

    Church, Jennifer A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kashgarian, Michaele [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wooddy, Todd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Haslett, Bob [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Torretto, Phil [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-09-15

    Hardware expansion and detector calibrations were the focus of FY 16 ISCP efforts in the Nuclear Counting Facility. Work focused on four main objectives: 1) Installation, calibration, and validation of 4 additional HPGe gamma spectrometry systems; including two Low Energy Photon Spectrometers (LEPS). 2) Re-Calibration and validation of 3 previously installed gamma-ray detectors, 3) Integration of the new systems into the NCF IT infrastructure, and 4) QA/QC and maintenance of current detector systems.

  4. FY16 ISCP Nuclear Counting Facility Hardware Expansion Summary

    International Nuclear Information System (INIS)

    Church, Jennifer A.; Kashgarian, Michaele; Wooddy, Todd; Haslett, Bob; Torretto, Phil

    2016-01-01

    Hardware expansion and detector calibrations were the focus of FY 16 ISCP efforts in the Nuclear Counting Facility. Work focused on four main objectives: 1) Installation, calibration, and validation of 4 additional HPGe gamma spectrometry systems; including two Low Energy Photon Spectrometers (LEPS). 2) Re-Calibration and validation of 3 previously installed gamma-ray detectors, 3) Integration of the new systems into the NCF IT infrastructure, and 4) QA/QC and maintenance of current detector systems.

  5. XML Based Scientific Data Management Facility

    Science.gov (United States)

    Mehrotra, P.; Zubair, M.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The World Wide Web consortium has developed an Extensible Markup Language (XML) to support the building of better information management infrastructures. The scientific computing community, realizing the benefits of XML, has designed markup languages for scientific data. In this paper, we propose an XML-based scientific data management facility, XDMF. The project is motivated by the fact that even though a lot of scientific data is being generated, it is not being shared because of a lack of standards and infrastructure support for discovering and transforming the data. The proposed data management facility can be used to discover the scientific data itself, the transformation functions, and also for applying the required transformations. We have built a prototype system of the proposed data management facility that can work on different platforms. We have implemented the system using Java and the Apache XSLT engine Xalan. To support remote data and transformation functions, we had to extend the XSLT specification and the Xalan package.
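
    The prototype described above was written in Java with the Apache Xalan XSLT engine; purely as a compact illustration of the core operation (applying an XSLT stylesheet to an XML metadata record), here is a hypothetical Python sketch using the third-party lxml package. The XML record and stylesheet are invented for the example and are not the authors' code.

        # Minimal illustration of applying an XSLT transformation to an XML metadata
        # record, the core operation behind an XML-based data-management facility.
        # (The authors' prototype used Java + Apache Xalan; lxml is used here only
        # to keep the sketch short and self-contained.)
        from lxml import etree

        xml_doc = etree.XML(b"""
        <dataset name="wing_pressure">
          <variable name="p" units="Pa"/>
          <variable name="T" units="K"/>
        </dataset>
        """)

        # A tiny stylesheet that extracts variable names into a flat text list.
        xslt_doc = etree.XML(b"""
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="text"/>
          <xsl:template match="/dataset">
            <xsl:for-each select="variable">
              <xsl:value-of select="@name"/><xsl:text>&#10;</xsl:text>
            </xsl:for-each>
          </xsl:template>
        </xsl:stylesheet>
        """)

        transform = etree.XSLT(xslt_doc)
        print(str(transform(xml_doc)))   # -> "p" and "T", one per line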

  6. Post Construction Green Infrastructure Performance Monitoring Parameters and Their Functional Components

    Directory of Open Access Journals (Sweden)

    Thewodros K. Geberemariam

    2016-12-01

    Full Text Available Drainage system infrastructures in most urbanized cities have reached or exceeded their design life cycle and are characterized by running with inadequate capacity. These highly degraded infrastructures are already overwhelmed and continue to impose a significant challenge to the quality of water and ecological systems. With predicted urban growth and climate change, the situation is only going to get worse. As a result, municipalities are increasingly considering the concept of retrofitting existing stormwater drainage systems with green infrastructure practices as the first and an important step to reduce stormwater runoff volume and pollutant load inputs into combined sewer systems (CSO) and wastewater facilities. Green infrastructure practices include open green space that can absorb stormwater runoff, ranging from small-scale naturally existing pockets of land, right-of-way bioswales, and trees planted along the sidewalk, to large-scale public parks. Despite the growing interest of municipalities in retrofitting existing stormwater drainage systems with green infrastructure, few studies and little relevant information are available on their performance and cost-effectiveness. Therefore, this paper aims to help professionals learn about and become familiar with green infrastructure, decrease implementation barriers, and provide guidance for monitoring green infrastructure using a combination of survey questionnaires, meta-narrative and systematic literature review techniques.

  7. Hydrogen Infrastructure Testing and Research Facility Video (Text Version)

    Science.gov (United States)

    Text version of a video about research at the Energy Systems Integration Facility (ESIF), covering projects including H2FIRST, component testing, hydrogen grid integration, continuous code improvement, fuel cell vehicle operation, and renewable hydrogen.

  8. Teaching Borges's "Garden": A Three-Tiered Approach.

    Science.gov (United States)

    Christensen, Maggie

    2002-01-01

    Describes how "The Garden of Forking Paths" presents teaching challenges that ultimately yield benefits worth the effort for students and instructors. Discusses a three-tiered approach: spy story, family history and character, and ideas of time and timelessness. Concludes that the three layers provide a structure to get the discussion started and…

  9. The home hemodialysis hub: physical infrastructure and integrated governance structure.

    Science.gov (United States)

    Marshall, Mark R; Young, Bessie A; Fox, Sally J; Cleland, Calli J; Walker, Robert J; Masakane, Ikuto; Herold, Aaron M

    2015-04-01

    An effective home hemodialysis program critically depends on adequate hub facilities and support functions and on transparent and accountable organizational processes. The likelihood of optimal service delivery and patient care will be enhanced by fit-for-purpose facilities and implementation of a well-considered governance structure. In this article, we describe the required accommodation and infrastructure for a home hemodialysis program and a generic organizational structure that will support both patient-facing clinical activities and business processes. © 2015 International Society for Hemodialysis.

  10. An algal model for predicting attainment of tiered biological criteria of Maine's streams and rivers

    Science.gov (United States)

    Danielson, Thomas J.; Loftin, Cyndy; Tsomides, Leonidas; DiFranco, Jeanne L.; Connors, Beth; Courtemanch, David L.; Drummond, Francis; Davies, Susan

    2012-01-01

    State water-quality professionals developing new biological assessment methods often have difficulty relating assessment results to narrative criteria in water-quality standards. An alternative to selecting index thresholds arbitrarily is to include the Biological Condition Gradient (BCG) in the development of the assessment method. The BCG describes tiers of biological community condition to help identify and communicate the position of a water body along a gradient of water quality ranging from natural to degraded. Although originally developed for fish and macroinvertebrate communities of streams and rivers, the BCG is easily adapted to other habitats and taxonomic groups. We developed a discriminant analysis model with stream algal data to predict attainment of tiered aquatic-life uses in Maine's water-quality standards. We modified the BCG framework for Maine stream algae, related the BCG tiers to Maine's tiered aquatic-life uses, and identified appropriate algal metrics for describing BCG tiers. Using a modified Delphi method, 5 aquatic biologists independently evaluated algal community metrics for 230 samples from streams and rivers across the state and assigned a BCG tier (1–6) and Maine water quality class (AA/A, B, C, nonattainment of any class) to each sample. We used minimally disturbed reference sites to approximate natural conditions (Tier 1). Biologist class assignments were unanimous for 53% of samples, and 42% of samples differed by 1 class. The biologists debated and developed consensus class assignments. A linear discriminant model built to replicate a priori class assignments correctly classified 95% of 150 samples in the model training set and 91% of 80 samples in the model validation set. Locally derived metrics based on BCG taxon tolerance groupings (e.g., sensitive, intermediate, tolerant) were more effective than were metrics developed in other regions. Adding the algal discriminant model to Maine's existing macroinvertebrate discriminant
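
    As a rough illustration of the model-validation step described above (training a linear discriminant model on consensus class assignments and checking it on a held-out set), the Python sketch below uses synthetic stand-in data; the three metrics, class labels and sample counts are assumptions of this sketch, not the Maine dataset.

        # Sketch: fit a linear discriminant model on a training set of samples with
        # consensus class labels and report the fraction correctly classified on a
        # held-out validation set (synthetic stand-in data, not the Maine algal metrics).
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        y = rng.integers(0, 4, 230)               # consensus class per sample (0..3)
        # Three illustrative metrics (e.g. % sensitive, % intermediate, % tolerant taxa)
        # whose means shift with class so the model has something to learn.
        X = rng.normal(loc=y[:, None] * [[-5.0, 2.0, 5.0]], scale=3.0, size=(230, 3))

        X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=80, random_state=0)
        lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
        print("training accuracy:  ", lda.score(X_tr, y_tr))
        print("validation accuracy:", lda.score(X_va, y_va))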

  11. Fundamentalists, Priests, Martyrs and Converts: A Typology of First Tier Management in Further Education

    Science.gov (United States)

    Page, Damien

    2011-01-01

    This article presents findings from a study of first tier managers in English Further Education colleges, a role critically neglected within the literature, despite its centrality to organisational effectiveness and learner success. The role was found to be diverse, contested and elastic and while first tier managers were found to be highly…

  12. How two-tier boards can be more effective

    NARCIS (Netherlands)

    dr. Stefan Peij; Pieter-Jan Bezemer; Laura de Kruijs; Gregory Maassen

    2014-01-01

    Purpose – This study seeks to explore how non-executive directors address governance problems on Dutch two-tier boards. Within this board model, challenges might be particularly difficult to address due to the formal separation of management boards' decision-management from supervisory boards'

  13. 76 FR 79221 - Penske Logistics, LLC, Customer Service Department General Motors and Tier Finished Goods...

    Science.gov (United States)

    2011-12-21

    ..., Customer Service Department General Motors and Tier Finished Goods/Finished Goods Division; a Subsidiary of... Manpower El Paso, TX; Amended Certification Regarding Eligibility To Apply for Worker Adjustment Assistance... should read Penske Logistics, LLC, Customer Service Department, General Motors and Tier Finished Goods...

  14. Performance analysis of a handoff scheme for two-tier cellular CDMA networks

    Directory of Open Access Journals (Sweden)

    Ahmed Hamad

    2011-07-01

    Full Text Available A two-tier model is used in cellular networks to improve the Quality of Service (QoS), namely to reduce the blocking probability of new calls and the forced termination probability of ongoing calls. One tier, the microcells, is used for slow or stationary users, and the other, the macrocell, is used for high speed users. In Code-Division Multiple-Access (CDMA) cellular systems, soft handoffs are supported, which provides ways for further QoS improvement. In this paper, we introduce such a way; namely, a channel borrowing scheme used in conjunction with a First-In-First-Out (FIFO) queue in the macrocell tier. A multidimensional Markov chain to model the resulting system is established, and an iterative technique to find the steady-state probability distribution is utilized. This distribution is then used to find the performance measures of interest: new call blocking probability, and forced termination probability.
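
    The Python sketch below illustrates the kind of iterative fixed-point computation used to obtain the steady-state distribution of a Markov chain (pi = pi P); the tiny three-state transition matrix is a toy stand-in, not the paper's multidimensional model of the two-tier CDMA system.

        # Iterative computation of the steady-state distribution pi of a Markov chain
        # (pi = pi @ P), of the kind used to evaluate blocking and forced-termination
        # probabilities. The 3-state transition matrix is a toy stand-in for the paper's
        # multidimensional chain.
        import numpy as np

        P = np.array([            # rows sum to 1: toy "idle / busy / queued" states
            [0.80, 0.15, 0.05],
            [0.30, 0.60, 0.10],
            [0.10, 0.40, 0.50],
        ])

        def steady_state(P, tol=1e-12, max_iter=10_000):
            pi = np.full(P.shape[0], 1.0 / P.shape[0])   # start from the uniform distribution
            for _ in range(max_iter):
                new_pi = pi @ P
                if np.abs(new_pi - pi).max() < tol:
                    return new_pi
                pi = new_pi
            return pi

        if __name__ == "__main__":
            pi = steady_state(P)
            print("steady state:", np.round(pi, 4), "sums to", pi.sum())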

  15. Mechanical seal having a double-tier mating ring

    Science.gov (United States)

    Khonsari, Michael M.; Somanchi, Anoop K.

    2005-09-13

    An apparatus and method to enhance the overall performance of mechanical seals in one of the following ways: by reducing seal face wear, by reducing the contact surface temperature, or by increasing the life span of mechanical seals. The apparatus is a mechanical seal (e.g., single mechanical seals, double mechanical seals, tandem mechanical seals, bellows, pusher mechanical seals, and all types of rotating and reciprocating machines) comprising a rotating ring and a double-tier mating ring. In a preferred embodiment, the double-tier mating ring comprises a first and a second stationary ring that together form an agitation-inducing, guided flow channel to allow for the removal of heat generated at the seal face of the mating ring by channeling a coolant entering the mating ring to a position adjacent to and in close proximity with the interior surface area of the seal face of the mating ring.

  16. Infrastructure needed for success - An OEM/NSP designer's perspective

    International Nuclear Information System (INIS)

    Yee, F.

    2014-01-01

    A healthy nuclear industry requires the successful interaction of many participants. This presentation outlines the key infrastructure entities that are needed by the OEM for success: Healthy Supply Chain, Strong Media and Public Support, Engaged Universities, Practical Set of Codes and Standards, Effective Regulator, Productive Engineering Tools and Analysis Codes, Proactive Government & External Relations, Capable Trades and Construction Work Force, Strong R&D Facilities and Researchers, Profitable Utilities and Strong Owners Group. The presentation describes the inputs (from the OEM to the external entities) and the desired outcomes (from the infrastructure entities). (author)

  17. Gas storage facilities. Investigation of their social value

    International Nuclear Information System (INIS)

    1997-02-01

    The socio-economic factors resulting from the location of gas storage facilities are evaluated. Various alternatives to the existing projects are estimated, for instance 11 new pipelines, in some cases combined with new production capacity, LNG facilities, differentiated tariffs, reconstruction of decentralized heat/power plants etc. Theoretical considerations and models, among others involving gas storage abroad, are presented. Seasonal storage, emergency storage, and storage controlled by economic optimization (profitable purchases, sales at the highest market price) are described for various types of facilities, such as aquifers, caverns and LNG stores. Natural gas supplies in Europe, infrastructure and resources are compared to the Danish conditions. The sensitivity of natural gas consumption in the Danish heating market is investigated. A reduction in energy use for space heating by 2005 will reduce the storage need from 740 Mm3 of gas to 650 Mm3. Extra consumption by the decentralized power/heat plants is not accounted for in this estimation. Dynamic models of the future gas consumption are based on the EU 'European Energy 2020'. (EG)

  18. 76 FR 18259 - Announcement Regarding Delaware Triggering “on” Tier Four of Emergency Unemployment Compensation...

    Science.gov (United States)

    2011-04-01

    ... Triggering ``on'' Tier Four of Emergency Unemployment Compensation 2008 (EUC08) AGENCY: Employment and...'' Tier Four of Emergency Unemployment Compensation 2008 (EUC08). Public Law 111-312 extended provisions... the EUC08 program for qualified unemployed workers claiming benefits in high unemployment states. The...

  19. Collusion-Aware Privacy-Preserving Range Query in Tiered Wireless Sensor Networks†

    Science.gov (United States)

    Zhang, Xiaoying; Dong, Lei; Peng, Hui; Chen, Hong; Zhao, Suyun; Li, Cuiping

    2014-01-01

    Wireless sensor networks (WSNs) are indispensable building blocks for the Internet of Things (IoT). With the development of WSNs, privacy issues have drawn more attention. Existing work on the privacy-preserving range query mainly focuses on privacy preservation and integrity verification in two-tiered WSNs in the case of compromised master nodes, but neglects the damage of node collusion. In this paper, we propose a series of collusion-aware privacy-preserving range query protocols in two-tiered WSNs. To the best of our knowledge, this paper is the first to consider collusion attacks for a range query in tiered WSNs while fulfilling the preservation of privacy and integrity. To preserve the privacy of data and queries, we propose a novel encoding scheme to conceal sensitive information. To preserve the integrity of the results, we present a verification scheme using the correlation among data. In addition, two schemes are further presented to improve result accuracy and reduce communication cost. Finally, theoretical analysis and experimental results confirm the efficiency, accuracy and privacy of our proposals. PMID:25615731

  20. Collusion-aware privacy-preserving range query in tiered wireless sensor networks.

    Science.gov (United States)

    Zhang, Xiaoying; Dong, Lei; Peng, Hui; Chen, Hong; Zhao, Suyun; Li, Cuiping

    2014-12-11

    Wireless sensor networks (WSNs) are indispensable building blocks for the Internet of Things (IoT). With the development of WSNs, privacy issues have drawn more attention. Existing work on the privacy-preserving range query mainly focuses on privacy preservation and integrity verification in two-tiered WSNs in the case of compromised master nodes, but neglects the damage of node collusion. In this paper, we propose a series of collusion-aware privacy-preserving range query protocols in two-tiered WSNs. To the best of our knowledge, this paper is the first to consider collusion attacks for a range query in tiered WSNs while fulfilling the preservation of privacy and integrity. To preserve the privacy of data and queries, we propose a novel encoding scheme to conceal sensitive information. To preserve the integrity of the results, we present a verification scheme using the correlation among data. In addition, two schemes are further presented to improve result accuracy and reduce communication cost. Finally, theoretical analysis and experimental results confirm the efficiency, accuracy and privacy of our proposals.

  1. Collusion-Aware Privacy-Preserving Range Query in Tiered Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Xiaoying Zhang

    2014-12-01

    Full Text Available Wireless sensor networks (WSNs) are indispensable building blocks for the Internet of Things (IoT). With the development of WSNs, privacy issues have drawn more attention. Existing work on the privacy-preserving range query mainly focuses on privacy preservation and integrity verification in two-tiered WSNs in the case of compromised master nodes, but neglects the damage of node collusion. In this paper, we propose a series of collusion-aware privacy-preserving range query protocols in two-tiered WSNs. To the best of our knowledge, this paper is the first to consider collusion attacks for a range query in tiered WSNs while fulfilling the preservation of privacy and integrity. To preserve the privacy of data and queries, we propose a novel encoding scheme to conceal sensitive information. To preserve the integrity of the results, we present a verification scheme using the correlation among data. In addition, two schemes are further presented to improve result accuracy and reduce communication cost. Finally, theoretical analysis and experimental results confirm the efficiency, accuracy and privacy of our proposals.
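
    The authors' encoding and verification schemes are not reproduced here; purely as an illustration of the general bucket-encoding idea commonly used for privacy-preserving range queries in two-tiered WSNs, the Python sketch below hides raw sensor values behind bucket identifiers and a keyed tag. The bucket boundaries, key and tag construction are assumptions of this sketch.

        # Illustrative bucket encoding for privacy-preserving range queries in a
        # two-tiered WSN: sensors report (bucket id, keyed tag) pairs, so the storage
        # node can answer "which buckets overlap the query range?" without seeing
        # plaintext values. This is a generic textbook idea, NOT the collusion-aware
        # scheme proposed in the paper.
        import hashlib

        BUCKETS = [(0, 20), (20, 40), (40, 60), (60, 80), (80, 101)]   # illustrative bounds

        def bucket_id(value):
            return next(i for i, (lo, hi) in enumerate(BUCKETS) if lo <= value < hi)

        def encode_reading(value, key):
            """Sensor side: hide the value, expose only its bucket id plus a keyed tag."""
            tag = hashlib.sha256(key + value.to_bytes(2, "big")).hexdigest()[:16]
            return bucket_id(value), tag          # the real value never leaves the sensor

        def query_buckets(lo, hi):
            """Sink side: translate a range query into the set of bucket ids to fetch."""
            return {i for i, (b_lo, b_hi) in enumerate(BUCKETS) if b_lo < hi and lo < b_hi}

        if __name__ == "__main__":
            key = b"shared-secret"
            readings = [encode_reading(v, key) for v in (5, 33, 57, 90)]
            print("stored:", readings)
            print("buckets to return for query [25, 70):", sorted(query_buckets(25, 70)))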

  2. Security infrastructure for dynamically provisioned cloud infrastructure services

    NARCIS (Netherlands)

    Demchenko, Y.; Ngo, C.; de Laat, C.; Lopez, D.R.; Morales, A.; García-Espín, J.A.; Pearson, S.; Yee, G.

    2013-01-01

    This chapter discusses conceptual issues, basic requirements and practical suggestions for designing dynamically configured security infrastructure provisioned on demand as part of the cloud-based infrastructure. This chapter describes general use cases for provisioning cloud infrastructure services

  3. Top-tier requirements for KNGR

    International Nuclear Information System (INIS)

    Sung-Jae, Ch.; Kwangho, L.; Dong Wook, J.

    1996-01-01

    In 1992, Korea Electric Power Corporation (KEPCO) launched the next generation reactor project to develop the standard design of an advanced pressurized water reactor by 2000. This advanced reactor aims to be a safe, environmentally sound and economical energy source for Korea in the 2000s. In conjunction with the project development, phase I of the programme was studied, and it is within the first phase of the Korean Next Generation Reactor (KNGR) project that the requirements of the specification called 'Top-tier' have been established. These functional requirements are of primary importance for the design, construction and operation of a nuclear power plant. They are divided into safety requirements, severe accident control, design basis requirements, definition of system characteristics, performance, construction feasibility, economic objectives, site parameters and design processes. The 'Top-tier' requirements concentrate on improving safety and reliability; safety is among the first priorities. In particular, the design requirements for the next generation of reactors must include the capacity to control severe accidents, because the degree of protection when an accident occurs is crucial. The KNGR requirements also address competitiveness with existing nuclear power plants as well as with coal-fired thermal plants; moreover, when safety is reinforced, economic competitiveness can be assured. At the present time, a subsequent specification for the KNGR is being developed, considering the bases of the domestic technology and operating experience. (O.M.)

  4. Rhabdom evolution in butterflies: insights from the uniquely tiered and heterogeneous ommatidia of the Glacial Apollo butterfly, Parnassius glacialis.

    Science.gov (United States)

    Matsushita, Atsuko; Awata, Hiroko; Wakakuwa, Motohiro; Takemura, Shin-ya; Arikawa, Kentaro

    2012-09-07

    The eye of the Glacial Apollo butterfly, Parnassius glacialis, a 'living fossil' species of the family Papilionidae, contains three types of spectrally heterogeneous ommatidia. Electron microscopy reveals that the Apollo rhabdom is tiered. The distal tier is composed exclusively of photoreceptors expressing opsins of ultraviolet or blue-absorbing visual pigments, and the proximal tier consists of photoreceptors expressing opsins of green or red-absorbing visual pigments. This organization is unique because the distal tier of other known butterflies contains two green-sensitive photoreceptors, which probably function in improving spatial and/or motion vision. Interspecific comparison suggests that the Apollo rhabdom retains an ancestral tiered pattern with some modification to enhance its colour vision towards the long-wavelength region of the spectrum.

  5. CLIMB (the Cloud Infrastructure for Microbial Bioinformatics): an online resource for the medical microbiology community.

    Science.gov (United States)

    Connor, Thomas R; Loman, Nicholas J; Thompson, Simon; Smith, Andy; Southgate, Joel; Poplawski, Radoslaw; Bull, Matthew J; Richardson, Emily; Ismail, Matthew; Thompson, Simon Elwood-; Kitchen, Christine; Guest, Martyn; Bakke, Marius; Sheppard, Samuel K; Pallen, Mark J

    2016-09-01

    The increasing availability and decreasing cost of high-throughput sequencing has transformed academic medical microbiology, delivering an explosion in available genomes while also driving advances in bioinformatics. However, many microbiologists are unable to exploit the resulting large genomics datasets because they do not have access to relevant computational resources and to an appropriate bioinformatics infrastructure. Here, we present the Cloud Infrastructure for Microbial Bioinformatics (CLIMB) facility, a shared computing infrastructure that has been designed from the ground up to provide an environment where microbiologists can share and reuse methods and data.

  6. 9 CFR 3.141 - Terminal facilities.

    Science.gov (United States)

    2010-01-01

    ... Warmblooded Animals Other Than Dogs, Cats, Rabbits, Hamsters, Guinea Pigs, Nonhuman Primates, and Marine... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Terminal facilities. 3.141 Section 3.141 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE...

  7. 9 CFR 3.125 - Facilities, general.

    Science.gov (United States)

    2010-01-01

    ... Warmblooded Animals Other Than Dogs, Cats, Rabbits, Hamsters, Guinea Pigs, Nonhuman Primates, and Marine... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Facilities, general. 3.125 Section 3.125 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE...

  8. 9 CFR 3.127 - Facilities, outdoor.

    Science.gov (United States)

    2010-01-01

    ... Warmblooded Animals Other Than Dogs, Cats, Rabbits, Hamsters, Guinea Pigs, Nonhuman Primates, and Marine... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Facilities, outdoor. 3.127 Section 3.127 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE...

  9. Emplacement feasibility of a multi-tier, expanded capacity repository at Yucca Mountain, Nevada USA

    International Nuclear Information System (INIS)

    Apted, Michael; Kessler, John; Fairhurst, Charles

    2008-01-01

    A geological repository at Yucca Mountain has been proposed for the disposal of spent fuel from the US commercial reactors and other radioactive waste. A legislative capacity of 70,000 MTHM has been set by the Nuclear Waste Policy Act of 1982, including 63,000 MTHM of commercial spent nuclear fuel (CSNF), the projected amount of CSNF that will be produced by about 2014. Policy issues remain as to how to handle waste that is generated beyond 2014 from a growing nuclear industry in the US. The Electric Power Research Institute (EPRI) is independently evaluating the technical, rather than legislative, limit of CSNF that could be safely disposed at Yucca Mountain. Geological, thermal management, safety and cost factors have been recently evaluated by EPRI (2006; 2007) for grouped emplacement drifts and/or a multi-tier repository. EPRI's evaluation of emplacement feasibility for a multi-tier concept is described here. Expanded capacity concepts as envisioned for Yucca Mountain (EPRI, 2006; 2007) assume excavation of one or two additional levels of drifts parallel to or above and/or below the original drift excavations. For the latter multi-tier concept each 'tier' or 'level' would essentially replicate the original layer with a 30-m separation between tiers. This arrangement essentially doubles or triples the capacity of the repository for a two- or three-tier design, respectively. The main issues that affect the feasibility of the expanded capacity design are: (i) ventilation requirements; (ii) radiation hazards; (iii) thermal and thermo-mechanical constraints. (i) Ventilation: The repository design involves waste packages mounted in close proximity to each other in 600-m long drifts that remain open and actively ventilated for at least 50-100 years. Analyses, conservatively assuming that all three repository levels operate simultaneously, indicate no technological obstacles in meeting ventilation requirements for sustained simultaneous operation based on current industrial

  10. The effect of infrastructural challenges on food security in Ntambanana, KwaZulu-Natal, South Africa

    Directory of Open Access Journals (Sweden)

    Mosa Selepe

    2014-01-01

    Full Text Available Rural infrastructural inadequacies in South Africa are well documented, but their effects on local food security remain relatively unexplored. The present study investigated the effects of insufficient infrastructural services on food security issues at household and community level in the area of Ntambanana, which is characterised as a dry environment with few water reservoir facilities for effective farming. Focus group discussions were held with existing groupings of men and women, and interviews were conducted with governmental officials and community members. A questionnaire was then used to confirm responses and test the reliability of information from the interviews. Our study found that there was poor infrastructure and inadequate support from relevant organisations; the roads were not in good condition, limiting access to market facilities and other destinations, and the lack of an efficient and effective transportation system crippled the performance of small-scale farmers. Recommendations emerging from this study include the need to address the fundamental deficiencies that hinder food security. Better infrastructure would enable rural areas to compete with the urban markets and to attract internal and external investors.

  11. Plan for 3-D full-scale earthquake testing facility

    International Nuclear Information System (INIS)

    Ohtani, K.

    2001-01-01

    Based on the lessons learnt from the Great Hanshin-Awaji Earthquake, the National Research Institute for Earth Science and Disaster Prevention plans to construct the 3-D Full-Scale Earthquake Testing Facility. This will be the world's largest and strongest shaking table facility. This paper describes the outline of the project for this facility. This facility will be completed in early 2005. (author)

  12. Energy-Water Modeling and Impacts at Urban and Infrastructure Scales

    Science.gov (United States)

    Saleh, F.; Pullen, J. D.; Schoonen, M. A.; Gonzalez, J.; Bhatt, V.; Fellows, J. D.

    2017-12-01

    We converge multi-disciplinary, multi-sectoral modeling and data analysis tools on an urban watershed to examine the feedbacks of concentrated and connected infrastructure on the environment. Our focus area is the Lower Hudson River Basin (LHRB). The LHRB captures long-term and short-term energy/water stressors as it represents: 1) a coastal environment subject to sea level rise that is among the fastest in the East, impacted by a wide array of storms; 2) one of the steepest gradients in population density in the US, with Manhattan the most densely populated coastal county in the nation; 3) energy/water infrastructure serving the largest metropolitan area in the US; 4) a history of environmental impacts, ranging from heatwaves to hurricanes, that can be used to hindcast; and 5) a wealth of historic and real-time data, extensive monitoring facilities and existing sector-specific models that can be leveraged. We detail two case studies on "water infrastructure and stressors" and "heatwaves and energy-water demands." The impact of a hypothetical failure of Oradell Dam (on the Hackensack River, a tributary of the Hudson River) coincident with a hurricane, and urban power demands under current and future heat waves, are examined with high-resolution (meter to km scale) earth system models to illustrate energy-water nexus issues where detailed predictions can shape response and mitigation strategies.

  13. A high performance hierarchical storage management system for the Canadian tier-1 centre at TRIUMF

    International Nuclear Information System (INIS)

    Deatrich, D C; Liu, S X; Tafirout, R

    2010-01-01

    We describe in this paper the design and implementation of Tapeguy, a high performance non-proprietary Hierarchical Storage Management (HSM) system which is interfaced to dCache for efficient tertiary storage operations. The system has been successfully implemented at the Canadian Tier-1 Centre at TRIUMF. The ATLAS experiment will collect a large amount of data (approximately 3.5 Petabytes each year). An efficient HSM system will play a crucial role in the success of the ATLAS Computing Model, which is driven by intensive large-scale data analysis activities that will be performed continuously on the Worldwide LHC Computing Grid infrastructure. Tapeguy is Perl-based and controls and manages data and tape libraries. Its architecture is scalable and includes Dataset Writing control, a Read-back Queuing mechanism and I/O tape drive load balancing, as well as on-demand allocation of resources. A central MySQL database records metadata information for every file and transaction (for audit and performance evaluation), as well as an inventory of library elements. Tapeguy Dataset Writing was implemented to group files which are close in time and of similar type. Optional dataset path control dynamically allocates tape families and assigns tapes to them. Tape flushing is based on various strategies: time, threshold or external callback mechanisms. Tapeguy Read-back Queuing reorders all read requests by using an elevator algorithm, avoiding unnecessary tape loading and unloading. Implementation of priorities will guarantee file delivery to all clients in a timely manner.
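
    The elevator-style reordering of read requests described above can be illustrated with a short sketch. This is not taken from Tapeguy itself (which is Perl-based); it is an illustrative Python rendering under the assumption that each read request carries a tape label and a file position on that tape: requests are grouped per cartridge and each cartridge is then served in a single sweep of ascending positions, so a tape is mounted once and the drive does not seek back and forth.

        # Illustrative sketch only, not Tapeguy code: an elevator-style reordering
        # of tape read requests. Requests are grouped by tape cartridge and each
        # cartridge is served in a single sweep of ascending file positions.
        from collections import defaultdict
        from typing import List, NamedTuple

        class ReadRequest(NamedTuple):
            tape: str       # cartridge label (hypothetical)
            position: int   # file position on the tape
            path: str       # file requested by the client

        def elevator_order(requests: List[ReadRequest]) -> List[ReadRequest]:
            by_tape = defaultdict(list)
            for req in requests:
                by_tape[req.tape].append(req)
            ordered = []
            for tape in sorted(by_tape):                         # one mount per tape
                ordered.extend(sorted(by_tape[tape], key=lambda r: r.position))
            return ordered

        # Requests arriving in arbitrary order are regrouped and swept in order.
        queue = [ReadRequest("VT0002", 900, "/atlas/data2"),
                 ReadRequest("VT0001", 120, "/atlas/data1"),
                 ReadRequest("VT0001",  40, "/atlas/data0")]
        print(elevator_order(queue))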

  14. Dynamics of three-tier hydraulic crane-manipulators

    OpenAIRE

    Lagerev I.A.; Lagerev A.V.

    2018-01-01

    Methods and generalized recommendations for modeling the dynamic loading of the load-bearing elements of the steel structures of three-tier hydraulic crane-manipulators are considered. Mathematical models have been developed to study the dynamics of the moving elements of the crane-manipulator and the movement of the load-lifting machine over a stochastic uneven surface with a suspended load. The presented approaches can be used in calculations for other types of jib cranes equipped with hydraulic drives.

  15. Communication costs in a multi-tiered MPSoC

    NARCIS (Netherlands)

    van de Burgwal, M.D.; Smit, Gerardus Johannes Maria

    2008-01-01

    The amount of digital processing required for phased array beamformers is very large. It requires many parallel processors, which can be organized in a multi-tiered structure. Communication costs differ for each of the stages in such an architecture. For example, communication costs from the antenna

  16. Home/community-based services: a two-tier approach.

    Science.gov (United States)

    Aponte, H J; Zarski, J J; Bixenstine, C; Cibik, P

    1991-07-01

    A two-tier model for work with high-risk families is presented. It combines multiple-family groups in the community with home-based family therapy for individual families. The ecostructural conceptual framework of the model is discussed, and its application is illustrated by a case vignette.

  17. The computing and data infrastructure to interconnect EEE stations

    Science.gov (United States)

    Noferini, F.; EEE Collaboration

    2016-07-01

    The Extreme Energy Event (EEE) experiment is devoted to the search for high-energy cosmic rays through a network of telescopes installed in about 50 high schools distributed throughout the Italian territory. This project requires a peculiar data management infrastructure to collect data registered in stations very far from each other and to allow a coordinated analysis. Such an infrastructure is realized at INFN-CNAF, which operates a Cloud facility based on the OpenStack open-source Cloud framework and provides Infrastructure as a Service (IaaS) for its users. In 2014 EEE started to use it for collecting, monitoring and reconstructing the data acquired in all the EEE stations. For the synchronization between the stations and the INFN-CNAF infrastructure we used BitTorrent Sync, a free peer-to-peer software designed to optimize data synchronization between distributed nodes. All data folders are synchronized with the central repository in real time to allow an immediate reconstruction of the data and their publication in a monitoring webpage. We present the architecture and the functionalities of this data management system that provides a flexible environment for the specific needs of the EEE project.
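
    As a rough illustration of the ingestion step described above (BitTorrent Sync itself is an external peer-to-peer tool and is not shown; the folder path and the reconstruct() hook below are hypothetical), a minimal watcher could poll the synchronized repository and hand newly arrived raw files to the reconstruction step:

        # Minimal sketch, not EEE production code: poll a folder that an external
        # peer-to-peer tool keeps in sync and pass newly arrived raw files to a
        # reconstruction step. SYNC_DIR and reconstruct() are hypothetical.
        import os
        import time

        SYNC_DIR = "/data/eee/incoming"      # hypothetical synced repository
        seen = set()

        def reconstruct(path: str) -> None:
            print(f"reconstructing {path}")  # placeholder for the real step

        while True:
            for name in sorted(os.listdir(SYNC_DIR)):
                full = os.path.join(SYNC_DIR, name)
                if name.endswith(".raw") and full not in seen:
                    seen.add(full)
                    reconstruct(full)
            time.sleep(60)                   # re-scan once per minute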

  18. The computing and data infrastructure to interconnect EEE stations

    Energy Technology Data Exchange (ETDEWEB)

    Noferini, F., E-mail: noferini@bo.infn.it [Museo Storico della Fisica e Centro Studi e Ricerche “Enrico Fermi”, Rome (Italy); INFN CNAF, Bologna (Italy)

    2016-07-11

    The Extreme Energy Event (EEE) experiment is devoted to the search for high-energy cosmic rays through a network of telescopes installed in about 50 high schools distributed throughout the Italian territory. This project requires a peculiar data management infrastructure to collect data registered in stations very far from each other and to allow a coordinated analysis. Such an infrastructure is realized at INFN-CNAF, which operates a Cloud facility based on the OpenStack open-source Cloud framework and provides Infrastructure as a Service (IaaS) for its users. In 2014 EEE started to use it for collecting, monitoring and reconstructing the data acquired in all the EEE stations. For the synchronization between the stations and the INFN-CNAF infrastructure we used BitTorrent Sync, a free peer-to-peer software designed to optimize data synchronization between distributed nodes. All data folders are synchronized with the central repository in real time to allow an immediate reconstruction of the data and their publication in a monitoring webpage. We present the architecture and the functionalities of this data management system that provides a flexible environment for the specific needs of the EEE project.

  19. The computing and data infrastructure to interconnect EEE stations

    International Nuclear Information System (INIS)

    Noferini, F.

    2016-01-01

    The Extreme Energy Event (EEE) experiment is devoted to the search for high-energy cosmic rays through a network of telescopes installed in about 50 high schools distributed throughout the Italian territory. This project requires a peculiar data management infrastructure to collect data registered in stations very far from each other and to allow a coordinated analysis. Such an infrastructure is realized at INFN-CNAF, which operates a Cloud facility based on the OpenStack open-source Cloud framework and provides Infrastructure as a Service (IaaS) for its users. In 2014 EEE started to use it for collecting, monitoring and reconstructing the data acquired in all the EEE stations. For the synchronization between the stations and the INFN-CNAF infrastructure we used BitTorrent Sync, a free peer-to-peer software designed to optimize data synchronization between distributed nodes. All data folders are synchronized with the central repository in real time to allow an immediate reconstruction of the data and their publication in a monitoring webpage. We present the architecture and the functionalities of this data management system that provides a flexible environment for the specific needs of the EEE project.

  20. Scaling up a CMS tier-3 site with campus resources and a 100 Gb/s network connection: what could go wrong?

    Science.gov (United States)

    Wolf, Matthias; Woodard, Anna; Li, Wenzhao; Hurtado Anampa, Kenyi; Tovar, Benjamin; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2017-10-01

    The University of Notre Dame (ND) CMS group operates a modest-sized Tier-3 site suitable for local, final-stage analysis of CMS data. However, through the ND Center for Research Computing (CRC), Notre Dame researchers have opportunistic access to roughly 25k CPU cores of computing and a 100 Gb/s WAN network link. To understand the limits of what might be possible in this scenario, we undertook to use these resources for a wide range of CMS computing tasks, from user analysis through large-scale Monte Carlo production (including both detector simulation and data reconstruction). We will discuss the challenges inherent in effectively utilizing CRC resources for these tasks and the solutions deployed to overcome them.

  1. TANK 18 AND 19-F TIER 1A EQUIPMENT FILL MOCK UP TEST SUMMARY

    Energy Technology Data Exchange (ETDEWEB)

    Stefanko, D.; Langton, C.

    2011-11-04

    The United States Department of Energy (US DOE) has determined that Tanks 18-F and 19-F have met the F-Tank Farm (FTF) General Closure Plan Requirements and are ready to be permanently closed. The high-level waste (HLW) tanks have been isolated from FTF facilities. To complete operational closure they will be filled with grout for the purpose of: (1) physically stabilizing the tanks, (2) limiting/eliminating vertical pathways to residual waste, (3) discouraging future intrusion, and (4) providing an alkaline, chemically reducing environment within the closure boundary to control speciation and solubility of select radionuclides. Bulk waste removal and heel removal equipment remain in Tanks 18-F and 19-F. This equipment includes the Advanced Design Mixer Pump (ADMP), transfer pumps, transfer jets, standard slurry mixer pumps, equipment-support masts, sampling masts, dip tube assemblies and robotic crawlers. The present Tank 18 and 19-F closure strategy is to grout the equipment in place and to eliminate vertical fast pathways and water infiltration by filling the voids in the equipment. The mock-up tests described in this report were intended to address placement issues identified for grouting the equipment that will be left in Tank 18-F and Tank 19-F. The Tank 18-F and 19-F closure strategy document states that one of the Performance Assessment (PA) requirements for a closed tank is that equipment remaining in the tank be filled to the extent practical and that vertical flow paths 1 inch and larger be grouted. The specific objectives of the Tier 1A equipment grout mock-up testing include: (1) Identifying the most limiting equipment configurations with respect to internal void space filling; (2) Specifying and constructing initial test geometries and forms that represent scaled boundary conditions; (3) Identifying a target grout rheology for evaluation in the scaled mock-up configurations; (4) Scaling-up production of a grout mix with the target rheology

  2. Statistical exploration of dataset examining key indicators influencing housing and urban infrastructure investments in megacities

    Directory of Open Access Journals (Sweden)

    Adedeji O. Afolabi

    2018-06-01

    Full Text Available Lagos, by UN standards, has attained megacity status, with the attendant challenges of living up to that titanic position; regrettably, it struggles with its present stock of housing and infrastructural facilities to match its new status. Based on a survey of the perceptions of construction professionals residing within the state, a questionnaire instrument was used to gather the dataset. The statistical exploration contains a dataset on the state of the housing and urban infrastructure deficit, key indicators spurring government investment to reverse the deficit, and improvement mechanisms to tackle the infrastructural dearth. Descriptive statistics and inferential statistics were used to present the dataset. When analyzed, the dataset can be useful for policy makers, local and international governments, world funding bodies, researchers and infrastructure investors. Keywords: Construction, Housing, Megacities, Population, Urban infrastructures

  3. Required performance of the concrete structures of accelerator facilities

    International Nuclear Information System (INIS)

    Irie, Masaaki; Yoshioka, Masakazu; Miyahara, Masanobu

    2006-01-01

    Many accelerator facilities are constructed as underground concrete structures, both to provide radiation shielding and to ensure structural stability. The performance required of the concrete structures of an accelerator facility is broadly the same as that required of general social infrastructure, but the target performance can differ considerably. This paper describes the differences between the performance required of concrete structures in social infrastructure and in accelerator facilities, and presents an approach to the construction management of such structures, from the ordering of the accelerator facility through design, supervision and operation. Prospects for the structural analysis of concrete materials using accelerator-based neutron sources are also discussed. (author)

  4. Low-complexity co-tier interference reduction scheme in open-access overlaid cellular networks

    KAUST Repository

    Radaydeh, Redha Mahmoud Mesleh

    2011-12-01

    This paper addresses the effect of co-tier interference on the performance of multiuser overlaid cellular networks that share the same available resources. It is assumed that each macrocell contains a number of self-configurable and randomly located femtocells that employ the open-access control strategy to reduce the effect of cross-tier interference. It is also assumed that the desired user equipment (UE) can access only one of the available channels, maintains simple decoding circuitry with a single receive antenna, and has limited knowledge of the instantaneous channel state information (CSI) due to resource limitation. To mitigate the effect of co-tier interference in the absence of the CSI of the desired UE, a low-complexity switched-based scheme for single channel selection based on the predicted interference levels associated with available channels is proposed for the case of over-loaded channels. Through the analysis, a new general formulation for the statistics of the resulting instantaneous interference power and some performance measures are presented. The effect of the switching threshold on the efficiency and performance of the proposed scheme is studied. Numerical and simulation results to clarify the usefulness of the proposed scheme in reducing the impact of co-tier interference are also provided. © 2011 IEEE.
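
    A simplified sketch of such a switched selection rule follows; it is illustrative only, and the threshold test and scan order below are assumptions rather than the exact scheme analysed in the paper. The idea: the UE stays on its current channel if the predicted interference power is below the switching threshold, otherwise it examines the remaining channels in turn and settles on the first acceptable one (or on the last one examined if none qualifies).

        # Illustrative switched (switch-and-examine style) channel selection based
        # on predicted co-tier interference levels; not the paper's exact scheme.
        from typing import Sequence

        def switched_channel_selection(predicted_interference: Sequence[float],
                                       current: int,
                                       threshold: float) -> int:
            """Return the index of the channel the UE should use."""
            n = len(predicted_interference)
            if predicted_interference[current] <= threshold:
                return current                        # stay on the current channel
            candidate = current
            for step in range(1, n):                  # examine the other channels
                candidate = (current + step) % n
                if predicted_interference[candidate] <= threshold:
                    break                             # first acceptable channel
            return candidate                          # else: last channel examined

        # Example with four over-loaded channels and a switching threshold of 0.3
        print(switched_channel_selection([0.8, 0.2, 0.9, 0.4], current=0, threshold=0.3))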

  5. The Development and Validation of a Three-Tier Diagnostic Test Measuring Pre-Service Elementary Education and Secondary Science Teachers' Understanding of the Water Cycle

    Science.gov (United States)

    Schaffer, Dannah Lynn

    2013-01-01

    The main goal of this research study was to develop and validate a three-tier diagnostic test to determine pre-service teachers' (PSTs) conceptual knowledge of the water cycle. For a three-tier diagnostic test, the first tier assesses content knowledge; in the second tier, a reason is selected for the content answer; and the third tier allows…

  6. United States Domestic Research Reactor Infrastructure TRIGA Reactor Fuel Support

    International Nuclear Information System (INIS)

    Morrell, Douglas

    2011-01-01

    The United States Domestic Research Reactor Infrastructure Program at the Idaho National Laboratory manages and provides project management, technical, quality engineering, quality inspection and nuclear material support for the United States Department of Energy sponsored University Reactor Fuels Program. This program provides fresh, unirradiated nuclear fuel to domestic university research reactor facilities and is responsible for the return of the DOE-owned, irradiated nuclear fuel over the life of the program. This presentation will introduce the program management team, the universities supported by the program, and the status of the program, and will focus on the return process of irradiated nuclear fuel for long term storage at DOE managed receipt facilities. It will include lessons learned from research reactor facilities that have successfully shipped spent fuel elements to DOE receipt facilities.

  7. 9 CFR 3.52 - Facilities, outdoor.

    Science.gov (United States)

    2010-01-01

    9 CFR 3.52, Facilities, outdoor (Animals and Animal Products; Animal and Plant Health Inspection Service, Department of Agriculture): ... outdoors when the atmospheric temperature falls below 40 °F. (d) Protection from predators. Outdoor housing...

  8. Development and Evaluation of a Three-Tier Diagnostic Test to Assess Undergraduate Primary Teachers' Understanding of Ecological Footprint

    Science.gov (United States)

    Liampa, Vasiliki; Malandrakis, George N.; Papadopoulou, Penelope; Pnevmatikos, Dimitrios

    2017-08-01

    This study focused on the development and validation of a three-tier multiple-choice diagnostic instrument about the ecological footprint. Each question in the three-tier test comprises (a) the content tier, assessing content knowledge; (b) the reason tier, assessing explanatory knowledge; and (c) the confidence tier, which differentiates lack of knowledge from misconception through the use of a certainty response index. Based on the literature, the propositional knowledge statements and the identified misconceptions of 97 student-teachers, a first version of the test was developed and subsequently administered to another group of 219 student-teachers from Primary and Early Childhood Education Departments. Because the ecological footprint is a complex and newly introduced concept, unknown to the public, both groups had previously been exposed to relevant instruction. Experts in the field established face and content validity. The reliability, in terms of Cronbach's alpha, was adequate (α = 0.839), and the test-retest reliability, as indicated by Pearson's r, was also satisfactory (0.554). The mean performance of the students was 56.24% for the total score, 59.75% for the content tiers and 48.05% for the reason tiers. A variety of alternative conceptions about the ecological footprint were also observed. The test can help educators understand the alternative views that students hold about the ecological footprint concept and assist them in developing the concept through appropriately designed teaching methods and materials.
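
    The internal-consistency figure quoted above follows the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The short sketch below shows the computation on a made-up toy item-score matrix (not the study's data).

        # Cronbach's alpha for a respondents x items matrix of item scores.
        # The toy data below are invented for illustration only.
        import numpy as np

        def cronbach_alpha(scores: np.ndarray) -> float:
            k = scores.shape[1]
            item_variances = scores.var(axis=0, ddof=1).sum()
            total_variance = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_variances / total_variance)

        items = np.array([[1, 1, 1, 0],    # each row: one respondent's item scores
                          [1, 1, 0, 0],
                          [1, 0, 0, 0],
                          [1, 1, 1, 1],
                          [0, 0, 0, 0]], dtype=float)
        print(round(cronbach_alpha(items), 3))   # 0.8 for this toy matrix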

  9. Time horizon for AFV emission savings under Tier 2

    International Nuclear Information System (INIS)

    Saricks, C. L.

    2000-01-01

    Implementation of the Federal Tier 2 vehicular emission standards according to the schedule presented in the December 1999 Final Rule will result in substantial reductions of NMHC, CO, NOx, and fine particle emissions from motor vehicles. Currently, when compared to Tier 1 and even NLEV certification requirements, the emissions performance of automobiles and light-duty trucks powered by non-petroleum (especially gaseous) fuels (i.e., vehicles collectively termed AFVs) enjoys a measurable advantage over their gasoline- and diesel-fueled counterparts over the full Federal Test Procedure and, especially, in Bag 1 (cold start). For the lighter end of these vehicle classes, this advantage may disappear shortly after 2004 under the new standards, but should continue for a longer period (perhaps beyond 2008) for the heavier end as well as for heavy-duty vehicles relative to diesel-fueled counterparts. Because of the continuing commitment of the U.S. Department of Energy's Clean Cities coalitions to the acquisition and operation of AFVs of many types and size classes, it is important for them to know in which classes their acquisitions will remain cleaner than the petroleum-fueled counterparts they might otherwise procure. This paper provides an approximate timeline for and expected magnitude of such savings, assuming that full implementation of the Tier 2 standards covering both vehicular emissions and fuel sulfur limits proceeds on schedule. The pollutants of interest are primary ozone precursors and fine particulate matter from fuel combustion.

  10. HPC, grid and data infrastructures for astrophysics: an integrated view

    International Nuclear Information System (INIS)

    Pasian, F.

    2009-01-01

    Also in the case of astrophysics, the capability of performing Big Science requires the availability of large HPC facilities. But computational resources alone are far from being enough for the community: as a matter of fact, the whole set of e-infrastructures (network, computing nodes, data repositories, applications) needs to work in an interoperable way. This implies the development of common (or at least compatible) user interfaces to computing resources, transparent access to observations and numerical simulations through the Virtual Observatory, integrated data processing pipelines, data mining and semantic web applications. Achieving this interoperability goal is a must to build a real Knowledge Infrastructure in the astrophysical domain.

  11. Cloud computing can simplify HIT infrastructure management.

    Science.gov (United States)

    Glaser, John

    2011-08-01

    Software as a Service (SaaS), built on cloud computing technology, is emerging as the forerunner in IT infrastructure because it helps healthcare providers reduce capital investments. Cloud computing leads to predictable, monthly, fixed operating expenses for hospital IT staff. Outsourced cloud computing facilities are state-of-the-art data centers boasting some of the most sophisticated networking equipment on the market. The SaaS model helps hospitals safeguard against technology obsolescence, minimizes maintenance requirements, and simplifies management.

  12. National Ignition Facility subsystem design requirements NIF site improvements SSDR 1.2.1

    International Nuclear Information System (INIS)

    Kempel, P.; Hands, J.

    1996-01-01

    This Subsystem Design Requirements (SSDR) document establishes the performance, design, and verification requirements associated with the NIF Project Site at Lawrence Livermore National Laboratory (LLNL) at Livermore, California. It identifies generic design conditions for all NIF Project facilities, including siting requirements associated with natural phenomena, and contains specific requirements for furnishing site-related infrastructure utilities and services to the NIF Project conventional facilities and experimental hardware systems. Three candidate sites were identified as potential locations for the NIF Project. However, LLNL has been identified by DOE as the preferred site because of closely related laser experimentation underway at LLNL, the ability to use existing interrelated infrastructure, and other reasons. Selection of a site other than LLNL will entail the acquisition of site improvements and infrastructure additional to those described in this document. This SSDR addresses only the improvements associated with the NIF Project site located at LLNL, including new work and relocation or demolition of existing facilities that interfere with the construction of new facilities. If the Record of Decision for the PEIS on Stockpile Stewardship and Management were to select another site, this SSDR would be revised to reflect the characteristics of the selected site. Other facilities and infrastructure needed to support operation of the NIF, such as those listed below, are existing and available at the LLNL site, and are not included in this SSDR. Office Building. Target Receiving and Inspection. General Assembly Building. Electro-Mechanical Shop. Warehousing and General Storage. Shipping and Receiving. General Stores. Medical Facilities. Cafeteria Services. Service Station and Garage. Fire Station. Security and Badging Services

  13. 76 FR 48904 - Announcement Regarding the Virgin Islands Triggering “on” Tier Three of Emergency Unemployment...

    Science.gov (United States)

    2011-08-09

    ... Islands Triggering ``on'' Tier Three of Emergency Unemployment Compensation 2008 (EUC08). AGENCY... Islands triggering ``on'' Tier Three of Emergency Unemployment Compensation 2008 (EUC08). Public law 111... unemployment states. The Department of Labor produces a trigger notice indicating which states qualify for...

  14. 76 FR 14102 - Announcement Regarding the Virgin Islands Triggering “Off” Tier Three of Emergency Unemployment...

    Science.gov (United States)

    2011-03-15

    ... Islands Triggering ``Off'' Tier Three of Emergency Unemployment Compensation 2008 (EUC08) AGENCY... Islands triggering ``off'' Tier Three of Emergency Unemployment Compensation 2008 (EUC08). Public Law 111... unemployment states. The Department of Labor produces a trigger notice indicating which states qualify for...

  15. The role of gas infrastructure in promoting UK energy security

    International Nuclear Information System (INIS)

    Skea, Jim; Chaudry, Modassar; Wang Xinxin

    2012-01-01

    This paper considers whether commercially driven investment in gas infrastructure is sufficient to provide security of gas supply or whether strategic investment encouraged by government is desirable. The paper focuses on the UK in the wider EU context. A modelling analysis of the impact of disruptions, lasting from days to months, at the UK's largest piece of gas infrastructure is at the heart of the paper. The disruptions are hypothesised to take place in the mid-2020s, after the current wave of commercial investments in storage and LNG import facilities has worked its way through. The paper also analyses the current role of gas in energy markets, reviews past disruptions to gas supplies, highlights current patterns of commercial investment in gas infrastructure in the UK and assesses the implications of recent EU legislation on security of gas supply. The paper concludes with an analysis of the desirability of strategic investment in gas infrastructure. - Highlights: ► We examine the impact of disruptions to gas supplies on UK energy markets. ► The policy implications of the EU regulation on gas security are discussed. ► We investigate the role of gas infrastructure investment in mitigating gas shocks. ► The policy case for strategic investment in gas storage is assessed.

  16. 20 CFR 228.40 - Cost of living increase applicable to the tier I annuity component.

    Science.gov (United States)

    2010-04-01

    20 CFR 228.40 (Employees' Benefits): Cost of living increase applicable to the tier I annuity component. The tier I annuity... the Federal Register annually. The cost-of-living increase is payable beginning with the benefit for...

  17. INFRASTRUCTURE

    CERN Document Server

    A. Gaddi

    2011-01-01

    During the last winter technical stop, a number of corrective maintenance activities and infrastructure consolidation work-packages were completed. On the surface, the site cooling facility has passed the annual maintenance process that includes the cleaning of the two evaporative cooling towers, the maintenance of the chiller units and the safety checks on the software controls. In parallel, CMS teams, reinforced by PH-DT group personnel, have worked to shield the cooling gauges for TOTEM and CASTOR against the magnetic stray field in the CMS Forward region, to add labels to almost all the valves underground and to clean all the filters in UXC55, USC55 and SCX5. Following the insertion of TOTEM T1 detector, the cooling circuit has been branched off and commissioned. The demineraliser cartridges have been replaced as well, as they were shown to be almost saturated. New instrumentation has been installed in the SCX5 PC farm cooling and ventilation network, in order to monitor the performance of the HVAC system...

  18. Velocity-Aware Handover Management in Two-Tier Cellular Networks

    KAUST Repository

    Arshad, Rabe; Elsawy, Hesham; Sorour, Sameh; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2017-01-01

    by network densification. Hence, user mobility imposes a nontrivial challenge to harvesting capacity gains via network densification. In this paper, we propose a velocity-aware HO management scheme for a two-tier downlink cellular network to mitigate the HO effect

  19. MOEMS industrial infrastructure

    Science.gov (United States)

    van Heeren, Henne; Paschalidou, Lia

    2004-08-01

    Forecasters and analysts predict the market size for microsystems and microtechnologies to be in the order of 68 billion by the year 2005 (NEXUS Market Study 2002). In essence, the market potential is likely to double in size from its 38 billion status in 2002. According to InStat/MDR, the market for MOEMS (Micro Optical Electro Mechanical Systems) in optical communication will be over $1.8 billion in 2006, and WTC states that the market for non-telecom MOEMS will be even larger. Underpinning this staggering growth will be an infrastructure of design houses, foundries, package/assembly providers and equipment suppliers to cater for the demand in design, prototyping, and (mass-)production. This infrastructure is needed to provide an efficient route to commercialisation. Foundries provide the infrastructure to prototype, fabricate and mass-produce the designs emanating from design houses and other companies. The reasons for customers to rely on foundries are diverse, ranging from purely economical (investments, cost price) to technical (availability of the required technology). The desire to have a second source of supply can also be a reason for outsourcing. Foundries aim to achieve economies of scale by combining several customer orders into volume production. Volumes are necessary not only to achieve the required competitive cost prices, but also to attain the necessary technical competence level. Some products that serve very large markets can reach such high production volumes that they are able to sustain dedicated factories. In such cases, captive supply is possible, although outsourcing is still an option, as can be seen in the magnetic head market, where captive and non-captive suppliers operate alongside each other. The most striking examples are inkjet heads (>435 million heads per year) and magnetic heads (>1.5 billion heads per year). Also pressure sensor and accelerometer producers can afford their own facilities to produce the

  20. Adequate & Equitable U.S. PK-12 Infrastructure: Priority Actions for Systemic Reform. A Report from the Planning for PK-12 School Infrastructure National Initiative

    Science.gov (United States)

    Filardo, Mary; Vincent, Jeffrey M.

    2017-01-01

    To formulate a "systems-based" plan to address the PK-12 infrastructure crisis, in 2016, the 21st Century School Fund (21CSF) and the University of California-Berkeley's Center for Cities + Schools (CC+S), in partnership with the National Council on School Facilities and the Center for Green Schools at the U.S. Green Building Council,…

  1. CMS distributed computing workflow experience

    Science.gov (United States)

    Adelman-McCarthy, Jennifer; Gutsche, Oliver; Haas, Jeffrey D.; Prosper, Harrison B.; Dutta, Valentina; Gomez-Ceballos, Guillelmo; Hahn, Kristian; Klute, Markus; Mohapatra, Ajit; Spinoso, Vincenzo; Kcira, Dorian; Caudron, Julien; Liao, Junhui; Pin, Arnaud; Schul, Nicolas; De Lentdecker, Gilles; McCartin, Joseph; Vanelderen, Lukas; Janssen, Xavier; Tsyganov, Andrey; Barge, Derek; Lahiff, Andrew

    2011-12-01

    The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level, which includes re-reconstruction and skimming workflows of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data taking period of 2010, their efficiencies and latencies as well as their impact on the delivery of physics results is discussed and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures to optimize the usage of available resources and we describe the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, into the central production operation.

  2. CMS distributed computing workflow experience

    International Nuclear Information System (INIS)

    Adelman-McCarthy, Jennifer; Gutsche, Oliver; Haas, Jeffrey D; Prosper, Harrison B; Dutta, Valentina; Gomez-Ceballos, Guillelmo; Hahn, Kristian; Klute, Markus; Mohapatra, Ajit; Spinoso, Vincenzo; Kcira, Dorian; Caudron, Julien; Liao Junhui; Pin, Arnaud; Schul, Nicolas; Lentdecker, Gilles De; McCartin, Joseph; Vanelderen, Lukas; Janssen, Xavier; Tsyganov, Andrey

    2011-01-01

    The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level, which includes re-reconstruction and skimming workflows of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data taking period of 2010, their efficiencies and latencies as well as their impact on the delivery of physics results is discussed and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures to optimize the usage of available resources and we describe the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, into the central production operation.

  3. Using a CORBA connection with MIDAS multi-tier application programming in a hotel reservation system

    Directory of Open Access Journals (Sweden)

    Irwan Kristanto Julistiono

    2001-01-01

    Full Text Available This paper describes a multi-tier hotel reservation system built on CORBA technology, accessible both from a web browser and from a client program. The client software is connected to the application server over a CORBA connection, and the client and application server connect to SQL Server 7.0 via ODBC. There are two types of client: a web client and a Delphi client. The web-browser client application is built with Delphi ActiveX Form technology, in which the system is developed much like an ordinary form, but which integrates poorly with HTML. Besides being open to further development, a multi-tier CORBA application has the general advantage that it can be designed with multiple database servers, multiple middle servers and multiple clients, all of which can be integrated into one system. The weaknesses of this approach are the complexity of CORBA, which makes it difficult to understand, and the need, for the multi-tier part, for a particular procedure to determine which server is chosen by the client. Abstract in Bahasa Indonesia: In this paper a multi-tier system using CORBA technology is built for a hotel reservation program, accessed both through a web browser and through a client program. The software used as the database server is SQL Server 7.0. The Delphi client program is connected to the application server through a CORBA connection, and the application server is connected to SQL Server 7.0 through ODBC. There are two client applications: one using the local network and one using the global network/web browser. The web-browser client application is built with Delphi ActiveX Form technology, in which the system is developed like an ordinary form, but with the drawback of poor integration with HTML. In general, the use of a multi-tier system with CORBA has the advantage that, besides being open to further development, the system is designed with multiple database servers, multiple middle servers and multiple clients, in which

  4. Impacts of facility size and location decisions on ethanol production cost

    International Nuclear Information System (INIS)

    Kocoloski, Matt; Michael Griffin, W.; Scott Matthews, H.

    2011-01-01

    Cellulosic ethanol has been identified as a promising alternative to fossil fuels to provide energy for the transportation sector. One of the obstacles cellulosic ethanol must overcome in order to contribute to transportation energy demand is the infrastructure required to produce and distribute the fuel. Given a nascent cellulosic ethanol industry, locating cellulosic ethanol refineries and creating the accompanying infrastructure is essentially a greenfield problem that may benefit greatly from quantitative analysis. This study models cellulosic ethanol infrastructure investment using a mixed integer program (MIP) that locates ethanol refineries and connects these refineries to the biomass supplies and ethanol demands in a way that minimizes the total cost. For the single- and multi-state regions examined in this study, larger facilities can decrease ethanol costs by $0.20-0.30 per gallon, and placing these facilities in locations that minimize feedstock and product transportation costs can decrease ethanol costs by up to $0.25 per gallon compared to uninformed placement that could result from influences such as local subsidies to encourage economic development. To best benefit society, policies should allow for incentives that encourage these low-cost production scenarios and avoid politically motivated siting of plants. - Research highlights: → Mixed-integer programming can be used to model ethanol infrastructure investment. → Large cellulosic ethanol facilities can decrease production cost by $0.20/gallon. → Optimized facility placement can save $0.25/gallon.
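
    The kind of mixed integer program described above can be sketched in a few lines with an open-source modeller such as PuLP. The candidate sites, costs, demands and capacity below are invented toy numbers rather than data from the study, and the formulation is only a minimal capacitated facility-location example, not the authors' full model.

        # Minimal capacitated facility-location MIP sketch using PuLP; all numbers
        # are illustrative placeholders, not data from the study.
        import pulp

        sites = ["A", "B"]                 # candidate refinery locations
        counties = ["c1", "c2", "c3"]      # biomass supply / ethanol demand nodes
        fixed_cost = {"A": 100.0, "B": 120.0}          # annualized capital cost
        transport = {("A", "c1"): 1.0, ("A", "c2"): 3.0, ("A", "c3"): 4.0,
                     ("B", "c1"): 4.0, ("B", "c2"): 2.0, ("B", "c3"): 1.0}
        demand = {"c1": 10.0, "c2": 15.0, "c3": 12.0}
        capacity = 30.0                    # per facility

        prob = pulp.LpProblem("ethanol_siting", pulp.LpMinimize)
        build = pulp.LpVariable.dicts("build", sites, cat="Binary")
        ship = pulp.LpVariable.dicts("ship", transport.keys(), lowBound=0)

        # Objective: fixed facility costs plus transportation costs
        prob += (pulp.lpSum(fixed_cost[s] * build[s] for s in sites)
                 + pulp.lpSum(transport[k] * ship[k] for k in transport))
        for c in counties:                                   # meet every demand
            prob += pulp.lpSum(ship[(s, c)] for s in sites) == demand[c]
        for s in sites:                                      # capacity only if built
            prob += pulp.lpSum(ship[(s, c)] for c in counties) <= capacity * build[s]

        prob.solve()
        print([s for s in sites if build[s].value() > 0.5])  # facilities selected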

  5. Integral Monitored Retrievable Storage (MRS) Facility conceptual design report

    International Nuclear Information System (INIS)

    1985-09-01

    The Basis for Design established the functional requirements and design criteria for an Integral Monitored Retrievable Storage (MRS) facility. The MRS Facility design, described in this report, is based on those requirements and includes all infrastructure, facilities, and equipment required to routinely receive, unload, prepare for storage, and store spent fuel (SF), high-level waste (HLW), and transuranic waste (TRU), and to decontaminate and return shipping casks received by both rail and truck. The facility is complete with all supporting facilities to make the MRS Facility a self-sufficient installation

  6. Aftertreatment in a pre-turbocharger position. Size and fuel consumption advantage for Tier 4

    Energy Technology Data Exchange (ETDEWEB)

    Bruestle, Claus [Emitec, Inc., Rochester Hills, MI (United States); Tomazic, Dean; Franke, Michael [FEV, Inc., Auburn Hills, MI (United States)

    2013-05-15

    As the 2014 implementation of EPA Tier 4 fast approaches in the USA, manufacturers of large-bore diesel engines face a dilemma. The stringent limits set by the Tier 4 legislation require large, heavy and expensive emission control systems, yet severe constraints on installation space, weight and cost exist for these systems. A viable solution is to place catalysts and filters upstream of the turbocharger. (orig.)

  7. Design of multi-tiered database application based on CORBA component in SDUV-FEL system

    International Nuclear Information System (INIS)

    Sun Xiaoying; Shen Liren; Dai Zhimin

    2004-01-01

    The drawbacks of the usual two-tiered database architecture are analyzed, and the Shanghai Deep Ultraviolet Free Electron Laser (SDUV-FEL) database system under development is discussed. A scheme for realizing a multi-tiered database architecture based on common object request broker architecture (CORBA) components and a middleware model implemented in C++ is presented. A magnet database is given as an example to illustrate the design of the CORBA component. (authors)

  8. Examining the Efficacy of a Tier 2 Kindergarten Mathematics Intervention.

    Science.gov (United States)

    Clarke, Ben; Doabler, Christian T; Smolkowski, Keith; Baker, Scott K; Fien, Hank; Strand Cary, Mari

    2016-01-01

    This study examined the efficacy of a Tier 2 kindergarten mathematics intervention program, ROOTS, focused on developing whole number understanding for students at risk in mathematics. A total of 29 classrooms were randomly assigned to treatment (ROOTS) or control (standard district practices) conditions. Measures of mathematics achievement were collected at pretest and posttest. Treatment and control students did not differ on mathematics assessments at pretest. Gain scores of at-risk intervention students were significantly greater than those of control peers, and the gains of at-risk treatment students were greater than the gains of peers not at risk, effectively reducing the achievement gap. Implications for Tier 2 mathematics instruction in a response to intervention (RtI) model are discussed. © Hammill Institute on Disabilities 2014.

  9. Gas storage facilities. Investigation of their social value. Supplement

    International Nuclear Information System (INIS)

    1997-02-01

    The socio-economic factors resulting from the location of gas storage facilities are evaluated. Various alternatives to the existing projects are assessed, for instance 11 new pipelines, in some cases combined with new production capacity, LNG facilities, differentiated tariffs, reconstruction of decentralized heat/power plants etc. Theoretical considerations and models, among others involving gas storage abroad, are presented. Seasonal storage, emergency storage, and storage controlled by economic optimization (buying when prices are favourable, selling at the highest market price) are described for various types of facilities, such as aquifers, caverns and LNG stores. Natural gas supplies in Europe, infrastructure and resources are compared to the Danish conditions. The sensitivity of natural gas consumption to the Danish heating market is investigated. A reduction in energy use for space heating by 2005 would change the storage need from 740 Mm3 of gas to 650 Mm3. Extra consumption by the decentralized power/heat plants is not accounted for in this estimate. Dynamic models of the future gas consumption are based on the EU 'European Energy 2020'. (EG)

  10. Access to emergency and surgical care in sub-Saharan Africa: the infrastructure gap.

    Science.gov (United States)

    Hsia, Renee Y; Mbembati, Naboth A; Macfarlane, Sarah; Kruk, Margaret E

    2012-05-01

    The effort to increase access to emergency and surgical care in low-income countries has received global attention. While most of the literature on this issue focuses on workforce challenges, it is critical to recognize infrastructure gaps that hinder the ability of health systems to make emergency and surgical care a reality. This study reviews key barriers to the provision of emergency and surgical care in sub-Saharan Africa using aggregate data from the Service Provision Assessments and Demographic and Health Surveys of five countries: Ghana, Kenya, Rwanda, Tanzania and Uganda. For hospitals and health centres, competency was assessed in six areas: basic infrastructure, equipment, medicine storage, infection control, education and quality control. Percentage of compliant facilities in each country was calculated for each of the six areas to facilitate comparison of hospitals and health centres across the five countries. The percentage of hospitals with dependable running water and electricity ranged from 22% to 46%. In countries analysed, only 19-50% of hospitals had the ability to provide 24-hour emergency care. For storage of medication, only 18% to 41% of facilities had unexpired drugs and current inventories. Availability of supplies to control infection and safely dispose of hazardous waste was generally poor (less than 50%) across all facilities. As few as 14% of hospitals (and as high as 76%) among those surveyed had training and supervision in place. No surveyed hospital had enough infrastructure to follow minimum standards and practices that the World Health Organization has deemed essential for the provision of emergency and surgical care. The countries where these hospitals are located may be representative of other low-income countries in sub-Saharan Africa. Thus, the results suggest that increased attention to building up the infrastructure within struggling health systems is necessary for improvements in global access to medical care.

  11. Theoretical multi-tier trust framework for the geospatial domain

    CSIR Research Space (South Africa)

    Umuhoza, D

    2010-01-01

    Full Text Available chain or workflow from data acquisition to knowledge discovery. The authors present work in progress on a theoretical multi-tier trust framework for the processing chain from data acquisition to knowledge discovery in the geospatial domain. Holistic trust...

  12. Development and implementation of web based infrastructure for problem management at UNPRI

    Science.gov (United States)

    WijayaDewantoro, Rico; Wardani, Sumita; Rudy; Surya Perdana Girsang, Batara; Dharma, Abdi

    2018-04-01

    Information technology has drastically changed the way people think. It has entered every part of human life and has become one of the most significant contributors to making life more manageable. Reporting a problem with facilities and infrastructure at Universitas Prima Indonesia used to be done manually: the complainant had to meet the responsible person directly and describe what the problem looked like. The responsible person would then solve the problem but kept no proper documentation of it, such as the five Ws and how. A further issue is preventing mischievous users from submitting false reports. In this paper, we apply a set of procedures called the Universitas Prima Indonesia Problem Management System (UNPRI-PMS), which is also integrated with the academic information system. With UNPRI-PMS in place, problems concerning facilities and infrastructure at Universitas Prima Indonesia can be solved more efficiently, in a more structured way, and more accurately.
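
    As a rough sketch of the kind of structured record such a system keeps (the field names and the in-memory store below are hypothetical and do not represent the actual UNPRI-PMS schema), each report can capture the five Ws and how together with its resolution status:

        # Hypothetical sketch of a structured problem report (not the real
        # UNPRI-PMS schema): the five Ws and how, plus a resolution status.
        from dataclasses import dataclass
        from datetime import datetime
        from typing import List

        @dataclass
        class ProblemReport:
            who: str                 # complainant, e.g. the academic-system login
            what: str                # description of the problem
            where: str               # building / room
            when: datetime           # time reported
            why: str                 # suspected cause, if known
            how: str                 # how it was noticed or reproduced
            status: str = "open"     # open -> in_progress -> resolved

        reports: List[ProblemReport] = []

        def submit(report: ProblemReport) -> None:
            reports.append(report)   # a real system would persist and notify staff

        submit(ProblemReport("student01", "Projector not working", "Lab 3",
                             datetime.now(), "unknown", "observed during class"))
        print(reports[0].status)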

  13. Leaders Growing Leaders: Designing a Tier-Based Leadership Program for Surgeons.

    Science.gov (United States)

    Torbeck, Laura; Rozycki, Grace; Dunnington, Gary

    2018-02-07

    Leadership has emerged as a crucial component of professional development for physicians in academic medicine. Most leadership skills can be learned, and best practices for delivering leadership development are therefore in high demand. For practicing surgeons, specific strategies to teach leadership have been lacking. The purpose of this paper is to describe the structure of a tier-based leadership development program called Leaders Growing Leaders, to identify the major curricular components of each tier, including measures and outcomes, and to share lessons learned for those who may want to begin a similar leadership development program. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  14. ECDS - a Swedish Research Infrastructure for the Open Sharing of Environment and Climate Data

    Directory of Open Access Journals (Sweden)

    T Klein

    2013-02-01

    Full Text Available Environment Climate Data Sweden (ECDS) is a new Swedish research infrastructure, furthering the reuse of scientific data in the domains of environment and climate. ECDS consists of a technical infrastructure and a service organization, supporting the management, exchange, and re-use of scientific data. The technical components of ECDS include a portal and an underlying data catalogue with information on datasets. The datasets are described using a metadata profile compliant with international standards. The datasets accessible through ECDS can be hosted by universities, institutes, or research groups, or at the new Swedish federated data storage facility Swestore of the Swedish National Infrastructure for Computing (SNIC).

  15. 3D WindScanner lidar measurements of wind and turbulence around wind turbines, buildings and bridges

    DEFF Research Database (Denmark)

    Mikkelsen, Torben Krogh; Sjöholm, Mikael; Angelou, Nikolas

    2017-01-01

    WindScanner is a distributed research infrastructure developed at DTU with the participation of a number of European countries. The research infrastructure consists of a mobile, technically advanced facility for remote measurement of wind and turbulence in 3D. The WindScanners provide coordinated...... structures and of flow in urban environments. The mobile WindScanner facility enables 3D scanning of wind and turbulence fields in full scale within the atmospheric boundary layer at ranges from 10 meters to 5 (10) kilometers. Measurements of turbulent coherent structures are applied for investigation...

  16. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro

    2011-01-01

    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed in the first workshop on energy management for large scale scientific infrastructures held in Lund, Sweden, on 13-14 October.   Participants at the energy management for large scale scientific infrastructures workshop. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need for addressing energy issues in relation to science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  17. Integrated design as an opportunity to develop green infrastructures within complex spatial questions

    NARCIS (Netherlands)

    Bartelse, G.; Kost, S.

    2012-01-01

    Landscape is a complex system of competing spatial functions. This competition is especially visible in high-density urban areas, between housing, industry, leisure facilities, transport and infrastructure, energy supply, flood protection and natural resources. Nevertheless, those conflicts are seldom

  18. Transporting Motivational Interviewing to School Settings to Improve the Engagement and Fidelity of Tier 2 Interventions

    Science.gov (United States)

    Frey, Andy J.; Lee, Jon; Small, Jason W.; Seeley, John R.; Walker, Hill M.; Feil, Edward G.

    2013-01-01

    The majority of Tier 2 interventions are facilitated by specialized instructional support personnel, such as school psychologists, school social workers, school counselors, or behavior consultants. Many professionals struggle to involve parents and teachers in Tier 2 behavior interventions. However, attention to the motivational issues for…

  19. Wound center facility billing: A retrospective analysis of time, wound size, and acuity scoring for determining facility level of service.

    Science.gov (United States)

    Fife, Caroline E; Walker, David; Farrow, Wade; Otto, Gordon

    2007-01-01

    Outpatient wound center facility reimbursement for Medicare beneficiaries can be a challenge to determine and obtain. To compare methods of calculating facility service levels for outpatient wound centers and to demonstrate the advantages of an acuity-based billing system (one that incorporates components of facility work that is non-reimbursable by procedure codes and that represents an activity-based costing approach to medical billing), a retrospective study of 5,098 patient encounters contained in a wound care-specific electronic medical record database was conducted. Approximately 500 patient visits to the outpatient wound center of a Texas regional hospital between April 2003 and November 2004 were categorized by service level in documentation and facility management software. Visits previously billed using a time-based system were compared to the Centers for Medicare and Medicaid Services' proposed three-tiered wound size-based system. The time-based system also was compared to an acuity-based scoring system. The Pearson correlation coefficient between billed level of service by time and estimated level of service by acuity was 0.442 and the majority of follow-up visits were billed as Level 3 and above (on a time level of 1 to 5) , confirming that time is not a surrogate for actual work performed. Wound size also was found to be unrelated to service level (Pearson correlation = 0.017) and 97% of wound areas were billings than extremes; no other method produced this distribution. Hospital-based outpatient wound centers should develop, review, and refine acuity score-based models on which to determine billed level of service.

  20. A TSTT integrated FronTier code and its applications in computational fluid physics

    International Nuclear Information System (INIS)

    Fix, Brian; Glimm, James; Li Xiaolin; Li Yuanhua; Liu Xinfeng; Samulyak, Roman; Xu Zhiliang

    2005-01-01

    We introduce the FronTier-Lite software package and its adaptation to the TSTT geometry and mesh entity data interface. This package is extracted from the original front tracking code for general-purpose scientific and engineering applications. The package contains a static interface library and a dynamic front propagation library, and can be used in research on a variety of scientific problems. We demonstrate the application of FronTier in simulations of fuel injection jets, fusion pellet injection and fluid mixing problems
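
    The dynamic front-propagation idea can be illustrated with a generic marker-point sketch. This is not the FronTier/TSTT API, only a minimal stand-in: a front is represented by a set of marker points that are advected with a prescribed velocity field using forward Euler time stepping.

        # Generic explicit front-tracking illustration in 2D (not the FronTier API):
        # marker points on the front are advected with a prescribed velocity field.
        import numpy as np

        def propagate_front(points: np.ndarray, velocity, dt: float, steps: int) -> np.ndarray:
            """points: (N, 2) marker positions; velocity(x, y) -> (u, v) arrays."""
            for _ in range(steps):
                u, v = velocity(points[:, 0], points[:, 1])
                points = points + dt * np.column_stack([u, v])   # forward Euler step
            return points

        # Example: a circular front advected by a uniform flow in the x direction.
        theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
        front = np.column_stack([np.cos(theta), np.sin(theta)])
        moved = propagate_front(front,
                                lambda x, y: (np.ones_like(x), np.zeros_like(y)),
                                dt=0.01, steps=100)
        print(moved[:3])   # first few marker positions after propagation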