WorldWideScience

Sample records for grid job monitoring

  1. Real Time Monitor of Grid job executions

    International Nuclear Information System (INIS)

    Colling, D J; Martyniak, J; McGough, A S; Krenek, A; Sitera, J; Mulac, M; Dvorak, F

    2010-01-01

    In this paper we describe the architecture and operation of the Real Time Monitor (RTM), developed by the Grid team in the HEP group at Imperial College London. This is arguably the most popular dissemination tool within the EGEE [1] Grid, and it has been used on many occasions, including the GridFest and LHC inauguration events held at CERN in October 2008. The RTM gathers information from EGEE sites hosting Logging and Bookkeeping (LB) services. Information is cached locally on a dedicated server at Imperial College London and made available to clients in near real time. The system consists of three main components: the RTM server, the enquirer and an Apache web server which is queried by clients. The RTM server queries the LB servers at fixed time intervals, collecting job-related information and storing it in a local database. Job-related data includes not only the job state (i.e. Scheduled, Waiting, Running or Done) along with timing information, but also other attributes such as the Virtual Organization and Computing Element (CE) queue, if known. The job data stored in the RTM database is read by the enquirer every minute and converted to an XML format which is stored on a web server. This decouples the RTM server database from the clients, removing the bottleneck caused by many clients simultaneously accessing the database. The information can be visualized through either a 2D or 3D Java-based client, with live job data either overlaid on a two-dimensional map of the world or rendered in three dimensions over a globe using OpenGL.
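
    The split between the RTM server's local database and the XML snapshot read by clients can be illustrated with a minimal sketch of the enquirer step, assuming a simple local SQLite table populated by the RTM server; the table layout, column names and file paths are hypothetical, not taken from the RTM code.

    ```python
    import sqlite3
    import time
    import xml.etree.ElementTree as ET

    DB_PATH = "rtm_jobs.db"              # hypothetical local RTM database (filled by the RTM server)
    XML_PATH = "/var/www/html/jobs.xml"  # file served to clients by the web server

    def export_jobs_to_xml():
        """Read the current job snapshot from the local database and publish it
        as XML, so web clients never query the database directly."""
        conn = sqlite3.connect(DB_PATH)
        rows = conn.execute(
            "SELECT job_id, state, vo, ce_queue, updated FROM jobs"
        ).fetchall()
        conn.close()

        root = ET.Element("jobs")
        for job_id, state, vo, ce_queue, updated in rows:
            ET.SubElement(root, "job", {
                "id": str(job_id), "state": state, "vo": vo,
                "ce_queue": ce_queue or "unknown", "updated": str(updated),
            })
        ET.ElementTree(root).write(XML_PATH, encoding="utf-8", xml_declaration=True)

    if __name__ == "__main__":
        while True:                      # the enquirer runs once per minute
            export_jobs_to_xml()
            time.sleep(60)
    ```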

  2. Application of rule-based data mining techniques to real time ATLAS Grid job monitoring data

    CERN Document Server

    Ahrens, R; The ATLAS collaboration; Kalinin, S; Maettig, P; Sandhoff, M; dos Santos, T; Volkmer, F

    2012-01-01

    The Job Execution Monitor (JEM) is a job-centric grid job monitoring software developed at the University of Wuppertal and integrated into the pilot-based “PanDA” job brokerage system used for physics analysis and Monte Carlo event production for the ATLAS experiment on the Worldwide LHC Computing Grid (WLCG). With JEM, job progress and grid worker node health can be supervised in real time by users, site admins and shift personnel. Imminent error conditions can be detected early and countermeasures can be initiated by the job's owner immediately. Grid site admins can access aggregated data of all monitored jobs to infer the site status and to detect job and grid worker node misbehaviour. Shifters can use the same aggregated data to quickly react to site error conditions and broken production tasks. In this work, the application of novel data-centric rule-based methods and data-mining techniques to the real-time monitoring data is discussed. The usage of such automatic inference techniques on monitorin...

  3. Performance of R-GMA based grid job monitoring system for CMS data production

    CERN Document Server

    Byrom, Robert; Fisher, Steve M; Grandi, Claudio; Hobson, Peter R; Kyberd, Paul; MacEvoy, Barry; Nebrensky, Jindrich Josef; Tallini, Hugh; Traylen, Stephen

    2004-01-01

    High Energy Physics experiments, such as the Compact Muon Solenoid (CMS) at the CERN laboratory in Geneva, have large-scale data processing requirements, with stored data accumulating at a rate of 1 Gbyte/s. This load comfortably exceeds any previous processing requirements and we believe it may be most efficiently satisfied through Grid computing. Management of large Monte Carlo productions (~3000 jobs) or data analyses and the quality assurance of the results requires careful monitoring and bookkeeping, and an important requirement when using the Grid is the ability to monitor transparently the large number of jobs that are being executed simultaneously at multiple remote sites. R-GMA is a monitoring and information management service for distributed resources based on the Grid Monitoring Architecture of the Global Grid Forum. We have previously developed a system allowing us to test its performance under a heavy load while using few real Grid resources. We present the latest results on this system and comp...

  4. Scalability tests of R-GMA based Grid job monitoring system for CMS Monte Carlo data production

    CERN Document Server

    Bonacorsi, D; Field, L; Fisher, S; Grandi, C; Hobson, P R; Kyberd, P; MacEvoy, B; Nebrensky, J J; Tallini, H; Traylen, S

    2004-01-01

    High Energy Physics experiments such as CMS (Compact Muon Solenoid) at the Large Hadron Collider have unprecedented, large-scale data processing computing requirements, with data accumulating at around 1 Gbyte/s. The Grid distributed computing paradigm has been chosen as the solution to provide the requisite computing power. The demanding nature of CMS software and computing requirements, such as the production of large quantities of Monte Carlo simulated data, makes them an ideal test case for the Grid and a major driver for the development of Grid technologies. One important challenge when using the Grid for large-scale data analysis is the ability to monitor the large numbers of jobs that are being executed simultaneously at multiple remote sites. R-GMA is a monitoring and information management service for distributed resources based on the Grid Monitoring Architecture of the Global Grid Forum. In this paper we report on the first measurements of R-GMA as part of a monitoring architecture to be used for b...

  5. Active Job Monitoring in Pilots

    Science.gov (United States)

    Kuehn, Eileen; Fischer, Max; Giffels, Manuel; Jung, Christopher; Petzold, Andreas

    2015-12-01

    Recent developments in high energy physics (HEP), including multi-core jobs and multi-core pilots, require data centres to gain a deep understanding of the system in order to monitor, design, and upgrade computing clusters. Networking is a critical component. In particular, the increased usage of data federations, for example in diskless computing centres or as a fallback solution, relies on WAN connectivity and availability. The specific demands of different experiments and communities, but also the need to identify misbehaving batch jobs, require active monitoring. Existing monitoring tools are not capable of measuring fine-grained information at batch job level. This complicates network-aware scheduling and optimisations. In addition, pilots add another layer of abstraction. They behave like batch systems themselves by managing and executing payloads of jobs internally. The number of real jobs being executed is unknown, as the original batch system has no access to internal information about the scheduling process inside the pilots. Therefore, the comparability of jobs and pilots for predicting run-time behaviour or network performance cannot be ensured. Hence, identifying the actual payload is important. At the GridKa Tier 1 centre a specific tool is in use that allows the monitoring of network traffic information at batch job level. This contribution presents the current monitoring approach and discusses recent efforts to identify pilots and their substructures inside the batch system, and why this is important. It also shows how monitoring data for specific jobs can be determined from identified pilots. Finally, the approach is evaluated.

  6. Grid Service for User-Centric Job

    Energy Technology Data Exchange (ETDEWEB)

    Lauret, Jerome

    2009-07-31

    The User Centric Monitoring (UCM) project was aimed at developing a toolkit that provides the Virtual Organization (VO) with tools to build systems that serve a rich set of intuitive job and application monitoring information to the VO's scientists so that they can be more productive. The tools help collect and serve status and error information through a Web interface. The proposed UCM toolkit is composed of a set of library functions, a database schema, and a Web portal that collects and filters available job monitoring information from various resources and presents it to users in a user-centric rather than an administrative-centric view. The goal is to create a set of tools that can be used to augment grid job scheduling systems, meta-schedulers, applications, and script sets in order to provide the UCM information. The system provides various levels of an application programming interface that is useful throughout the Grid environment and at the application level for logging messages, which are combined with the other user-centric monitoring information in an abstracted “data store”. A planned monitoring portal will also dynamically present the information to users in their web browser in a secure manner, and is easily integrated into any JSR-compliant portal deployment that a VO might employ. The UCM is meant to be flexible and modular in the ways it can be adopted, giving the VO many choices to build a solution that works for them, with special attention to smaller VOs that do not have the resources to implement home-grown solutions.

  7. Grid workflow job execution service 'Pilot'

    Science.gov (United States)

    Shamardin, Lev; Kryukov, Alexander; Demichev, Andrey; Ilyin, Vyacheslav

    2011-12-01

    'Pilot' is a grid job execution service for workflow jobs. The main goal of the service is to automate multi-stage computations, since they can be expressed as simple workflows. Each job is a directed acyclic graph of tasks, and each task is an execution of something on a grid resource (or 'computing element'). Tasks may be submitted to any WS-GRAM (Globus Toolkit 4) service. The target resources for task execution are selected by the Pilot service from the set of available resources which match the specific requirements of the task and/or job definition. Some simple conditional execution logic is also provided. The 'Pilot' service is built on REST concepts and provides a simple API through authenticated HTTPS. The service is deployed and used in production in the Russian national grid project GridNNN.
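
    As a rough illustration of how a workflow job expressed as a directed acyclic graph of tasks might be posted to such a REST service over authenticated HTTPS, the sketch below sends a JSON job description with an X.509 client certificate. The endpoint URL, JSON field names and certificate paths are assumptions made for the example; they are not the actual Pilot API.

    ```python
    import json
    import requests  # third-party HTTP client

    PILOT_URL = "https://pilot.example.org/api/jobs"   # hypothetical endpoint

    # A job as a directed acyclic graph: each task runs on a computing element,
    # and "depends_on" encodes the edges between tasks.
    job = {
        "name": "two-stage-simulation",
        "tasks": [
            {"id": "generate", "executable": "/bin/gen.sh", "depends_on": []},
            {"id": "analyse",  "executable": "/bin/ana.sh", "depends_on": ["generate"]},
        ],
        "requirements": {"os": "ScientificLinux"},
    }

    resp = requests.post(
        PILOT_URL,
        data=json.dumps(job),
        headers={"Content-Type": "application/json"},
        cert=("/tmp/usercert.pem", "/tmp/userkey.pem"),  # X.509 client authentication
        verify="/etc/grid-security/certificates",        # CA certificates directory
    )
    resp.raise_for_status()
    print("job submitted, id:", resp.json().get("id"))
    ```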

  8. Grid workflow job execution service 'Pilot'

    International Nuclear Information System (INIS)

    Shamardin, Lev; Kryukov, Alexander; Demichev, Andrey; Ilyin, Vyacheslav

    2011-01-01

    'Pilot' is a grid job execution service for workflow jobs. The main goal of the service is to automate multi-stage computations, since they can be expressed as simple workflows. Each job is a directed acyclic graph of tasks, and each task is an execution of something on a grid resource (or 'computing element'). Tasks may be submitted to any WS-GRAM (Globus Toolkit 4) service. The target resources for task execution are selected by the Pilot service from the set of available resources which match the specific requirements of the task and/or job definition. Some simple conditional execution logic is also provided. The 'Pilot' service is built on REST concepts and provides a simple API through authenticated HTTPS. The service is deployed and used in production in the Russian national grid project GridNNN.

  9. GridCom, Grid Commander: graphical interface for Grid jobs and data management

    International Nuclear Information System (INIS)

    Galaktionov, V.V.

    2011-01-01

    GridCom is a software package that automates access to the facilities of a distributed Grid system (jobs and data). The client part, executed in the form of Java applets, provides Web-interface access to the Grid through standard browsers. The executive part, Lexor (LCG Executor), is started by the user on a UI (User Interface) machine and carries out the Grid operations.

  10. Essential Grid Workflow Monitoring Elements

    Energy Technology Data Exchange (ETDEWEB)

    Gunter, Daniel K.; Jackson, Keith R.; Konerding, David E.; Lee,Jason R.; Tierney, Brian L.

    2005-07-01

    Troubleshooting Grid workflows is difficult. A typical workflow involves a large number of components (networks, middleware, hosts, etc.) that can fail. Even when monitoring data from all these components is accessible, it is hard to tell whether failures and anomalies in these components are related to a given workflow. For the Grid to be truly usable, much of this uncertainty must be eliminated. We propose two new Grid monitoring elements, Grid workflow identifiers and consistent component lifecycle events, that will make Grid troubleshooting easier, and thus make Grids more usable, by simplifying the correlation of Grid monitoring data with a particular Grid workflow.
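
    A minimal sketch of the two proposed elements, assuming an illustrative event format: every component of a workflow tags its monitoring events with one shared workflow identifier and emits a consistent start/end lifecycle pair, so that events from different components can later be correlated by that identifier.

    ```python
    import json
    import socket
    import time
    import uuid

    WORKFLOW_ID = str(uuid.uuid4())   # one identifier shared by every component of the workflow

    def lifecycle_event(component, phase, **extra):
        """Emit a consistent lifecycle event ('start' or 'end') tagged with the workflow id."""
        event = {
            "workflow.id": WORKFLOW_ID,
            "component": component,
            "phase": phase,                 # "start" or "end"
            "host": socket.gethostname(),
            "timestamp": time.time(),
        }
        event.update(extra)
        print(json.dumps(event))            # stand-in for a real monitoring sink

    lifecycle_event("file-transfer", "start", src="siteA", dst="siteB")
    lifecycle_event("file-transfer", "end", status="ok")
    ```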

  11. A framework for job management in the NorduGrid ARC middleware

    DEFF Research Database (Denmark)

    Jensen, Henrik Thostrup; Kleist, Josva; Ryge Leth, Jesper

    2005-01-01

    This paper presents a framework for managing jobs in the NorduGrid ARC middleware. The system introduces a layer between the user and the grid, and acts as a proxy for the user. Jobs are continuously monitored and the system reacts to changes in their status by invoking plug-ins to handle a certain job status. Unlike other job management systems, ours runs on the client side, under the control of the user. This eliminates the need for the user to share a proxy credential, which is needed to control jobs. Furthermore, the system can be extended by the user, as it is designed as a framework.

  12. A framework for job management in the NorduGrid ARC middleware

    DEFF Research Database (Denmark)

    Jensen, Henrik Thostrup; Kleist, Josva; Ryge Leth, Jesper

    2005-01-01

    This paper presents a framework for managing jobs in the NorduGrid ARC middleware. The system introduces a layer between the user and the grid, and acts as a proxy for the user. Jobs are continuously monitored and the system reacts to changes in their status by invoking plug-ins to handle a certain job status. Unlike other job management systems, ours runs on the client side, under the control of the user. This eliminates the need for the user to share a proxy credential, which is needed to control jobs. Furthermore, the system can be extended by the user, as it is designed as a framework.

  13. Secondary emission monitor (SEM) grids.

    CERN Multimedia

    Patrice Loïez

    2002-01-01

    A great variety of Secondary Emission Monitors (SEM) are used all over the PS Complex. At other accelerators they are also called wire-grids, harps, etc. They are used to measure beam density profiles (from which beam size and emittance can be derived) in single-pass locations (not on circulating beams). Top left: two individual wire-planes. Top right: a combination of a horizontal and a vertical wire plane. Bottom left: a ribbon grid in its frame, with connecting wires. Bottom right: a SEM-grid with its insertion/retraction mechanism.

  14. ATLAS job monitoring in the Dashboard Framework

    OpenAIRE

    Sargsyan, L; Andreeva, J; Campana, S; Karavakis, E; Kokoszkiewicz, L; Saiz, P; Schovancova, J; Tuckett, D

    2012-01-01

    Monitoring of the large-scale data processing of the ATLAS experiment includes monitoring of production and user analysis jobs. The Experiment Dashboard provides a common job monitoring solution, which is shared by ATLAS and CMS experiments. This includes an accounting portal as well as real-time monitoring. Dashboard job monitoring for ATLAS combines information from PanDA job processing database, Production system database and monitoring information from jobs submitted through GANGA to Work...

  15. 84-KILOMETER RADIOLOGICAL MONITORING GRID

    International Nuclear Information System (INIS)

    L. Roe

    2000-01-01

    The purpose of this calculation is to document the development of a radial grid that is suitable for evaluating the pathways and potential impacts of a release of radioactive materials to the environment within a distance of 84 kilometers (km). The center of the grid represents an approximate location from which a potential release of radioactive materials could originate. The center is located at Nevada State Plane coordinates Northing 765621.5 and Easting 570433.6, which is on the eastern side of Exile Hill at the Yucca Mountain site. The North Portal Pad is located over this point. The grid resulting from this calculation is intended for use primarily in the Radiological Monitoring Program (RadMP). This grid is also suitable for use in Biosphere Modeling and other Yucca Mountain Site Characterization Project (YMP) activities that require the evaluation of data referenced by spatial or geographic coordinates

  16. ATLAS job monitoring in the Dashboard Framework

    CERN Document Server

    Andreeva, J; The ATLAS collaboration; Karavakis, E; Kokoszkiewicz, L; Saiz, P; Sargsyan, L; Schovancova, J; Tuckett, D

    2012-01-01

    Monitoring of the large-scale data processing of the ATLAS experiment includes monitoring of production and user analysis jobs. The Experiment Dashboard provides a common job monitoring solution, which is shared by ATLAS and CMS experiments. This includes an accounting portal as well as real-time monitoring. Dashboard job monitoring for ATLAS combines information from PanDA job processing database, Production system database and monitoring information from jobs submitted through GANGA to Workload Management System (WMS) or local batch systems. Usage of Dashboard-based job monitoring applications will decrease load on the PanDA database and overcome scale limitations in PanDA monitoring caused by the short job rotation cycle in the PanDA database. Aggregation of the task/job metrics from different sources provides complete view of job processing activity in ATLAS scope.

  17. ATLAS job monitoring in the Dashboard Framework

    International Nuclear Information System (INIS)

    Andreeva, J; Campana, S; Karavakis, E; Kokoszkiewicz, L; Saiz, P; Tuckett, D; Sargsyan, L; Schovancova, J

    2012-01-01

    Monitoring of the large-scale data processing of the ATLAS experiment includes monitoring of production and user analysis jobs. The Experiment Dashboard provides a common job monitoring solution, which is shared by ATLAS and CMS experiments. This includes an accounting portal as well as real-time monitoring. Dashboard job monitoring for ATLAS combines information from the PanDA job processing database, Production system database and monitoring information from jobs submitted through GANGA to Workload Management System (WMS) or local batch systems. Usage of Dashboard-based job monitoring applications will decrease load on the PanDA database and overcome scale limitations in PanDA monitoring caused by the short job rotation cycle in the PanDA database. Aggregation of the task/job metrics from different sources provides complete view of job processing activity in ATLAS scope.

  18. Monitoring in a grid cluster

    International Nuclear Information System (INIS)

    Crooks, David; Mitchell, Mark; Roy, Gareth; Skipsey, Samuel Cadellin; Britton, David; Purdie, Stuart

    2014-01-01

    The monitoring of a grid cluster (or of any piece of reasonably scaled IT infrastructure) is a key element in the robust and consistent running of that site. There are several factors which are important to the selection of a useful monitoring framework, which include ease of use, reliability, data input and output. It is critical that data can be drawn from different instrumentation packages and collected in the framework to allow for a uniform view of the running of a site. It is also very useful to allow different views and transformations of this data to allow its manipulation for different purposes, perhaps unknown at the initial time of installation. In this context, we present the findings of an investigation of the Graphite monitoring framework and its use at the ScotGrid Glasgow site. In particular, we examine the messaging system used by the framework and means to extract data from different tools, including the existing framework Ganglia which is in use at many sites, in addition to adapting and parsing data streams from external monitoring frameworks and websites.
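
    Graphite's plaintext ingestion interface accepts lines of the form 'metric.path value timestamp' on TCP port 2003 (the carbon-cache listener), which is what makes it straightforward to feed it values parsed from other tools such as Ganglia. A minimal sketch, with a placeholder host name and metric path:

    ```python
    import socket
    import time

    CARBON_HOST = "graphite.example.org"   # placeholder for the site's carbon-cache host
    CARBON_PORT = 2003                      # Graphite plaintext protocol port

    def send_metric(path, value, timestamp=None):
        """Send one sample to Graphite using the plaintext protocol:
        '<metric path> <value> <unix timestamp>\n'."""
        timestamp = int(timestamp if timestamp is not None else time.time())
        line = "%s %s %d\n" % (path, value, timestamp)
        with socket.create_connection((CARBON_HOST, CARBON_PORT), timeout=5) as sock:
            sock.sendall(line.encode("ascii"))

    # e.g. a value parsed from Ganglia or another external monitoring framework
    send_metric("scotgrid.glasgow.cluster.running_jobs", 1234)
    ```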

  19. Job execution in virtualized runtime environments in grid

    International Nuclear Information System (INIS)

    Shamardin, Lev; Demichev, Andrey; Gorbunov, Ilya; Ilyin, Slava; Kryukov, Alexander

    2010-01-01

    Grid systems are used for calculations and data processing in various applied areas such as biomedicine, nanotechnology and materials science, cosmophysics and high energy physics, as well as in a number of industrial and commercial areas. The traditional method of executing jobs in a grid is to run them directly on the cluster nodes. This restricts the choice of operational environment to the operating system of the node, does not allow resource-sharing policies or job isolation to be enforced, and cannot guarantee a minimal level of available system resources. We propose a new approach to running jobs on the cluster nodes in which each grid job runs in its own virtual environment. This makes it possible to use different operating systems for different jobs on the same cluster nodes, provides better isolation between running jobs and allows resource-sharing policies to be enforced. The proposed approach was implemented in the framework of the gLite middleware of the EGEE/WLCG project and was successfully tested at SINP MSU. The implementation is transparent to the grid user and allows binaries compiled for various operating systems to be submitted through exactly the same gLite interface. Virtual machine images with the standard gLite worker node software and a sample MS Windows execution environment were created.

  20. Jobs masonry in LHCb with elastic Grid Jobs

    CERN Document Server

    Stagni, F

    2015-01-01

    In any distributed computing infrastructure, a job is normally forbidden to run for an indefinite amount of time. This limitation is implemented using different technologies, the most common one being the CPU time limit enforced by batch queues. It is therefore important to have a good estimate of how much CPU work a job will require: otherwise, it might be killed by the batch system, or by whatever system is controlling the jobs' execution. In many modern interwares, the jobs are actually executed by pilot jobs, which can use the whole available time by running multiple consecutive jobs. If at some point the available time in a pilot is too short for the execution of any job, it has to be released, even though it could have been used efficiently by a shorter job. Within LHCbDIRAC, the LHCb extension of the DIRAC interware, we developed a simple way to fully exploit the computing capabilities available to a pilot, even for resources with limited time capabilities, by adding elasticity to production MonteCarlo (MC) si...

  1. Job Flow Distribution and Ranked Jobs Scheduling in Grid Virtual Organizations

    CERN Document Server

    Toporkov, Victor; Tselishchev, Alexey; Yemelyanov, Dmitry; Potekhin, Petr

    2015-01-01

    In this work, we consider the problems of job flow distribution and ranked job framework forming within a model of cycle scheduling in Grid virtual organizations. The problem of job flow distribution is solved in terms of the compatibility of jobs and computing resource domains. A coefficient estimating such compatibility is introduced and studied experimentally. Two distribution strategies are suggested. Job framework forming is justified with quality-of-service indicators such as the average job execution time, the number of required scheduling cycles, and the number of job execution declines. Two methods for job selection and scheduling are proposed and compared: the first one is based on a knapsack problem solution, while the second one utilizes the aforementioned compatibility coefficient. Along with these methods, we present experimental results demonstrating the efficiency of the proposed approaches and compare them with random job selection.
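
    The first selection method mentioned above rests on a knapsack problem solution: pick, from the job batch, the subset that maximises total value (for example a priority or compatibility score) without exceeding the resource budget of one scheduling cycle. The sketch below is a generic 0/1 knapsack solved by dynamic programming, not the authors' implementation; job names, costs and values are illustrative.

    ```python
    def select_jobs(jobs, budget):
        """0/1 knapsack: jobs is a list of (name, cost, value) with integer costs,
        budget is the total resource (e.g. CPU-slot-hours) available in one cycle.
        Returns the chosen job names maximising total value."""
        n = len(jobs)
        best = [[0] * (budget + 1) for _ in range(n + 1)]
        for i, (_, cost, value) in enumerate(jobs, start=1):
            for b in range(budget + 1):
                best[i][b] = best[i - 1][b]
                if cost <= b:
                    best[i][b] = max(best[i][b], best[i - 1][b - cost] + value)
        # backtrack to recover the selected jobs
        chosen, b = [], budget
        for i in range(n, 0, -1):
            if best[i][b] != best[i - 1][b]:
                name, cost, _ = jobs[i - 1]
                chosen.append(name)
                b -= cost
        return chosen[::-1]

    print(select_jobs([("j1", 4, 10), ("j2", 3, 7), ("j3", 5, 8)], budget=8))
    ```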

  2. Automated Grid Monitoring for LHCb through HammerCloud

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    The HammerCloud system is used by CERN IT to monitor the status of the Worldwide LHC Computing Grid (WLCG). HammerCloud automatically submits jobs to WLCG computing resources, closely replicating the workflow of Grid users (e.g. physicists analyzing data). This allows computation nodes and storage resources to be monitored, software to be tested (somewhat like continuous integration), and new sites to be stress tested with a heavy job load before commissioning. The HammerCloud system has been in use for ATLAS and CMS experiments for about five years. This summer's work involved porting the HammerCloud suite of tools to the LHCb experiment. The HammerCloud software runs functional tests and provides data visualizations. HammerCloud's LHCb variant is written in Python, using the Django web framework and Ganga/DIRAC for job management.

  3. A Configurable Job Submission and Scheduling System for the Grid

    OpenAIRE

    Kasarkod, Jeevak

    2003-01-01

    Grid computing provides the necessary infrastructure to pool together diverse and distributed resources interconnected by networks to provide a unified virtual computing resource view to the user. One of the important responsibilities of the grid software is resource management and techniques to allow the user to make optimal use of the resources for executing applications. In addition to the goals of minimizing job completion time and achieving good throughput there are other minimum require...

  4. Smart Grid Cybersecurity: Job Performance Model Report

    Energy Technology Data Exchange (ETDEWEB)

    O'Neil, Lori Ross; Assante, Michael; Tobey, David

    2012-08-01

    This is the project report to DOE OE-30 on the completion of Phase 1 of a three-phase project. The report outlines the work done to develop a smart grid cybersecurity certification. This work is being done with the subcontractor NBISE.

  5. Data location-aware job scheduling in the grid. Application to the GridWay metascheduler

    International Nuclear Information System (INIS)

    Delgado Peris, Antonio; Hernandez, Jose; Huedo, Eduardo; Llorente, Ignacio M

    2010-01-01

    Grid infrastructures nowadays constitute the core of the computing facilities of the biggest LHC experiments. These experiments produce and manage petabytes of data per year and run thousands of computing jobs every day to process that data. It is the duty of metaschedulers to allocate the tasks to the most appropriate resources at the proper time. Our work reviews the policies that have been proposed for the scheduling of grid jobs in the context of very data-intensive applications. We indicate some of the practical problems that such models will face and describe what we consider essential characteristics of an optimum scheduling system: the aim to minimise not only job turnaround time but also data replication, flexibility to support different virtual organisation requirements, and the capability to coordinate the tasks of data placement and job allocation while keeping their execution decoupled. These ideas have guided the development of an enhanced prototype for GridWay, a general-purpose metascheduler, part of the Globus Toolkit and a member of EGEE's RESPECT program. GridWay's current scheduling algorithm is unaware of data location. Our prototype makes it possible for job requests to set data needs not only as absolute requirements but also as functions for resource ranking. As our tests show, this makes it more flexible than currently used resource brokers in implementing different data-aware scheduling algorithms.
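
    The idea that data needs can act not only as hard requirements but also as ranking functions can be sketched as follows: resources already holding a larger fraction of the job's input data rank higher, here weighted against queue length. The field names and the weight are invented for illustration and are not GridWay's actual rank expression.

    ```python
    def rank_resource(resource, required_files, data_weight=100.0):
        """Score a candidate resource for a data-intensive job.

        resource: dict with the files already present at its close storage
                  element and its current queue length (illustrative fields).
        required_files: set of logical file names the job needs as input.
        """
        if not required_files:
            data_fraction = 1.0
        else:
            local = required_files & set(resource["local_files"])
            data_fraction = len(local) / len(required_files)
        # Higher is better: reward data locality, penalise long queues.
        return data_weight * data_fraction - resource["queued_jobs"]

    candidates = [
        {"name": "ce-A", "local_files": ["f1", "f2"], "queued_jobs": 40},
        {"name": "ce-B", "local_files": ["f1"], "queued_jobs": 2},
    ]
    needed = {"f1", "f2"}
    best = max(candidates, key=lambda r: rank_resource(r, needed))
    print("best resource:", best["name"])
    ```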

  6. Biomedical applications on the GRID efficient management of parallel jobs

    CERN Document Server

    Moscicki, Jakub T; Lee Hurng Chun; Lin, S C; Pia, Maria Grazia

    2004-01-01

    Distributed computing based on the Master-Worker and PULL interaction model is applicable to a number of applications in high energy physics, medical physics and bio-informatics. We demonstrate a realistic medical physics use case of a dosimetric system for brachytherapy using distributed Grid resources. We present efficient techniques for running parallel jobs in the case of BLAST, a gene-sequencing application, as well as for Monte Carlo simulation based on Geant4. We present a strategy for improving the runtime performance and robustness of the jobs, as well as for minimizing the development time needed to migrate the applications to a distributed environment.

  7. Job system generation in grid taking into account user preferences

    Directory of Open Access Journals (Sweden)

    D. M. Yemelyanov

    2016-01-01

    Distributed computing environments like Grid are characterized by heterogeneity, low cohesion and a dynamic structure of computing nodes. This is why the task of resource scheduling in such environments is complex. Different approaches to job scheduling in grid exist; some of them use economic principles. Economic approaches to scheduling have shown their efficiency. One such approach is the cyclic scheduling scheme considered in this paper. The cyclic scheduling scheme takes into account the preferences of computing environment users by means of an optimization criterion, which is included in the resource request. Besides, the scheme works cyclically by scheduling a certain job batch at each scheduling step, so there is a preliminary step of job batch generation. The purpose of this study was to estimate the influence of the job batch structure, with respect to the user criterion, on the degree of user satisfaction. In other words, we had to find the best way to form the batch in relation to the user optimization criterion, for example whether it is more efficient to form the batch from jobs with the same criterion value or with different criterion values. We also wanted to find the combination of criterion values which would give the most efficient scheduling results. To achieve this purpose, an experiment in a simulation environment was conducted. The experiment consisted of scheduling job batches with different values of the user criterion, the other parameters of the resource request and the characteristics of the computing environment being the same. Three job batch generation strategies were considered. In the first strategy the batch consisted of jobs with the same criterion value. In the second strategy the batch consisted of jobs with all the considered criteria equally likely. The third strategy was similar to the second one, but only two certain criteria were considered. The third strategy was considered in order to find the most

  8. Monitoring system for the GRID Monte Carlo mass production in the H1 experiment at DESY

    International Nuclear Information System (INIS)

    Bystritskaya, Elena; Fomenko, Alexander; Gogitidze, Nelly; Lobodzinski, Bogdan

    2014-01-01

    The H1 Virtual Organization (VO), as one of the small VOs, employs most components of the EMI or gLite middleware. In this framework, a monitoring system was designed for the H1 experiment to identify and recognize within the GRID the resources best suited for the execution of CPU-time-consuming Monte Carlo (MC) simulation tasks (jobs). Monitored resources are Computing Elements (CEs), Storage Elements (SEs), WMS servers (WMSs), the CernVM File System (CVMFS) available to the VO HONE, and local GRID User Interfaces (UIs). The general principle of monitoring GRID elements is based on the execution of short test jobs on different CE queues, using submission through various WMSs and directly to the CREAM-CEs as well. Real H1 MC production jobs with a small number of events are used to perform the tests. Test jobs are periodically submitted into GRID queues, the status of these jobs is checked, output files of completed jobs are retrieved, the result of each job is analyzed and the waiting time and run time are derived. Using this information, the status of the GRID elements is estimated and the most suitable ones are included in the automatically generated configuration files for use in the H1 MC production. The monitoring system allows for the identification of problems at the GRID sites and reacts to them promptly (for example by sending GGUS (Global Grid User Support) trouble tickets). The system can easily be adapted to identify the optimal resources for tasks other than MC production, simply by changing to the relevant test jobs. The monitoring system is written mostly in Python and Perl with a few shell scripts. In addition to the test monitoring system we use information from real production jobs to monitor the availability and quality of the GRID resources. The monitoring tools register the number of job resubmissions, the percentage of failed and finished jobs relative to all jobs on the CEs and determine the average values of waiting and running time for the
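
    The probing principle described above (periodically submit a short test job to each queue, follow its status, and derive the waiting and running times used to rate the queue) can be summarised with a small loop. The submission and status functions below are stand-ins for the real middleware calls, so this is only a schematic sketch.

    ```python
    import random
    import time

    def submit_test_job(queue):
        """Stand-in for real middleware submission; returns an opaque job id."""
        return "%s#%d" % (queue, random.randint(1, 10**6))

    def job_state(job_id):
        """Stand-in for a middleware status query (WAITING/RUNNING/DONE/FAILED)."""
        return random.choice(["WAITING", "RUNNING", "DONE"])

    def probe_queue(queue, poll_seconds=1):
        """Submit one short test job, follow it to completion and derive the
        waiting time and run time used to rate the queue."""
        submitted = time.time()
        job_id = submit_test_job(queue)
        started = None
        while True:
            state = job_state(job_id)
            now = time.time()
            if state == "RUNNING" and started is None:
                started = now
            if state in ("DONE", "FAILED"):
                start = started if started is not None else now
                return {"queue": queue, "ok": state == "DONE",
                        "waiting_time": start - submitted, "run_time": now - start}
            time.sleep(poll_seconds)

    # Queues whose probes succeed with acceptable times would go into the
    # automatically generated configuration used for MC production.
    print(probe_queue("ce1.example.org/queue-long"))
    ```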

  9. Application of remote debugging techniques in user-centric job monitoring

    International Nuclear Information System (INIS)

    Dos Santos, T; Mättig, P; Harenberg, T; Volkmer, F; Beermann, T; Kalinin, S; Ahrens, R; Wulff, N

    2012-01-01

    With the Job Execution Monitor, a user-centric job monitoring software developed at the University of Wuppertal and integrated into the job brokerage systems of the WLCG, job progress and grid worker node health can be supervised in real time. Imminent error conditions can thus be detected early by the submitter and countermeasures can be taken. Grid site admins can access aggregated data of all monitored jobs to infer the site status and to detect job misbehaviour. To remove the last 'blind spot' from this monitoring, a remote debugging technique based on the GNU C compiler suite was developed and integrated into the software; its design concept and architecture are described in this paper and its application is discussed.

  10. A Mediated Definite Delegation Model allowing for Certified Grid Job Submission

    CERN Document Server

    Schreiner, Steffen; Grigoras, Costin; Litmaath, Maarten

    2012-01-01

    Grid computing infrastructures need to provide traceability and accounting of their users' activity and protection against misuse and privilege escalation. A central aspect of multi-user Grid job environments is the necessary delegation of privileges in the course of a job submission. With respect to these generic requirements this document describes an improved handling of multi-user Grid jobs in the ALICE ("A Large Ion Collider Experiment") Grid Services. A security analysis of the ALICE Grid job model is presented with derived security objectives, followed by a discussion of existing approaches of unrestricted delegation based on X.509 proxy certificates and the Grid middleware gLExec. Unrestricted delegation has severe security consequences and limitations, most importantly allowing for identity theft and forgery of delegated assignments. These limitations are discussed and formulated, both in general and with respect to an adoption in line with multi-user Grid jobs. Based on the architecture of the ALICE...

  11. Minimizing draining waste through extending the lifetime of pilot jobs in Grid environments

    International Nuclear Information System (INIS)

    Sfiligoi, I; Martin, T; Würthwein, F; Bockelman, B P; Bradley, D C

    2014-01-01

    The computing landscape is moving at an accelerated pace to many-core computing. Nowadays, it is not unusual to get 32 cores on a single physical node. As a consequence, there is increased pressure in the pilot systems domain to move from purely single-core scheduling and allow multi-core jobs as well. In order to allow for a gradual transition from single-core to multi-core user jobs, it is envisioned that pilot jobs will have to handle both kinds of user jobs at the same time, by requesting several cores at a time from Grid providers and then partitioning them between the user jobs at runtime. Unfortunately, the current Grid ecosystem only allows for relatively short lifetime of pilot jobs, requiring frequent draining, with the relative waste of compute resources due to varying lifetimes of the user jobs. Significantly extending the lifetime of pilot jobs is thus highly desirable, but must come without any adverse effects for the Grid resource providers. In this paper we present a mechanism, based on communication between the pilot jobs and the Grid provider, that allows for pilot jobs to run for extended periods of time when there are available resources, but also allows the Grid provider to reclaim the resources in a short amount of time when needed. We also present the experience of running a prototype system using the above mechanism on a few US-based Grid sites.
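
    The mechanism rests on a simple contract: before starting the next user payload, the pilot asks the resource provider whether it should retire, so the provider can reclaim the slot quickly without killing a running payload. A schematic loop under assumed interfaces (the retirement check and the payload queue are placeholders, not the actual pilot system's implementation):

    ```python
    import time

    def provider_requests_retirement():
        """Placeholder for the pilot-to-provider query, e.g. checking a flag file or
        a local service controlled by the Grid provider."""
        return False

    def next_payload():
        """Placeholder: fetch the next user job from the pilot's queue, or None."""
        return None

    def run(payload):
        """Placeholder: execute one user job to completion."""
        pass

    def pilot_loop(max_lifetime_hours=96):
        deadline = time.time() + max_lifetime_hours * 3600
        while time.time() < deadline:
            if provider_requests_retirement():
                break                      # give the slot back quickly when asked
            payload = next_payload()
            if payload is None:
                break                      # nothing left to run; release the slot
            run(payload)                   # only whole payloads are ever started

    pilot_loop()
    ```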

  12. Monitoring And Analyzing Distributed Cluster Performance And Statistics Of Atlas Job Flow

    CERN Document Server

    Ramprakash, S

    2005-01-01

    The ATLAS experiment is a High Energy Physics experiment that utilizes the services of Grid3 now migrating to the Open Science Grid (OSG). This thesis provides monitoring and analysis of performance and statistical data from individual distributed clusters that combine to form the ATLAS Grid and will ultimately be used to make scheduling decisions on this Grid. The system developed in this thesis uses a layered architecture such that predicted future developments or changes brought to the existing Grid infrastructure can easily utilize this work with minimum or no changes. The starting point of the system is based on the existing scheduling that is being done manually for ATLAS job flow. We have provided additional functionality based on the requirements of the High Energy Physics ATLAS team of physicists at UTA. The system developed in this thesis has successfully monitored and analyzed distributed cluster performance at three sites and is waiting for access to monitor data from three more sites. (Abstract s...

  13. Exploring virtualisation tools with a new virtualisation provisioning method to test dynamic grid environments for ALICE grid jobs over ARC grid middleware

    International Nuclear Information System (INIS)

    Wagner, B; Kileng, B

    2014-01-01

    The Nordic Tier-1 centre for LHC is distributed over several computing centres. It uses ARC as the internal computing grid middleware. ALICE uses its own grid middleware AliEn to distribute jobs and the necessary software application stack. To make use of most of the AliEn infrastructure and software deployment methods for running ALICE grid jobs on ARC, we are investigating different possible virtualisation technologies. For this a testbed and possible framework for bridging different middleware systems is under development. It allows us to test a variety of virtualisation methods and software deployment technologies in the form of different virtual machines.

  14. Job search monitoring and assistance for the unemployed

    OpenAIRE

    Marinescu, Ioana E.

    2017-01-01

    In many countries, reducing unemployment is among the most important policy goals. In this context, monitoring job search by the unemployed and providing job search assistance can play a crucial role. However, more and more stringent monitoring and sanctions are not a panacea. Policymakers must consider possible downsides, such as unemployed people accepting less stable and lower-paying jobs. Tying “moderate” monitoring to job search assistance may be the essential ingredient to make this app...

  15. Automated Grid Monitoring for the LHCb Experiment Through HammerCloud

    CERN Document Server

    Dice, Bradley

    2015-01-01

    The HammerCloud system is used by CERN IT to monitor the status of the Worldwide LHC Computing Grid (WLCG). HammerCloud automatically submits jobs to WLCG computing resources, closely replicating the workflow of Grid users (e.g. physicists analyzing data). This allows computation nodes and storage resources to be monitored, software to be tested (somewhat like continuous integration), and new sites to be stress tested with a heavy job load before commissioning. The HammerCloud system has been in use for ATLAS and CMS experiments for about five years. This summer's work involved porting the HammerCloud suite of tools to the LHCb experiment. The HammerCloud software runs functional tests and provides data visualizations. HammerCloud's LHCb variant is written in Python, using the Django web framework and Ganga/DIRAC for job management.

  16. Low-cost wireless voltage & current grid monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Hines, Jacqueline [SenSanna Inc., Arnold, MD (United States)]

    2016-12-31

    This report describes the development and demonstration of a novel low-cost wireless power distribution line monitoring system. This system measures voltage, current, and relative phase on power lines of up to the 35 kV class. The line units operate without any batteries, and without harvesting energy from the power line. Thus, data on grid condition is provided even in outage conditions, when line current is zero. This enhances worker safety by detecting the presence of voltage and current that may appear from stray sources on nominally isolated lines. Availability of low-cost power line monitoring systems will enable widespread monitoring of the distribution grid. Real-time data on local grid operating conditions will enable grid operators to optimize grid operation, implement grid automation, and understand the impact of solar and other distributed sources on grid stability. The latter will enable utilities to implement energy storage and control systems that allow greater penetration of solar into the grid.

  17. The Grid[Way] Job Template Manager, a tool for parameter sweeping

    Science.gov (United States)

    Lorca, Alejandro; Huedo, Eduardo; Llorente, Ignacio M.

    2011-04-01

    Parameter sweeping is a widely used algorithmic technique in computational science. It is specially suited for high-throughput computing since the jobs evaluating the parameter space are loosely coupled or independent. A tool that integrates the modeling of a parameter study with the control of jobs in a distributed architecture is presented. The main task is to facilitate the creation and deletion of job templates, which are the elements describing the jobs to be run. Extra functionality relies upon the GridWay Metascheduler, acting as the middleware layer for job submission and control. It supports interesting features like multi-dimensional sweeping space, wildcarding of parameters, functional evaluation of ranges, value-skipping and job template automatic indexation. The use of this tool increases the reliability of the parameter sweep study thanks to the systematic bookkeeping of job templates and respective job statuses. Furthermore, it simplifies the porting of the target application to the grid, reducing the required amount of time and effort.

    Program summary:
    Program title: Grid[Way] Job Template Manager (version 1.0)
    Catalogue identifier: AEIE_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIE_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Apache license 2.0
    No. of lines in distributed program, including test data, etc.: 3545
    No. of bytes in distributed program, including test data, etc.: 126 879
    Distribution format: tar.gz
    Programming language: Perl 5.8.5 and above
    Computer: Any (tested on PC x86 and x86_64)
    Operating system: Unix, GNU/Linux (tested on Ubuntu 9.04, Scientific Linux 4.7, centOS 5.4), Mac OS X (tested on Snow Leopard 10.6)
    RAM: 10 MB
    Classification: 6.5
    External routines: The GridWay Metascheduler [1]
    Nature of problem: To parameterize and manage an application running on a grid or cluster.
    Solution method: Generation of job templates as a cross product of
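
    The core operation, generating one job template per point of the multi-dimensional sweep (the cross product of the per-parameter value lists), can be sketched in a few lines of Python. The template text, parameter names and file naming are illustrative only; the actual tool is written in Perl and manages GridWay job templates.

    ```python
    import itertools
    import pathlib

    # One list of values per swept parameter; the sweep is their cross product.
    parameters = {
        "BEAM_ENERGY": [3500, 4000, 6500],
        "SEED": range(1, 4),
    }

    TEMPLATE = "EXECUTABLE=sim.sh\nARGUMENTS={BEAM_ENERGY} {SEED}\n"

    outdir = pathlib.Path("templates")
    outdir.mkdir(exist_ok=True)

    names = sorted(parameters)
    for index, values in enumerate(itertools.product(*(parameters[n] for n in names))):
        point = dict(zip(names, values))
        path = outdir / ("job_%04d.jt" % index)        # automatically indexed job template
        path.write_text(TEMPLATE.format(**point))
    print("generated", index + 1, "job templates in", outdir)
    ```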

  18. Dashboard Task Monitor for Managing ATLAS User Analysis on the Grid

    Science.gov (United States)

    Sargsyan, L.; Andreeva, J.; Jha, M.; Karavakis, E.; Kokoszkiewicz, L.; Saiz, P.; Schovancova, J.; Tuckett, D.; Atlas Collaboration

    2014-06-01

    The organization of the distributed user analysis on the Worldwide LHC Computing Grid (WLCG) infrastructure is one of the most challenging tasks among the computing activities at the Large Hadron Collider. The Experiment Dashboard offers a solution that not only monitors but also manages (kill, resubmit) user tasks and jobs via a web interface. The ATLAS Dashboard Task Monitor provides analysis users with a tool that is independent of the operating system and Grid environment. This contribution describes the functionality of the application and its implementation details, in particular authentication, authorization and audit of the management operations.

  19. Dashboard task monitor for managing ATLAS user analysis on the grid

    International Nuclear Information System (INIS)

    Sargsyan, L; Andreeva, J; Karavakis, E; Saiz, P; Tuckett, D; Jha, M; Kokoszkiewicz, L; Schovancova, J

    2014-01-01

    The organization of the distributed user analysis on the Worldwide LHC Computing Grid (WLCG) infrastructure is one of the most challenging tasks among the computing activities at the Large Hadron Collider. The Experiment Dashboard offers a solution that not only monitors but also manages (kill, resubmit) user tasks and jobs via a web interface. The ATLAS Dashboard Task Monitor provides analysis users with a tool that is independent of the operating system and Grid environment. This contribution describes the functionality of the application and its implementation details, in particular authentication, authorization and audit of the management operations.

  20. Simple LDAP Schemas for Grid Monitoring

    Science.gov (United States)

    Smith, Warren; Gunter, Dan; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The purpose of this document is to provide an initial definition of the data we need in a directory service or database so that we can implement a performance monitoring testbed. To begin with, this document describes how to represent producers of events and event schemas. The representation of producers is simple and does not contain information such as who has access to the events and what protocols can be used to access the events. In the future, we will define how to describe consumers of events and add details to our representations. A popular choice for a directory service or database for grid computing is a distributed directory service that is accessed using the Lightweight Directory Access Protocol (LDAP). This document uses LDAP terminology, schemas, and formats to represent the directory service schemas.

  1. Efficient Monitoring of CRAB Jobs at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Silva, J. M.D. [Sao Paulo, IFT; Balcas, J. [Caltech; Belforte, S. [INFN, Trieste; Ciangottini, D. [INFN, Perugia; Mascheroni, M. [Fermilab; Rupeika, E. A. [Vilnius U.; Ivanov, T. T. [Sofiya U.; Hernandez, J. M. [Madrid, CIEMAT; Vaandering, E. [Fermilab

    2017-11-22

    CRAB is a tool used for distributed analysis of CMS data. Users can submit sets of jobs with similar requirements (tasks) with a single request. CRAB uses a client-server architecture, where a lightweight client, a server, and ancillary services work together and are maintained by CMS operators at CERN. As with most complex software, good monitoring tools are crucial for efficient use and long-term maintainability. This work gives an overview of the monitoring tools developed to ensure the CRAB server and infrastructure are functional, help operators debug user problems, and minimize overhead and operating cost. This work also illustrates the design choices and gives a report on our experience with the tools we developed and the external ones we used.

  2. Efficient monitoring of CRAB jobs at CMS

    Science.gov (United States)

    Silva, J. M. D.; Balcas, J.; Belforte, S.; Ciangottini, D.; Mascheroni, M.; Rupeika, E. A.; Ivanov, T. T.; Hernandez, J. M.; Vaandering, E.

    2017-10-01

    CRAB is a tool used for distributed analysis of CMS data. Users can submit sets of jobs with similar requirements (tasks) with a single request. CRAB uses a client-server architecture, where a lightweight client, a server, and ancillary services work together and are maintained by CMS operators at CERN. As with most complex software, good monitoring tools are crucial for efficient use and long-term maintainability. This work gives an overview of the monitoring tools developed to ensure the CRAB server and infrastructure are functional, help operators debug user problems, and minimize overhead and operating cost. This work also illustrates the design choices and gives a report on our experience with the tools we developed and the external ones we used.

  3. Advances in Monitoring of Grid Services in WLCG

    CERN Document Server

    Casey, J; Neilson, I

    2008-01-01

    During 2006, the Worldwide LHC Computing Grid Project (WLCG) constituted several working groups in the area of fabric and application monitoring with the mandate of improving the reliability and availability of the grid infrastructure through improved monitoring of the grid fabric. This paper discusses the work of one of these groups: the "Grid Service Monitoring Working Group". This group has the aim to evaluate the existing monitoring system and create a coherent architecture that would let the existing system run, while increasing the quality and quantity of monitoring information gathered. We describe the stakeholders in this project, and focus in particular on the needs of the site administrators, which were not well satisfied by existing solutions. Several standards for service metric gathering and grid monitoring data exchange, and the place of each in the architecture will be shown. Finally we will describe the use of a Nagios-based prototype deployment for validation of our ideas, and the progress on...

  4. Processing moldable tasks on the grid: Late job binding with lightweight user-level overlay

    CERN Document Server

    Moscicki, J T; Sloot, P M A; Lamanna, M

    2011-01-01

    Independent observations and everyday user experience indicate that the performance and reliability of large grid infrastructures may suffer from large and unpredictable variations. In this paper we study the impact of the job queuing time on the processing of moldable tasks, which are commonly found in large-scale production grids. We use the mean value and variance of the makespan as the quality-of-service indicators. We develop a general task processing model to provide a quantitative comparison between two models: early and late job binding in a user-level overlay applied to the EGEE Grid infrastructure. We find that the late-binding model effectively defines a transformation of the distribution of the makespan according to the Central Limit Theorem. As demonstrated by Monte Carlo simulations using real job traces, this transformation allows the mean value and variance of the makespan to be substantially reduced. For certain classes of applications task granularity may be adjusted such that a speedup of an order of magnitude or m...
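
    The effect attributed to late binding (tasks are handed to worker agents only once those agents are actually running, so a task's makespan no longer inherits the full spread of grid queuing times) can be reproduced with a toy Monte Carlo. The distributions and parameters below are invented for illustration and are not the real job traces used in the paper.

    ```python
    import random
    import statistics

    random.seed(1)
    N_TASKS, N_RUNS = 200, 500
    TASK_TIME = 60.0                         # seconds of work per task (toy value)

    def queue_delay():
        return random.expovariate(1 / 600.0)  # toy queuing-time distribution, mean 600 s

    def early_binding():
        # each task is bound to its own grid job and waits out its own queue delay
        return max(queue_delay() + TASK_TIME for _ in range(N_TASKS))

    def late_binding(n_agents=50):
        # agents wait in the queue; tasks are pulled by whichever agent is free first
        free_at = sorted(queue_delay() for _ in range(n_agents))
        last_completion = 0.0
        for _ in range(N_TASKS):
            free_at[0] += TASK_TIME            # earliest-free agent takes the next task
            last_completion = max(last_completion, free_at[0])
            free_at.sort()
        return last_completion

    for name, model in (("early", early_binding), ("late", late_binding)):
        runs = [model() for _ in range(N_RUNS)]
        print("%s binding: mean=%.0f s, stdev=%.0f s"
              % (name, statistics.mean(runs), statistics.stdev(runs)))
    ```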

  5. Next steps in the evolution of GridICE: a monitoring tool for grid systems

    International Nuclear Information System (INIS)

    Andreozzi, S; Fattibene, E; Misurelli, G; Aiftimiei, C; Pra, S D; Fantinel, S; Cuscela, G; Donvito, G; Dudhalkar, V; Maggi, G; Pierro, A

    2008-01-01

    GridICE is a monitoring tool for Grid systems that supports the operations and management activities of different types of users: site administrators, VO managers, Grid operators and end-users. The tool is designed to deal with the dynamics, diversity and geographical distribution of the observed resources. It evolves continuously, driven by user requirements. The purpose of this paper is to disseminate the evolution strategy in order to solicit community feedback

  6. GridICE: monitoring the user/application activities on the grid

    International Nuclear Information System (INIS)

    Aiftimiei, C; Pra, S D; Andreozzi, S; Fattibene, E; Misurelli, G; Cuscela, G; Donvito, G; Dudhalkar, V; Maggi, G; Pierro, A; Fantinel, S

    2008-01-01

    The monitoring of grid user activity and application performance is extremely useful for planning resource usage strategies, particularly in cases of complex applications. Large VOs, such as the LHC VOs, do their monitoring by means of dashboards. Other VOs or communities, like for example the BioinfoGRID one, are characterized by a greater diversification of application types, so the effort to provide a dashboard-like monitor is particularly heavy. The main theme of this paper is to show the improvements introduced in GridICE, a web tool built to provide almost complete grid monitoring. These recent improvements allow GridICE to provide new reports on resource usage with details of the VOMS groups, roles and users. By accessing the GridICE web pages, the grid user can get all information that is relevant to keep track of their activity on the grid. In the same way, the activity of a VOMS group can be distinguished from the activity of the entire VO. In this paper we briefly discuss the features and advantages of this approach and, after discussing the requirements, we describe the software solutions, middleware and prerequisites to manage and retrieve the user's credentials

  7. GStat 2.0: Grid Information System Status Monitoring

    OpenAIRE

    Field, L; Huang, J; Tsai, M

    2009-01-01

    Grid Information Systems are mission-critical components in today's production grid infrastructures. They enable users, applications and services to discover which services exist in the infrastructure and further information about the service structure and state. It is therefore important that the information system components themselves are functioning correctly and that the information content is reliable. Grid Status (GStat) is a tool that monitors the structural integrity of the EGEE info...

  8. Processing moldable tasks on the grid: late job binding with lightweight user-level overlay

    NARCIS (Netherlands)

    Mościcki, J.T.; Lamanna, M.; Bubak, M.; Sloot, P.M.A.

    2011-01-01

    Independent observations and everyday user experience indicate that performance and reliability of large grid infrastructures may suffer from large and unpredictable variations. In this paper we study the impact of the job queuing time on processing of moldable tasks which are commonly found in

  9. Data-Driven Approaches for Monitoring of Distribution Grids

    OpenAIRE

    Ferdowsi, Mohsen

    2017-01-01

    Electric power distribution systems are undergoing dramatic changes due to the ever-increasing power generation at the medium and low voltage grids and the large-scale grid integration of electric vehicles. This has led to the operation of distribution systems in a very different way compared to what they were originally designed for. Considering that little monitoring equipment is traditionally embedded in these systems, Distribution System Operators (DSOs) need cost-efficient monitoring met...

  10. Enhanced Operation of Electricity Distribution Grids Through Smart Metering PLC Network Monitoring, Analysis and Grid Conditioning

    Directory of Open Access Journals (Sweden)

    Iker Urrutia

    2013-01-01

    Low Voltage (LV) electricity distribution grid operations can be improved through a combination of new smart metering systems’ capabilities based on real time Power Line Communications (PLC) and LV grid topology mapping. This paper presents two novel contributions. The first one is a new methodology developed for smart metering PLC network monitoring and analysis. It can be used to obtain relevant information from the grid, thus adding value to existing smart metering deployments and facilitating utility operational activities. A second contribution describes grid conditioning used to obtain LV feeder and phase identification of all connected smart electric meters. Real time availability of such information may help utilities with grid planning, fault location and a more accurate point of supply management.

  11. Renewable Energy Jobs. Status, prospects and policies. Biofuels and grid-connected electricity generation

    Energy Technology Data Exchange (ETDEWEB)

    Lucas, H.; Ferroukhi, R. [et al.] [IRENA Policy Advisory Services and Capacity Building Directorate, Abu Dhabi (United Arab Emirates)

    2012-01-15

    Over the past years, interest has grown in the potential for the renewable energy industry to create jobs. Governments are seeking win-win solutions to the dual challenge of high unemployment and climate change. By 2010, USD 51 billion had been pledged to renewables in stimulus packages, and by early 2011 there were 119 countries with some kind of policy target and/or support policy for renewable energy, such as feed-in tariffs, quota obligations, favourable tax treatment and public loans or grants, many of which explicitly target job creation as a policy goal. Policy-makers in many countries are now designing renewable energy policies that aim to create new jobs, build industries and benefit particular geographic areas. But how much do we know for certain about the job creation potential for renewable energy? This working paper aims to provide an overview of current knowledge on five questions: (1) How can jobs in renewable energy be characterised?; (2) How are they shared out across the technology value chain and what skill levels are required?; (3) How many jobs currently exist and where are they in the world?; (4) How many renewable energy jobs could there be in the future?; and (5) What policy frameworks can be used to promote employment benefits from renewable energy? This paper focuses on grid-connected electricity generation technologies and biofuels. Since the employment potential of off-grid applications is large, it will be covered by a forthcoming study by IRENA on job creation in the context of energy access, based on a number of case studies.

  12. Advances in monitoring of grid services in WLCG

    Energy Technology Data Exchange (ETDEWEB)

    Casey, J; Imamagic, E; Neilson, I [European Organization for Nuclear Research (CERN), CH-1211, Geneve 23 (Switzerland); University of Zagreb University Computing Centre (SRCE), Josipa Marohnica 5, 10000 Zagreb (Croatia); European Organization for Nuclear Research (CERN), CH-1211, Geneve 23 (Switzerland)], E-mail: james.casey@cern.ch, E-mail: eimamagi@srce.hr, E-mail: ian.neilson@cern.ch

    2008-07-15

    During 2006, the Worldwide LHC Computing Grid Project (WLCG) constituted several working groups in the area of fabric and application monitoring with the mandate of improving the reliability and availability of the grid infrastructure through improved monitoring of the grid fabric. This paper discusses the work of one of these groups: the 'Grid Service Monitoring Working Group'. This group has the aim to evaluate the existing monitoring system and create a coherent architecture that would let the existing system run, while increasing the quality and quantity of monitoring information gathered. We describe the stakeholders in this project, and focus in particular on the needs of the site administrators, which were not well satisfied by existing solutions. Several standards for service metric gathering and grid monitoring data exchange, and the place of each in the architecture will be shown. Finally we will describe the use of a Nagios-based prototype deployment for validation of our ideas, and the progress on turning this prototype into a production-ready system.

  13. Optimal taxation and welfare benefits with monitoring of job search

    NARCIS (Netherlands)

    Boone, J.; Bovenberg, A.L.

    2013-01-01

    In order to investigate the interaction between tax policy, welfare benefits, the government technology for monitoring and sanctioning inadequate search, workfare, and externalities from work, we incorporate endogenous job search and involuntary unemployment into a model of optimal nonlinear income

  14. Efficient job handling in the GRID short deadline, interactivity, fault tolerance and parallelism

    CERN Document Server

    Moscicki, Jakub

    2006-01-01

    The major GRID infrastructures are designed mainly for batch-oriented computing with coarse-grained jobs and relatively high job turnaround time. However, many practical applications in natural and physical sciences may be easily parallelized and run as a set of smaller tasks which require little or no synchronization and which may be scheduled in a more efficient way. The Distributed Analysis Environment Framework (DIANE) is a Master-Worker execution skeleton for applications, which complements the GRID middleware stack. Automatic failure recovery and task dispatching policies enable an easy customization of the behaviour of the framework in a dynamic and non-reliable computing environment. We demonstrate the experience of using the framework with several diverse real-life applications, including Monte Carlo Simulation, Physics Data Analysis and Biotechnology. The interfacing of existing sequential applications from the point of view of a non-expert user is made easy, also for legacy applications. We analyze th...
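
    The master-worker pattern with automatic failure recovery that DIANE is built around can be illustrated with a small sketch; this is a generic toy dispatch loop, not the DIANE API, and the task function and retry policy are assumptions.

```python
import random
from collections import deque

def run_task(task):
    """Stand-in for a worker executing one task; fails at random to mimic an unreliable node."""
    if random.random() < 0.2:
        raise RuntimeError(f"task {task} failed")
    return task * task

def master(tasks, max_retries=3):
    """Dispatch tasks, re-queueing failures up to max_retries attempts."""
    queue = deque((t, 0) for t in tasks)
    results = {}
    while queue:
        task, attempts = queue.popleft()
        try:
            results[task] = run_task(task)
        except RuntimeError:
            if attempts + 1 < max_retries:
                queue.append((task, attempts + 1))   # automatic failure recovery
            else:
                results[task] = None                 # give up on this task
    return results

print(master(range(10)))
```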

  15. The swiss army knife of job submission tools: grid-control

    Science.gov (United States)

    Stober, F.; Fischer, M.; Schleper, P.; Stadie, H.; Garbers, C.; Lange, J.; Kovalchuk, N.

    2017-10-01

    grid-control is a lightweight and highly portable open source submission tool that supports all common workflows in high energy physics (HEP). It has been used by a sizeable number of HEP analyses to process tasks that sometimes consist of up to 100k jobs. grid-control is built around a powerful plugin and configuration system, that allows users to easily specify all aspects of the desired workflow. Job submission to a wide range of local or remote batch systems or grid middleware is supported. Tasks can be conveniently specified through the parameter space that will be processed, which can consist of any number of variables and data sources with complex dependencies on each other. Dataset information is processed through a configurable pipeline of dataset filters, partition plugins and partition filters. The partition plugins can take the number of files, size of the work units, metadata or combinations thereof into account. All changes to the input datasets or variables are propagated through the processing pipeline and can transparently trigger adjustments to the parameter space and the job submission. While the core functionality is completely experiment independent, full integration with the CMS computing environment is provided by a small set of plugins.
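
    The core idea of expanding a parameter space crossed with dataset partitions into individual jobs can be sketched as follows; the variable names and partition layout are illustrative assumptions, not grid-control's configuration syntax.

```python
from itertools import product

# Hypothetical parameter space: two variables and a partitioned dataset.
variables = {"ENERGY": [7, 13], "TUNE": ["A", "B"]}
dataset_partitions = [("files_0-99", 100), ("files_100-199", 100)]

def expand_parameter_space(variables, partitions):
    """Cross the variable values with the dataset partitions to get one job per point."""
    names = sorted(variables)
    jobs = []
    for values in product(*(variables[n] for n in names)):
        for part_name, n_files in partitions:
            job = dict(zip(names, values))
            job.update({"partition": part_name, "n_files": n_files})
            jobs.append(job)
    return jobs

for i, job in enumerate(expand_parameter_space(variables, dataset_partitions)):
    print(i, job)   # 2 x 2 x 2 = 8 job descriptions
```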

  16. Unified Monitoring Architecture for IT and Grid Services

    Science.gov (United States)

    Aimar, A.; Aguado Corman, A.; Andrade, P.; Belov, S.; Delgado Fernandez, J.; Garrido Bear, B.; Georgiou, M.; Karavakis, E.; Magnoni, L.; Rama Ballesteros, R.; Riahi, H.; Rodriguez Martinez, J.; Saiz, P.; Zolnai, D.

    2017-10-01

    This paper provides a detailed overview of the Unified Monitoring Architecture (UMA) that aims at merging the monitoring of the CERN IT data centres and the WLCG monitoring using common and widely-adopted open source technologies such as Flume, Elasticsearch, Hadoop, Spark, Kibana, Grafana and Zeppelin. It provides insights and details on the lessons learned, explaining the work performed in order to monitor the CERN IT data centres and the WLCG computing activities such as the job processing, data access and transfers, and the status of sites and services.

  17. A monitoring sensor management system for grid environments

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, Brian; Crowley, Brian; Gunter, Dan; Lee, Jason; Thompson, Mary

    2001-06-01

    Large distributed systems, such as computational grids, require a large amount of monitoring data be collected for a variety of tasks, such as fault detection, performance analysis, performance tuning, performance prediction and scheduling. Ensuring that all necessary monitoring is turned on and that the data is being collected can be a very tedious and error-prone task. We have developed an agent-based system to automate the execution of monitoring sensors and the collection of event data.
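
    A minimal sketch of the agent idea described above, i.e. periodically executing a set of sensors and collecting their events; the sensor functions and record fields are invented for illustration and are not the actual sensor management system's interfaces.

```python
import time

# Hypothetical "sensors": callables returning one monitoring event each.
def cpu_sensor():
    return {"sensor": "cpu", "load": 0.42}

def disk_sensor():
    return {"sensor": "disk", "free_gb": 120}

class MonitoringAgent:
    """Runs a set of sensors on a fixed period and collects their events."""
    def __init__(self, sensors, period=1.0):
        self.sensors = sensors
        self.period = period
        self.events = []

    def run(self, cycles=3):
        for _ in range(cycles):
            for sensor in self.sensors:
                try:
                    event = sensor()
                    event["timestamp"] = time.time()
                    self.events.append(event)
                except Exception as exc:            # a failing sensor must not stop the agent
                    self.events.append({"error": str(exc)})
            time.sleep(self.period)

agent = MonitoringAgent([cpu_sensor, disk_sensor], period=0.1)
agent.run()
print(len(agent.events), "events collected")
```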

  18. Ganga: User-friendly Grid job submission and management tool for LHC and beyond

    International Nuclear Information System (INIS)

    Vanderster, D C; Gaidoz, B; Maier, A; Moscicki, J T; Muraru, A; Brochu, F; Cowan, G; Egede, U; Reece, W; Williams, M; Elmsheuser, J; Harrison, K; Slater, M; Tan, C L; Lee, H C; Liko, D; Pajchel, K; Samset, B; Soroko, A

    2010-01-01

    Ganga has been widely used for several years in ATLAS, LHCb and a handful of other communities. Ganga provides a simple yet powerful interface for submitting and managing jobs on a variety of computing backends. The tool helps users configure applications and keep track of their work. With the major release of version 5 in summer 2008, Ganga's main user-friendly features have been strengthened. Examples include a new configuration interface, enhanced support for job collections, bulk operations and easier access to subjobs. In addition to the traditional batch and Grid backends such as Condor, LSF, PBS and gLite/EDG, point-to-point job execution via ssh on remote machines is now supported. Ganga is used as an interactive job submission interface for end-users, and also as a job submission component for higher-level tools. For example, GangaRobot is used to perform automated, end-to-end testing of distributed data analysis. Ganga comes with an extensive test suite covering more than 350 test cases. The development model involves all active developers in the release management shifts, which is an important and novel approach for distributed software collaborations. Ganga 5 is a mature, stable and widely-used tool with long-term support from the HEP community.

  19. Ganga - an Optimiser and Front-End for Grid Job Submission (Demo)

    CERN Document Server

    Maier, A; Egede, U; Elmsheuser, J; Gaidioz, B; Harrison, K; Hurng-Chun Lee; Liko, D; Mosckicki, J; Muraru, A; Romanovsky, V; Soroko, A; Tan, C L; Koblitz, B

    2007-01-01

    The presentation will introduce the Ganga job-management system (http://cern.ch/ganga), developed as an ATLAS/LHCb common project. The main goal of Ganga is to provide a simple and consistent way of preparing, organising and executing analysis tasks, allowing physicists to concentrate on the algorithmic part without having to worry about technical details. Ganga provides a clean Python API that reduces and simplifies the work involved in preparing an application, organizing the submission, and gathering results. Technical details of submitting a job to the Grid, for example the preparation of a job-description file, are factored out and taken care of transparently by the system. By changing the parameter that identifies the execution backend, a user can trivially switch between running an application on a portable PC, running higher-statistics tests on a local batch system, and analysing all available statistics on the Grid. Although Ganga is being developed for LHCb and ATLAS, it is not limited to use with HE...

  20. Grid Monitoring and Advanced Control of Distributed Power Generation Systems

    DEFF Research Database (Denmark)

    Timbus, Adrian Vasile

    The movement towards a clean technology for energy production and the constraints in reducing CO2 emissions are some factors facilitating the growth of distributed power generation systems based on renewable energy resources. Consequently, large penetration of distributed generators has been reported in some countries, creating concerns about power system stability. This leads to a continuous evolution of grid interconnection requirements towards better controllability of generated power and an enhanced contribution of distributed power generation systems to power system stability, and the need for adding more features to the control of distributed power generation systems (DPGS) arises. As a consequence, this thesis focuses on grid monitoring methods and possible approaches in control in order to obtain a more reliable and flexible power generation system during normal and faulty grid conditions.

  1. Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid

    CERN Document Server

    Andrade, Pedro; Bhatt, Kislay; Chand, Phool; Collados, David; Duggal, Vibhuti; Fuente, Paloma; Hayashi, Soichi; Imamagic, Emir; Joshi, Pradyumna; Kalmady, Rajesh; Karnani, Urvashi; Kumar, Vaibhav; Lapka, Wojciech; Quick, Robert; Tarragon, Jacobo; Teige, Scott; Triantafyllidis, Christos

    2012-01-01

    The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO managers, service managers, management), from different middleware providers (ARC, dCache, gLite, UNICORE and VDT), consortiums (WLCG, EMI, EGI, OSG), and operational teams (GOC, OMB, OTAG, CSIRT). The seamless harmonization of these distributed actors is in daily use for monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to the visualization on the MyWLCG portal where it is exposed to other clients. This monitoring workflow profits from the i...

  2. Adaptive Data Collection Mechanisms for Smart Monitoring of Distribution Grids

    DEFF Research Database (Denmark)

    Kemal, Mohammed Seifu; Olsen, Rasmus Løvenstein

    2016-01-01

    For electric distribution systems, information from Smart Meters can be utilized to monitor and control the state of the grid. Hence, it is inherent that data from Smart Meters should be collected in a resilient, reliable, secure and timely manner, fulfilling all the communication requirements and standards. This paper presents a proposal for smart data collection mechanisms to monitor electrical grids with adaptive smart metering infrastructures. A general overview of a platform is given for testing, evaluating and implementing mechanisms to adapt Smart Meter data aggregation. Three main aspects of adaptiveness of the system are studied: adaptiveness to smart metering application needs, adaptiveness to changing communication network dynamics and adaptiveness to security attacks. Execution of tests will be conducted in a real field experimental set-up and on advanced hardware...

  3. Performance monitoring of GRID superscalar with OCM-G/G-PM: Integration issues

    NARCIS (Netherlands)

    Badia, R.M.; Sirvent, R.; Bubak, M.; Funika, W.; Machner, P.; Gorlatch, S.; Bubak, M.; Priol, T.

    2008-01-01

    In this paper the use of a Grid-enabled system for performance monitoring of GRID superscalar-compliant applications is addressed. Performance monitoring is built on top of the OCM-G monitoring system developed in the EU IST CrossGrid project. A graphical user tool G-PM is used to interpret

  4. GStat 2.0: Grid Information System Status Monitoring

    International Nuclear Information System (INIS)

    Field, Laurence; Huang, Joanna; Tsai, Min

    2010-01-01

    Grid Information Systems are mission-critical components in today's production grid infrastructures. They enable users, applications and services to discover which services exist in the infrastructure and further information about the service structure and state. It is therefore important that the information system components themselves are functioning correctly and that the information content is reliable. Grid Status (GStat) is a tool that monitors the structural integrity of the EGEE information system, which is a hierarchical system built out of more than 260 site-level and approximately 70 global aggregation services. It also checks the information content and presents summary and history displays for Grid Operators and System Administrators. A major new version, GStat 2.0, aims to build on the production experience of GStat and provides additional functionality, which enables it to be extended and combined with other tools. This paper describes the new architecture used for GStat 2.0 and how it can be used at all levels to help provide a reliable information system.

  5. Grid site availability evaluation and monitoring at CMS

    Science.gov (United States)

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; Lammel, Stephan; Sciabà, Andrea

    2017-10-01

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from a hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup, scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection of and reaction to failures and a more dynamic handling of computing resources. Enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  6. The CanonicalProducer: An Instrument Monitoring Component of the Relational Grid Monitoring Architecture (R-GMA)

    Directory of Open Access Journals (Sweden)

    Stuart Kenny

    2005-01-01

    Full Text Available We describe how the R-GMA (Relational Grid Monitoring Architecture) can be used to allow for instrument monitoring in a Grid environment. The R-GMA has been developed within the European DataGrid Project (EDG) as a Grid Information and Monitoring System. It is based on the Grid Monitoring Architecture (GMA) from the Global Grid Forum (GGF), which is a simple Consumer-Producer model. The special strength of this implementation comes from the power of the relational model. It offers a global view of the information as if each Virtual Organisation had one large relational database. It provides a number of different Producer types with different characteristics; for example some support streaming of information. We describe the R-GMA component that allows for instrument monitoring, the CanonicalProducer. We also describe an example use of this approach in the European CrossGrid project, SANTA-G, a network monitoring tool.
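
    The underlying Consumer-Producer model can be illustrated with a toy in-memory registry where an instrument producer publishes relational tuples and a consumer queries them; table and field names are assumptions for illustration, not the R-GMA API.

```python
# Toy Consumer-Producer registry in the spirit of the GMA model:
# producers publish relational tuples, consumers query them by table name.
class Registry:
    def __init__(self):
        self.tables = {}

    def publish(self, table, row):
        self.tables.setdefault(table, []).append(row)

    def query(self, table, predicate=lambda row: True):
        return [row for row in self.tables.get(table, []) if predicate(row)]

registry = Registry()

# An "instrument producer" streaming network-monitoring measurements.
registry.publish("netMonitor", {"site": "siteA", "latency_ms": 12.3})
registry.publish("netMonitor", {"site": "siteB", "latency_ms": 48.1})

# A consumer asking for slow links only.
slow = registry.query("netMonitor", lambda r: r["latency_ms"] > 20)
print(slow)
```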

  7. Job monitoring on the WLCG scope: Current status and new strategy

    International Nuclear Information System (INIS)

    Andreeva, Julia; Casey, James; Gaidioz, Benjamin; Karavakis, Edward; Kokoszkiewicz, Lukasz; Lanciotti, Elisa; Maier, Gerhild; Rodrigues, Daniele Filipe Rocha Da Cuhna; Rocha, Ricardo; Saiz, Pablo; Sidorova, Irina; Boehm, Max; Belov, Sergey; Tikhonenko, Elena; Dvorak, Frantisek; Krenek, Ales; Mulac, Milas; Sitera, Jiri; Kodolova, Olga; Vaibhav, Kumar

    2010-01-01

    Job processing and data transfer are the main computing activities on the WLCG infrastructure. Reliable monitoring of the job processing on the WLCG scope is a complicated task due to the complexity of the infrastructure itself and the diversity of the currently used job submission methods. The paper will describe current status and the new strategy for the job monitoring on the WLCG scope, covering primary information sources, job status changes publishing, transport mechanism and visualization.
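
    A hedged sketch of the aggregation step described above: job status-change messages arriving from different submission systems are rolled up per site for visualization. Message fields and site names are illustrative only, not the actual WLCG message format.

```python
from collections import Counter

# Hypothetical stream of job status-change messages as published to a common transport.
messages = [
    {"job_id": "j1", "site": "CERN-PROD", "status": "Running"},
    {"job_id": "j2", "site": "CERN-PROD", "status": "Done"},
    {"job_id": "j3", "site": "RAL-LCG2", "status": "Failed"},
    {"job_id": "j4", "site": "RAL-LCG2", "status": "Running"},
]

def summarize(stream):
    """Aggregate status counts per site, the kind of view a dashboard would render."""
    per_site = {}
    for msg in stream:
        per_site.setdefault(msg["site"], Counter())[msg["status"]] += 1
    return per_site

for site, counts in summarize(messages).items():
    print(site, dict(counts))
```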

  8. HYBRIDIZATION OF MODIFIED ANT COLONY OPTIMIZATION AND INTELLIGENT WATER DROPS ALGORITHM FOR JOB SCHEDULING IN COMPUTATIONAL GRID

    Directory of Open Access Journals (Sweden)

    P. Mathiyalagan

    2013-10-01

    Full Text Available As the grid is a heterogeneous environment, finding an optimal schedule for a job is always a complex task. In this paper, a hybridization technique using Intelligent Water Drops and Ant Colony Optimization, which are nature-inspired swarm intelligence approaches, is used to find the best resource for the job. Intelligent Water Drops finds all the resources matching the job requirements and the routing information (the optimal path) to reach those resources. Ant Colony Optimization then chooses the best resource among all matching resources for the job. The objective of this approach is to converge to the optimal schedule faster, minimize the makespan of the job, improve load balancing of resources and make efficient use of available resources.
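
    To illustrate the ant-colony part of such a scheduler, here is a toy pheromone-based resource selection loop; it is a simplified ACO resource choice, not the paper's full hybrid IWD/ACO algorithm, and the makespan values and parameters are assumptions.

```python
import random

# Toy ant-colony selection of a resource for one job type.
# Heuristic desirability: inverse of the expected completion time (assumed values).
resources = {"siteA": 10.0, "siteB": 25.0, "siteC": 15.0}   # expected makespan in minutes
pheromone = {r: 1.0 for r in resources}
ALPHA, BETA, RHO = 1.0, 2.0, 0.1    # pheromone weight, heuristic weight, evaporation rate

def pick_resource():
    """Choose a resource with probability proportional to pheromone^ALPHA * heuristic^BETA."""
    weights = {r: (pheromone[r] ** ALPHA) * ((1.0 / t) ** BETA) for r, t in resources.items()}
    total = sum(weights.values())
    x, acc = random.uniform(0, total), 0.0
    for r, w in weights.items():
        acc += w
        if x <= acc:
            return r
    return r

def update(chosen):
    for r in pheromone:
        pheromone[r] *= (1.0 - RHO)               # evaporation on all edges
    pheromone[chosen] += 1.0 / resources[chosen]  # deposit proportional to solution quality

for _ in range(50):                               # ants iteratively reinforce the faster site
    update(pick_resource())

print(max(pheromone, key=pheromone.get))          # usually 'siteA', the smallest makespan
```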

  9. The power grid monitoring promotion of Liaoning December 14th accident

    Science.gov (United States)

    Zhou, Zhi; Gao, Ziji; He, Xiaoyang; Li, Tie; Jin, Xiaoming; Wang, Mingkai; Qu, Zhi; Sun, Chenguang

    2018-02-01

    This paper introduces the main responsibilities of power grid monitoring and the accident at the Liaoning Power Grid 500 kV Xujia transformer substation on December 14th, 2016. It analyzes the problems exposed in this accident from the aspects of abnormal information judgment, fault information collection, auxiliary video monitoring and online monitoring of substation equipment, puts forward the corresponding improvement methods and summarizes ways of improving the professional level of power grid equipment monitoring.

  10. Final Progress Report for 'An Abstract Job Handling Grid Service for Dataset Analysis'

    International Nuclear Information System (INIS)

    David A Alexander

    2005-01-01

    For Phase I of the Job Handling project, Tech-X has built a Grid service for processing analysis requests, as well as a Graphical User Interface (GUI) client that uses the service. The service is designed to generically support High-Energy Physics (HEP) experimental analysis tasks. It has an extensible, flexible, open architecture and language. The service uses the Solenoidal Tracker At RHIC (STAR) experiment as a working example. STAR is an experiment at the Relativistic Heavy Ion Collider (RHIC) at the Brookhaven National Laboratory (BNL). STAR and other experiments at BNL generate multiple Petabytes of HEP data. The raw data is captured as millions of input files stored in a distributed data catalog. Potentially using thousands of files as input, analysis requests are submitted to a processing environment containing thousands of nodes. The Grid service provides a standard interface to the processing farm. It enables researchers to run large-scale, massively parallel analysis tasks, regardless of the computational resources available in their location

  11. Jobs

    DEFF Research Database (Denmark)

    Schubart, Rikke

    2013-01-01

    Review of the movie Jobs (Joshua Michael Stern, 2013), a drama about Steve Jobs, the founder of Apple.

  12. Distributed monitoring for the prevention of cascading failures in operational power grids

    NARCIS (Netherlands)

    M.E. Warnier (Martijn); S.O. Dulman (Stefan); Y. Koç (Yakup); E.J. Pauwels (Eric)

    2017-01-01

    textabstractElectrical power grids are vulnerable to cascading failures that can lead to large blackouts. The detection and prevention of cascading failures in power grids are important problems. Currently, grid operators mainly monitor the states (loading levels) of individual components in a power

  13. SCALEA-G: A Unified Monitoring and Performance Analysis System for the Grid

    Directory of Open Access Journals (Sweden)

    Hong-Linh Truong

    2004-01-01

    Full Text Available This paper describes SCALEA-G, a unified monitoring and performance analysis system for the Grid. SCALEA-G is implemented as a set of grid services based on the Open Grid Services Architecture (OGSA). SCALEA-G provides an infrastructure for conducting online monitoring and performance analysis of a variety of Grid services including computational and network resources, and Grid applications. Both push and pull models are supported, providing flexible and scalable monitoring and performance analysis. Source code and dynamic instrumentation are implemented to perform profiling and monitoring of Grid applications. A novel instrumentation request language for dynamic instrumentation and a standardized intermediate representation for binary code have been developed to facilitate the interaction between client and instrumentation services.

  14. Grid3: An Application Grid Laboratory for Science

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    level services required by the participating experiments. The deployed infrastructure has been operating since November 2003 with 27 sites, a peak of 2800 processors, work loads from 10 different applications exceeding 1300 simultaneous jobs, and data transfers among sites of greater than 2 TB/day. The Grid3 infrastructure was deployed from grid level services provided by groups and applications within the collaboration. The services were organized into four distinct "grid level services" including: Grid3 Packaging, Monitoring and Information systems, User Authentication and the iGOC Grid Operatio...

  15. Research and design of smart grid monitoring control via terminal based on iOS system

    Science.gov (United States)

    Fu, Wei; Gong, Li; Chen, Heli; Pan, Guangji

    2017-06-01

    Aiming at a series of problems existing in current smart grid monitoring control terminals, such as high costs, poor portability, simple monitoring functions, poor software extensibility, low reliability when transmitting information, a limited man-machine interface and poor security, a smart grid remote monitoring system based on the iOS system has been designed. The system interacts with the smart grid server so that it can acquire grid data through WiFi/3G/4G networks and monitor the running status of each grid line as well as the operating conditions of power plant equipment. When an exception occurs in the power plant, incident information can be sent to the user's iOS terminal in a timely manner, providing troubleshooting information that helps grid staff make the right decisions quickly and avoid further accidents. Field tests have shown that the system realizes the integrated grid monitoring functions with low maintenance cost, a friendly interface, and high security and reliability, and that it possesses certain applicable value.

  16. Air Pollution Monitoring and Mining Based on Sensor Grid in London

    OpenAIRE

    Ma, Yajie; Richards, Mark; Ghanem, Moustafa; Guo, Yike; Hassard, John

    2008-01-01

    In this paper, we present a distributed infrastructure based on wireless sensors network and Grid computing technology for air pollution monitoring and mining, which aims to develop low-cost and ubiquitous sensor networks to collect real-time, large scale and comprehensive environmental data from road traffic emissions for air pollution monitoring in urban environment. The main informatics challenges in respect to constructing the high-throughput sensor Grid are discussed in this paper. We pr...

  17. Development of Energy Monitoring System for SmartGrid Consumer Application

    OpenAIRE

    Apse-Apsitis, Peteris; Avotins, Ansis; Ribickis, Leonids; Zakis, Janis

    2012-01-01

    The amount of electricity-consuming equipment in existing households continues to increase, and some residential buildings already consume more energy than existing building regulations prescribe. The emerging SmartGrid technology with alternative energy sources could be a key to solving this problem, but it also demands a “smarter” consumer with the ability to monitor and manage his loads. Such a monitoring system can also i...

  18. Monitoring the Earth System Grid Federation through the ESGF Dashboard

    Science.gov (United States)

    Fiore, S.; Bell, G. M.; Drach, B.; Williams, D.; Aloisio, G.

    2012-12-01

    The Climate Model Intercomparison Project, phase 5 (CMIP5) is a global effort coordinated by the World Climate Research Programme (WCRP) involving tens of modeling groups spanning 19 countries. It is expected the CMIP5 distributed data archive will total upwards of 3.5 petabytes, stored across several ESGF Nodes on four continents (North America, Europe, Asia, and Australia). The Earth System Grid Federation (ESGF) provides the IT infrastructure to support the CMIP5. In this regard, the monitoring of the distributed ESGF infrastructure represents a crucial part carried out by the ESGF Dashboard. The ESGF Dashboard is a software component of the ESGF stack, responsible for collecting key information about the status of the federation in terms of: 1) Network topology (peer-groups composition), 2) Node type (host/services mapping), 3) Registered users (including their Identity Providers), 4) System metrics (e.g., round-trip time, service availability, CPU, memory, disk, processes, etc.), 5) Download metrics (both at the Node and federation level). The last class of information is very important since it provides a strong insight of the CMIP5 experiment: the data usage statistics. In this regard, CMCC and LLNL have developed a data analytics management system for the analysis of both node-level and federation-level data usage statistics. It provides data usage statistics aggregated by project, model, experiment, variable, realm, peer node, time, ensemble, datasetname (including version), etc. The back-end of the system is able to infer the data usage information of the entire federation, by carrying out: - at node level: a 18-step reconciliation process on the peer node databases (i.e. node manager and publisher DB) which provides a 15-dimension datawarehouse with local statistics and - at global level: an aggregation process which federates the data usage statistics into a 16-dimension datawarehouse with federation-level data usage statistics. The front-end of the
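
    The multi-dimensional roll-up of download statistics can be sketched with a small aggregation over assumed per-download records; field names and values are invented for illustration and do not reflect the actual ESGF Dashboard schema.

```python
from collections import defaultdict

# Hypothetical per-download records as a node-level collector might store them.
downloads = [
    {"project": "CMIP5", "model": "HadGEM2", "variable": "tas", "node": "esgf.llnl", "bytes": 2_000_000},
    {"project": "CMIP5", "model": "HadGEM2", "variable": "pr",  "node": "esgf.dkrz", "bytes": 1_500_000},
    {"project": "CMIP5", "model": "CMCC-CM", "variable": "tas", "node": "esgf.cmcc", "bytes": 3_200_000},
]

def aggregate(records, dimensions):
    """Roll up downloaded bytes along an arbitrary subset of dimensions."""
    cube = defaultdict(int)
    for rec in records:
        key = tuple(rec[d] for d in dimensions)
        cube[key] += rec["bytes"]
    return dict(cube)

print(aggregate(downloads, ["project", "model"]))   # federation-level view by model
print(aggregate(downloads, ["node"]))               # per-node traffic
```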

  19. Grid Monitoring and Advanced Control of Distributed Power Generation Systems

    OpenAIRE

    Timbus, Adrian Vasile

    2007-01-01

    The movement towards a clean technology for energy production and the constraints in reducing the CO2 emissions are some factors facilitating the growth of distributed power generation systems based on renewable energy resources. Consequently, large penetration of distributed generators has been reported in some countries creating concerns about power system stability. This leads to a continuous evolution of grid interconnection requirements towards a better controllability of generated power...

  20. Disaster Monitoring using Grid Based Data Fusion Algorithms

    Directory of Open Access Journals (Sweden)

    Cătălin NAE

    2010-12-01

    Full Text Available This is a study of the application of Grid technology and high performance parallel computing to a candidate algorithm for jointly accomplishing data fusion from different sensors. This includes applications for both image analysis and/or data processing for simultaneously tracking multiple targets in real-time. The emphasis is on comparing the architectures of the serial and parallel algorithms, and characterizing the performance benefits achieved by the parallel algorithm with both on-ground and in-space hardware implementations. The improved performance levels achieved by the use of Grid technology (middleware) for Parallel Data Fusion are presented for the main metrics of interest in near real-time applications, namely latency, total computation load, and total sustainable throughput. The objective of this analysis is, therefore, to demonstrate an implementation of multi-sensor data fusion and/or multi-target tracking functions within an integrated multi-node portable HPC architecture based on emerging Grid technology. The key metrics to be determined in support of ongoing system analyses include: required computational throughput in MFLOPS; latency between receipt of input data and resulting outputs; and scalability, processor utilization and memory requirements. Furthermore, the standard MPI functions are considered to be used for inter-node communications in order to promote code portability across multiple HPC computer platforms, both in space and on-ground.

  1. Grid Computing

    Indian Academy of Sciences (India)

    IAS Admin

    ... quote the cost for using the resources and give a time estimate for completing the job (this may be one of the QoS parameters). Schedule the user's job ... The availability of Globus software tools motivated large enterprises to mobilize their resources to create an enterprise grid. An enterprise grid is designed to interconnect all ...

  2. Monitoring and optimization of ATLAS Tier 2 center GoeGrid

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219638; Quadt, Arnulf; Yahyapour, Ramin

    The demand on computational and storage resources is growing along with the amount of information that needs to be processed and preserved. In order to ease the provisioning of digital services to the growing number of consumers, more and more distributed computing systems and platforms are actively developed and employed. The building blocks of the distributed computing infrastructure are single computing centers, such as the Worldwide LHC Computing Grid Tier-2 centre GoeGrid. The main motivation of this thesis was the optimization of GoeGrid performance by efficient monitoring. The goal has been achieved by means of analysis of the GoeGrid monitoring information. The data analysis approach was based on the adaptive-network-based fuzzy inference system (ANFIS) and machine learning algorithms such as the linear Support Vector Machine (SVM). The main object of the research was the digital service, since availability, reliability and serviceability of the computing platform can be measured according to the const...

  3. ATLAS off-Grid sites (Tier-3) monitoring. From local fabric monitoring to global overview of the VO computing activities

    Science.gov (United States)

    Petrosyan, Artem; Oleynik, Danila; Belov, Sergey; Andreeva, Julia; Kadochnikov, Ivan

    2012-12-01

    ATLAS is an LHC (Large Hadron Collider) experiment at the CERN particle physics laboratory in Geneva, Switzerland. The ATLAS Computing model embraces the Grid paradigm and originally included three levels of computing centers, in order to handle data volumes of multiple petabytes per year. With the formation of small computing centers, usually based at universities, the model was expanded to include them as Tier-3 sites. Tier-3 centers comprise a range of architectures and many do not possess Grid middleware; thus, monitoring of storage usage and analysis software is not possible for the typical Tier-3 site system administrator, and, similarly, Tier-3 site activity is not visible to the virtual organization of the experiment. In this paper an ATLAS off-Grid site-monitoring software suite is presented. The software suite enables monitoring of sites not covered by the ATLAS Distributed Computing software.

  4. ATLAS Off-Grid sites (Tier-3) monitoring

    CERN Document Server

    Petrosyan, A S; The ATLAS collaboration

    2012-01-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data every year. The ATLAS Computing model embraces the Grid paradigm and originally included three levels of computing centers to be able to operate on such a large volume of data. The ATLAS Distributed Computing activities have so far concentrated on the “central” part of the computing system of the experiment, namely the first 3 tiers (CERN Tier-0, the 10 Tier-1 centers and about 50 Tier-2s). This is a coherent system to perform data processing and management on a global scale and to host (re)processing and simulation activities down to group and user analysis. With the formation of small computing centers, usually based at universities, the model was expanded to include them as Tier-3 sites. Tier-3 centers consist of non-pledged resources mostly dedicated to data analysis by geographically close or local scientific groups. The experiment supplies all necessary software to operate a typical Grid-site, ...

  5. Harmonizing electricity markets with physics : real time performance monitoring using grid-3PTM

    International Nuclear Information System (INIS)

    Budhraja, V.S.

    2003-01-01

    The Electric Power Group, LLC provides management and strategic consulting services for the electric power industry, with special emphasis on industry restructuring, competitive electricity markets, grid operations and reliability, power technologies, venture investments and start-ups. The Consortium for Electric Reliability Technology Solutions involves national laboratories, universities, and industry partners in researching, developing, and commercializing electric reliability technology solutions to protect and enhance the reliability of the American electric power system under the emerging competitive electricity market structure. Physics differentiates electric markets from other markets: there is real-time balancing, no storage, an interconnected network, and power flows governed by physics. Some issues affecting grid reliability and markets are difficult to separate, such as security and congestion management, voltage management, reserves, frequency volatility, and others. The author examined the following investment challenges facing the electricity market: grid solutions, market solutions, and technology solutions. The real-time performance monitoring and prediction platform, grid-3P, was described and its applications discussed, such as ACE-frequency monitoring, performance monitoring for automatic generation control (AGC) and frequency response, voltage/VAR monitoring, stability monitoring using phasor technology, and market monitoring.

  6. ATLAS off-Grid sites (Tier 3) monitoring. From local fabric monitoring to global overview of the VO computing activities

    CERN Document Server

    PETROSYAN, A; The ATLAS collaboration; BELOV, S; ANDREEVA, J; KADOCHNIKOV, I

    2012-01-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data every year. The ATLAS Computing model embraces the Grid paradigm and originally included three levels of computing centres to be able to operate on such a large volume of data. With the formation of small computing centres, usually based at universities, the model was expanded to include them as Tier3 sites. The experiment supplies all necessary software to operate a typical Grid-site, but Tier3 sites either do not support the Grid services of the experiment or support them only partially. Tier3 centres comprise a range of architectures and many do not possess Grid middleware; thus, the monitoring of storage and analysis software available on Tier2 sites is unavailable to the Tier3 site system administrator, and Tier3 site activity is likewise invisible to the virtual organization of the experiment. In this paper we present the ATLAS off-Grid sites monitoring software suite, which enables monitoring of sites which are not unde...

  7. Monitoring the grid with the Globus Toolkit MDS4

    International Nuclear Information System (INIS)

    Schopf, Jennifer M; Pearlman, Laura; Miller, Neill; Kesselman, Carl; Foster, Ian; D'Arcy, Mike; Chervenak, Ann

    2006-01-01

    The Globus Toolkit Monitoring and Discovery System (MDS4) defines and implements mechanisms for service and resource discovery and monitoring in distributed environments. MDS4 is distinguished from previous similar systems by its extensive use of interfaces and behaviors defined in the WS-Resource Framework and WS-Notification specifications, and by its deep integration into essentially every component of the Globus Toolkit. We describe the MDS4 architecture and the Web service interfaces and behaviors that allow users to discover resources and services, monitor resource and service states, receive updates on current status, and visualize monitoring results. We present two current deployments to provide insights into the functionality that can be achieved via the use of these mechanisms

  8. Computer-Aided Monitoring: Its Influence on Employee Job Satisfaction and Turnover.

    Science.gov (United States)

    Chalykoff, John; Kochan, Thomas A.

    1989-01-01

    Developed model for examining impact of computer-aided monitoring on employee-level job satisfaction and turnover propensity. Results from 740 employees showed that, for some employees, negative effects of monitoring were inherent; for others, its negative impact could be mitigated by attention to feedback/performance appraisal processes.…

  9. How to keep the Grid full and working with ATLAS production and physics jobs

    CERN Document Server

    Pacheco Pages, Andres; The ATLAS collaboration; Di Girolamo, Alessandro; Walker, Rodney; Filipčič, Andrej; Cameron, David; Yang, Wei; Fassi, Farida; Glushkov, Ivan

    2016-01-01

    The ATLAS production system has provided the infrastructure to process tens of thousands of events during LHC Run 1 and the first years of LHC Run 2 using grid, cloud and high performance computing resources. In this contribution we address several strategies and improvements added to the production system to optimize its performance and to get the maximum efficiency out of the available resources from an operational perspective, focusing in detail on the recent developments.

  10. How to keep the Grid full and working with ATLAS production and physics jobs

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00221495; The ATLAS collaboration; Barreiro Megino, Fernando Harald; Cameron, David; Fassi, Farida; Filipcic, Andrej; Di Girolamo, Alessandro; Gonzalez de la Hoz, Santiago; Glushkov, Ivan; Maeno, Tadashi; Walker, Rodney; Yang, Wei

    2016-01-01

    The ATLAS production system provides the infrastructure to process millions of events collected during LHC Run 1 and the first two years of Run 2 using grid, cloud and high performance computing resources. We address in this contribution the strategies and improvements that have been implemented in the production system for optimal performance and to achieve the highest efficiency of available resources from an operational perspective. We focus on the recent developments.

  11. Air Pollution Monitoring and Mining Based on Sensor Grid in London.

    Science.gov (United States)

    Ma, Yajie; Richards, Mark; Ghanem, Moustafa; Guo, Yike; Hassard, John

    2008-06-01

    In this paper, we present a distributed infrastructure based on wireless sensors network and Grid computing technology for air pollution monitoring and mining, which aims to develop low-cost and ubiquitous sensor networks to collect real-time, large scale and comprehensive environmental data from road traffic emissions for air pollution monitoring in urban environment. The main informatics challenges in respect to constructing the high-throughput sensor Grid are discussed in this paper. We present a two-layer network framework, a P2P e-Science Grid architecture, and the distributed data mining algorithm as the solutions to address the challenges. We simulated the system in TinyOS to examine the operation of each sensor as well as the networking performance. We also present the distributed data mining result to examine the effectiveness of the algorithm.

  12. Air Pollution Monitoring and Mining Based on Sensor Grid in London

    Directory of Open Access Journals (Sweden)

    John Hassard

    2008-06-01

    Full Text Available In this paper, we present a distributed infrastructure based on wireless sensors network and Grid computing technology for air pollution monitoring and mining, which aims to develop low-cost and ubiquitous sensor networks to collect real-time, large scale and comprehensive environmental data from road traffic emissions for air pollution monitoring in urban environment. The main informatics challenges in respect to constructing the high-throughput sensor Grid are discussed in this paper. We present a two-layer network framework, a P2P e-Science Grid architecture, and the distributed data mining algorithm as the solutions to address the challenges. We simulated the system in TinyOS to examine the operation of each sensor as well as the networking performance. We also present the distributed data mining result to examine the effectiveness of the algorithm.

  13. Air Pollution Monitoring and Mining Based on Sensor Grid in London

    Science.gov (United States)

    Ma, Yajie; Richards, Mark; Ghanem, Moustafa; Guo, Yike; Hassard, John

    2008-01-01

    In this paper, we present a distributed infrastructure based on wireless sensors network and Grid computing technology for air pollution monitoring and mining, which aims to develop low-cost and ubiquitous sensor networks to collect real-time, large scale and comprehensive environmental data from road traffic emissions for air pollution monitoring in urban environment. The main informatics challenges in respect to constructing the high-throughput sensor Grid are discussed in this paper. We present a two-layer network framework, a P2P e-Science Grid architecture, and the distributed data mining algorithm as the solutions to address the challenges. We simulated the system in TinyOS to examine the operation of each sensor as well as the networking performance. We also present the distributed data mining result to examine the effectiveness of the algorithm. PMID:27879895

  14. A Data Transmission Algorithm Based on Dynamic Grid Division for Coal Goaf Temperature Monitoring

    Directory of Open Access Journals (Sweden)

    Qingsong Hu

    2014-01-01

    Full Text Available WSN (wireless sensor network) is a perfect tool for temperature monitoring in coal goafs. Based on the three-zone theory of the goaf, the GtmWSN model is proposed and its dynamic features are analyzed. Accordingly, a data transmission scheme, named DTDGD, is worked out. Firstly, sink nodes conduct dynamic grid division of the GtmWSN according to a virtual semicircle. Secondly, each node determines which grid it belongs to based on the grid number. Finally, data are delivered to sink nodes with greedy forwarding and hole avoidance. Simulation results and field data showed that the GtmWSN and DTDGD satisfy the lifetime needs of goaf temperature monitoring.
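
    The greedy forwarding step of such a scheme can be sketched as follows: each hop picks the neighbour geographically closest to the sink and stops when no neighbour makes progress (a hole), at which point a hole-avoidance detour would take over in the real protocol. The topology and coordinates are invented; this is not the DTDGD protocol itself.

```python
import math

# Toy network: node positions and adjacency lists.
nodes = {"n1": (0, 0), "n2": (2, 1), "n3": (4, 1), "sink": (6, 0)}
neighbours = {"n1": ["n2"], "n2": ["n1", "n3"], "n3": ["n2", "sink"], "sink": ["n3"]}

def dist(a, b):
    return math.dist(nodes[a], nodes[b])

def greedy_route(src, dst="sink"):
    """Greedy geographic forwarding; returns (path, reached_sink)."""
    path, current = [src], src
    while current != dst:
        best = min(neighbours[current], key=lambda n: dist(n, dst))
        if dist(best, dst) >= dist(current, dst):
            return path, False            # hole detected: no greedy progress possible
        path.append(best)
        current = best
    return path, True

print(greedy_route("n1"))   # (['n1', 'n2', 'n3', 'sink'], True)
```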

  15. Hierarchical Data Replication and Service Monitoring Methods in a Scientific Data Grid

    Directory of Open Access Journals (Sweden)

    Weizhong Lu

    2009-04-01

    Full Text Available In a grid and distributed computing environment, data replication is an effective way to improve data accessibility and data accessing efficiency. It is also significant in developing a real-time service monitoring system for a Chinese Scientific Data Grid to guarantee the system stability and data availability. Hierarchical data replication and service monitoring methods are proposed in this paper. The hierarchical data replication method divides the network into different domains and replicates data in local domains. The nodes in a local domain are classified into hierarchies to improve data accessibility according to bandwidth and storage memory space. An extensible agent-based prototype of a hierarchical service monitoring system is presented. The status information of services in the Chinese Scientific Data Grid is collected from the grid nodes based on agent technology and then is transformed into real-time operational pictures for management needs. This paper presents frameworks of the hierarchical data replication and service monitoring methods and gives detailed resolutions. Simulation analyses have demonstrated improved data accessing efficiency and verified the effectiveness of the methods at the same time.
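
    A minimal sketch of domain-aware replica placement in the spirit of the hierarchical method described above: prefer a node in the local domain with enough free storage, ranked by bandwidth, and fall back to remote domains otherwise. The node attributes are assumptions for illustration.

```python
# Toy replica-placement choice over a small set of candidate nodes.
nodes = [
    {"name": "localA",  "domain": "cn-south", "bandwidth_mbps": 900, "free_gb": 50},
    {"name": "localB",  "domain": "cn-south", "bandwidth_mbps": 400, "free_gb": 500},
    {"name": "remoteC", "domain": "cn-north", "bandwidth_mbps": 950, "free_gb": 800},
]

def choose_replica_host(nodes, local_domain, needed_gb):
    """Prefer a local-domain node with enough storage; rank candidates by bandwidth."""
    candidates = [n for n in nodes if n["free_gb"] >= needed_gb]
    local = [n for n in candidates if n["domain"] == local_domain]
    pool = local or candidates                      # replicate locally when possible
    return max(pool, key=lambda n: n["bandwidth_mbps"], default=None)

print(choose_replica_host(nodes, "cn-south", needed_gb=100)["name"])   # localB
```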

  16. Monitoring and remote failure detection of grid-connected PV systems based on satellite observations

    NARCIS (Netherlands)

    Drews, A.; de Keizer, A.C.; Beyer, H.G.; Lorenz, E.; Betcke, J.W.H.; van Sark, W.G.J.H.M.; Heydenreich, W.; Wiemken, E.; Stettler, S.; Toggweiler, P.; Bofinger, S.; Schneider, M.; Heilscher, G.; Heinemann, D.

    Small grid-connected photovoltaic systems up to 5 kWp are often not monitored because advanced surveillance systems are not economical. Hence, some system failures which lead to partial energy losses stay unnoticed for a long time. Even a failure that results in a larger energy deficit can be

  17. Sensor Networks and Grid Middleware for Laboratory Monitoring

    OpenAIRE

    Frey, Jeremy G.; Robinson, Jamie; Stanford-Clark, Andrew; Reynolds, Andrew; Bendi, Bharat

    2005-01-01

    By combining automatic environment sensing and experimental data collection with broker based messaging middleware, a system has been produced for the real-time monitoring of experiments whilst away from the lab. Changes in the laboratory environment are encapsulated as simple messages, which are published using an MQTT compliant broker. Clients subscribe to the MQTT stream, and perform a data transform on the messages; this may be to produce a user display or to change the format of the m...

  18. Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter.

    Science.gov (United States)

    Loganathan, Shyamala; Mukherjee, Saswati

    2015-01-01

    Cloud computing is an on-demand computing model, which uses virtualization technology to provide cloud resources to users in the form of virtual machines through internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations for forming their own private cloud. Since the resources are limited in these private clouds maximizing the utilization of resources and giving the guaranteed service for the user are the ultimate goal. For that, efficient scheduling is needed. This research reports on an efficient data structure for resource management and resource scheduling technique in a private cloud environment and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decision. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, like V-MCT and priority scheduling algorithms.
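
    A toy version of a type- and availability-aware scheduling decision like the one described: each job is placed on the first VM whose flavour matches the job type and which has enough free cores. The VM and job attributes are assumptions, not the paper's actual data structure.

```python
# Hypothetical VM pool and job queue.
vms = [
    {"name": "vm1", "flavour": "cpu", "free_cores": 4},
    {"name": "vm2", "flavour": "io",  "free_cores": 2},
]
jobs = [
    {"id": "jobA", "type": "cpu", "cores": 2},
    {"id": "jobB", "type": "io",  "cores": 1},
    {"id": "jobC", "type": "cpu", "cores": 8},
]

def schedule(jobs, vms):
    """Place each job on a matching VM with enough free cores, or leave it queued."""
    placement = {}
    for job in jobs:
        for vm in vms:
            if vm["flavour"] == job["type"] and vm["free_cores"] >= job["cores"]:
                vm["free_cores"] -= job["cores"]
                placement[job["id"]] = vm["name"]
                break
        else:
            placement[job["id"]] = None    # stays queued until resources free up
    return placement

print(schedule(jobs, vms))   # {'jobA': 'vm1', 'jobB': 'vm2', 'jobC': None}
```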

  19. Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter

    Directory of Open Access Journals (Sweden)

    Shyamala Loganathan

    2015-01-01

    Full Text Available Cloud computing is an on-demand computing model, which uses virtualization technology to provide cloud resources to users in the form of virtual machines through internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations for forming their own private cloud. Since the resources are limited in these private clouds maximizing the utilization of resources and giving the guaranteed service for the user are the ultimate goal. For that, efficient scheduling is needed. This research reports on an efficient data structure for resource management and resource scheduling technique in a private cloud environment and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decision. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, like V-MCT and priority scheduling algorithms.

  20. System performance of a three-phase PV-grid-connected system installed in Thailand. Data monitored analysis

    International Nuclear Information System (INIS)

    Boonmee, Chaiyant; Watjanatepin, Napat; Plangklang, Boonyang

    2009-01-01

    PV-grid-connected systems are installed worldwide because they allow consumers to reduce energy consumption from the electricity grid and to feed surplus energy back into the grid. The system needs no battery, and therefore its price is very low compared to other PV systems. PV-grid-connected systems are used in buildings that are already hooked up to the electrical grid. The efficiency of a PV-grid-connected system can be determined using a standard instrument, which requires disconnecting the PV arrays from the grid before measurement; the measurement is difficult and energy is lost during it. This paper presents the system performance of a PV-grid-connected system installed in Thailand by using a monitoring system. The monitored data are collected by acquisition software on a computer. Analysis of the monitored data is performed to find the system performance without disconnecting the PV arrays from the system. The monitored data include solar radiation, PV voltage, PV current and PV power, which have been recorded from a 5 kWp amorphous silicon PV system installed at Rajamangala University of Technology Suvarnabhumi, Nonthaburi, Thailand. The system performance obtained from the monitored data is compared to the standard instrument measurement. The paper gives all details about the system components, the monitoring system and the monitored data. The result of the data analysis is fully given. (author)

  1. Secure Real-Time Monitoring and Management of Smart Distribution Grid using Shared Cellular Networks

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Ganem, Hervé; Jorguseski, Ljupco

    2017-01-01

    Thanks to the advanced measurement devices, management framework and secure communication infrastructure developed in the FP7 SUNSEED project, the Distribution System Operator (DSO) now has full observability of the energy flows at the medium/low voltage grid. Furthermore, the prosumers are able to participate pro-actively and coordinate with the DSO and other stakeholders in the grid. The monitoring and management functionalities have strong requirements on communication latency, reliability and security. This paper presents novel solutions and analyses of these aspects for the SUNSEED scenario.

  2. Progress on mobility and instability of research personnel in Japan: scientometrics on a job-posting database for monitoring the academic job market

    Energy Technology Data Exchange (ETDEWEB)

    Kawashima, H.; Yamashita, Y.

    2016-07-01

    This study has two purposes. The first purpose is to extract statistics from a database of job-posting cards, previously little-used as a data source, to assess the academic job market. The second purpose is to connect statistics on the academic job market with monitoring of indicators of policy progress related to the mobility and instability of research personnel. The data source used in this study is a job-posting database named JREC-IN Portal, which is the de facto standard for academic job seeking in Japan. The present results show a growing proportion of fixed-term researchers in the Japanese academic job market and that job information is increasingly diverse. (Author)

  3. Final Report - "UCM-Grid Service for User-Centric Monitoring"

    Energy Technology Data Exchange (ETDEWEB)

    David A Alexander

    2009-11-12

    The User Centric Monitoring (UCM) project was aimed at developing a toolkit that provides the Virtual Organization (VO) with tools to build systems that serve a rich set of intuitive job and application monitoring information to the VO's scientists so that they can be more productive. The tools help collect and serve the status and error information through a Web interface. The proposed UCM toolkit is composed of a set of library functions, a database schema, and a Web portal that will collect and filter available job monitoring information from various resources and present it to users in a user-centric view rather than an administrative-centric point of view.
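
    A minimal sketch of the toolkit's ingredients (a small job/event schema plus a user-centric query), using SQLite for illustration; the tables and fields are assumptions and much simpler than the real UCM schema.

```python
import sqlite3

# Minimal job/event schema and a user-centric query.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE jobs   (job_id TEXT PRIMARY KEY, owner TEXT, site TEXT);
    CREATE TABLE events (job_id TEXT, level TEXT, message TEXT, ts REAL);
""")
conn.executemany("INSERT INTO jobs VALUES (?, ?, ?)",
                 [("j1", "alice", "BNL"), ("j2", "alice", "BNL"), ("j3", "bob", "BNL")])
conn.executemany("INSERT INTO events VALUES (?, ?, ?, ?)",
                 [("j1", "INFO", "started", 1.0), ("j1", "ERROR", "stage-out failed", 2.0),
                  ("j2", "INFO", "started", 1.5)])

# The user-centric view: one scientist's jobs with their latest error, if any.
rows = conn.execute("""
    SELECT j.job_id, e.message FROM jobs j
    LEFT JOIN events e ON e.job_id = j.job_id AND e.level = 'ERROR'
    WHERE j.owner = ?""", ("alice",)).fetchall()
print(rows)   # e.g. [('j1', 'stage-out failed'), ('j2', None)]
```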

  4. Monitoring of the electrical parameters in off-grid solar power system

    Science.gov (United States)

    Idzkowski, Adam; Leoniuk, Katarzyna; Walendziuk, Wojciech

    2016-09-01

    The aim of this work was to build a monitoring system dedicated to an off-grid installation. A laboratory set-up, built for that purpose, was equipped with a PV panel, a battery, a charge controller and a load. Additionally, to monitor the electrical parameters of this installation, the following were used: a LabJack module (data acquisition card), a self-built measuring module and a computer with a program that measures and presents the off-grid installation parameters. The program was made in the G language using LabVIEW software. The designed system enables analysis of the currents and voltages of the PV panel, battery and load. It also makes it possible to visualize them on charts and to produce reports from the registered data. The monitoring system was verified both in a laboratory test and in real conditions. The results of this verification are also presented.

  5. Research of Smart Grid Cyber Architecture and Standards Deployment with High Adaptability for Security Monitoring

    DEFF Research Database (Denmark)

    Hu, Rui; Hu, Weihao; Chen, Zhe

    2015-01-01

    Security monitoring is a critical function for the smart grid. As a consequence of strongly relying on communication, cyber security must be guaranteed by the specific system. Otherwise, the DR signals and bidding information can be easily forged or intercepted. Customers’ privacy and safety may suffer...... huge losses. Although the OpenADR specifications provide continuous, secure and reliable two-way communications at the application level defined in the ISO model, they adopt an open security architecture and no specific or proprietary technologies are restricted to OpenADR itself....... It is significant to develop a security monitoring system. This paper discusses the cyber architecture of a smart grid with high adaptability for security monitoring. An adaptable structure with a Demilitarized Zone (DMZ) is proposed. Focusing on this network structure, the rational utilization of standards...

  6. Job monitoring on DIRAC for Belle II distributed computing

    Science.gov (United States)

    Kato, Yuji; Hayasaka, Kiyoshi; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo

    2015-12-01

    We developed a monitoring system for Belle II distributed computing, which consists of active and passive methods. In this paper we describe the passive monitoring system, in which information stored in the DIRAC database is processed and visualized. We divide the DIRAC workload management flow into steps and store characteristic variables that indicate issues. These variables are chosen carefully based on our experience and then visualized. As a result, we are able to detect issues effectively. Finally, we discuss future development towards automated log analysis, notification of issues, and disabling of problematic sites.
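
    A minimal sketch of the kind of passive check described above: aggregating job outcomes per site from records pulled out of a workload database and flagging sites whose failure fraction is high. The record keys, status names and threshold are assumptions for illustration, not the actual DIRAC schema or the Belle II thresholds.

```python
from collections import defaultdict

def site_failure_fractions(job_records):
    """job_records: iterable of dicts with assumed keys 'site' and 'status'."""
    totals, failed = defaultdict(int), defaultdict(int)
    for rec in job_records:
        totals[rec["site"]] += 1
        if rec["status"] in ("Failed", "Stalled"):      # assumed status names
            failed[rec["site"]] += 1
    return {site: failed[site] / totals[site] for site in totals}

def flag_problematic_sites(job_records, threshold=0.3):
    """Return sites whose failure fraction exceeds an assumed threshold."""
    return [site for site, frac in site_failure_fractions(job_records).items()
            if frac > threshold]

sample = [
    {"site": "LCG.KEK.jp", "status": "Done"},
    {"site": "LCG.KEK.jp", "status": "Failed"},
    {"site": "LCG.Napoli.it", "status": "Done"},
]
print(flag_problematic_sites(sample, threshold=0.4))    # -> ['LCG.KEK.jp']
```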

  7. Monitoring and Resource Management in P2P Grid-Based Web Services

    Directory of Open Access Journals (Sweden)

    Djafri Laouni

    2012-12-01

    Full Text Available Grid computing has recently emerged as a response to the growing demand for resources (processing power, storage, etc.) exhibited by scientific applications. However, as grid sizes increase, the need for self-organization and dynamic reconfiguration is becoming more and more important. Since such properties are exhibited by P2P systems, the convergence of grid computing and P2P computing seems natural. However, using P2P systems (usually running on the Internet) on a grid infrastructure (generally available as a federation of SAN-based clusters interconnected by high-bandwidth WANs) may raise the issue of the adequacy of the P2P communication mechanisms. Among the interesting properties of P2P systems is the volatility of peers, which creates the need to integrate fault-tolerance and load-balancing services. As a solution, we propose a fault-tolerance mechanism and a load-balancing model adapted to a P2P grid model, named SGRTE (Monitoring and Resource Management, Fault Tolerance and Load Balancing).

  8. Fieldservers and Sensor Service Grid as Real-time Monitoring Infrastructure for Ubiquitous Sensor Networks

    Directory of Open Access Journals (Sweden)

    Hiroshi Shimamura

    2009-03-01

    Full Text Available The fieldserver is an Internet-based observation robot that can provide an outdoor solution for monitoring environmental parameters in real time. The data from its sensors can be collected to a central server infrastructure and published on the Internet. The information from the sensor network will contribute to monitoring and modeling of various environmental issues in Asia, including agriculture, food, pollution, disaster, climate change etc. An initiative called Sensor Asia is developing an infrastructure called Sensor Service Grid (SSG), which integrates fieldservers and Web GIS to realize easy and low-cost installation and operation of ubiquitous field sensor networks.

  9. Fieldservers and Sensor Service Grid as Real-time Monitoring Infrastructure for Ubiquitous Sensor Networks.

    Science.gov (United States)

    Honda, Kiyoshi; Shrestha, Aadit; Witayangkurn, Apichon; Chinnachodteeranun, Rassarin; Shimamura, Hiroshi

    2009-01-01

    The fieldserver is an Internet-based observation robot that can provide an outdoor solution for monitoring environmental parameters in real time. The data from its sensors can be collected to a central server infrastructure and published on the Internet. The information from the sensor network will contribute to monitoring and modeling of various environmental issues in Asia, including agriculture, food, pollution, disaster, climate change etc. An initiative called Sensor Asia is developing an infrastructure called Sensor Service Grid (SSG), which integrates fieldservers and Web GIS to realize easy and low-cost installation and operation of ubiquitous field sensor networks.

  10. Cyber-physical security of Wide-Area Monitoring, Protection and Control in a smart grid environment

    Science.gov (United States)

    Ashok, Aditya; Hahn, Adam; Govindarasu, Manimaran

    2013-01-01

    Smart grid initiatives will produce a grid that is increasingly dependent on its cyber infrastructure in order to support the numerous power applications necessary to provide improved grid monitoring and control capabilities. However, recent findings documented in government reports and other literature indicate the growing number and sophistication of cyber-based attacks targeting the nation’s electric grid and other critical infrastructures. Specifically, this paper discusses cyber-physical security of Wide-Area Monitoring, Protection and Control (WAMPAC) from a coordinated cyber attack perspective and introduces a game-theoretic approach to address the issue. Finally, the paper briefly describes how cyber-physical testbeds can be used to evaluate the security research and perform realistic attack-defense studies for smart grid type environments. PMID:25685516

  11. Cyber-physical security of Wide-Area Monitoring, Protection and Control in a smart grid environment.

    Science.gov (United States)

    Ashok, Aditya; Hahn, Adam; Govindarasu, Manimaran

    2014-07-01

    Smart grid initiatives will produce a grid that is increasingly dependent on its cyber infrastructure in order to support the numerous power applications necessary to provide improved grid monitoring and control capabilities. However, recent findings documented in government reports and other literature indicate the growing number and sophistication of cyber-based attacks targeting the nation's electric grid and other critical infrastructures. Specifically, this paper discusses cyber-physical security of Wide-Area Monitoring, Protection and Control (WAMPAC) from a coordinated cyber attack perspective and introduces a game-theoretic approach to address the issue. Finally, the paper briefly describes how cyber-physical testbeds can be used to evaluate the security research and perform realistic attack-defense studies for smart grid type environments.

  12. Integration of Grid and Sensor Web for Flood Monitoring and Risk Assessment from Heterogeneous Data

    Science.gov (United States)

    Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii

    2013-04-01

    Over the last decades we have witnessed an upward global trend in natural disaster occurrence. Hydrological and meteorological disasters such as floods are the main contributors to this pattern. In recent years flood management has shifted from protection against floods to managing the risks of floods (the European Flood Risk Directive). In order to enable operational flood monitoring and assessment of flood risk, it is required to provide an infrastructure with standardized interfaces and services. Grid and Sensor Web can meet these requirements. In this paper we present a general approach to flood monitoring and risk assessment based on heterogeneous geospatial data acquired from multiple sources. To enable operational flood risk assessment, the integration of Grid and Sensor Web approaches is proposed [1]. Grid represents a distributed environment that integrates heterogeneous computing and storage resources administrated by multiple organizations. Sensor Web is an emerging paradigm for integrating heterogeneous satellite and in situ sensors and data systems into a common informational infrastructure that produces products on demand. The basic Sensor Web functionality includes sensor discovery, triggering events by observed or predicted conditions, remote data access and processing capabilities to generate and deliver data products. Sensor Web is governed by a set of standards, called Sensor Web Enablement (SWE), developed by the Open Geospatial Consortium (OGC). Different practical issues regarding the integration of Sensor Web with Grids are discussed in the study. We show how the Sensor Web can benefit from using Grids and vice versa. For example, Sensor Web services such as SOS, SPS and SAS can benefit from integration with a Grid platform like the Globus Toolkit. The proposed approach is implemented within the Sensor Web framework for flood monitoring and risk assessment, and a case study of exploiting this framework, namely the Namibia SensorWeb Pilot Project, is presented.

  13. Indicator of reliability of power grids and networks for environmental monitoring

    Science.gov (United States)

    Shaptsev, V. A.

    2017-10-01

    The energy supply of mining enterprises includes power networks in particular. Environmental monitoring relies on the data network between the observers and the facilitators. Weather and operating conditions change randomly over time. Temperature, humidity, wind strength and other stochastic processes interact in different segments of the power grid. The article presents analytical expressions for the probability of failure of the power grid as a whole or of a particular segment. These expressions can contain one or more parameters of the operating conditions, simulated by the Monte Carlo method. In some cases, one can obtain a final mathematical formula suitable for computer calculation. In conclusion, an expression involving the probability characteristic function of one random parameter, for example wind, temperature or humidity, is given. The parameters of this characteristic function can be obtained from retrospective or special observations (measurements).
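
    The Monte Carlo idea mentioned in the abstract can be illustrated with a toy estimate of a segment failure probability; the parameter distributions, limits and failure criterion below are invented for the example and are not taken from the paper.

```python
import random

def estimate_failure_probability(n_trials=100_000,
                                 wind_mean=8.0, wind_sigma=3.0,    # assumed wind model (m/s)
                                 critical_wind=20.0,               # assumed segment limit
                                 temp_low=-30.0, temp_high=35.0,   # assumed temperature range (C)
                                 critical_temp=-25.0):
    failures = 0
    for _ in range(n_trials):
        wind = random.gauss(wind_mean, wind_sigma)
        temp = random.uniform(temp_low, temp_high)
        # The segment is assumed to fail if either stochastic parameter exceeds its limit.
        if wind > critical_wind or temp < critical_temp:
            failures += 1
    return failures / n_trials

print(f"Estimated segment failure probability: {estimate_failure_probability():.4f}")
```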

  14. An earth-gridded SSM/I data set for cryospheric studies and global change monitoring

    Science.gov (United States)

    Armstrong, R. L.; Brodzik, M. J.

    1995-08-01

    The National Snow and Ice Data Center (NSIDC) has distributed DMSP Special Sensor Microwave Imager (SSM/I) brightness temperature grids for the Polar Regions on CD-ROM since 1987. In order to expand this product to include all potential snow covered regions, the area of coverage is now global. The format for the global SSM/I data set is the Equal Area SSM/I Earth Grid (EASE-Grid) developed at NSIDC. The EASE-Grid has been selected as the format for the NASA/NOAA Pathfinder Program Level 3 Products which include both SSM/I and SMMR (Scanning Multichannel Microwave Radiometer) data (1978-1987). Providing both data sets in the EASE-Grid will result in a 15 year time-series of satellite passive microwave data in a common format. The extent and variability of seasonal snow cover is recognized to be an important parameter in climate and hydrologic systems and trends in snow cover serve as an indicator of global climatic changes. Passive microwave data from satellites afford the possibility to monitor temporal and spatial variations in snow cover on the global scale, avoiding the problems of cloud cover and darkness. NSIDC is developing the capability to produce daily snow products from the DMSP-SSM/I satellite with a spatial resolution of 25 km. In order to provide a standard environment in which to validate SSM/I algorithm output, it is necessary to assemble baseline data sets using other, more direct, methods of measurement. NSIDC has compiled a validation data set of surface station measurements for the northern hemisphere with specific focus on the United States, Canada, and the former Soviet Union. Digital image subtraction is applied to compare the surface station and satellite measurements.

  15. Information Quality Aware Data Collection for Adaptive Monitoring of Distribution Grids

    DEFF Research Database (Denmark)

    Kemal, Mohammed Seifu; Olsen, Rasmus Løvenstein; Schwefel, Hans-Peter

    2017-01-01

    Information from the existing smart metering infrastructure, mainly used for billing purposes, can also be utilised to monitor and control the state of the grid. To add functionalities such as fault detection and real-time state estimation, data from smart meters should be accessed with increased...... be utilised for adaptation, a two-layer smart meter data access infrastructure is presented. An information quality metric, the Mismatch Probability (mmPr), is introduced for the quantitative analysis of the two-layer data access system, implemented in a MATLAB-based discrete event simulation study.
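
    As a rough illustration of a mismatch-probability style metric (the paper's exact mmPr definition and its MATLAB simulation are not reproduced here), the toy event-driven sketch below estimates how often a periodically refreshed copy of a meter value disagrees with the true value at a random access instant; the reporting interval and change rate are assumptions.

```python
import bisect
import random

def estimate_mmpr(report_interval=60.0, change_rate=1/45.0,
                  horizon=100_000.0, n_accesses=20_000):
    # The true meter value changes as a Poisson process; the access point only sees
    # a copy refreshed every report_interval seconds (an assumed two-layer cache).
    change_times, t = [], 0.0
    while t < horizon:
        t += random.expovariate(change_rate)
        change_times.append(t)

    mismatches = 0
    for _ in range(n_accesses):
        access = random.uniform(0.0, horizon)
        last_report = (access // report_interval) * report_interval
        idx = bisect.bisect_right(change_times, access)
        last_change = change_times[idx - 1] if idx > 0 else 0.0
        # Mismatch: the value changed after the last report but before the access.
        if last_change > last_report:
            mismatches += 1
    return mismatches / n_accesses

print(f"Estimated mismatch probability: {estimate_mmpr():.3f}")
```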

  16. ATLAS off-Grid sites (Tier 3) monitoring. From local fabric monitoring to global overview of the VO computing activities

    CERN Document Server

    PETROSYAN, A; The ATLAS collaboration; BELOV, S; ANDREEVA, J; KADOCHNIKOV, I

    2012-01-01

    The ATLAS Distributed Computing activities have so far concentrated on the "central" part of the experiment computing system, namely the first 3 tiers (the CERN Tier0, 10 Tier1 centers and over 60 Tier2 sites). Many ATLAS Institutes and National Communities have deployed (or intend to deploy) Tier-3 facilities. Tier-3 centers consist of non-pledged resources, which are usually dedicated to data analysis tasks by the geographically close or local scientific groups, and which usually comprise a range of architectures without Grid middleware. Therefore a substantial part of the ATLAS monitoring tools which make use of Grid middleware cannot be used for a large fraction of Tier3 sites. The presentation will describe the T3mon project, which aims to develop a software suite for monitoring the Tier3 sites, both from the perspective of the local site administrator and that of the ATLAS VO, thereby enabling a global view of the contribution from Tier3 sites to the ATLAS computing activities. Special attention in p...

  17. Acoustic wave simulation using an overset grid for the global monitoring system

    Science.gov (United States)

    Kushida, N.; Le Bras, R.

    2017-12-01

    The International Monitoring System of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has been monitoring hydro-acoustic and infrasound waves over the globe. Because of the complex natures of the oceans and the atmosphere, computer simulation can play an important role in understanding the observed signals. In this regard, methods which depend on partial differential equations and require minimum modelling are preferable. So far, to our best knowledge, acoustic wave propagation simulations based on partial differential equations on such a large scale have not been performed (pp 147-161 of ref [1], [2]). The main difficulties in building such simulation codes are: (1) considering the inhomogeneity of the medium including background flows, (2) the high aspect ratio of the computational domain, (3) stability during long time integration. To overcome these difficulties, we employ a two-dimensional finite difference (FDM) scheme in spherical coordinates with the Yin-Yang overset grid [3], solving the governing equations of acoustic waves introduced by Ostashev et al. [4]. A comparison with real hydro-acoustic recordings will be presented at the conference. [1] Paul C. Etter: Underwater Acoustic Modeling and Simulation, Fourth Edition, CRC Press, 2013. [2] Lian Wang et al.: Review of Underwater Acoustic Propagation Models, NPL Report AC 12, 2014. [3] A. Kageyama and T. Sato: "Yin-Yang grid": An overset grid in spherical geometry, Geochem. Geophys. Geosyst., 5, Q09005, 2004. [4] Vladimir E. Ostashev et al.: Equations for finite-difference, time-domain simulation of sound propagation in moving inhomogeneous media and numerical implementation, Acoustical Society of America. DOI: 10.1121/1.1841531, 2005.
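
    The time-stepping idea behind such a finite-difference scheme can be sketched in one dimension; the paper's actual solver is two-dimensional, in spherical coordinates on a Yin-Yang overset grid with moving, inhomogeneous media, so this is only a toy illustration with assumed parameters.

```python
import numpy as np

def propagate_1d(nx=400, nt=800, c=340.0, dx=10.0, cfl=0.9):
    """Explicit second-order FDM for the 1-D wave equation p_tt = c^2 p_xx."""
    dt = cfl * dx / c                        # CFL-stable time step
    coef = (c * dt / dx) ** 2
    p_prev = np.zeros(nx)
    p = np.zeros(nx)
    p[nx // 2] = 1.0                         # initial pressure pulse in the middle
    for _ in range(nt):
        p_next = np.zeros(nx)
        p_next[1:-1] = (2.0 * p[1:-1] - p_prev[1:-1]
                        + coef * (p[2:] - 2.0 * p[1:-1] + p[:-2]))
        p_prev, p = p, p_next                # pressure-release (p = 0) boundaries
    return p

print("max |p| after propagation:", float(np.abs(propagate_1d()).max()))
```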

  18. High-Speed Monitoring of Multiple Grid-Connected Photovoltaic Array Configurations and Supplementary Weather Station.

    Science.gov (United States)

    Boyd, Matthew T

    2017-06-01

    Three grid-connected monocrystalline silicon photovoltaic arrays have been instrumented with research-grade sensors on the Gaithersburg, MD campus of the National Institute of Standards and Technology (NIST). These arrays range from 73 kW to 271 kW and have different tilts, orientations, and configurations. Irradiance, temperature, wind, and electrical measurements at the arrays are recorded, and images are taken of the arrays to monitor shading and capture any anomalies. A weather station has also been constructed that includes research-grade instrumentation to measure all standard meteorological quantities plus additional solar irradiance spectral bands, full spectrum curves, and directional components using multiple irradiance sensor technologies. Reference photovoltaic (PV) modules are also monitored to provide comprehensive baseline measurements for the PV arrays. Images of the whole sky are captured, along with images of the instrumentation and reference modules, to document any obstructions or anomalies. Nearly all measurements at the arrays and weather station are sampled and saved every 1 s, with monitoring having started on Aug. 1, 2014. This report describes the instrumentation approach used to monitor the performance of these photovoltaic systems, measure the meteorological quantities, and acquire the images for use in PV performance and weather monitoring and computer model validation.

  19. Comprehensive automation and monitoring of MV grids as the key element of improvement of energy supply reliability and continuity

    Directory of Open Access Journals (Sweden)

    Stanisław Kubacki

    2012-03-01

    Full Text Available The paper presents the issue of comprehensive automation and monitoring of medium voltage (MV) grids as a key element of the Smart Grid concept. The existing condition of MV grid control and monitoring is discussed, and the concept of a solution which will provide the possibility of remote automatic grid reconfiguration and ensure full grid observability from the dispatching system level is introduced. Automation of MV grid switching is discussed in detail, whereby a faulty line section is isolated and electricity is supplied during the failure to the largest possible number of recipients. An example of such automation controls’ operation is also presented. The paper’s second part presents the key role of the quick fault location function and the possibility of remote MV grid reconfiguration for improving power supply reliability (SAIDI and SAIFI indices). It is also shown how an increase in the number of points fitted with faulted circuit indicators, with the option of remote control of switches from the dispatch system in MV grids, may contribute to reducing SAIDI and SAIFI indices across ENERGA-OPERATOR SA divisions.

  20. ENERGY CONSUMPTION MONITORING OF SMART GRID BASED ON SEMANTIC STREAM DATA ANALYSIS

    Directory of Open Access Journals (Sweden)

    M. A. Kolchin

    2015-03-01

    Full Text Available Problem statement. Currently, the task of improving energy efficiency is addressed mainly through the creation of more efficient devices and appliances, the use of alternative energy sources, the application of special additional equipment for power consumption control, and other technological methods. All these solutions are quite expensive and often difficult to pay back economically. At the same time, the issue of automated, integrated analysis of data from existing measuring equipment has received little attention. Yet it is precisely these data that contain all the information necessary for finding bottlenecks and failures in the equipment that lead to increased energy consumption. Methods. Methods for creating web services for monitoring the current state of electrical networks are considered, using CQELS for the integration of static and streaming smart meter data. The RDF data model is used as the main way of representing data. Results. The architecture of an energy monitoring system (smart grid) based on semantic analysis of streaming data is proposed. An ontology has been developed to model the information domain; it describes the measurement data and the possible situations to be tracked by the system using semantic queries. An example of system operation is shown, and the visualization interfaces for streaming data and the message log are described. Practical relevance. Industrial application of the proposed approach will make it possible to achieve significant energy efficiency gains through integrated analysis of smart meter data based on the existing infrastructure of test and measurement equipment. An additional benefit lies in the ability to create flexible smart grid monitoring systems and visualizations of their states through an ontological approach to domain modeling.
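
    A static stand-in for the semantic-query idea, using the rdflib Python library instead of the CQELS streaming engine mentioned in the abstract; the ontology terms, namespace and the 5 kW threshold are invented for the example.

```python
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/energy#")   # hypothetical ontology namespace
g = Graph()

readings = [("meter1", 2.4), ("meter2", 7.9), ("meter3", 1.1)]   # kW, example data
for i, (meter, kw) in enumerate(readings):
    obs = EX[f"obs{i}"]
    g.add((obs, RDF.type, EX.PowerObservation))
    g.add((obs, EX.meter, EX[meter]))
    g.add((obs, EX.activePower, Literal(kw, datatype=XSD.double)))

# Situation to track: any observation whose active power exceeds 5 kW.
query = """
PREFIX ex: <http://example.org/energy#>
SELECT ?meter ?power WHERE {
    ?obs a ex:PowerObservation ;
         ex:meter ?meter ;
         ex:activePower ?power .
    FILTER (?power > 5.0)
}
"""
for row in g.query(query):
    print(f"High consumption: {row.meter} at {row.power} kW")
```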

  1. Design of an automatic production monitoring system on job shop manufacturing

    Science.gov (United States)

    Prasetyo, Hoedi; Sugiarto, Yohanes; Rosyidi, Cucuk Nur

    2018-02-01

    Every production process requires a monitoring system so that the desired efficiency and productivity can be monitored at any time. Such a system is also needed in job shop manufacturing, which is mainly influenced by the manufacturing lead time. Processing time is one of the factors that affect the manufacturing lead time. In a conventional company, the recording of processing time is done manually by the operator on a sheet of paper. This method is prone to errors. This paper aims to overcome this problem by creating a system which is able to record and monitor the processing time automatically. The solution is realized by utilizing an electric current sensor, barcodes, RFID, a wireless network and a Windows-based application. An automatic monitoring device is attached to the production machine. It is equipped with a touch-screen LCD so that the operator can use it easily. The operator's identity is recorded through the RFID embedded in his ID card. The workpiece data are collected from the database by scanning the barcode listed on its monitoring sheet. A sensor is mounted on the machine to measure the actual machining time. The system's outputs are the actual processing time and machine capacity information. The system is connected wirelessly to a workshop planning application belonging to the firm. Test results indicated that all functions of the system run properly. The system successfully enables supervisors, PPIC staff or higher-level management to monitor the processing time quickly and with better accuracy.
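
    The current-sensor part of such a scheme amounts to thresholding: time during which the drawn current exceeds an idle level counts as actual machining. The short sketch below assumes a fixed sampling period and an arbitrary on/off threshold, purely for illustration.

```python
def machining_time(current_samples, threshold=0.5, sample_period=1.0):
    """current_samples: current readings (A) taken every sample_period seconds."""
    return sum(sample_period for c in current_samples if c > threshold)

# Example: 3 of 6 samples above the threshold -> 3.0 s of actual machining
samples = [0.1, 0.2, 3.4, 3.6, 3.5, 0.1]
print(f"Actual machining time: {machining_time(samples):.1f} s")
```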

  2. Analyzing data flows of WLCG jobs at batch job level

    Science.gov (United States)

    Kuehn, Eileen; Fischer, Max; Giffels, Manuel; Jung, Christopher; Petzold, Andreas

    2015-05-01

    With the introduction of federated data access to the workflows of WLCG, it is becoming increasingly important for data centers to understand specific data flows regarding storage element accesses, firewall configurations, as well as the scheduling of batch jobs themselves. As existing batch system monitoring and related system monitoring tools do not support measurements at batch job level, a new tool has been developed and put into operation at the GridKa Tier 1 center for monitoring continuous data streams and characteristics of WLCG jobs and pilots. Long term measurements and data collection are in progress. These measurements have already proven useful for analyzing misbehaviour and various issues. Therefore we aim for an automated, real-time approach for anomaly detection. As a requirement, prototypes for standard workflows have to be examined. Based on measurements over several months, different features of HEP jobs are evaluated regarding their effectiveness for data mining approaches to identify these common workflows. The paper will introduce the actual measurement approach and statistics as well as the general concept and first results classifying different HEP job workflows derived from the measurements at GridKa.
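
    One possible shape of the data-mining step is clustering jobs by a few per-job features; the feature choice and the values below are invented, and scikit-learn's KMeans stands in for whatever method the study actually settles on.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-job feature vectors: [CPU efficiency, GB read remotely, GB read locally]
jobs = np.array([
    [0.95, 0.1, 20.0],   # CPU-bound, reconstruction-like jobs
    [0.92, 0.2, 18.0],
    [0.35, 15.0, 0.5],   # I/O-bound, analysis-like jobs using remote federated access
    [0.40, 12.0, 0.8],
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(jobs)
for label, features in zip(model.labels_, jobs):
    print(label, features)
```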

  3. Monitoring of a micro-smart grid: Power consumption data of some machineries of an agro-industrial test site.

    Science.gov (United States)

    Fabrizio, Enrico; Biglia, Alessandro; Branciforti, Valeria; Filippi, Marco; Barbero, Silvia; Tecco, Giuseppe; Mollo, Paolo; Molino, Andrea

    2017-02-01

    For the management of a (micro)-smart grid it is important to know the patterns of the load profiles and of the generators. In this article the power consumption data obtained through a monitoring activity developed on a micro-smart grid in an agro-industrial test-site are presented. In particular, this article reports the synthesis of the monitoring results of 5 loads (5 industrial machineries for crop micronization, corncob crushing and other similar processes). How these data were used within a monitoring and managing scheme of a micro-smart grid can be found in (E. Fabrizio, V. Branciforti, A. Costantino, M. Filippi, S. Barbero, G. Tecco, P. Mollo, A. Molino, 2017) [1]. The data can be useful for other researchers in order to create benchmarks of energy use and to input appropriate energy demand values in optimization tools for the industrial sector.

  4. Monitoring of a micro-smart grid: Power consumption data of some machineries of an agro-industrial test site

    Directory of Open Access Journals (Sweden)

    Enrico Fabrizio

    2017-02-01

    Full Text Available For the management of a (micro)-smart grid it is important to know the patterns of the load profiles and of the generators. In this article the power consumption data obtained through a monitoring activity developed on a micro-smart grid in an agro-industrial test-site are presented. In particular, this article reports the synthesis of the monitoring results of 5 loads (5 industrial machineries for crop micronization, corncob crushing and other similar processes). How these data were used within a monitoring and managing scheme of a micro-smart grid can be found in (E. Fabrizio, V. Branciforti, A. Costantino, M. Filippi, S. Barbero, G. Tecco, P. Mollo, A. Molino, 2017) [1]. The data can be useful for other researchers in order to create benchmarks of energy use and to input appropriate energy demand values in optimization tools for the industrial sector.

  5. The work-life balance and job satisfaction: results of Netherlands monitoring data

    NARCIS (Netherlands)

    Smulders, P.

    2006-01-01

    The seminar was divided into three parts: a conceptual discussion; an examination of job satisfaction and work organisation; and an examination of job satisfaction and work–life balance. Session three: job satisfaction and work-life balance

  6. Monitoring of a micro-smart grid: Power consumption data of some machineries of an agro-industrial test site

    OpenAIRE

    Fabrizio, Enrico; Biglia, Alessandro; Branciforti, Valeria; Filippi, Marco; Barbero, Silvia; Tecco, Giuseppe; Mollo, Paolo; Molino, Andrea

    2016-01-01

    For the management of a (micro)-smart grid it is important to know the patterns of the load profiles and of the generators. In this article the power consumption data obtained through a monitoring activity developed on a micro-smart grid in an agro-industrial test-site are presented. In particular, this article reports the synthesis of the monitoring results of 5 loads (5 industrial machineries for crop micronization, corncob crushing and other similar processes). How these data were used within a mon...

  7. Online model-based fault detection for grid connected PV systems monitoring

    KAUST Repository

    Harrou, Fouzi

    2017-12-14

    This paper presents an efficient fault detection approach to monitor the direct current (DC) side of photovoltaic (PV) systems. The key contribution of this work is combining the flexibility of the single diode model (SDM) with the efficiency of the cumulative sum (CUSUM) chart to detect incipient faults. In fact, the unknown electrical parameters of the SDM are first identified using an efficient heuristic algorithm, the Artificial Bee Colony algorithm. Then, based on the identified parameters, a simulation model is built and validated using a co-simulation between Matlab/Simulink and PSIM. Next, the peak power (Pmpp) residuals of the entire PV array are generated from both the real measured and the simulated Pmpp values. The residuals are used as the input to the CUSUM scheme to detect potential faults. We validate the effectiveness of this approach using practical data from an actual 20 MWp grid-connected PV system located in the province of Adrar, Algeria.
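
    A minimal one-sided CUSUM sketch over peak-power residuals (measured minus simulated Pmpp), accumulating evidence of a persistent negative drift; the reference value k and decision threshold h are illustrative and not the paper's tuning.

```python
def cusum_alarms(residuals, k=0.5, h=4.0):
    """Detect a persistent negative shift (power loss) in standardized residuals."""
    g = 0.0
    alarms = []
    for i, r in enumerate(residuals):
        g = max(0.0, g - r - k)      # accumulate evidence that residuals drifted negative
        if g > h:
            alarms.append(i)
            g = 0.0                  # restart after an alarm
    return alarms

healthy = [0.1, -0.2, 0.0, 0.3, -0.1]
faulty  = [-1.2, -1.5, -1.1, -1.4, -1.3, -1.6]
print(cusum_alarms(healthy + faulty))    # alarm raised once the fault persists
```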

  8. Assessment of LTE Wireless Access for Monitoring of Energy Distribution in the Smart Grid

    DEFF Research Database (Denmark)

    Madueño, Germán Corrales; Nielsen, Jimmy Jessen; Min Kim, Dong

    2016-01-01

    While LTE is becoming widely rolled out for human-type services, it is also a promising solution for cost-efficient connectivity of the smart grid monitoring equipment. This is a type of machine-to-machine (M2M) traffic that consists mainly of sporadic uplink transmissions. In such a setting......, the amount of traffic that can be served in a cell is not constrained by the data capacity, but rather by the signaling constraints in the PRACH, PDCCH, and PUSCH channels. In this paper we explore these limitations using a detailed simulation of the LTE access reservation protocol (ARP). We find that 1...... method, with reduced number of signaling messages, needs to be considered in standardization for M2M applications. Additionally we propose a tractable analytical model that accurately evaluates the outage for a given scenario at click-speed. The model accounts for the features of the PRACH, PDCCH, PDSCH...

  9. Development of Smart Grid for Community and Cyber based Landslide Hazard Monitoring and Early Warning System

    Science.gov (United States)

    Karnawati, D.; Wilopo, W.; Fathani, T. F.; Fukuoka, H.; Andayani, B.

    2012-12-01

    A Smart Grid is a cyber-based tool to facilitate a network of sensors for monitoring and communicating landslide hazard and providing early warning. The sensors are designed as electronic sensors installed in the existing monitoring and early warning instruments, and also as human sensors, comprising selected committed people in the local community, such as local surveyors, local observers, members of the local task force for disaster risk reduction, and any person in the local community who has registered a commitment to send reports related to the landslide symptoms observed in their living environment. The tool is designed to be capable of receiving up to thousands of reports at the same time through the electronic sensors, text messages (mobile phone), an online participatory web portal, as well as various social media such as Twitter and Facebook. The information that should be recorded/reported by the sensors relates to the parameters of landslide symptoms, for example the progress of crack occurrence, ground subsidence or ground deformation. Within 10 minutes, the tool is able to automatically elaborate and analyse the reported symptoms to predict the landslide hazard and risk levels. The predicted level of hazard/risk can be sent back to the network of electronic and human sensors as early warning information. The key parameters indicating the symptoms of landslide hazard were recorded/monitored by the electronic and human sensors. Those parameters were identified based on an investigation of the geological and geotechnical conditions, supported by laboratory analysis. The cause and triggering mechanism of landslides in the study area were also analysed in order to define the critical condition for launching the early warning. However, not only the technical but also the social system was developed to raise community awareness and commitment to serve the mission as the human sensors, which will

  10. Running CMS remote analysis builder jobs on advanced resource connector middleware

    International Nuclear Information System (INIS)

    Edelmann, E; Happonen, K; Koivumäki, J; Lindén, T; Välimaa, J

    2011-01-01

    CMS user analysis jobs are distributed over the grid with the CMS Remote Analysis Builder application (CRAB). According to the CMS computing model the applications should run transparently on the different grid flavours in use. In CRAB this is handled with different plugins that are able to submit to different grids. Recently a CRAB plugin for submitting to the Advanced Resource Connector (ARC) middleware has been developed. The CRAB ARC plugin enables simple and fast job submission with full job status information available. CRAB can be used with a server which manages and monitors the grid jobs on behalf of the user. In the presentation we will report on the CRAB ARC plugin and on the status of integrating it with the CRAB server and compare this with using the gLite ARC interoperability method for job submission.

  11. Using CREAM and CEMonitor for job submission and management in the gLite middleware

    International Nuclear Information System (INIS)

    Aiftimiei, C; Andreetto, P; Bertocco, S; Dalla Fina, S; Dorigo, A; Frizziero, E; Gianelle, A; Mazzucato, M; Sgaravatto, M; Traldi, S; Zangrando, L; Marzolla, M; Lorenzo, P Mendez; Miccio, V

    2010-01-01

    In this paper we describe the use of CREAM and CEMonitor services for job submission and management within the gLite Grid middleware. Both CREAM and CEMonitor address one of the most fundamental operations of a Grid middleware, that is job submission and management. Specifically, CREAM is a job management service used for submitting, managing and monitoring computational jobs. CEMonitor is an event notification framework, which can be coupled with CREAM to provide the users with asynchronous job status change notifications. Both components have been integrated in the gLite Workload Management System by means of ICE (Interface to CREAM Environment). These software components have been released for production in the EGEE Grid infrastructure and, for what concerns the CEMonitor service, also in the OSG Grid. In this paper we report the current status of these services, the achieved results, and the issues that still have to be addressed.

  12. GANGA: a user-Grid interface for Atlas and LHCb

    CERN Document Server

    Harrison, K; Mato, P; Soroko, A; Tan, C L; Tull, C E; Brook, N; Jones, R W L

    2003-01-01

    The Gaudi/Athena and Grid Alliance (GANGA) is a front-end for the configuration, submission, monitoring, bookkeeping, output collection, and reporting of computing jobs run on a local batch system or on the grid. In particular, GANGA handles jobs that use applications written for the Gaudi software framework shared by the Atlas and LHCb experiments. GANGA exploits the commonality of Gaudi-based computing jobs, while insulating against grid-, batch- and framework-specific technicalities, to maximize end-user productivity in defining, configuring, and executing jobs. Designed for a python-based component architecture, GANGA has a modular underpinning and is therefore well placed for contributing to, and benefiting from, work in related projects. Its functionality is accessible both from a scriptable command-line interface, for expert users and automated tasks, and through a graphical interface, which simplifies the interaction with GANGA for beginning and casual users. This paper presents the GANGA design and ...
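
    A minimal sketch of the kind of scriptable job definition GANGA exposes. It is intended to run inside a Ganga session (e.g. `ganga script.py`), where the GPI names Job, Executable and Local are provided; attribute names can differ between versions, so treat this as indicative rather than exact.

```python
# Runs inside a Ganga session, where Job, Executable and Local are GPI built-ins.
j = Job(name='hello-ganga')
j.application = Executable(exe='/bin/echo', args=['Hello from Ganga'])
j.backend = Local()            # swap for a grid backend to target distributed resources
j.submit()

print(j.id, j.status)          # GANGA keeps the bookkeeping across submissions
```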

  13. The pilot way to Grid resources using glideinWMS

    CERN Document Server

    Sfiligoi, Igor; Holzman, Burt; Mhashilkar, Parag; Padhi, Sanjay; Wurthwein, Frank

    Grid computing has become very popular in big and widespread scientific communities with high computing demands, like high energy physics. Computing resources are being distributed over many independent sites with only a thin layer of grid middleware shared between them. This deployment model has proven to be very convenient for computing resource providers, but has introduced several problems for the users of the system, the three major ones being the complexity of job scheduling, the non-uniformity of compute resources, and the lack of good job monitoring. Pilot jobs address all the above problems by creating a virtual private computing pool on top of grid resources. This paper presents both the general pilot concept, as well as a concrete implementation, called glideinWMS, deployed in the Open Science Grid.

  14. The Anatomy of a Grid portal

    International Nuclear Information System (INIS)

    Licari, Daniele; Calzolari, Federico

    2011-01-01

    In this paper we introduce a new way to deal with Grid portals, referring to our implementation. L-GRID is a light portal to access the EGEE/EGI Grid infrastructure via the Web, allowing users to submit their jobs from a common Web browser in a few minutes, without any knowledge about the Grid infrastructure. It provides control over the complete lifecycle of a Grid job, from its submission and status monitoring to the output retrieval. The system, implemented as a client-server architecture, is based on the Globus Grid middleware. The client-side application is based on a Java applet; the server relies on a Globus User Interface. There is no need for user registration on the server side, and the user needs only his own X.509 personal certificate. The system is user-friendly, secure (it uses the SSL protocol and mechanisms for dynamic delegation and identity creation in public key infrastructures), highly customizable, open source, and easy to install. The X.509 personal certificate never leaves the local machine. The system reduces the time spent on job submission, while granting at the same time higher efficiency and a better security level in proxy delegation and management.

  15. Constructing the ASCI computational grid

    Energy Technology Data Exchange (ETDEWEB)

    BEIRIGER, JUDY I.; BIVENS, HUGH P.; HUMPHREYS, STEVEN L.; JOHNSON, WILBUR R.; RHEA, RONALD E.

    2000-06-01

    The Accelerated Strategic Computing Initiative (ASCI) computational grid is being constructed to interconnect the high performance computing resources of the nuclear weapons complex. The grid will simplify access to the diverse computing, storage, network, and visualization resources, and will enable the coordinated use of shared resources regardless of location. To match existing hardware platforms, required security services, and current simulation practices, the Globus MetaComputing Toolkit was selected to provide core grid services. The ASCI grid extends Globus functionality by operating as an independent grid, incorporating Kerberos-based security, interfacing to Sandia's Cplant(TM), and extending job monitoring services. To fully meet ASCI's needs, the architecture layers distributed work management and criteria-driven resource selection services on top of Globus. These services simplify the grid interface by allowing users to simply request "run code X anywhere". This paper describes the initial design and prototype of the ASCI grid.

  16. Weather radar performance monitoring using a metallic-grid ground-scatterer

    Science.gov (United States)

    Falconi, Marta Tecla; Montopoli, Mario; Marzano, Frank Silvio; Baldini, Luca

    2017-10-01

    The use of ground return signals is investigated for checks on the calibration of the power measurements of a polarimetric C-band radar. To this aim, a peculiar permanent single scatterer (PSS), consisting of a big metallic roof with a periodic mesh-grid structure and a hemisphere-like shape, is considered. It is positioned in the near-field region of the weather radar, and its use as a reference calibrator shows fairly good results in terms of reflectivity and differential reflectivity monitoring. In addition, the use of the PSS indirectly allows checking for radar antenna de-pointing, another issue that is usually underestimated when dealing with weather radars. Because of the periodic structure of the considered PSS, simulations of its electromagnetic behavior were relatively easy to perform. To this end, we used an electromagnetic Computer-Aided Design (CAD) tool with an ad hoc numerical implementation of a full-wave solution to model the PSS in terms of reflectivity and differential reflectivity factor. Comparisons of model results and experimental measurements are then shown in this work. Our preliminary investigation can pave the way for future studies aiming at characterizing ground-clutter returns more accurately for radar calibration purposes.

  17. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    CERN Document Server

    INSPIRE-00416173; Kebschull, Udo

    2015-01-01

    Grids allow users flexible, on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is the one used by the ALICE experiment at the European Organization for Nuclear Research, CERN. Physicists can submit jobs used to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges. They are interesting targets for attackers seeking huge computational resources. Since users can execute arbitrary code on the worker nodes of the Grid sites, special care must be taken in this environment. Automatic tools to harden and monitor this scenario are required. Currently, there is no integrated solution for this requirement. This paper describes a new security framework that allows execution of job payloads in a sandboxed context. It also allows process behavior monitoring to detect intrusions, even when new attack methods or zero-day vulnerabilities are exploited, by a Machin...

  18. Research on the optimization of air quality monitoring station layout based on spatial grid statistical analysis method.

    Science.gov (United States)

    Li, Tianxin; Zhou, Xing Chen; Ikhumhen, Harrison Odion; Difei, An

    2018-05-01

    In recent years, with the significant increase in urban development, it has become necessary to optimize the current air monitoring stations so that they reflect the quality of air in the environment. To highlight the spatial representativeness of some air monitoring stations, Beijing's regional air monitoring station data from 2012 to 2014 were used: the monthly mean particulate matter concentration (PM10) in the region was calculated, and the spatial distribution of PM10 concentration in the whole region was deduced through the IDW interpolation method and a spatial grid statistical method using GIS. The spatial distribution variation across districts in Beijing was analysed using the gridding model, and through the 3-year spatial analysis of PM10 concentration data, including the variation and spatial overlay on a 1.5 km × 1.5 km cell resolution grid, the spatial distribution result obtained showed that the total PM10 concentration frequency variation exceeded the standard. It is very important to optimize the layout of the existing air monitoring stations by combining the concentration distribution of air pollutants with the spatial region using GIS.
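
    The IDW interpolation step can be sketched as follows; the station coordinates, PM10 values, grid extent and power parameter are all invented for illustration and are not the study's data.

```python
import numpy as np

def idw_grid(stations_xy, values, grid_x, grid_y, power=2.0):
    """Inverse-distance-weighted interpolation of station values onto a regular grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    grid = np.zeros_like(gx, dtype=float)
    for i in range(gx.shape[0]):
        for j in range(gx.shape[1]):
            d = np.hypot(stations_xy[:, 0] - gx[i, j], stations_xy[:, 1] - gy[i, j])
            if np.any(d < 1e-9):                       # grid cell coincides with a station
                grid[i, j] = values[np.argmin(d)]
            else:
                w = 1.0 / d ** power
                grid[i, j] = np.sum(w * values) / np.sum(w)
    return grid

stations = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])   # station positions (km)
pm10 = np.array([80.0, 120.0, 95.0])                          # monthly mean PM10 (ug/m3)
field = idw_grid(stations, pm10, np.linspace(0, 10, 8), np.linspace(0, 8, 6))
print(field.round(1))
```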

  19. Adaptive Monitoring and Control Architectures for Power Distribution Grids over Heterogeneous ICT Networks

    DEFF Research Database (Denmark)

    Olsen, Rasmus Løvenstein; Hägerling, Christian; Kurtz, Fabian M.

    2014-01-01

    ) and the quality of the power may become costly. In this light, Smart Grids may provide an answer towards a more active and efficient electrical network. The EU project SmartC2Net aims to enable smart grid operations over imperfect, heterogeneous general purpose networks which poses a significant challenge...... to the reliability due to the stochastic behaviour found in such networks. Therefore, key concepts are presented in this paper targeting the support of proper smart grid control in these network environments. An overview on the required Information and Communication Technology (ICT) architecture and its...

  20. Grid reliability

    CERN Document Server

    Saiz, P; Rocha, R; Andreeva, J

    2007-01-01

    We are offering a system to track the efficiency of different components of the GRID. We can study the performance of both the WMS and the data transfers. At the moment, we have set up different parts of the system for ALICE, ATLAS, CMS and LHCb. None of the components that we have developed are VO specific, therefore it would be very easy to deploy them for any other VO. Our main goal is basically to improve the reliability of the GRID. The main idea is to discover as soon as possible the different problems that have happened, and inform the responsible parties. Since we study the jobs and transfers issued by real users, we see the same problems that users see. As a matter of fact, we see even more problems than the end user does, since we are also interested in following up the errors that GRID components can overcome by themselves (for instance, in case of a job failure, resubmitting the job to a different site). This kind of information is very useful to site and VO administrators. They can find out the efficien...

  1. A Cognitive Radio-Based Energy-Efficient System for Power Transmission Line Monitoring in Smart Grids

    Directory of Open Access Journals (Sweden)

    Saeed Ahmed

    2017-01-01

    Full Text Available The research in industry and academia on smart grids is predominantly focused on the regulation of generated power and the management of its consumption. Because the transmission of bulk-generated power to the consumer relies heavily on secure and efficient transmission grids, comprising huge electrical and mechanical assets spanning a vast geographic area, there is a pressing need to focus on the transmission grids as well. Despite the challenges in wireless technologies for SGs, cognitive radio networks are considered promising for the provisioning of communication services to SGs. In this paper, first, we present an IEEE 802.22 wireless regional area network cognitive radio-based network model for smart monitoring of transmission lines. Then, for a prolonged lifetime of the battery-constrained monitoring network, we formulate the spectrum resource allocation problem as an energy efficiency maximization problem, which is a nonlinear integer programming problem. To solve this problem more easily, we propose an energy-efficient resource-assignment scheme based on the Hungarian method. Performance analysis shows that, compared to a pure opportunistic assignment scheme with a throughput maximization objective and compared to a random scheme, the proposed scheme results in an enhanced lifetime while consuming less battery energy without compromising throughput performance.
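
    The Hungarian-method assignment step can be illustrated with SciPy's linear_sum_assignment; the energy-efficiency matrix below (an assumed figure of merit for giving channel j to monitoring node i) is invented, and maximization is obtained by negating the cost matrix.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# energy_efficiency[i, j]: assumed efficiency of assigning channel j to monitoring node i
energy_efficiency = np.array([
    [4.0, 2.5, 3.1],
    [3.2, 4.8, 2.0],
    [2.9, 3.0, 5.2],
])

# linear_sum_assignment minimizes cost, so maximize efficiency by negating it.
rows, cols = linear_sum_assignment(-energy_efficiency)
for node, channel in zip(rows, cols):
    print(f"node {node} -> channel {channel} (efficiency {energy_efficiency[node, channel]})")
print("total efficiency:", energy_efficiency[rows, cols].sum())
```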

  2. AstroGrid-D: Grid technology for astronomical science

    Science.gov (United States)

    Enke, Harry; Steinmetz, Matthias; Adorf, Hans-Martin; Beck-Ratzka, Alexander; Breitling, Frank; Brüsemeister, Thomas; Carlson, Arthur; Ensslin, Torsten; Högqvist, Mikael; Nickelt, Iliya; Radke, Thomas; Reinefeld, Alexander; Reiser, Angelika; Scholl, Tobias; Spurzem, Rainer; Steinacker, Jürgen; Voges, Wolfgang; Wambsganß, Joachim; White, Steve

    2011-02-01

    We present the status and results of AstroGrid-D, a joint effort of astrophysicists and computer scientists to employ grid technology for scientific applications. AstroGrid-D provides access to a network of distributed machines with a set of commands as well as software interfaces. It allows simple use of computing and storage facilities and makes it possible to schedule or monitor compute tasks and data management. It is based on the Globus Toolkit middleware (GT4). Chapter 1 describes the context which led to the demand for advanced software solutions in astrophysics, and we state the goals of the project. We then present characteristic astrophysical applications that have been implemented on AstroGrid-D in chapter 2. We describe simulations of different complexity, compute-intensive calculations running on multiple sites (Section 2.1), and advanced applications for specific scientific purposes (Section 2.2), such as a connection to robotic telescopes (Section 2.2.3). We can show from these examples how grid execution improves, e.g., the scientific workflow. Chapter 3 explains the software tools and services that we adapted or newly developed. Section 3.1 focuses on the administrative aspects of the infrastructure, to manage users and monitor activity. Section 3.2 characterises the central components of our architecture: the AstroGrid-D information service to collect and store metadata, a file management system, the data management system, and a job manager for automatic submission of compute tasks. We summarise the successfully established infrastructure in chapter 4, concluding with our future plans to establish AstroGrid-D as a platform of modern e-Astronomy.

  3. Damage mapping in structural health monitoring using a multi-grid architecture

    Energy Technology Data Exchange (ETDEWEB)

    Mathews, V. John [Dept. of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT 84112 (United States)

    2015-03-31

    This paper presents a multi-grid architecture for tomography-based damage mapping of composite aerospace structures. The system employs an array of piezo-electric transducers bonded to the structure. Each transducer may be used as an actuator as well as a sensor. The structure is excited sequentially using the actuators, and the guided waves arriving at the sensors in response to the excitations are recorded for further analysis. The sensor signals are compared to their baseline counterparts and a damage index is computed for each actuator-sensor pair. These damage indices are then used as inputs to the tomographic reconstruction system. Preliminary damage maps are reconstructed on multiple coordinate grids defined on the structure. These grids are shifted versions of each other, where the shift is a fraction of the spatial sampling interval associated with each grid. These preliminary damage maps are then combined to provide a reconstruction that is more robust to measurement noise in the sensor signals and to the ill-conditioned problem formulation of single-grid algorithms. Experimental results included in the paper, obtained on a composite structure whose complexity is representative of aerospace structures, demonstrate that for sufficiently high sensor densities the algorithm of this paper is capable of providing damage detection and characterization with accuracy comparable to traditional C-scan and A-scan-based ultrasound non-destructive inspection systems, quickly and without human supervision.
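
    A highly simplified sketch of the multi-grid idea: per-path damage indices are spread onto several grids shifted by a fraction of the cell size, and the resulting maps are crudely averaged as a stand-in for the paper's fusion step. The path-weighting kernel and all numbers are assumptions, not the actual tomographic reconstruction.

```python
import numpy as np

def single_grid_map(paths, indices, xs, ys, width=0.05):
    """Spread each path's damage index over grid cells near the actuator-sensor line."""
    gx, gy = np.meshgrid(xs, ys)
    dmg = np.zeros_like(gx)
    for (act, sen), di in zip(paths, indices):
        a, s = np.asarray(act, float), np.asarray(sen, float)
        ab = s - a
        ap = np.stack([gx - a[0], gy - a[1]], axis=-1)
        t = np.clip((ap @ ab) / (ab @ ab), 0.0, 1.0)
        closest = a + t[..., None] * ab                  # nearest point on the path segment
        dist = np.hypot(gx - closest[..., 0], gy - closest[..., 1])
        dmg += di * np.exp(-(dist / width) ** 2)         # Gaussian weighting around the path
    return dmg

def multi_grid_map(paths, indices, n=20,
                   shifts=((0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5))):
    """Average maps built on grids shifted by fractions of the cell size (crude fusion)."""
    dx = 1.0 / n
    maps = []
    for sx, sy in shifts:
        xs = np.linspace(0.0, 1.0, n) + sx * dx
        ys = np.linspace(0.0, 1.0, n) + sy * dx
        maps.append(single_grid_map(paths, indices, xs, ys))
    return np.mean(maps, axis=0)

paths = [((0.0, 0.5), (1.0, 0.5)), ((0.5, 0.0), (0.5, 1.0))]   # two actuator-sensor pairs
indices = [0.8, 0.6]                                           # example damage indices
print(round(float(multi_grid_map(paths, indices).max()), 3))
```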

  4. Secure Real-Time Monitoring and Management of Smart Distribution Grid Using Shared Cellular Networks

    NARCIS (Netherlands)

    Nielsen, J.J.; Ganem, H.; Jorguseski, L.; Alic, K.; Smolnikar, M.; Zhu, Z.; Pratas, N.K.; Golinski, M.; Zhang, H.; Kuhar, U.; Fan, Z.; Svigelj, A.

    2017-01-01

    Electricity production and distribution are facing two major changes. First, production is shifting from classical energy sources such as coal and nuclear power toward renewable resources such as solar and wind. Second, consumption in the low voltage grid is expected to grow significantly due to the

  5. Employing peer-to-peer software distribution in ALICE Grid Services to enable opportunistic use of OSG resources

    CERN Multimedia

    CERN. Geneva; Sakrejda, Iwona

    2012-01-01

    The ALICE Grid infrastructure is based on AliEn, a lightweight open source framework built on Web Services and a Distributed Agent Model in which job agents are submitted onto a grid site to prepare the environment and pull work from a central task queue located at CERN. In the standard configuration, each ALICE grid site supports an ALICE-specific VO box as a single point of contact between the site and the ALICE central services. VO box processes monitor site utilization and job requests (ClusterMonitor), monitor dynamic job and site properties (MonaLisa), perform job agent submission (CE) and deploy job-specific software (PackMan). In particular, requiring a VO box at each site simplifies deployment of job software, done onto a shared file system at the site, and adds redundancy to the overall Grid system. ALICE offline computing, however, has also implemented a peer-to-peer method (based on BitTorrent) for downloading job software directly onto each worker node as needed. By utilizing both this peer-...

  6. CDF experience with monte carlo production using LCG grid

    International Nuclear Information System (INIS)

    Griso, S P; Lucchesi, D; Compostella, G; Sfiligoi, I; Cesini, D

    2008-01-01

    The upgrades of the Tevatron collider and the CDF detector have considerably increased the demand on computing resources, in particular for Monte Carlo production. This has forced the collaboration to move beyond the usage of dedicated resources and start exploiting the Grid. The CDF Analysis Farm (CAF) model has been reimplemented as LcgCAF in order to access Grid resources by using the LCG/EGEE middleware. Many sites in Italy and in Europe are accessed through this portal by CDF users, mainly to produce Monte Carlo data but also for other analysis jobs. We review here the setup used to submit jobs to Grid sites and retrieve the output, including the CDF-specific configuration of some Grid components. We also describe the batch and interactive monitoring tools developed to allow users to verify the job status during its lifetime in the Grid environment. Finally we analyze the efficiency and typical failure modes of the current Grid infrastructure, reporting the performance of different parts of the system used.

  7. Length of stay for patients undergoing invasive electrode monitoring with stereoelectroencephalography and subdural grids correlates positively with increased institutional profitability.

    Science.gov (United States)

    Chan, Alvin Y; Kharrat, Sohayla; Lundeen, Kelly; Mnatsakanyan, Lilit; Sazgar, Mona; Sen-Gupta, Indranil; Lin, Jack J; Hsu, Frank P K; Vadera, Sumeet

    2017-06-01

    Lowering the length of stay (LOS) is thought to potentially decrease hospital costs and is a metric commonly used to manage capacity. Patients with epilepsy undergoing intracranial electrode monitoring may have longer LOS because the time to seizure is difficult to predict or control. This study investigates the economic implications of increased LOS in patients undergoing invasive electrode monitoring for epilepsy. We retrospectively collected and analyzed data for 76 patients who underwent invasive monitoring with either subdural grid (SDG) implantation or stereoelectroencephalography (SEEG) over 2 years at our institution. Data points collected included invasive electrode type, LOS, profit margin, contribution margin, insurance type, and complication rates. LOS correlated positively with both profit and contribution margins, meaning that as LOS increased, both the profit and contribution margins rose, and there was a low rate of complications in this patient group. This relationship was seen across a variety of insurance providers. These data suggest that LOS may not be the best metric to assess invasive monitoring patients (i.e., SEEG or SDG), and that increased LOS does not necessarily equate with lower or negative institutional financial gain. Further research into LOS should focus on specific specialties, as each may differ in terms of financial implications. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.

  8. Cyber-Physical Attack-Resilient Wide-Area Monitoring, Protection, and Control for the Power Grid

    Energy Technology Data Exchange (ETDEWEB)

    Ashok, Aditya; Govindarasu, Manimaran; Wang, Jianhui

    2017-07-01

    Cyber security and resiliency of Wide-Area Monitoring, Protection and Control (WAMPAC) applications is critically important to ensure secure, reliable, and economic operation of the bulk power system. WAMPAC relies heavily on the security of measurements and control commands transmitted over wide-area communication networks for real-time operational, protection, and control functions. Also, the current “N-1 security criterion” for grid operation is inadequate to address malicious cyber events, and therefore it is important to fundamentally redesign WAMPAC and to enhance Energy Management System (EMS) applications to make them attack-resilient. In this paper, we propose an end-to-end defense-in-depth architecture for attack-resilient WAMPAC that addresses resilience at both the infrastructure layer and the application layer. Also, we propose an attack-resilient cyber-physical security framework that encompasses the entire security life cycle, including risk assessment, attack prevention, attack detection, attack mitigation, and attack resilience. The overarching objective of this paper is to provide a broad scope that comprehensively describes most of the major research issues and potential solutions in the context of cyber-physical security of WAMPAC for the power grid.

  9. Job submission and management through web services the experience with the CREAM service

    CERN Document Server

    Aiftimiei, C; Bertocco, S; Fina, S D; Ronco, S D; Dorigo, A; Gianelle, A; Marzolla, M; Mazzucato, M; Sgaravatto, M; Verlato, M; Zangrando, L; Corvo, M; Miccio, V; Sciabà, A; Cesini, D; Dongiovanni, D; Grandi, C

    2008-01-01

    Modern Grid middleware is built around components providing basic functionality, such as data storage, authentication, security, job management, resource monitoring and reservation. In this paper we describe the Computing Resource Execution and Management (CREAM) service. CREAM provides a Web service-based job execution and management capability for Grid systems; in particular, it is being used within the gLite middleware. CREAM exposes a Web service interface allowing conforming clients to submit and manage computational jobs to a Local Resource Management System. We developed a special component, called ICE (Interface to CREAM Environment) to integrate CREAM in gLite. ICE transfers job submissions and cancellations from the Workload Management System, allowing users to manage CREAM jobs from the gLite User Interface. This paper describes some recent studies aimed at assessing the performance and reliability of CREAM and ICE; those tests have been performed as part of the acceptance tests for integration of ...
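
    As an illustration of the kind of job description CREAM accepts, the sketch below builds a minimal JDL file and hands it to the gLite command-line client from Python. The CE endpoint, queue name and client flags are placeholders and assumptions rather than values from the paper; in a gLite installation the submission would normally go through glite-ce-job-submit directly or, as described above, through the Workload Management System via ICE.

        # Minimal sketch: submitting a simple job to a CREAM CE from Python.
        # The endpoint, queue and CLI flags are illustrative assumptions.
        import subprocess
        import tempfile

        JDL = """[
          Executable    = "/bin/hostname";
          StdOutput     = "job.out";
          StdError      = "job.err";
          OutputSandbox = {"job.out", "job.err"};
        ]"""

        def submit_to_cream(ce="cream-ce.example.org:8443/cream-pbs-short"):
            """Write the JDL to a temporary file and call the CREAM CLI."""
            with tempfile.NamedTemporaryFile("w", suffix=".jdl", delete=False) as f:
                f.write(JDL)
                jdl_path = f.name
            # '-a' asks for automatic proxy delegation, '-r' names the CREAM
            # resource; treat both flags as assumptions about the gLite CLI.
            cmd = ["glite-ce-job-submit", "-a", "-r", ce, jdl_path]
            result = subprocess.run(cmd, capture_output=True, text=True)
            print(result.stdout or result.stderr)  # job ID printed on success

        if __name__ == "__main__":
            submit_to_cream()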

  10. Pseudo-interactive monitoring in distributed computing

    International Nuclear Information System (INIS)

    Sfiligoi, I.; Bradley, D.; Livny, M.

    2009-01-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.
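
    The core idea, a whitelisted set of read-only inspection commands executed in the sandbox of an already running job, can be sketched independently of Condor. The snippet below is a hypothetical monitoring hook (not the authors' implementation) that a pilot or job wrapper could expose; the sandbox path in the example is invented.

        # Hedged sketch of pseudo-interactive monitoring: only whitelisted,
        # read-only commands are run in a job's sandbox and their output returned.
        # Illustrative only; the paper's implementation relies on Condor features.
        import shlex
        import subprocess

        ALLOWED = {"ls", "cat", "top", "ps", "lsof", "netstat"}

        def run_probe(command_line, sandbox_dir):
            """Execute a whitelisted inspection command in the job sandbox."""
            argv = shlex.split(command_line)
            if not argv or argv[0] not in ALLOWED:
                return "refused: command not in whitelist"
            if argv[0] == "top":
                argv = ["top", "-b", "-n", "1"]  # batch mode, no terminal needed
            try:
                done = subprocess.run(argv, cwd=sandbox_dir, capture_output=True,
                                      text=True, timeout=30)
                return done.stdout + done.stderr
            except subprocess.TimeoutExpired:
                return "refused: probe timed out"

        # Example: look at the log of a stuck job (hypothetical sandbox path).
        # print(run_probe("cat stdout.log", "/var/lib/condor/execute/dir_12345"))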

  11. Pseudo-interactive monitoring in distributed computing

    International Nuclear Information System (INIS)

    Sfiligoi, I; Bradley, D; Livny, M

    2010-01-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.

  12. Pseudo-interactive monitoring in distributed computing

    Energy Technology Data Exchange (ETDEWEB)

    Sfiligoi, I.; /Fermilab; Bradley, D.; Livny, M.; /Wisconsin U., Madison

    2009-05-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.

  13. BaBar MC production on the Canadian grid using a web services approach

    International Nuclear Information System (INIS)

    Agarwal, A; Armstrong, P; Desmarais, R; Gable, I; Popov, S; Ramage, S; Schaffer, S; Sobie, C; Sobie, R; Sulivan, T; Vanderster, D; Mateescu, G; Podaima, W; Charbonneau, A; Impey, R; Viswanathan, M; Quesnel, D

    2008-01-01

    The present paper highlights the approach used to design and implement a web services based BaBar Monte Carlo (MC) production grid using Globus Toolkit version 4. The grid integrates the resources of two clusters at the University of Victoria, using the ClassAd mechanism provided by the Condor-G metascheduler. Each cluster uses the Portable Batch System (PBS) as its local resource management system (LRMS). Resource brokering is provided by the Condor matchmaking process, whereby the job and resource attributes are expressed as ClassAds. The important features of the grid are automatic registering of resource ClassAds to the central registry, ClassAds extraction from the registry to the metascheduler for matchmaking, and the incorporation of input/output file staging. Web-based monitoring is employed to track the status of grid resources and jobs for an efficient operation of the grid. The performance of this new grid for BaBar jobs and of the existing Canadian computational grid (GridX1), based on Globus Toolkit version 2, is found to be consistent

  14. CMS Monte Carlo production in the WLCG computing grid

    International Nuclear Information System (INIS)

    Hernandez, J M; Kreuzer, P; Hof, C; Khomitch, A; Mohapatra, A; Filippis, N D; Pompili, A; My, S; Abbrescia, M; Maggi, G; Donvito, G; Weirdt, S D; Maes, J; Mulders, P v; Villella, I; Wakefield, S; Guan, W; Fanfani, A; Evans, D; Flossdorf, A

    2008-01-01

    Monte Carlo production in CMS has received a major boost in performance and scale since the past CHEP06 conference. The production system has been re-engineered in order to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). Operational experience and integration aspects of the new CMS Monte Carlo production system are presented together with an analysis of production statistics. The new system automatically handles job submission, resource monitoring, job queuing, job distribution according to the available resources, data merging, registration of data into the data bookkeeping, data location, data transfer and placement systems. Compared to the previous production system, automation, reliability and performance have been considerably improved. A more efficient use of computing resources and a better handling of the inherent Grid unreliability have resulted in an increase of the production scale by about an order of magnitude, capable of running of the order of ten thousand jobs in parallel and yielding more than two million events per day

  15. Wide area monitoring, protection and control systems the enabler for smarter grids

    CERN Document Server

    Vaccaro, Alfredo

    2016-01-01

    This book is designed to give electrical and electronic engineers involved in the design, operation and maintenance of electrical power networks, the knowledge and skills necessary to deploy synchronised measurement technology (SMT) in Wide Area Monitoring, Protection And Control (WAMPAC) applications.

  16. Monitoring Agent for Detecting Malicious Packet Drops for Wireless Sensor Networks in the Microgrid and Grid-Enabled Vehicles

    Directory of Open Access Journals (Sweden)

    Jongbin Ko

    2012-05-01

    Full Text Available Of the range of wireless communication technologies, wireless sensor networks (WSN) will be one of the most appropriate technologies for the Microgrid and Grid-enabled Vehicles in the Smartgrid. To ensure the security of a WSN, the detection of attacks is more efficient than their prevention because of the lack of computing power. Malicious packet drops are the easiest means of attacking WSNs. Thus, the sensors used for constructing a WSN require a packet drop monitoring agent, such as Watchdog. However, Watchdog has a partial drop problem, such that an attacker can manipulate the packet dropping rate to stay below the minimum misbehaviour monitoring threshold. Furthermore, Watchdog does not consider real traffic situations, such as congestion and collision, and so it has no way of recognizing whether a packet drop is due to a real attack or to network congestion. In this paper, we propose a malicious packet drop monitoring agent which considers traffic conditions. We use the actual traffic volume on neighbouring nodes and the drop rate while monitoring a sending node for a specific period. It is more effective in real network scenarios because, unlike Watchdog, it considers the actual traffic, which only uses the Pathrater. Moreover, our proposed method does not require authentication, packet encryption or detection packets. Thus, there is a lower likelihood of detection failure due to packet spoofing, Man-In-the-Middle attacks or Wormhole attacks. To test the suitability of our proposed concept for a series of network scenarios, we divided the simulations into three types: one attack node, more than one attack node and no attack nodes. The results of the simulations meet our expectations.
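
    A simplified form of the decision rule sketched above, flag a neighbour only when its drop rate exceeds what the observed traffic volume would explain, is shown below; the threshold values and the linear congestion model are illustrative assumptions, not parameters from the paper.

        # Hedged sketch: congestion-aware packet-drop monitoring for a WSN neighbour.
        # Thresholds and the congestion model are illustrative assumptions.
        BASE_THRESHOLD = 0.05      # drop rate tolerated on an idle link
        CONGESTION_FACTOR = 0.20   # extra drop rate tolerated per unit of load

        def is_malicious(forwarded, dropped, observed_load):
            """Return True if a neighbour's drops look like an attack.

            forwarded/dropped: packet counts seen for the neighbour in a window.
            observed_load: normalised traffic volume (0.0 = idle, 1.0 = saturated).
            """
            total = forwarded + dropped
            if total == 0:
                return False              # nothing to judge yet
            drop_rate = dropped / total
            # Tolerate a higher drop rate under heavy traffic, so congestion
            # losses are not mistaken for a malicious drop attack.
            allowed = BASE_THRESHOLD + CONGESTION_FACTOR * observed_load
            return drop_rate > allowed

        # 12 drops out of 100 packets is suspicious on a quiet link ...
        print(is_malicious(88, 12, observed_load=0.1))   # True
        # ... but tolerated when the surrounding traffic is heavy.
        print(is_malicious(88, 12, observed_load=0.6))   # False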

  17. IDENTIFICATION OF MUTUAL INFLUENCE OF BENDING AND TORSIONAL STRAINS OF THE REINFORCED CONCRETE SPACE GRID FLOOR AS PART OF THE MONITORING OF ITS ERECTION

    Directory of Open Access Journals (Sweden)

    Plotnikov Alexey Nikolaevich

    2012-10-01

    Full Text Available The author presents the results of measurements of the total deformations of a space-grid floor in relation to the torsional strain of beams and the rigidity of beams in bending and torsion, collected while monitoring the erection of the floor of a building. Any space grid system is highly sensitive to changes in the relative rigidity of its elements. No experimental data covering space grid floors, nor any method of analysis of their stress-strain state, are available. The author assessed the interrelation between the rigidity of the beams in the two directions by means of a full-scale loading test (monitoring of the monolithic space grid floor, beam size 8.0 × 9.2 m). The purpose of the assessment was to confirm the bearing capacity and the design patterns based on deflections and stresses of elements, in order to select the operational reinforcement value. Monolithic concrete was used to perform the load test. As a result, the width of the concrete ribs was found to be uneven. In the design of reinforced concrete space rib floors it is advisable to develop detailed models of the structures through the finite element method, due to the significant sensitivity of the system to the distribution and redistribution of stresses. Large spans of monolithic space rib floors require monitoring of the stress-strain state and computer simulations to adjust the design pattern on the basis of the monitoring results.

  18. The LHCb Grid Simulation

    CERN Multimedia

    Baranov, Alexander

    2016-01-01

    The LHCb Grid access is based on the LHCbDirac system. It provides access to data and computational resources to researchers at different geographical locations. The Grid has a hierarchical topology with multiple sites distributed over the world. The sites differ from each other in their number of CPUs, amount of disk storage and connection bandwidth. These parameters are essential to the Grid's operation. Moreover, the job scheduling and data distribution strategies have a great impact on the grid performance. However, it is hard to choose appropriate algorithms and strategies as they need a lot of time to be tested on the real grid. In this study, we describe the LHCb Grid simulator. The simulator reproduces the LHCb Grid structure with its sites and their numbers of CPUs, amounts of disk storage and connection bandwidths. We demonstrate how well the simulator reproduces the Grid's behaviour, and show its advantages and limitations. We show how well the simulator reproduces job scheduling and network anomalies, consider methods ...

  19. AliEn - ALICE environment on the GRID

    International Nuclear Information System (INIS)

    Saiz, P.; Aphecetche, L.; Buncic, P.; Piskac, R.; Revsbech, J.-E.; Sego, V.

    2003-01-01

    AliEn (http://alien.cern.ch) (ALICE Environment) is a Grid framework built on top of the latest Internet standards for information exchange and authentication (SOAP, PKI) and common Open Source components. AliEn provides a virtual file catalogue that allows transparent access to distributed datasets and a number of collaborating Web services which implement the authentication, job execution, file transport, performance monitor and event logging. In the paper we will present the architecture and components of the system

  20. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    International Nuclear Information System (INIS)

    Gomez, Andres; Lara, Camilo; Kebschull, Udo

    2015-01-01

    Grids allow users flexible on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used by the ALICE experiment at the European Organization for Nuclear Research (CERN). Physicists can submit jobs used to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges. They are interesting targets for attackers seeking huge computational resources. Since users can execute arbitrary code on the worker nodes of the Grid sites, special care must be taken in this environment. Automatic tools to harden and monitor this scenario are required. Currently, there is no integrated solution for this requirement. This paper describes a new security framework that allows execution of job payloads in a sandboxed context. It also allows process behavior monitoring to detect intrusions, even when new attack methods or zero-day vulnerabilities are exploited, by means of a Machine Learning approach. We plan to implement the proposed framework as a software prototype that will be tested as a component of the ALICE Grid middleware. (paper)

  1. HappyFace as a generic monitoring tool for HEP experiments

    Science.gov (United States)

    Kawamura, Gen; Magradze, Erekle; Musheghyan, Haykuhi; Quadt, Arnulf; Rzehorz, Gerhard

    2015-12-01

    The importance of monitoring on HEP grid computing systems is growing due to a significant increase in their complexity. Computer scientists and administrators have been studying and building effective ways to gather information on and clarify the status of each local grid infrastructure. The HappyFace project aims at making the above-mentioned workflow possible. It aggregates, processes and stores the information and the status of different HEP monitoring resources in the common database of HappyFace. The system displays the information and the status through a single interface. However, this model of HappyFace relied on the monitoring resources, which are always under development in the HEP experiments. Consequently, HappyFace needed to have direct access methods to the grid application and grid service layers in the different HEP grid systems. To cope with this issue, we use a reliable HEP software repository, the CernVM File System. We propose a new implementation and architecture of HappyFace, the so-called grid-enabled HappyFace. It allows its basic framework to connect directly to the grid user applications and the grid collective services, without involving the monitoring resources in the HEP grid systems. This approach gives HappyFace several advantages: portability, to provide an independent and generic monitoring system among the HEP grid systems; functionality, to allow users to run various diagnostic tools on the individual HEP grid systems and grid sites; and flexibility, to make HappyFace beneficial and open for the various distributed grid computing environments. Different grid-enabled modules, to connect to the Ganga job monitoring system and to check the performance of grid transfers among the grid sites, have been implemented. The new HappyFace system has been successfully integrated, and it now displays the information and the status of both the monitoring resources and the direct access to the grid user applications and the grid collective services.

  2. Transient stability enhancement of modern power grid using predictive Wide-Area Monitoring and Control

    Science.gov (United States)

    Yousefian, Reza

    This dissertation presents a real-time Wide-Area Control (WAC) scheme based on artificial intelligence for transient stability enhancement of large-scale modern power systems. The WAC, using the measurements available from Phasor Measurement Units (PMUs) at generator buses, monitors the global oscillations in the system and optimally augments the local excitation systems of the synchronous generators. The complexity of the power system stability problem, along with its uncertainties and nonlinearities, makes conventional modeling impractical or inaccurate. In this work a Reinforcement Learning (RL) algorithm built on Neural Networks (NNs) is used to map the nonlinearities of the system in real time. This method, different from both centralized and decentralized control schemes, employs a number of semi-autonomous agents that collaborate with each other to perform optimal control, well-suited for WAC applications. Also, to handle the delays in Wide-Area Monitoring (WAM) and adapt the RL toward a robust control design, Temporal Difference (TD) learning is proposed as the solver for the RL problem and its optimal cost function. However, the main drawback of such a WAC design is that it is challenging to determine whether an offline-trained network remains valid for assessing the stability of the power system once the system has evolved to a different operating state or network topology. In order to address this generality issue of NNs, a value priority scheme is proposed in this work to design a hybrid of linear and nonlinear controllers. The algorithm, so-called supervised RL, is based on a mixture of experts: it is initialized with the linear controller and, as the performance and identification of the RL controller improve in real time, switches to the nonlinear controller. This work also focuses on transient stability and develops Lyapunov energy functions for synchronous generators to monitor the stability stress of the system. Using such energies as a cost function guarantees the convergence
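
    The temporal-difference element referred to above can be illustrated with a one-line update rule. The sketch below is a plain tabular TD(0) value update over a discretized operating state, a generic stand-in for the neural-network-based controller of the dissertation; the learning rate, discount factor and state labels are invented for illustration.

        # Generic tabular TD(0) update, illustrating only the temporal-difference
        # idea; the dissertation uses neural-network function approximation.
        import collections

        ALPHA = 0.1    # learning rate (illustrative)
        GAMMA = 0.95   # discount factor (illustrative)

        values = collections.defaultdict(float)   # V(s) for discretized states

        def td0_update(state, reward, next_state):
            """V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))."""
            td_error = reward + GAMMA * values[next_state] - values[state]
            values[state] += ALPHA * td_error
            return td_error

        # Example: a transition that damps inter-area oscillations earns a
        # positive reward and raises the value of the originating state.
        print(td0_update("high_oscillation", reward=1.0, next_state="damped"))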

  3. Control and monitoring of electrical distribution grid using automatic meter reading systems

    Energy Technology Data Exchange (ETDEWEB)

    Venancio, Antonio [EDP, Lisboa (Portugal); Loureiro, Carlos [Edinfor - Sistemas Informaticos, Porto (Portugal); Costa, Joao [INESC, Lisboa (Portugal); Rodrigues, Antonio [INETI/DEL, Lisboa (Portugal); Antunes, Jose [EID, Monte da Caparica (Portugal)

    2000-07-01

    Technological innovation undertaken in recent years led to the introduction of electronics in metering apparatus. Deregulation of electricity markets around the world is pushing players to face competition. Supply companies are thus looking for ways to improve efficiency and gain Client's preference and confidence. Taking advantage of the technological innovation, AMR and Energy Management Systems can be a vital weapon in such a competitive scenario. Providing a wide range of services, from remote metering of electricity consumption to monitoring of supply quality, these systems can arm electrical utilities with necessary tools to improve supply efficiency and customer services. Following these trends, electrical utilities around Europe and all over the world are testing AMR and Energy Management solutions, and tailoring systems to their needs and modus operandi. Based on previous experience, gained with R&D work since 1991, EDP, the Portuguese Electrical Utility, is currently working on such systems and planning field trials in several scenarios. This paper presents EDP's approach towards AMR and Energy Management Systems, describing the system architecture and services; the way interoperability with existing commercial and technical management systems is planned; how technical solutions are planned to be tested and field trials are designed. Finally, some results of the ongoing projects will be presented.

  4. A Cognitive Radio-Based Energy-Efficient System for Power Transmission Line Monitoring in Smart Grids

    OpenAIRE

    Ahmed, Saeed; Lee, Young Doo; Hyun, Seung Ho; Koo, Insoo

    2017-01-01

    The research in industry and academia on smart grids is predominantly focused on the regulation of generated power and management of its consumption. Because transmission of bulk-generated power to the consumer is immensely reliant on secure and efficient transmission grids, comprising huge electrical and mechanical assets spanning a vast geographic area, there is an impending need to focus on the transmission grids as well. Despite the challenges in wireless technologies for SGs, cognitive r...

  5. An Efficient Audio Coding Scheme for Quantitative and Qualitative Large Scale Acoustic Monitoring Using the Sensor Grid Approach

    Directory of Open Access Journals (Sweden)

    Félix Gontier

    2017-11-01

    Full Text Available The spreading of urban areas and the growth of human population worldwide raise societal and environmental concerns. To better address these concerns, the monitoring of the acoustic environment in urban as well as rural or wilderness areas is an important matter. Building on the recent development of low cost hardware acoustic sensors, we propose in this paper to consider a sensor grid approach to tackle this issue. In this kind of approach, the crucial question is the nature of the data that are transmitted from the sensors to the processing and archival servers. To this end, we propose an efficient audio coding scheme based on third octave band spectral representation that allows: (1) the estimation of standard acoustic indicators; and (2) the recognition of acoustic events at state-of-the-art performance rate. The former is useful to provide quantitative information about the acoustic environment, while the latter is useful to gather qualitative information and build perceptually motivated indicators using for example the emergence of a given sound source. The coding scheme is also demonstrated to transmit spectrally encoded data that, reverted to the time domain using state-of-the-art techniques, are not intelligible, thus protecting the privacy of citizens.
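
    To make the spectral representation concrete, the sketch below computes third-octave band levels for one audio frame with NumPy. The sampling rate, frame length and band centre frequencies follow common acoustics conventions and are assumptions here, not parameters quoted from the paper.

        # Hedged sketch: third-octave band levels of one audio frame (NumPy).
        # Sampling rate, frame length and band centres are assumptions.
        import numpy as np

        FS = 32000      # sampling rate in Hz (assumption)
        FRAME = 4096    # samples per analysis frame (assumption)

        # Nominal third-octave centre frequencies, roughly 100 Hz to 10 kHz.
        CENTRES = 1000.0 * (2.0 ** (np.arange(-10, 11) / 3.0))

        def third_octave_levels(frame):
            """Return one dB level per third-octave band for an audio frame."""
            windowed = frame * np.hanning(len(frame))
            spectrum = np.abs(np.fft.rfft(windowed)) ** 2
            freqs = np.fft.rfftfreq(len(frame), d=1.0 / FS)
            levels = []
            for fc in CENTRES:
                lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)   # band edges
                band_power = spectrum[(freqs >= lo) & (freqs < hi)].sum()
                levels.append(10.0 * np.log10(band_power + 1e-12))
            return np.array(levels)

        # Example with a synthetic 1 kHz tone: the band around 1 kHz dominates.
        t = np.arange(FRAME) / FS
        print(third_octave_levels(np.sin(2 * np.pi * 1000.0 * t)).round(1))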

  6. Real Time Monitoring and Supervisory Control of Distribution Load Based on Generic Load Allocation: A Smart Grid Solution

    Directory of Open Access Journals (Sweden)

    Anwer Ahmed Memon

    2014-04-01

    Full Text Available Our work is a small part of a smart grid system. It concerns the check and balance of power consumption at the consumer level. Consumers are allocated a fixed load according to their requirement at the time of application for an electricity connection. When a consumer increases the load without informing the power company, the result is overloading of the system. This paper presents a solution regarding distribution and load allocation to each customer. If the customer tries to use more power than the allocated load, further power is not provided and, consequently, the offending appliance is not turned on until the total load falls back below the allocated limit. This is achieved by designing a processor-controlled system that measures the power on the main line as well as the power taken by each device. When a device is turned on, the controller measures its power draw; since the main-line power increases correspondingly, the main-line power is monitored and, if it exceeds the allocated limit, that device is turned off through its relay
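
    A minimal control loop for the scheme described above might look like the sketch below; the allocated-load figure, the per-device power readings and the relay callback are placeholders for the processor-controlled hardware described in the paper.

        # Hedged sketch of the load-allocation control loop described above.
        # The allocated load, power readings and relay control are placeholders.
        ALLOCATED_LOAD_W = 3000.0   # load sanctioned for this consumer (assumption)

        def control_step(device_powers, relay_off):
            """Shed the most recently added devices until the total fits the quota.

            device_powers: dict of device name -> measured power in watts,
                           in the order the devices were switched on.
            relay_off: callback that opens the relay of the named device.
            """
            total = sum(device_powers.values())
            for name in reversed(list(device_powers)):   # last on, first shed
                if total <= ALLOCATED_LOAD_W:
                    break
                relay_off(name)
                total -= device_powers.pop(name)
            return total

        # Example: the heater pushes the main line over the allocation, so it is
        # switched off again and the total drops back within the sanctioned load.
        readings = {"fridge": 300.0, "air_conditioner": 1800.0, "heater": 1500.0}
        print(control_step(readings, relay_off=lambda n: print("relay off:", n)))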

  7. An Efficient Audio Coding Scheme for Quantitative and Qualitative Large Scale Acoustic Monitoring Using the Sensor Grid Approach.

    Science.gov (United States)

    Gontier, Félix; Lagrange, Mathieu; Aumond, Pierre; Can, Arnaud; Lavandier, Catherine

    2017-11-29

    The spreading of urban areas and the growth of human population worldwide raise societal and environmental concerns. To better address these concerns, the monitoring of the acoustic environment in urban as well as rural or wilderness areas is an important matter. Building on the recent development of low cost hardware acoustic sensors, we propose in this paper to consider a sensor grid approach to tackle this issue. In this kind of approach, the crucial question is the nature of the data that are transmitted from the sensors to the processing and archival servers. To this end, we propose an efficient audio coding scheme based on third octave band spectral representation that allows: (1) the estimation of standard acoustic indicators; and (2) the recognition of acoustic events at state-of-the-art performance rate. The former is useful to provide quantitative information about the acoustic environment, while the latter is useful to gather qualitative information and build perceptually motivated indicators using for example the emergence of a given sound source. The coding scheme is also demonstrated to transmit spectrally encoded data that, reverted to the time domain using state-of-the-art techniques, are not intelligible, thus protecting the privacy of citizens.

  8. Analyzing Grid Log Data with Affinity Propagation

    NARCIS (Netherlands)

    Modena, G.; van Someren, M.W.; Ali, M; Bosse, T.; Hindriks, K.V.; Hoogendoorn, M.; Jonker, C.M; Treur, J.

    2013-01-01

    In this paper we present an unsupervised learning approach to detect meaningful job traffic patterns in Grid log data. Manual anomaly detection on modern Grid environments is troublesome given their increasing complexity, the distributed, dynamic topology of the network and heterogeneity of the jobs
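
    As a sketch of how such clustering might be applied, the snippet below runs scikit-learn's AffinityPropagation on a few hand-made job-log feature vectors (runtime, exit code, data transferred); the feature choice and the sample values are invented for illustration and are not the features used in the paper.

        # Hedged sketch: clustering Grid job-log records with Affinity Propagation.
        # The features (runtime, exit code, MB transferred) and the sample data
        # are invented for illustration.
        import numpy as np
        from sklearn.cluster import AffinityPropagation
        from sklearn.preprocessing import StandardScaler

        # Each row is one job: [runtime_s, exit_code, mb_transferred]
        jobs = np.array([
            [3600, 0, 1200.0],   # typical analysis job
            [3550, 0, 1150.0],
            [  40, 1,    0.1],   # fast failures moving almost no data
            [  35, 1,    0.0],
            [9000, 0, 5000.0],   # long, data-heavy production job
        ])

        features = StandardScaler().fit_transform(jobs)
        labels = AffinityPropagation(random_state=0).fit_predict(features)
        print(labels)   # jobs with similar traffic patterns share a label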

  9. Enabling Campus Grids with Open Science Grid Technology

    International Nuclear Information System (INIS)

    Weitzel, Derek; Fraser, Dan; Pordes, Ruth; Bockelman, Brian; Swanson, David

    2011-01-01

    The Open Science Grid is a recognized key component of the US national cyber-infrastructure enabling scientific discovery through advanced high throughput computing. The principles and techniques that underlie the Open Science Grid can also be applied to Campus Grids since many of the requirements are the same, even if the implementation technologies differ. We find five requirements for a campus grid: trust relationships, job submission, resource independence, accounting, and data management. The Holland Computing Center's campus grid at the University of Nebraska-Lincoln was designed to fulfill the requirements of a campus grid. A bridging daemon was designed to bring non-Condor clusters into a grid managed by Condor. Condor features which make it possible to bridge Condor sites into a multi-campus grid have been exploited at the Holland Computing Center as well.

  10. Enabling campus grids with open science grid technology

    Energy Technology Data Exchange (ETDEWEB)

    Weitzel, Derek [Nebraska U.; Bockelman, Brian [Nebraska U.; Swanson, David [Nebraska U.; Fraser, Dan [Argonne; Pordes, Ruth [Fermilab

    2011-01-01

    The Open Science Grid is a recognized key component of the US national cyber-infrastructure enabling scientific discovery through advanced high throughput computing. The principles and techniques that underlie the Open Science Grid can also be applied to Campus Grids since many of the requirements are the same, even if the implementation technologies differ. We find five requirements for a campus grid: trust relationships, job submission, resource independence, accounting, and data management. The Holland Computing Center's campus grid at the University of Nebraska-Lincoln was designed to fulfill the requirements of a campus grid. A bridging daemon was designed to bring non-Condor clusters into a grid managed by Condor. Condor features which make it possible to bridge Condor sites into a multi-campus grid have been exploited at the Holland Computing Center as well.

  11. Information-Quality based LV-Grid-Monitoring Framework and its Application to Power-Quality Control

    DEFF Research Database (Denmark)

    Findrik, Mislav; Kristensen, Thomas le Fevre; Hinterhofer, Thomas

    2015-01-01

    The integration of unpredictable renewable energy sources into the low voltage (LV) power grid results in new challenges when it comes to ensuring power quality in the electrical grid. Addressing this problem requires control of not only the secondary substation but also control of flexible asset...

  12. Storage element performance optimization for CMS analysis jobs

    International Nuclear Information System (INIS)

    Behrmann, G; Dahlblom, J; Guldmyr, J; Happonen, K; Lindén, T

    2012-01-01

    Tier-2 computing sites in the Worldwide Large Hadron Collider Computing Grid (WLCG) host CPU resources (Compute Element, CE) and storage resources (Storage Element, SE). The vast amount of data that needs to be processed from the Large Hadron Collider (LHC) experiments requires good and efficient use of the available resources. Having a good CPU efficiency for the end users' analysis jobs requires that the performance of the storage system is able to scale with I/O requests from hundreds or even thousands of simultaneous jobs. In this presentation we report on the work on improving the SE performance at the Helsinki Institute of Physics (HIP) Tier-2 used for the Compact Muon Solenoid (CMS) experiment at the LHC. Statistics from CMS grid jobs are collected and stored in the CMS Dashboard for further analysis, which allows for easy performance monitoring by the sites and by the CMS collaboration. As part of the monitoring framework CMS uses the JobRobot, which every four hours sends 100 analysis jobs to each site. CMS also uses the HammerCloud tool for site monitoring and stress testing, and it has replaced the JobRobot. The performance of the analysis workflow submitted with JobRobot or HammerCloud can be used to track the performance due to site configuration changes, since the analysis workflow is kept the same for all sites and for months in time. The CPU efficiency of the JobRobot jobs at HIP was increased by approximately 50% to more than 90%, by tuning the SE and by improvements in the CMSSW and dCache software. The performance of the CMS analysis jobs improved significantly too. Similar work has been done at other CMS Tier sites, since on average the CPU efficiency for CMSSW jobs has increased during 2011. Better monitoring of the SE allows faster detection of problems, so that the performance level can be kept high. The next storage upgrade at HIP consists of SAS disk enclosures which can be stress tested on demand with HammerCloud workflows, to make sure that the I

  13. Towards a centralized Grid Speedometer

    CERN Document Server

    Gutsche, Oliver

    2013-01-01

    Given the distributed nature of the grid and the way CPU resources are pledged and shared around the globe, VOs are facing the challenge to monitor the use of these resources. For CMS and the operation of centralized workflows, the monitoring of how many production jobs are running and pending in the Glidein WMS production pools is very important. The Dashboard SSB (Site Status Board) provides a very flexible framework to collect, aggregate and visualize data. The CMS production monitoring team uses the SSB to define the metrics that have to be monitored and the alarms that have to be set. During the integration of the CMS production monitoring into the SSB, several enhancements to the core functionality of the SSB were implemented; all in a generic way, so that other VOs using the SSB can use them as well. Alongside these enhancements, there were a few changes to the core of the SSB framework from which the CMS production team was able to benefit. We will present the details of the implementation and the adva...

  14. Towards a centralized Grid Speedometer

    International Nuclear Information System (INIS)

    Dzhunov, I; Andreeva, J; Saiz, P; Fajardo, E; Gutsche, O; Luyckx, S

    2014-01-01

    Given the distributed nature of the Worldwide LHC Computing Grid and the way CPU resources are pledged and shared around the globe, Virtual Organizations (VOs) face the challenge of monitoring the use of these resources. For CMS and the operation of centralized workflows, the monitoring of how many production jobs are running and pending in the Glidein WMS production pools is very important. The Dashboard Site Status Board (SSB) provides a very flexible framework to collect, aggregate and visualize data. The CMS production monitoring team uses the SSB to define the metrics that have to be monitored and the alarms that have to be raised. During the integration of CMS production monitoring into the SSB, several enhancements to the core functionality of the SSB were required; They were implemented in a generic way, so that other VOs using the SSB can exploit them. Alongside these enhancements, there were a number of changes to the core of the SSB framework. This paper presents the details of the implementation and the advantages for current and future usage of the new features in SSB.

  15. Towards an actor-driven workflow management system for Grids

    NARCIS (Netherlands)

    Berretz, F.; Skorupa, S.; Sander, V.; Belloum, A.

    2010-01-01

    Currently, most workflow management systems in Grid environments provide push-oriented job distribution strategies, where jobs are explicitly delegated to resources. In those scenarios the dedicated resources execute submitted jobs according to the request of a workflow engine or Grid wide

  16. Daily precipitation grids for Austria since 1961—development and evaluation of a spatial dataset for hydroclimatic monitoring and modelling

    Science.gov (United States)

    Hiebl, Johann; Frei, Christoph

    2017-03-01

    Spatial precipitation datasets that are long-term consistent, highly resolved and extend over several decades are an increasingly popular basis for modelling and monitoring environmental processes and planning tasks in hydrology, agriculture, energy resources management, etc. Here, we present a grid dataset of daily precipitation for Austria meant to promote such applications. It has a grid spacing of 1 km, extends back till 1961 and is continuously updated. It is constructed with the classical two-tier analysis, involving separate interpolations for mean monthly precipitation and daily relative anomalies. The former was accomplished by kriging with topographic predictors as external drift utilising 1249 stations. The latter is based on angular distance weighting and uses 523 stations. The input station network was kept largely stationary over time to avoid artefacts on long-term consistency. Example cases suggest that the new analysis is at least as plausible as previously existing datasets. Cross-validation and comparison against experimental high-resolution observations (WegenerNet) suggest that the accuracy of the dataset depends on interpretation. Users interpreting grid point values as point estimates must expect systematic overestimates for light and underestimates for heavy precipitation as well as substantial random errors. Grid point estimates are typically within a factor of 1.5 from in situ observations. Interpreting grid point values as area mean values, conditional biases are reduced and the magnitude of random errors is considerably smaller. Together with a similar dataset of temperature, the new dataset (SPARTACUS) is an interesting basis for modelling environmental processes, studying climate change impacts and monitoring the climate of Austria.
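
    The two-tier analysis described above combines a monthly background field with daily relative anomalies. The toy sketch below only shows the final recombination step on a tiny grid; the interpolation of each component (kriging with external drift, angular distance weighting) is omitted and replaced by ready-made arrays, and all numbers are invented.

        # Toy sketch of the two-tier reconstruction: the daily field is the
        # interpolated monthly background times the interpolated daily relative
        # anomaly. Interpolation steps are omitted; all values are invented.
        import numpy as np

        # 3 x 3 toy grid of monthly precipitation (mm), e.g. from kriging
        # with topographic predictors as external drift.
        monthly_background = np.array([[80.0, 90.0, 110.0],
                                       [70.0, 85.0, 100.0],
                                       [60.0, 75.0,  95.0]])

        # Daily precipitation expressed as a fraction of the monthly amount,
        # e.g. from angular distance weighting of station anomalies.
        daily_relative_anomaly = np.array([[0.00, 0.02, 0.05],
                                           [0.01, 0.03, 0.08],
                                           [0.00, 0.04, 0.10]])

        daily_precip_mm = monthly_background * daily_relative_anomaly
        print(daily_precip_mm)   # daily precipitation grid for one day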

  17. Daily precipitation grids for Austria since 1961—development and evaluation of a spatial dataset for hydroclimatic monitoring and modelling

    Science.gov (United States)

    Hiebl, Johann; Frei, Christoph

    2018-04-01

    Spatial precipitation datasets that are long-term consistent, highly resolved and extend over several decades are an increasingly popular basis for modelling and monitoring environmental processes and planning tasks in hydrology, agriculture, energy resources management, etc. Here, we present a grid dataset of daily precipitation for Austria meant to promote such applications. It has a grid spacing of 1 km, extends back till 1961 and is continuously updated. It is constructed with the classical two-tier analysis, involving separate interpolations for mean monthly precipitation and daily relative anomalies. The former was accomplished by kriging with topographic predictors as external drift utilising 1249 stations. The latter is based on angular distance weighting and uses 523 stations. The input station network was kept largely stationary over time to avoid artefacts on long-term consistency. Example cases suggest that the new analysis is at least as plausible as previously existing datasets. Cross-validation and comparison against experimental high-resolution observations (WegenerNet) suggest that the accuracy of the dataset depends on interpretation. Users interpreting grid point values as point estimates must expect systematic overestimates for light and underestimates for heavy precipitation as well as substantial random errors. Grid point estimates are typically within a factor of 1.5 from in situ observations. Interpreting grid point values as area mean values, conditional biases are reduced and the magnitude of random errors is considerably smaller. Together with a similar dataset of temperature, the new dataset (SPARTACUS) is an interesting basis for modelling environmental processes, studying climate change impacts and monitoring the climate of Austria.

  18. Grid Computing

    CERN Document Server

    Yen, Eric

    2008-01-01

    Based on the Grid Computing: International Symposium on Grid Computing (ISGC) 2007, held in Taipei, Taiwan in March of 2007, this title presents the grid solutions and research results in grid operations, grid middleware, biomedical operations, and e-science applications. It is suitable for graduate-level students in computer science.

  19. The Grid2003 Production Grid Principles and Practice

    CERN Document Server

    Foster, I; Gose, S; Maltsev, N; May, E; Rodríguez, A; Sulakhe, D; Vaniachine, A; Shank, J; Youssef, S; Adams, D; Baker, R; Deng, W; Smith, J; Yu, D; Legrand, I; Singh, S; Steenberg, C; Xia, Y; Afaq, A; Berman, E; Annis, J; Bauerdick, L A T; Ernst, M; Fisk, I; Giacchetti, L; Graham, G; Heavey, A; Kaiser, J; Kuropatkin, N; Pordes, R; Sekhri, V; Weigand, J; Wu, Y; Baker, K; Sorrillo, L; Huth, J; Allen, M; Grundhoefer, L; Hicks, J; Luehring, F C; Peck, S; Quick, R; Simms, S; Fekete, G; Van den Berg, J; Cho, K; Kwon, K; Son, D; Park, H; Canon, S; Jackson, K; Konerding, D E; Lee, J; Olson, D; Sakrejda, I; Tierney, B; Green, M; Miller, R; Letts, J; Martin, T; Bury, D; Dumitrescu, C; Engh, D; Gardner, R; Mambelli, M; Smirnov, Y; Voeckler, J; Wilde, M; Zhao, Y; Zhao, X; Avery, P; Cavanaugh, R J; Kim, B; Prescott, C; Rodríguez, J; Zahn, A; McKee, S; Jordan, C; Prewett, J; Thomas, T; Severini, H; Clifford, B; Deelman, E; Flon, L; Kesselman, C; Mehta, G; Olomu, N; Vahi, K; De, K; McGuigan, P; Sosebee, M; Bradley, D; Couvares, P; De Smet, A; Kireyev, C; Paulson, E; Roy, A; Koranda, S; Moe, B; Brown, B; Sheldon, P

    2004-01-01

    The Grid2003 Project has deployed a multi-virtual organization, application-driven grid laboratory ("GridS") that has sustained for several months the production-level services required by physics experiments of the Large Hadron Collider at CERN (ATLAS and CMS), the Sloan Digital Sky Survey project, the gravitational wave search experiment LIGO, the BTeV experiment at Fermilab, as well as applications in molecular structure analysis and genome analysis, and computer science research projects in such areas as job and data scheduling. The deployed infrastructure has been operating since November 2003 with 27 sites, a peak of 2800 processors, work loads from 10 different applications exceeding 1300 simultaneous jobs, and data transfers among sites of greater than 2 TB/day. We describe the principles that have guided the development of this unique infrastructure and the practical experiences that have resulted from its creation and use. We discuss application requirements for grid services deployment and configur...

  20. Experience with Remote Job Execution

    International Nuclear Information System (INIS)

    Lynch, Vickie E.; Cobb, John W; Green, Mark L.; Kohl, James Arthur; Miller, Stephen D.; Ren, Shelly; Smith, Bradford C.; Vazhkudai, Sudharshan S.

    2008-01-01

    The Neutron Science Portal at Oak Ridge National Laboratory submits jobs to the TeraGrid for remote job execution. The TeraGrid is a network of high performance computers supported by the US National Science Foundation. There are eleven partner facilities with over a petaflop of peak computing performance and sixty petabytes of long-term storage. Globus is installed on a local machine and used for job submission. The graphical user interface is produced by Java code that reads an XML file. After submission, the status of the job is displayed in a Job Information Service window, which queries Globus for the status. The output folder produced in the scratch directory of the TeraGrid machine is returned to the portal with the globus-url-copy command, which uses the GridFTP servers on the TeraGrid machines. This folder is copied from the stage-in directory of the community account to the user's results directory, where the output can be plotted using the portal's visualization services. The primary problem with remote job execution is diagnosing execution problems. We run daily tests submitting multiple remote jobs from the portal. When these jobs fail on a computer, it is difficult to diagnose the problem from the Globus output. Successes and problems will be presented
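
    For the data-return step described above, a staging script on the portal side might wrap globus-url-copy roughly as follows; the host name, paths and command-line flags are illustrative assumptions rather than the portal's actual configuration.

        # Hedged sketch: pulling a TeraGrid scratch folder back to the portal
        # with globus-url-copy over GridFTP. Hosts, paths and flags are assumptions.
        import subprocess

        def fetch_output(remote_host, remote_dir, local_dir):
            """Copy a remote output folder into the portal's stage-in area."""
            src = "gsiftp://{}{}/".format(remote_host, remote_dir)
            dst = "file://{}/".format(local_dir)
            # '-r' requests a recursive directory transfer and '-cd' creates
            # missing destination directories (flags as documented for
            # globus-url-copy; treat them as assumptions here).
            return subprocess.run(["globus-url-copy", "-r", "-cd", src, dst],
                                  check=True)

        # Example with hypothetical paths:
        # fetch_output("gridftp.teragrid.example.org",
        #              "/scratch/community/job_1234/output",
        #              "/portal/stage-in/job_1234")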

  1. A national upgrade of the climate monitoring grid in Sri Lanka. The place of Open Design, OSHW and FOSS.

    Science.gov (United States)

    Chemin, Yann; Bandara, Niroshan; Eriyagama, Nishadi

    2015-04-01

    The National Climate Observatory of Sri Lanka is a proposal designed for the Government of Sri Lanka in September and discussed with private and public stakeholders in November 2014. The initial idea was to install a networked grid of weather instruments built from locally made open source hardware technology, on land and at sea, reporting the state of the climate in real time. After the initial stakeholder meetings, it was agreed to first try to connect the existing weather stations of the different governmental and private-sector agencies, bringing existing information to a common ground through the Internet. At this point it was realized that extracting information from the various vendor setups would take a large amount of effort, although it remains the best and fastest option, since ownership and maintenance are the most important issues in a humid tropical country such as Sri Lanka. Thus, the question of Open Design, open source hardware (OSHW) and free and open source software (FOSS) became a pivotal element in considering the operationalization of any future elements of a national grid. The reasons range from low cost to customization, but prominently concern technology ownership, royalty-free use and local availability. Building on previous work (Chemin and Bandara, 2014), we proposed open design specifications and prototypes for weather monitoring for various kinds of needs; the Meteorological Department specified that the highest spatial variability observed in Sri Lanka is rainfall, and expressed its willingness to investigate OSHW electronics using its new team of electronics and sensor specialists. A local manufacturer is providing an OSHW micro-controller product, a start-up is providing additional sensor boards under OSHW specifications, and local manufacture of the sensors (tipping-bucket and other wind sensors) is under development, with blueprints made available in the Public Domain for CNC machine, 3D printing or Plastic

  2. Job prioritization in LHCb

    CERN Document Server

    Castellani, G

    2007-01-01

    LHCb is one of the four high-energy physics experiments that will run in the near future at the Large Hadron Collider (LHC) at CERN. LHCb will try to answer some fundamental questions about the asymmetry between matter and anti-matter. The experiment is expected to produce about 2 PB of data per year. These will be distributed to several laboratories all over Europe and then analyzed by the physics community. To achieve this target LHCb makes full use of the Grid to reprocess, replicate and analyze data. Access to the Grid happens through LHCb's own distributed production and analysis system, DIRAC (Distributed Infrastructure with Remote Agent Control). DIRAC implements the ‘pull’ job scheduling paradigm, where all jobs are stored in a central task queue and then pulled via generic grid jobs called Pilot Agents. The whole LHCb community (about 600 people) is divided into sets of physicists, developers, production and software managers that have different needs regarding their jobs on the Grid. While a Monte Carlo simulation...

  3. Optimizing Resource Utilization in Grid Batch Systems

    International Nuclear Information System (INIS)

    Gellrich, Andreas

    2012-01-01

    On Grid sites, the requirements of the computing tasks (jobs) on computing, storage, and network resources differ widely. For instance, Monte Carlo production jobs are almost purely CPU-bound, whereas physics analysis jobs demand high data rates. In order to optimize the utilization of the compute node resources, jobs must be distributed intelligently over the nodes. Although the job resource requirements cannot be deduced directly, jobs are mapped to POSIX UID/GID according to the VO, VOMS group and role information contained in the VOMS proxy. The UID/GID then makes it possible to distinguish jobs, provided users are using VOMS proxies as planned by the VO management, e.g. ‘role=production’ for Monte Carlo jobs. It is possible to set up and configure batch systems (queuing system and scheduler) at Grid sites based on these considerations, although scaling limits were observed with the MAUI scheduler. In tests these limitations could be overcome with a home-made scheduler.

  4. Grid Collector: Facilitating Efficient Selective Access from DataGrids

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Gu, Junmin; Lauret, Jerome; Poskanzer, Arthur M.; Shoshani, Arie; Sim, Alexander; Zhang, Wei-Ming

    2005-05-17

    The Grid Collector is a system that facilitates the effective analysis and spontaneous exploration of scientific data. It combines an efficient indexing technology with a Grid file management technology to speed up common analysis jobs on high-energy physics data and to enable some previously impractical analysis jobs. To analyze a set of high-energy collision events, one typically specifies the files containing the events of interest, reads all the events in the files, and filters out unwanted ones. Since most analysis jobs filter out a significant number of events, a considerable amount of time is wasted by reading the unwanted events. The Grid Collector removes this inefficiency by allowing users to specify more precisely what events are of interest and to read only the selected events. This speeds up most analysis jobs. In existing analysis frameworks, the responsibility of bringing files from tertiary storage or remote sites to local disks falls on the users. This forces most analysis jobs to be performed at centralized computer facilities where commonly used files are kept on large shared file systems. The Grid Collector automates file management tasks and eliminates the labor-intensive manual file transfers. This makes it much easier to perform analyses that require data files on tertiary storage and remote sites. It also makes more computer resources available for analysis jobs since they are no longer bound to the centralized facilities.

  5. Integrating job scheduling and constrained network routing

    DEFF Research Database (Denmark)

    Gamst, Mette

    2010-01-01

    This paper examines the NP-hard problem of scheduling jobs on resources such that the overall profit of executed jobs is maximized. Job demand must be sent through a constrained network to the resource before execution can begin. The problem has application in grid computing, where a number...

  6. Integration of Grid and Local Batch Resources at DESY

    Science.gov (United States)

    Beyer, Christoph; Finnern, Thomas; Gellrich, Andreas; Hartmann, Thomas; Kemp, Yves; Lewendel, Birgit

    2017-10-01

    As one of the largest resource centres, DESY has to support the differing workflows of users from various scientific backgrounds. Users range from HEP experiments in the WLCG or Belle II and local HEP users to physicists from other fields such as photon science or accelerator development. By abandoning specific worker node setups in favour of generic flat nodes with middleware resources provided via CVMFS, we gain the flexibility to subsume different use cases in a homogeneous environment. Grid jobs and the local batch system are managed in an HTCondor-based setup, accepting pilot, user and containerized jobs. The unified setup allows dynamic re-assignment of resources between the different use cases. Monitoring is implemented on global batch system metrics as well as on a per-job level utilizing the corresponding cgroup information.

  7. Smart grid

    International Nuclear Information System (INIS)

    Choi, Dong Bae

    2001-11-01

    This book describes the smart grid from the basics to recent trends. It is divided into ten chapters, which cover: the smart grid as a green revolution in energy, with an introduction, history, application fields and the technologies needed for the smart grid; smart grid trends abroad, such as model smart grid businesses overseas and smart grid policy in the U.S.A.; smart grid trends in Korea, including international standards for the smart grid and the national strategy and road map; the smart power grid as infrastructure for smart business, with EMS development, SAS, SCADA, DAS and PQMS; the smart grid for smart consumers; smart renewables such as the Desertec project; convergence with IT through networking and PLC; electric vehicle applications; smart electricity services for real-time electricity pricing; and the deployment of the smart grid.

  8. A security architecture for the ALICE grid services

    CERN Document Server

    Schreiner, Steffen; Buchmann, Johannes; Betev, Latchezar; Grigoras, Alina

    2012-01-01

    Globally distributed research cyberinfrastructures, like the ALICE Grid Services, need to provide traceability and accountability of operations and internal interactions. This document presents a new security architecture for the ALICE Grid Services that allows non-repudiation to be established with respect to the creatorship and ownership of Grid files and jobs. It is based on mutually authenticated and encrypted communication using an X.509 Public Key Infrastructure and the Transport Layer Security (TLS) protocol. By introducing certified Grid file entries and signed Grid jobs through a model of Mediated Definite Delegation, it establishes long-term accountability for Grid jobs and files. Initial submissions as well as any alterations of Grid jobs become verifiable and can be traced back to the originator. The architecture has been implemented as a prototype along with the development of a new central Grid middleware, called jAliEn.
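
    The signed-job idea can be illustrated in a few lines with the Python cryptography package: the submitter signs the job description with its private key and any later component verifies the signature with the matching public key. The key handling below is deliberately simplified (a freshly generated RSA key instead of the user's X.509 credentials) and is not the jAliEn implementation.

        # Hedged sketch of signing and verifying a Grid job description
        # (RSA + SHA-256) with the 'cryptography' package. Key management is
        # simplified; this only illustrates the signed-job / non-repudiation idea.
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import padding, rsa

        job_description = b'{"executable": "/bin/sim", "user": "alice001"}'

        # In a real deployment the key pair comes from the user's X.509 credential.
        private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        public_key = private_key.public_key()

        signature = private_key.sign(job_description,
                                     padding.PKCS1v15(),
                                     hashes.SHA256())

        # A central service (or any later component) checks the submission.
        try:
            public_key.verify(signature, job_description,
                              padding.PKCS1v15(), hashes.SHA256())
            print("job description accepted: signature valid")
        except InvalidSignature:
            print("job description rejected: signature invalid")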

  9. The CMS Integration Grid Testbed

    CERN Document Server

    Graham, G E; Aziz, Shafqat; Bauerdick, L.A.T.; Ernst, Michael; Kaiser, Joseph; Ratnikova, Natalia; Wenzel, Hans; Wu, Yu-jun; Aslakson, Erik; Bunn, Julian; Iqbal, Saima; Legrand, Iosif; Newman, Harvey; Singh, Suresh; Steenberg, Conrad; Branson, James; Fisk, Ian; Letts, James; Arbree, Adam; Avery, Paul; Bourilkov, Dimitri; Cavanaugh, Richard; Rodriguez, Jorge Luis; Kategari, Suchindra; Couvares, Peter; DeSmet, Alan; Livny, Miron; Roy, Alain; Tannenbaum, Todd; Graham, Gregory E.; Aziz, Shafqat; Ernst, Michael; Kaiser, Joseph; Ratnikova, Natalia; Wenzel, Hans; Wu, Yujun; Aslakson, Erik; Bunn, Julian; Iqbal, Saima; Legrand, Iosif; Newman, Harvey; Singh, Suresh; Steenberg, Conrad; Branson, James; Fisk, Ian; Letts, James; Arbree, Adam; Avery, Paul; Bourilkov, Dimitri; Cavanaugh, Richard; Rodriguez, Jorge; Kategari, Suchindra; Couvares, Peter; Smet, Alan De; Livny, Miron; Roy, Alain; Tannenbaum, Todd

    2003-01-01

    The CMS Integration Grid Testbed (IGT) comprises USCMS Tier-1 and Tier-2 hardware at the following sites: the California Institute of Technology, Fermi National Accelerator Laboratory, the University of California at San Diego, and the University of Florida at Gainesville. The IGT runs jobs using the Globus Toolkit with a DAGMan and Condor-G front end. The virtual organization (VO) is managed using VO management scripts from the European Data Grid (EDG). Gridwide monitoring is accomplished using local tools such as Ganglia interfaced into the Globus Metadata Directory Service (MDS) and the agent based Mona Lisa. Domain specific software is packaged and installed using the Distribution After Release (DAR) tool of CMS, while middleware under the auspices of the Virtual Data Toolkit (VDT) is distributed using Pacman. During a continuous two month span in Fall of 2002, over 1 million official CMS GEANT based Monte Carlo events were generated and returned to CERN for analysis while being demonstrated at SC2002. ...

  10. Public storage for the Open Science Grid

    International Nuclear Information System (INIS)

    Levshina, T; Guru, A

    2014-01-01

    The Open Science Grid infrastructure doesn't provide efficient means to manage public storage offered by participating sites. A Virtual Organization that relies on opportunistic storage has difficulties finding appropriate storage, verifying its availability, and monitoring its utilization. The involvement of the production manager, site administrators and VO support personnel is required to allocate or rescind storage space. One of the main requirements for Public Storage implementation is that it should use SRM or GridFTP protocols to access the Storage Elements provided by the OSG Sites and not put any additional burden on sites. By policy, no new services related to Public Storage can be installed and run on OSG sites. Opportunistic users also have difficulties in accessing the OSG Storage Elements during the execution of jobs. A typical users' data management workflow includes pre-staging common data on sites before a job's execution, then storing for a subsequent download to a local institution the output data produced by a job on a worker node. When the amount of data is significant, the only means to temporarily store the data is to upload it to one of the Storage Elements. In order to do that, a user's job should be aware of the storage location, availability, and free space. After a successful data upload, users must somehow keep track of the data's location for future access. In this presentation we propose solutions for storage management and data handling issues in the OSG. We are investigating the feasibility of using the integrated Rule-Oriented Data System developed at RENCI as a front-end service to the OSG SEs. The current architecture, state of deployment and performance test results will be discussed. We will also provide examples of current usage of the system by beta-users.

  11. Public storage for the Open Science Grid

    Science.gov (United States)

    Levshina, T.; Guru, A.

    2014-06-01

    The Open Science Grid infrastructure doesn't provide efficient means to manage public storage offered by participating sites. A Virtual Organization that relies on opportunistic storage has difficulties finding appropriate storage, verifying its availability, and monitoring its utilization. The involvement of the production manager, site administrators and VO support personnel is required to allocate or rescind storage space. One of the main requirements for a Public Storage implementation is that it should use the SRM or GridFTP protocols to access the Storage Elements provided by the OSG sites and not put any additional burden on sites: by policy, no new services related to Public Storage can be installed and run on OSG sites. Opportunistic users also have difficulties in accessing the OSG Storage Elements during the execution of jobs. A typical user's data management workflow includes pre-staging common data on sites before a job's execution, and then storing the output data produced by a job on a worker node for subsequent download to the local institution. When the amount of data is significant, the only means to temporarily store the data is to upload it to one of the Storage Elements. In order to do that, a user's job should be aware of the storage location, availability, and free space. After a successful data upload, users must somehow keep track of the data's location for future access. In this presentation we propose solutions for storage management and data handling issues in the OSG. We are investigating the feasibility of using the integrated Rule-Oriented Data System (iRODS) developed at RENCI as a front-end service to the OSG SEs. The current architecture, state of deployment and performance test results will be discussed. We will also provide examples of current usage of the system by beta-users.

  12. AliEn - GRID application for ALICE Collaboration

    International Nuclear Information System (INIS)

    Zgura, Ion-Sorin

    2003-01-01

    AliEn (ALICE Environment) is a GRID framework built on top of the latest Internet standards for information exchange and authentication (SOAP, PKI) and common Open Source components. AliEn provides a virtual file catalogue that allows transparent access to distributed data sets, and a number of collaborating Web services which implement the authentication, job execution, file transport, performance monitoring and event logging. The ALICE experiment has developed AliEn as an implementation of the distributed computing infrastructure needed to simulate, reconstruct and analyze data from the experiment. The sites that belong to the ALICE Virtual Organisation can be seen and used as a single entity - any available node executes jobs, and access to logical and physical datasets is transparent to the user. In developing AliEn, common standards and solutions in the form of Open Source components were used. Only 1% (25k physical lines of code in Perl) is native AliEn code, while 99% of the code has been imported in the form of Open Source packages and Perl modules. Currently ALICE is using the system for distributed production of Monte Carlo data at over 30 sites on four continents. During the last twelve months more than 30,000 jobs have been successfully run under AliEn control worldwide, totalling 25 CPU years and producing 20 TB of data. The user interface is compatible with the EU DataGrid at the level of authentication and job description language. In the future AliEn will be interfaced to the mainstream Grid infrastructure in HEP, and it will remain the interface between the ALICE Offline framework and external Grid infrastructure. (authors)

  13. Strengths and weaknesses of temporal stability analysis for monitoring and estimating grid-mean soil moisture in a high-intensity irrigated agricultural landscape

    Science.gov (United States)

    Ran, Youhua; Li, Xin; Jin, Rui; Kang, Jian; Cosh, Michael H.

    2017-01-01

    Monitoring and estimating grid-mean soil moisture is very important for assessing many hydrological, biological, and biogeochemical processes and for validating remotely sensed surface soil moisture products. Temporal stability analysis (TSA) is a valuable tool for identifying a small number of representative sampling points to estimate the grid-mean soil moisture content. This analysis was evaluated and improved using high-quality surface soil moisture data that were acquired by a wireless sensor network in a high-intensity irrigated agricultural landscape in an arid region of northwestern China. The performance of the TSA was limited in areas where the representative error was dominated by random events, such as irrigation events. This shortcoming can be effectively mitigated by using a stratified TSA (STSA) method, proposed in this paper. In addition, the following methods were proposed for rapidly and efficiently identifying representative sampling points when using TSA. (1) Instantaneous measurements can be used to identify representative sampling points to some extent; however, the error resulting from this method is significant when validating remotely sensed soil moisture products. Thus, additional representative sampling points should be considered to reduce this error. (2) The calibration period can be determined from the time span of the full range of the grid-mean soil moisture content during the monitoring period. (3) The representative error is sensitive to the number of calibration sampling points, especially when only a few representative sampling points are used. Multiple sampling points are recommended to reduce data loss and improve the likelihood of representativeness at two scales.
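
    The core of the temporal stability analysis used above can be illustrated with a short sketch: for every sampling point the mean relative difference (MRD) from the grid-mean soil moisture and its standard deviation (SDRD) are computed over time, and points whose MRD is close to zero with a small SDRD are taken as representative. The soil-moisture matrix below is synthetic and only serves to show the calculation; it is not data from the study.

      # Minimal sketch of temporal stability analysis (TSA): rank sampling points
      # by their mean relative difference from the grid-mean soil moisture.
      # The soil-moisture matrix is synthetic (rows = dates, columns = points).
      import numpy as np

      theta = np.array([
          [0.18, 0.22, 0.25, 0.30],
          [0.20, 0.24, 0.26, 0.33],
          [0.15, 0.19, 0.22, 0.27],
          [0.21, 0.25, 0.28, 0.34],
      ])

      grid_mean = theta.mean(axis=1, keepdims=True)   # grid-mean value per date
      rel_diff = (theta - grid_mean) / grid_mean      # relative difference delta_ij
      mrd = rel_diff.mean(axis=0)                     # mean relative difference per point
      sdrd = rel_diff.std(axis=0, ddof=1)             # its standard deviation per point

      # A point that tracks the grid mean has MRD close to 0 and a small SDRD;
      # ranking by |MRD| (ties broken by SDRD) picks the representative points.
      order = sorted(range(theta.shape[1]), key=lambda i: (abs(mrd[i]), sdrd[i]))
      for i in order:
          print("point %d: MRD=%+.3f  SDRD=%.3f" % (i, mrd[i], sdrd[i]))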

  14. Reinforcing user data analysis with Ganga in the LHC era: scalability, monitoring and user-support

    International Nuclear Information System (INIS)

    Elmsheuser, Johannes; Ebke, Johannes; Brochu, Frederic; Dzhunov, Ivan; Kokoszkiewicz, Lukasz; Maier, Andrew; Mościcki, Jakub; Tuckett, David; Vanderster, Daniel; Egede, Ulrik; Reece, Will; Williams, Michael; Jha, Manoj Kumar; Lee, Hurng-Chun; München, Tim; Samset, Bjorn; Slater, Mark

    2011-01-01

    Ganga is a grid job submission and management system widely used in the ATLAS and LHCb experiments and several other communities in the context of the EGEE project. The particle physics communities have entered the LHC operation era, which brings new challenges for user data analysis: a strong growth in the number of users and jobs is already noticeable. Current work in the Ganga project is focusing on dealing with these challenges. In recent Ganga releases the support for the pilot-job-based grid systems Panda and Dirac of the ATLAS and LHCb experiments, respectively, has been strengthened. A more scalable job repository architecture, which allows efficient storage of many thousands of jobs in XML or several database formats, was recently introduced. A better integration with monitoring systems, including the Dashboard and job execution monitor systems, is underway. These will provide comprehensive and easy job monitoring. A simple-to-use error reporting tool integrated at the Ganga command line will help to improve user support and the debugging of user problems. Ganga is a mature, stable and widely-used tool with long-term support from the HEP community. We report on how it is being constantly improved following the user needs for faster and easier distributed data analysis on the grid.
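
    For readers unfamiliar with Ganga, the following sketch shows roughly what defining, submitting and monitoring a job looks like in its Python interface. It assumes an interactive Ganga session in which Job, Executable and the backend classes are already available; the executable, arguments and the choice of backend are placeholders, and which backends (LCG, Dirac, Panda, ...) are usable depends on the local configuration.

      # Minimal sketch of a job defined inside a Ganga session (not a standalone
      # script). The executable and backend are placeholders.
      j = Job(name="hello-grid")
      j.application = Executable(exe="/bin/echo", args=["hello", "grid"])
      j.backend = LCG()          # or Dirac()/Panda() where those plugins are configured
      j.submit()

      # Monitoring happens from the same session:
      jobs                        # table of all jobs in the local repository
      print(j.status)             # e.g. 'submitted', 'running', 'completed'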

  15. Wireless Communications in Smart Grid

    Science.gov (United States)

    Bojkovic, Zoran; Bakmaz, Bojan

    Communication networks play a crucial role in smart grid, as the intelligence of this complex system is built based on information exchange across the power grid. Wireless communications and networking are among the most economical ways to build the essential part of the scalable communication infrastructure for smart grid. In particular, wireless networks will be deployed widely in the smart grid for automatic meter reading, remote system and customer site monitoring, as well as equipment fault diagnosing. With an increasing interest from both the academic and industrial communities, this chapter systematically investigates recent advances in wireless communication technology for the smart grid.

  16. Automatic Integration Testbeds validation on Open Science Grid

    International Nuclear Information System (INIS)

    Caballero, J; Potekhin, M; Thapa, S; Gardner, R

    2011-01-01

    A recurring challenge in deploying high quality production middleware is the extent to which realistic testing occurs before release of the software into the production environment. We describe here an automated system for validating releases of the Open Science Grid software stack that leverages the (pilot-based) PanDA job management system developed and used by the ATLAS experiment. The system was motivated by a desire to subject the OSG Integration Testbed to more realistic validation tests, in particular tests that resemble as closely as possible the actual job workflows used by the experiments, thus exercising job scheduling at the compute element (CE), use of the worker node execution environment, transfer of data to/from the local storage element (SE), etc. The context is that candidate releases of OSG compute and storage elements can be tested by injecting large numbers of synthetic jobs varying in complexity and coverage of services tested. The native capabilities of the PanDA system can thus be used to define jobs, monitor their execution, and archive the resulting run statistics including success and failure modes. A repository of generic workflows and job types to measure various metrics of interest has been created. A command-line toolset has been developed so that testbed managers can quickly submit 'VO-like' jobs into the system when newly deployed services are ready for testing. A system for automatic submission has been crafted to send jobs to integration testbed sites, collecting the results in a central service and generating regular reports for performance and reliability.

  17. Automatic Integration Testbeds validation on Open Science Grid

    Science.gov (United States)

    Caballero, J.; Thapa, S.; Gardner, R.; Potekhin, M.

    2011-12-01

    A recurring challenge in deploying high quality production middleware is the extent to which realistic testing occurs before release of the software into the production environment. We describe here an automated system for validating releases of the Open Science Grid software stack that leverages the (pilot-based) PanDA job management system developed and used by the ATLAS experiment. The system was motivated by a desire to subject the OSG Integration Testbed to more realistic validation tests, in particular tests that resemble as closely as possible the actual job workflows used by the experiments, thus exercising job scheduling at the compute element (CE), use of the worker node execution environment, transfer of data to/from the local storage element (SE), etc. The context is that candidate releases of OSG compute and storage elements can be tested by injecting large numbers of synthetic jobs varying in complexity and coverage of services tested. The native capabilities of the PanDA system can thus be used to define jobs, monitor their execution, and archive the resulting run statistics including success and failure modes. A repository of generic workflows and job types to measure various metrics of interest has been created. A command-line toolset has been developed so that testbed managers can quickly submit "VO-like" jobs into the system when newly deployed services are ready for testing. A system for automatic submission has been crafted to send jobs to integration testbed sites, collecting the results in a central service and generating regular reports for performance and reliability.
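
    The automated, template-driven submission described above could be organised along the lines of the sketch below. The job templates, the submit_synthetic_job stub and the summary format are invented stand-ins for the PanDA-based toolset; a real harness would submit through the pilot system and collect the run statistics centrally.

      # Hypothetical sketch of a testbed-validation harness: inject synthetic jobs
      # of varying complexity into a site and summarise the outcome.
      # submit_synthetic_job() is a stand-in for the real PanDA-based submission.
      import random

      JOB_TEMPLATES = [
          {"name": "ce-only",       "stages": ["run"]},
          {"name": "stage-in",      "stages": ["download", "run"]},
          {"name": "full-workflow", "stages": ["download", "run", "upload"]},
      ]

      def submit_synthetic_job(site, template):
          """Stand-in for real submission; here each stage 'fails' 5% of the time."""
          for stage in template["stages"]:
              if random.random() < 0.05:
                  return {"site": site, "job": template["name"], "failed_at": stage}
          return {"site": site, "job": template["name"], "failed_at": None}

      def validate_site(site, jobs_per_template=50):
          results = [submit_synthetic_job(site, t)
                     for t in JOB_TEMPLATES for _ in range(jobs_per_template)]
          ok = sum(1 for r in results if r["failed_at"] is None)
          print("%s: %d/%d synthetic jobs succeeded" % (site, ok, len(results)))
          return results

      if __name__ == "__main__":
          validate_site("itb-site-1.example.org")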

  18. The Grid

    CERN Document Server

    Klotz, Wolf-Dieter

    2005-01-01

    Grid technology is widely emerging. Grid computing, most simply stated, is distributed computing taken to the next evolutionary level. The goal is to create the illusion of a simple, robust yet large and powerful self-managing virtual computer out of a large collection of connected heterogeneous systems sharing various combinations of resources. This talk will give a short history of how, out of lessons learned from the Internet, the vision of Grids was born. Then the extensible anatomy of a Grid architecture will be discussed. The talk will end by presenting a selection of major Grid projects in Europe and the US and, if time permits, a short on-line demonstration.

  19. Inventory PV-monitoring. Inventory and analysis of the monitoring of grid-connected photovoltaic systems in the Netherlands in the period 1995-2000

    International Nuclear Information System (INIS)

    Betcke, J.; Van Dijk, V.; Hirsch, D.

    2002-04-01

    The first aim of the title survey was to list all the activities with regard to the monitoring of photovoltaic (PV) systems that were installed in the Netherlands in the period 1995-2000. In this study 144 monitoring activities are categorized into 110 technical monitoring activities and 42 market monitoring activities. The second aim of the survey was to analyze the state of the art of monitoring. Activities are categorized under 8 main subjects: (1) energy yields; (2) electrotechnical performance; (3) constructional engineering performance; (4) market; (5) the attitude of participants; (6) ownership aspects; (7) financing aspects; and (8) policy. The third aim was to analyze the position of the Dutch monitoring activities compared to activities in Germany, Switzerland, Japan and the USA. [nl]

  20. Job Creation and Job Types

    DEFF Research Database (Denmark)

    Kuhn, Johan M.; Malchow-Møller, Nikolaj; Sørensen, Anders

    We extend earlier analyses of the job creation of start-ups vs. established firms by taking into consideration the educational content of the jobs created and destroyed. We define education-specific measures of job creation and job destruction at the firm level, and we use these to construct...... high-skilled jobs. Moreover, start-ups “only” create around half of the surplus jobs, and even less of the high-skilled surplus jobs. Finally, our approach allows us to characterize and identify differences across industries, educational groups and regions....

  1. Job Creation and Job Types

    DEFF Research Database (Denmark)

    Kuhn, Johan M.; Malchow-Møller, Nikolaj; Sørensen, Anders

    We extend earlier analyses of the job creation of start-ups vs. established firms by taking into consideration the educational content of the jobs created and destroyed. We define education-specific measures of job creation and job destruction at the firm level, and we use these to construct...... a measure of “surplus job creation” defined as jobs created on top of any simultaneous destruction of similar jobs in incumbent firms in the same region and industry. Using Danish employer-employee data from 2002-7, which identify the start-ups and which cover almost the entire private sector......, these measures allow us to provide a more nuanced assessment of the role of entrepreneurial firms in the job-creation process than previous studies. Our findings show that while start-ups are responsible for the entire overall net job creation, incumbents account for more than a third of net job creation within...

  2. Job Creation and Job Types

    DEFF Research Database (Denmark)

    Kuhn, Johan Moritz; Malchow-Møller, Nikolaj; Sørensen, Anders

    2016-01-01

    We extend earlier analyses of the job creation of start-ups versus established firms by considering the educational content of the jobs created and destroyed. We define education-specific measures of job creation and job destruction at the firm level, and we use these measures to construct...... a measure of “surplus job creation”, defined as jobs created on top of any simultaneous destruction of similar jobs in incumbent firms in the same region and industry. Using Danish employer-employee data from 2002–2007 that identify the start-ups and that cover almost the entire private sector......, these measures allow us to provide a more nuanced assessment of the role of entrepreneurial firms in the job-creation process than in previous studies. Our findings show that although start-ups are responsible for the entire overall net job creation, incumbents account for more than one-third of net job creation...

  3. Operation and extension of the Bavarian state air-hygienic monitoring system and the radioactive nuisance measuring grid in northern Bavaria

    International Nuclear Information System (INIS)

    Munzert, K.

    1994-01-01

    The measuring grid of the Bavarian state air-hygienic monitoring system, currently comprising 71 measuring points at 35 sites (Upper Palatinate; Upper, Middle and Lower Franconia), measures pollutant levels in northern Bavaria. 14 of the sites are also used for measuring radioactivity. The measuring stations are situated above all in areas with a high industrial or residential density (established areas of investigation); measurements are also carried out in areas near the border that receive heavy pollutant loads through long-range pollutant transport (the smog areas in the urban and rural districts of Hof and the rural district of Wunsiedel) and in areas far from industrial zones. At each station, the air-analytical, meteorological and radiological readings are continuously processed by computer into half-hourly, hourly or three-hourly means. (orig./HP) [de]

  4. Technology Roadmaps: Smart Grids

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

    The development of Technology Roadmaps: Smart Grids -- which the IEA defines as an electricity network that uses digital and other advanced technologies to monitor and manage the transport of electricity from all generation sources to meet the varying electricity demands of end users -- is essential if the global community is to achieve shared goals for energy security, economic development and climate change mitigation. Unfortunately, existing misunderstandings of exactly what smart grids are and the physical and institutional complexity of electricity systems make it difficult to implement smart grids on the scale that is needed. This roadmap sets out specific steps needed over the coming years to achieve milestones that will allow smart grids to deliver a clean energy future.

  5. Monitoring the DIRAC distributed system

    CERN Document Server

    Santinelli, R; Nandakumar, R

    2010-01-01

    DIRAC, the LHCb community Grid solution, is intended to reliably run large data mining activities. The DIRAC system consists of various services (which wait to be contacted to perform actions) and agents (which carry out periodic activities) to direct jobs as required. An important part of ensuring the reliability of the infrastructure is the monitoring and logging of these DIRAC distributed systems. The monitoring is done by collecting information from two sources: one comes from pinging the services or from keeping track of the regular heartbeats of the agents, and the other from the analysis of the error messages generated by both agents and services and collected by a logging system. This allows us to ensure that the components are running properly and to collect useful information regarding their operations. The process status monitoring is displayed using the SLS sensor mechanism, which also automatically plots various quantities and keeps a history of the system. A dedicated GridMap interface (Service...
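
    The two monitoring sources mentioned above (service pings and agent heartbeats) could be combined along the lines of the sketch below. This is not DIRAC code: the endpoints, the heartbeat store and the staleness threshold are invented, and a TCP connection test stands in for the actual service ping.

      # Illustrative sketch (not DIRAC code): combine service pings and agent
      # heartbeat ages into a simple status summary. Endpoints, heartbeat data and
      # thresholds are invented.
      import socket
      import time

      SERVICES = {"JobManager": ("dirac.example.org", 9132)}   # hypothetical host/port
      HEARTBEATS = {"SiteDirector": time.time() - 120,          # last beat 2 min ago
                    "PilotStatusAgent": time.time() - 3600}     # last beat 1 h ago
      MAX_HEARTBEAT_AGE = 900  # seconds

      def ping_service(host, port, timeout=3):
          """A service counts as 'up' if its TCP port accepts a connection."""
          try:
              with socket.create_connection((host, port), timeout=timeout):
                  return True
          except OSError:
              return False

      def check_all():
          for name, (host, port) in SERVICES.items():
              print("service %-16s %s" % (name, "OK" if ping_service(host, port) else "DOWN"))
          now = time.time()
          for name, last in HEARTBEATS.items():
              age = now - last
              state = "OK" if age < MAX_HEARTBEAT_AGE else "STALE (%.0f s old)" % age
              print("agent   %-16s %s" % (name, state))

      if __name__ == "__main__":
          check_all()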

  6. Sensitivity study in order to improve the Mesh-grid in environmental nuclear monitoring system using parallels codes

    Energy Technology Data Exchange (ETDEWEB)

    Serrao, Bruno P.; Schirru, Roberto, E-mail: bruno@lmp.ufrj.br, E-mail: schirru@lmp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engneharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear

    2015-07-01

    All nuclear power plants need some monitoring system in order to monitor the radioactivity that can be released into the atmosphere in case of accidents. Moreover, this system must also be capable of simulating future releases. For this, these systems calculate the wind field, the quantity of radioactive elements and the dispersion of these elements around nuclear facilities. The Angra 1, 2 and 3 (under construction) complex site spans 15.75 x 10.75 kilometers (X and Y axes). The Z axis is divided into 8 heights. So the mesh has 23048 cells, each one 250 x 250 meters. This work aims to show the performance of an Environmental Nuclear Monitoring System when working with cells of 100 x 100 meters and 50 x 50 meters, where the computational effort of this approach will be addressed using parallel computational programs. (author)
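
    As a back-of-the-envelope illustration of why parallel codes are needed, the sketch below estimates how the horizontal refinement multiplies the number of cells for the domain and layer count quoted in the abstract. Boundary handling is ignored, so the absolute totals are only indicative and will not match the quoted 23048 exactly; the point is the quadratic growth with resolution.

      # Rough scaling of the mesh size when refining the horizontal resolution.
      # Domain size and layer count follow the abstract; boundary cells are ignored,
      # so absolute counts are only indicative.
      DOMAIN_X_M, DOMAIN_Y_M, N_LEVELS = 15750, 10750, 8

      for cell_m in (250, 100, 50):
          nx, ny = DOMAIN_X_M // cell_m, DOMAIN_Y_M // cell_m
          cells = nx * ny * N_LEVELS
          print("%3d m cells: %4d x %4d x %d = %9d cells" % (cell_m, nx, ny, N_LEVELS, cells))
      # Halving the cell size roughly quadruples the horizontal cell count, which is
      # why the 100 m and 50 m grids call for parallel computation.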

  7. The Grid-distributed data analysis in CMS

    International Nuclear Information System (INIS)

    Fanzago, F.; Farina, F.; Cinquilli, M.; Codispoti, G.; Fanfani, F.; Lacaprara, S.; Miccio, E.; Spiga, S.; Vaandering, E.

    2009-01-01

    The CMS experiment will soon produce a huge amount of data (a few petabytes per year) that will be distributed and stored in many computing centres spread across the countries participating in the collaboration. Data will be available to the whole CMS physics community: this will be possible thanks to the services provided by the supported Grids. CRAB is the CMS collaboration tool developed to allow physicists to access and analyze data stored over world-wide sites. It aims to simplify the data discovery process and the job creation, execution and monitoring tasks, hiding the details related both to the Grid infrastructures and to the CMS analysis framework. We will discuss the recent evolution of this tool from its standalone version up to the client-server architecture adopted for particularly challenging workload volumes, and we will report the usage statistics collected from the CRAB community, involving so far almost 600 distinct users.

  8. Jobs API

    Data.gov (United States)

    General Services Administration — This Jobs API returns job openings across the federal government and includes all current openings posted on USAJobs.gov that are open to the public and located in...

  9. Scheduling Network Traffic for Grid Purposes

    DEFF Research Database (Denmark)

    Gamst, Mette

    This thesis concerns scheduling of network traffic in a grid context. Grid computing consists of a number of geographically distributed computers, which work together for solving large problems. The computers are connected through a network. When scheduling job execution in grid computing, data...... transmission has so far not been taken into account. This causes stability problems, because data transmission takes time and thus causes delays to the execution plan. This thesis proposes the integration of job scheduling and network routing. The scientific contribution is based on methods from operations...... research and consists of six papers. The first four consider data transmission in a grid context. The last two solve the data transmission problem, where the number of paths per data connection is bounded from above. The thesis shows that it is possible to solve the integrated job scheduling and network...
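
    The integration argued for here can be illustrated with a toy model: when choosing a site for a job, account for the time needed to move its input data over the network link as well as the compute time, and track how much of each link has already been promised. All site names, rates and job sizes below are invented, and the greedy rule is only a stand-in for the operations research methods used in the thesis.

      # Toy illustration of integrated job scheduling and data transmission:
      # pick, for each job, the site minimising transfer time + execution time,
      # while tracking how much data each link has already been promised.
      SITES = {"siteA": 2.0e9, "siteB": 1.0e9}          # compute rate (ops/s), invented
      LINKS = {"siteA": 1.0e8, "siteB": 5.0e8}          # bandwidth to each site (bytes/s)
      JOBS = [{"id": 1, "ops": 4.0e11, "data_bytes": 2.0e10},
              {"id": 2, "ops": 1.0e11, "data_bytes": 8.0e10}]

      link_load = {s: 0.0 for s in SITES}                # bytes already scheduled per link

      def completion_time(job, site):
          transfer = (link_load[site] + job["data_bytes"]) / LINKS[site]
          execute = job["ops"] / SITES[site]
          return transfer + execute

      for job in JOBS:
          best = min(SITES, key=lambda s: completion_time(job, s))
          print("job %d -> %s (est. %.0f s)" % (job["id"], best, completion_time(job, best)))
          link_load[best] += job["data_bytes"]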

  10. Job Satisfaction

    OpenAIRE

    MANDELÍČKOVÁ, Nikola

    2016-01-01

    This bachelor thesis deals with job satisfaction. Job satisfaction is often discussed in the context of attitudes to work, to which it is closely connected. The thesis summarises the available information about job satisfaction: the factors that affect it negatively and positively, the interconnection of work satisfaction with work motivation, work behaviour and worker performance, the relationship between a person and their work, and finally general job satisfaction and its individual aspects. In the thesis I briefly pay...

  11. Job Satisfaction

    African Journals Online (AJOL)

    Administrator

    of congruence between the job and the reward that the job provides.1 Job satisfaction can be viewed in the context of two decisions people make about their work in joining and remaining in the organization (decision to feel belonged) and working hard in pursuit of high levels of task performance (decision to perform).1.

  12. Lead grids

    CERN Multimedia

    1974-01-01

    One of the 150 lead grids used in the multiwire proportional chamber gamma-ray detector. The 0.75 mm diameter holes are spaced 1 mm centre to centre. The grids were made by chemical cutting techniques in the Godet Workshop of the SB Physics.

  13. Acceleration grid

    International Nuclear Information System (INIS)

    Hemmerich, J.; Kupschus, P.; Fraenkle, H.

    1983-01-01

    The acceleration grid is used in nuclear fusion technology as an ion beam grid. It consists of perforated plates at different potentials, situated one behind the other along the axial direction of their through holes. In order to prevent interference in the perforated area due to thermal expansion, the perforated plates are fixed with elastic springiness (plate fields) at their edges. (DG) [de]

  14. Smart Grid Architectures

    DEFF Research Database (Denmark)

    Dondossola, Giovanna; Terruggia, Roberta; Bessler, Sandford

    2014-01-01

    grids requiring the development of new Information and Communication Technology (ICT) solutions with various degrees of adaptation of the monitoring, communication and control technologies. The costs of ICT based solutions need however to be taken into account, hence it is desirable to work...

  15. Reinforcing User Data Analysis with Ganga in the LHC Era: Scalability, Monitoring and User-support

    CERN Document Server

    Brochu, F; The ATLAS collaboration; Ebke, J; Egede, U; Elmsheuser, J; Jha, M K; Kokoszkiewicz, L; Lee, H C; Maier, A; Moscicki, J; Munchen, T; Reece, W; Samset, B; Slater, M; Tuckett, D; Van der Ster, D; Williams, M

    2010-01-01

    Ganga is a grid job submission and management system widely used in the ATLAS and LHCb experiments and several other communities in the context of the EGEE project. The particle physics communities have entered the LHC operation era, which brings new challenges for user data analysis: a strong growth in the number of users and jobs is already noticeable. Current work in the Ganga project is focusing on dealing with these challenges. In recent Ganga releases the support for the pilot-job-based grid systems Panda and Dirac of the ATLAS and LHCb experiments, respectively, has been strengthened. A more scalable job repository architecture, which allows efficient storage of many thousands of jobs in XML or several database formats, was recently introduced. A better integration with monitoring systems, including the Dashboard and job execution monitor systems, is underway. These will provide comprehensive and easy job monitoring. A simple-to-use error reporting tool integrated at the Ganga command-line will help to impr...

  16. Reinforcing user data analysis with Ganga in the LHC era: scalability, monitoring and user-support.

    CERN Document Server

    Brochu, F; The ATLAS collaboration; Ebke, J; Egede, U; Elmsheuser, J; Jha, M K; Kokoszkiewicz, L; Lee, H C; Maier, A; Moscicki, J; Munchen, T; Reece, W; Samset, B; Slater, M; Tuckett, D; Van der Ster, D; Williams, M

    2011-01-01

    Ganga is a grid job submission and management system widely used in the ATLAS and LHCb experiments and several other communities in the context of the EGEE project. The particle physics communities have entered the LHC operation era, which brings new challenges for user data analysis: a strong growth in the number of users and jobs is already noticeable. Current work in the Ganga project is focusing on dealing with these challenges. In recent Ganga releases the support for the pilot-job-based grid systems Panda and Dirac of the ATLAS and LHCb experiments, respectively, has been strengthened. A more scalable job repository architecture, which allows efficient storage of many thousands of jobs in XML or several database formats, was recently introduced. A better integration with monitoring systems, including the Dashboard and job execution monitor systems, is underway. These will provide comprehensive and easy job monitoring. A simple-to-use error reporting tool integrated at the Ganga command-line will help to imp...

  17. Job submission and management through web services: the experience with the CREAM service

    Science.gov (United States)

    Aiftimiei, C.; Andreetto, P.; Bertocco, S.; Fina, S. D.; Ronco, S. D.; Dorigo, A.; Gianelle, A.; Marzolla, M.; Mazzucato, M.; Sgaravatto, M.; Verlato, M.; Zangrando, L.; Corvo, M.; Miccio, V.; Sciaba, A.; Cesini, D.; Dongiovanni, D.; Grandi, C.

    2008-07-01

    Modern Grid middleware is built around components providing basic functionality, such as data storage, authentication, security, job management, resource monitoring and reservation. In this paper we describe the Computing Resource Execution and Management (CREAM) service. CREAM provides a Web service-based job execution and management capability for Grid systems; in particular, it is being used within the gLite middleware. CREAM exposes a Web service interface allowing conforming clients to submit and manage computational jobs to a Local Resource Management System. We developed a special component, called ICE (Interface to CREAM Environment) to integrate CREAM in gLite. ICE transfers job submissions and cancellations from the Workload Management System, allowing users to manage CREAM jobs from the gLite User Interface. This paper describes some recent studies aimed at assessing the performance and reliability of CREAM and ICE; those tests have been performed as part of the acceptance tests for integration of CREAM and ICE in gLite. We also discuss recent work towards enhancing CREAM with a BES and JSDL compliant interface.
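
    A conforming client ultimately hands CREAM a job description. The sketch below assembles a minimal JDL of the kind used in gLite, limited to widely used core attributes; the script name, arguments and sandbox files are placeholders, and the actual submission step (which goes through the gLite/CREAM client tools and a site-specific CE endpoint) is not shown.

      # Sketch of building a minimal gLite-style JDL job description for a CREAM CE.
      # Only common core attributes are shown; file names are placeholders and the
      # submission command/endpoint are deployment-specific, so they are omitted.
      jdl_attributes = {
          "Executable":    '"run_analysis.sh"',
          "Arguments":     '"dataset42"',
          "StdOutput":     '"job.out"',
          "StdError":      '"job.err"',
          "InputSandbox":  '{"run_analysis.sh"}',
          "OutputSandbox": '{"job.out", "job.err"}',
      }

      jdl_text = "[\n" + "".join("  %s = %s;\n" % (k, v) for k, v in jdl_attributes.items()) + "]\n"

      with open("analysis.jdl", "w") as f:
          f.write(jdl_text)
      print(jdl_text)
      # The resulting file would then be handed to the CREAM/gLite client tools
      # available on the user interface node.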

  18. Job submission and management through web services: the experience with the CREAM service

    International Nuclear Information System (INIS)

    Aiftimiei, C; Andreetto, P; Bertocco, S; Fina, S D; Ronco, S D; Dorigo, A; Gianelle, A; Marzolla, M; Mazzucato, M; Sgaravatto, M; Verlato, M; Zangrando, L; Corvo, M; Miccio, V; Sciaba, A; Cesini, D; Dongiovanni, D; Grandi, C

    2008-01-01

    Modern Grid middleware is built around components providing basic functionality, such as data storage, authentication, security, job management, resource monitoring and reservation. In this paper we describe the Computing Resource Execution and Management (CREAM) service. CREAM provides a Web service-based job execution and management capability for Grid systems; in particular, it is being used within the gLite middleware. CREAM exposes a Web service interface allowing conforming clients to submit and manage computational jobs to a Local Resource Management System. We developed a special component, called ICE (Interface to CREAM Environment) to integrate CREAM in gLite. ICE transfers job submissions and cancellations from the Workload Management System, allowing users to manage CREAM jobs from the gLite User Interface. This paper describes some recent studies aimed at assessing the performance and reliability of CREAM and ICE; those tests have been performed as part of the acceptance tests for integration of CREAM and ICE in gLite. We also discuss recent work towards enhancing CREAM with a BES and JSDL compliant interface

  19. Grid Security

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    The aim of Grid computing is to enable the easy and open sharing of resources between large and highly distributed communities of scientists and institutes across many independent administrative domains. Convincing site security officers and computer centre managers to allow this to happen in view of today's ever-increasing Internet security problems is a major challenge. Convincing users and application developers to take security seriously is equally difficult. This paper will describe the main Grid security issues, both in terms of technology and policy, that have been tackled over recent years in LCG and related Grid projects. Achievements to date will be described and opportunities for future improvements will be addressed.

  20. Grid Computing

    Indian Academy of Sciences (India)

    A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers ...

  1. Mistral Supercomputer Job History Analysis

    OpenAIRE

    Zasadziński, Michał; Muntés-Mulero, Victor; Solé, Marc; Ludwig, Thomas

    2018-01-01

    In this technical report, we show insights and results of operational data analysis from petascale supercomputer Mistral, which is ranked as 42nd most powerful in the world as of January 2018. Data sources include hardware monitoring data, job scheduler history, topology, and hardware information. We explore job state sequences, spatial distribution, and electric power patterns.

  2. Grid Computing

    Science.gov (United States)

    Foster, Ian

    2001-08-01

    The term "Grid Computing" refers to the use, for computational purposes, of emerging distributed Grid infrastructures: that is, network and middleware services designed to provide on-demand and high-performance access to all important computational resources within an organization or community. Grid computing promises to enable both evolutionary and revolutionary changes in the practice of computational science and engineering based on new application modalities such as high-speed distributed analysis of large datasets, collaborative engineering and visualization, desktop access to computation via "science portals," rapid parameter studies and Monte Carlo simulations that use all available resources within an organization, and online analysis of data from scientific instruments. In this article, I examine the status of Grid computing circa 2000, briefly reviewing some relevant history, outlining major current Grid research and development activities, and pointing out likely directions for future work. I also present a number of case studies, selected to illustrate the potential of Grid computing in various areas of science.

  3. Contribution of job-exposure matrices for exposure assessment in occupational safety and health monitoring systems: application from the French national occupational disease surveillance and prevention network.

    Science.gov (United States)

    Florentin, Arnaud; Zmirou-Navier, Denis; Paris, Christophe

    2017-08-01

    To detect new hazards ("signals"), occupational health monitoring systems mostly rest on the description of exposures in the jobs held and on reports by medical doctors; these are subject to declarative bias. Our study aims to assess whether job-exposure matrices (JEMs) could be useful tools for signal detection by improving exposure reporting. Using the French national occupational disease surveillance and prevention network (RNV3P) data from 2001 to 2011, we explored the associations between disease and exposure prevalence for 3 well-known pathology/exposure couples and for one debatable couple. We compared the associations measured when using physicians' reports or applying the JEMs, respectively, for these selected diseases and across non-selected RNV3P population or for cases with musculoskeletal disorders, used as two reference groups; the ratio of exposure prevalences according to the two sources of information were computed for each disease category. Our population contained 58,188 subjects referred with pathologies related to work. Mean age at diagnosis was 45.8 years (95% CI 45.7; 45.9), and 57.2% were men. For experts, exposure ratios increase with knowledge on exposure causality. As expected, JEMs retrieved more exposed cases than experts (exposure ratios between 12 and 194), except for the couple silica/silicosis, but not for the MSD control group (ratio between 0.2 and 0.8). JEMs enhanced the number of exposures possibly linked with some conditions, compared to experts' assessment, relative to the whole database or to a reference group; they are less likely to suffer from declarative bias than reports by occupational health professionals.

  4. NRC Job Code V6060: Extended in-situ and real time monitoring. Task 4: Detection and monitoring of leaks at nuclear power plants external to structures

    Energy Technology Data Exchange (ETDEWEB)

    Sheen, S. H. (Nuclear Engineering Division)

    2012-08-01

    In support of Task 4 of the NRC study on compliance with 10 CFR part 20.1406, minimization of contamination, Argonne National Laboratory (ANL) conducted a one-year scoping study, in concert with a parallel study performed by NRC/NRR staff, on monitoring for leaks at nuclear power plants (NPPs) external to structures. The objective of this task-4 study is to identify and assess sensors and monitoring techniques for early detection of abnormal radioactive releases from the engineered facility structures, systems and components (SSCs) to the surrounding underground environment in existing NPPs and planned new reactors. As such, methods of interest include: (1) detection of anomalous water content of soils surrounding SSCs, (2) detection of radionuclides contained in the leaking water, and (3) secondary signals such as temperature. The ANL work scope mainly comprises (1) identifying, in concert with the nuclear industry, the sensors and techniques that show the most promise for detecting radionuclides and/or associated chemical releases from the SSCs of existing NPPs, and (2) reviewing and providing comments on the results of the NRC/NRR staff scoping study to identify candidate technologies. This report constitutes the ANL deliverable of the task-4 study. It covers a survey of sensor technologies and leak detection methods currently applied to leak monitoring at NPPs. The survey also provides a technology evaluation that identifies their strengths and deficiencies based on their detection speed, sensitivity, range and reliability. Emerging advanced technologies that are potentially capable of locating releases, identifying the radionuclides, and estimating their concentrations and distributions are also included in the report along with suggestions of required further research and development.

  5. On gLite WMS/LB monitoring and Management through WMSMonitor

    Energy Technology Data Exchange (ETDEWEB)

    Cesini, D; Dongiovanni, D; Fattibene, E; Ferrari, T, E-mail: daniele.cesini@cnaf.infn.i, E-mail: danilo.dongiovanni@cnaf.infn.i [INFN (Italy)

    2010-04-01

    The Workload Management System is the gLite service supporting the distributed production and analysis activities of various HEP experiments. It is responsible for dispatching computing jobs to remote computing facilities by matching job requirements and the resource status information collected from the Grid information services. Given the distributed and heterogeneous nature of the Grid, the monitoring of the job lifecycle and of the aggregate workflow patterns generated by multiple user communities, and the reliability of the service, are of great importance. In this paper we deal with the problem of WMS monitoring and management. We present the architecture and implementation of WMSMonitor, a tool for WMS monitoring and management, which has been designed to meet the needs of various WMS user categories: administrators, developers, advanced Grid users and performance testers. The tool was successfully deployed to monitor the progress of WMS job submission activities during HEP computing challenges. We also describe how, for each WMS in a cluster, WMSMonitor produces status indexes and a load metric that can be used for automated notification of critical events via Nagios, or for ranking of service instances deployed in load-balancing mode.
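
    As an illustration of the kind of aggregate metric mentioned above, the sketch below folds a few per-instance measurements into a single load score and a coarse status index that could drive ranking or alerting. The component metrics, weights and thresholds are invented for illustration and are not the actual WMSMonitor formula.

      # Invented example of turning per-WMS measurements into a load metric and a
      # status index for ranking/alerting; not the actual WMSMonitor formula.
      WMS_INSTANCES = {
          "wms01.example.org": {"queued_jobs": 1200, "disk_used_frac": 0.55, "daemons_down": 0},
          "wms02.example.org": {"queued_jobs": 4800, "disk_used_frac": 0.92, "daemons_down": 1},
      }

      def load_metric(m):
          """Weighted combination of normalised measurements (weights are arbitrary)."""
          return (0.5 * min(m["queued_jobs"] / 5000.0, 1.0)
                  + 0.3 * m["disk_used_frac"]
                  + 0.2 * min(m["daemons_down"], 1))

      def status_index(load):
          return "OK" if load < 0.5 else "WARNING" if load < 0.8 else "CRITICAL"

      ranked = sorted(WMS_INSTANCES, key=lambda h: load_metric(WMS_INSTANCES[h]))
      for host in ranked:
          load = load_metric(WMS_INSTANCES[host])
          print("%-20s load=%.2f status=%s" % (host, load, status_index(load)))
      # The least loaded instance (first in 'ranked') would be preferred when
      # dispatching in load-balancing mode; CRITICAL would trigger a notification.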

  6. On gLite WMS/LB monitoring and Management through WMSMonitor

    International Nuclear Information System (INIS)

    Cesini, D; Dongiovanni, D; Fattibene, E; Ferrari, T

    2010-01-01

    The Workload Management System is the gLite service supporting the distributed production and analysis activities of various HEP experiments. It is responsible for dispatching computing jobs to remote computing facilities by matching job requirements and the resource status information collected from the Grid information services. Given the distributed and heterogeneous nature of the Grid, the monitoring of the job lifecycle and of the aggregate workflow patterns generated by multiple user communities, and the reliability of the service, are of great importance. In this paper we deal with the problem of WMS monitoring and management. We present the architecture and implementation of WMSMonitor, a tool for WMS monitoring and management, which has been designed to meet the needs of various WMS user categories: administrators, developers, advanced Grid users and performance testers. The tool was successfully deployed to monitor the progress of WMS job submission activities during HEP computing challenges. We also describe how, for each WMS in a cluster, WMSMonitor produces status indexes and a load metric that can be used for automated notification of critical events via Nagios, or for ranking of service instances deployed in load-balancing mode.

  7. Smart grid security innovative solutions for a modernized grid

    CERN Document Server

    Skopik, Florian

    2015-01-01

    The Smart Grid security ecosystem is complex and multi-disciplinary, and relatively under-researched compared to the traditional information and network security disciplines. While the Smart Grid has provided increased efficiencies in monitoring power usage, directing power supplies to serve peak power needs and improving efficiency of power delivery, the Smart Grid has also opened the way for information security breaches and other types of security breaches. Potential threats range from meter manipulation to directed, high-impact attacks on critical infrastructure that could bring down regi

  8. Grid Computing

    Indian Academy of Sciences (India)

    IAS Admin

    A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to ...

  9. Job Attitudes of Workers with Two Jobs

    Science.gov (United States)

    Zickar, Michael J.; Gibby, Robert E.; Jenny, Tim

    2004-01-01

    This article examines the job attitudes of people who hold more than one job. Satisfaction, stress, and organizational (continuance and affective) commitment were assessed for both primary and secondary jobs for 83 full-time workers who held two jobs concurrently. Consistency between job constructs across jobs was negligible, except for…

  10. Optimal Grid Scheduling Using Improved Artificial Bee Colony Algorithm

    OpenAIRE

    T. Vigneswari; M. A. Maluk Mohamed

    2015-01-01

    Job Scheduling plays an important role for efficient utilization of grid resources available across different domains and geographical zones. Scheduling of jobs is challenging and NP-complete. Evolutionary / Swarm Intelligence algorithms have been extensively used to address the NP problem in grid scheduling. Artificial Bee Colony (ABC) has been proposed for optimization problems based on the foraging behaviour of bees. This work proposes a modified ABC algorithm, Cluster Hete...
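
    To give a feel for how an ABC-style heuristic maps onto grid scheduling, the sketch below searches over job-to-machine assignments to minimise the makespan. It follows the generic ABC scheme in a simplified form (the employed and onlooker phases are collapsed into one improvement pass), and the job lengths and machine speeds are synthetic; it is not the modified algorithm proposed in the paper.

      # Simplified ABC-style heuristic for mapping grid jobs to resources so as to
      # minimise the makespan. Synthetic problem data; generic scheme only.
      import random

      JOB_LENGTH = [40, 12, 55, 23, 8, 31, 19, 47]     # abstract work units per job
      MACHINE_SPEED = [1.0, 1.5, 0.8]                  # relative machine speeds
      N_SOURCES, LIMIT, CYCLES = 10, 5, 200

      def makespan(assign):
          load = [0.0] * len(MACHINE_SPEED)
          for j, m in enumerate(assign):
              load[m] += JOB_LENGTH[j] / MACHINE_SPEED[m]
          return max(load)

      def neighbour(assign):
          new = assign[:]
          new[random.randrange(len(new))] = random.randrange(len(MACHINE_SPEED))
          return new

      def random_source():
          return [random.randrange(len(MACHINE_SPEED)) for _ in JOB_LENGTH]

      sources = [random_source() for _ in range(N_SOURCES)]
      trials = [0] * N_SOURCES

      for _ in range(CYCLES):
          # employed/onlooker phases collapsed: try to improve each food source
          for i in range(N_SOURCES):
              cand = neighbour(sources[i])
              if makespan(cand) < makespan(sources[i]):
                  sources[i], trials[i] = cand, 0
              else:
                  trials[i] += 1
          # scout phase: abandon sources that stopped improving
          for i in range(N_SOURCES):
              if trials[i] > LIMIT:
                  sources[i], trials[i] = random_source(), 0

      best = min(sources, key=makespan)
      print("best assignment:", best, "makespan %.1f" % makespan(best))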

  11. TAXONOMY OF OPTIMIZATION APPROACHES OF RESOURCE BROKERS IN DATA GRIDS

    OpenAIRE

    Rafah M. Almuttairi

    2015-01-01

    In a Data Grid architecture, the optimizer or resource broker is the tool used where decisions must be taken: an optimizer is needed whenever it has to be determined when or how to acquire the services and/or resources for higher-level components. Many replica optimizers using different replica selection strategies have been proposed and developed by research groups and companies. In Data Grid jobs, the file access pattern varies between jobs, as shown in Figure 1, therefo...

  12. Index Grids - MDC_USNationalGrid

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — The U.S. National Grid is based on universally defined coordinate and grid systems and can, therefore, be easily extended for use world-wide as a universal grid...

  13. Pilot factory - a Condor-based system for scalable Pilot Job generation in the Panda WMS framework

    International Nuclear Information System (INIS)

    Chiu, Po-Hsiang; Potekhin, Maxim

    2010-01-01

    The Panda Workload Management System is designed around the concept of the Pilot Job - a 'smart wrapper' for the payload executable that can probe the environment on the remote worker node before pulling down the payload from the server and executing it. Such a design allows for improved logging and monitoring capabilities as well as flexibility in Workload Management. In the Grid environment (such as the Open Science Grid), Panda Pilot Jobs are submitted to remote sites via mechanisms that ultimately rely on Condor-G. As our experience has shown, in cases where a large number of Panda jobs are simultaneously routed to a particular remote site, the increased load on the head node of the cluster, which is caused by the Pilot Job submission, may lead to an overall lack of scalability. We have developed a Condor-inspired solution to this problem, which uses a schedd-based glidein whose mission is to redirect pilots to the native batch system. Once a glidein schedd is installed and running, it can be utilized exactly the same way as local schedds and therefore, from the user's perspective, Pilots thus submitted are quite similar to jobs submitted to the local Condor pool.
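
    For context, Condor-G pilot submission is driven by a submit description of roughly the shape generated below. The gatekeeper contact string, pilot script and file names are placeholders, and a production pilot factory would add many more attributes (proxy, requirements, environment), so this is only an illustrative skeleton.

      # Sketch of generating a minimal Condor-G submit description for pilot jobs.
      # The gatekeeper contact and file names are placeholders.
      SUBMIT_TEMPLATE = """\
      universe      = grid
      grid_resource = gt2 {gatekeeper}/jobmanager-condor
      executable    = {pilot_script}
      output        = pilot_$(Cluster).$(Process).out
      error         = pilot_$(Cluster).$(Process).err
      log           = pilot.log
      queue {count}
      """

      def write_submit_file(path, gatekeeper, pilot_script, count):
          with open(path, "w") as f:
              f.write(SUBMIT_TEMPLATE.format(gatekeeper=gatekeeper,
                                             pilot_script=pilot_script,
                                             count=count))

      if __name__ == "__main__":
          write_submit_file("pilot.submit",
                            gatekeeper="gridgk01.example.edu",
                            pilot_script="pilot_wrapper.sh",
                            count=20)
          # The file would then be handed to condor_submit on the submit host.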

  14. Job hazards and job security.

    Science.gov (United States)

    Robinson, J C

    1986-01-01

    This paper studies the link between occupational health hazards and job security. Consistent with the underlying hypothesis that firms utilizing hazardous technologies tend to employ low-skilled workers who can be discharged easily in case of a downturn in business, the analysis indicates that workers in hazardous positions are more likely to face involuntary job loss than are those in safe positions. These workers may be particularly sensitive to political arguments that efforts to reduce exposure to toxins in the workplace and the general environment are responsible for layoffs and plant closures. The paper discusses policy alternatives that could reduce the impact of health regulations on job security.

  15. The open science grid

    International Nuclear Information System (INIS)

    Pordes, Ruth; Petravick, Don; Kramer, Bill; Olson, Doug; Livny, Miron; Roy, Alain; Avery, Paul; Blackburn, Kent; Wenaus, Torre; Wuerthwein, Frank; Foster, Ian; Gardner, Rob; Wilde, Mike; Blatecky, Alan; McGee, John; Quick, Rob

    2007-01-01

    The Open Science Grid (OSG) provides a distributed facility where the Consortium members provide guaranteed and opportunistic access to shared computing and storage resources. OSG provides support for and evolution of the infrastructure through activities that cover operations, security, software, troubleshooting, addition of new capabilities, and support for existing and engagement with new communities. The OSG SciDAC-2 project provides specific activities to manage and evolve the distributed infrastructure and support its use. The innovative aspects of the project are the maintenance and performance of a collaborative (shared and common) petascale national facility over tens of autonomous computing sites, for many hundreds of users, transferring terabytes of data a day, executing tens of thousands of jobs a day, and providing robust and usable resources for scientific groups of all types and sizes. More information can be found at the OSG web site: www.opensciencegrid.org

  16. The Open Science Grid

    CERN Document Server

    Pordes, Ruth; Olson, Doug; Livny, Miron; Roy, Alain; Avery, Paul; Blackburn, Kent; Wenaus, Torre; Wuerthwein, Frank K.; Gardner, Rob; Wilde, Mike; Blatecky, Alan; McGee, John; Quick, Rob

    2007-01-01

    The Open Science Grid (OSG) provides a distributed facility where the Consortium members provide guaranteed and opportunistic access to shared computing and storage resources. OSG provides support for and evolution of the infrastructure through activities that cover operations, security, software, troubleshooting, addition of new capabilities, and support for existing and engagement with new communities. The OSG SciDAC-2 project provides specific activities to manage and evolve the distributed infrastructure and support its use. The innovative aspects of the project are the maintenance and performance of a collaborative (shared & common) petascale national facility over tens of autonomous computing sites, for many hundreds of users, transferring terabytes of data a day, executing tens of thousands of jobs a day, and providing robust and usable resources for scientific groups of all types and sizes. More information can be found at the OSG web site: www.opensciencegrid.org.

  17. The Open Science Grid

    Energy Technology Data Exchange (ETDEWEB)

    Pordes, Ruth; /Fermilab; Kramer, Bill; Olson, Doug; / /LBL, Berkeley; Livny, Miron; Roy, Alain; /Wisconsin U., Madison; Avery, Paul; /Florida U.; Blackburn, Kent; /Caltech; Wenaus, Torre; /Brookhaven; Wurthwein, Frank; /UC, San Diego; Gardner, Rob; Wilde, Mike; /Chicago U. /Indiana U.

    2007-06-01

    The Open Science Grid (OSG) provides a distributed facility where the Consortium members provide guaranteed and opportunistic access to shared computing and storage resources. OSG provides support for and evolution of the infrastructure through activities that cover operations, security, software, troubleshooting, addition of new capabilities, and support for existing and engagement with new communities. The OSG SciDAC-2 project provides specific activities to manage and evolve the distributed infrastructure and support its use. The innovative aspects of the project are the maintenance and performance of a collaborative (shared & common) petascale national facility over tens of autonomous computing sites, for many hundreds of users, transferring terabytes of data a day, executing tens of thousands of jobs a day, and providing robust and usable resources for scientific groups of all types and sizes. More information can be found at the OSG web site: www.opensciencegrid.org.

  18. Autonomous Energy Grids: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Kroposki, Benjamin D [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bernstein, Andrey [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhang, Yingchen [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-10-04

    With much higher levels of distributed energy resources - variable generation, energy storage, and controllable loads, just to mention a few - being deployed into power systems, the data deluge from pervasive metering of energy grids, and the shaping of multi-level ancillary-service markets, current frameworks for monitoring, controlling, and optimizing large-scale energy systems are becoming increasingly inadequate. This position paper outlines the concept of 'Autonomous Energy Grids' (AEGs) - systems that are supported by a scalable, reconfigurable, and self-organizing information and control infrastructure, can be extremely secure and resilient (self-healing), and optimize themselves in real time for economic and reliable performance while systematically integrating energy in all forms. AEGs rely on scalable, self-configuring cellular building blocks that ensure that each 'cell' can self-optimize when isolated from a larger grid as well as partaking in the optimal operation of a larger grid when interconnected. To realize this vision, this paper describes the concepts and key research directions in the broad domains of optimization theory, control theory, big-data analytics, and complex system modeling that will be necessary to realize the AEG vision.

  19. A principled approach to grid middleware

    DEFF Research Database (Denmark)

    Berthold, Jost; Bardino, Jonas; Vinter, Brian

    2011-01-01

    This paper provides an overview of MiG, a Grid middleware for advanced job execution, data storage and group collaboration in an integrated, yet lightweight solution using standard software. In contrast to most other Grid middlewares, MiG is developed with a particular focus on usability and minimal system requirements, applying strict principles to keep the middleware free of legacy burdens and overly complicated design. We provide an overview of MiG and describe its features in view of the Grid vision and its relation to more recent cloud computing trends....

  20. HIRENASD NLR grid

    Data.gov (United States)

    National Aeronautics and Space Administration — Structured multiblock grid of HIRENASD wing with medium grid density, about 10 mill grid points, 9.5 mill cells. Starting from coarse AIAA AEPW structured grid,...

  1. Grid pulser

    International Nuclear Information System (INIS)

    Jansweijer, P.P.M.; Es, J.T. van.

    1990-01-01

    This report describes a fast pulse generator. This generator delivers a high-voltage pulse of at most 6000 V with a rise time of less than 50 ns. This results in a slew rate of more than 120,000 volts per μs. The pulse generator is used to control the grid of the injector of the electron accelerator MEA. The capacity of this grid is about 60 pF. In order to charge this capacity up to 6000 volts in 50 ns, a current of 8 ampere is needed. The maximal pulse length is 50 μs with a repeat frequency of 500 Hz. During this 50 μs the stability of the pulse amplitude is better than 0.1%. (author). 20 figs
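
    As a quick check of the figures quoted above: the required drive current follows from I = C · dV/dt, so with C ≈ 60 pF, ΔV = 6000 V and Δt = 50 ns one gets I = (60e-12 F × 6000 V) / 50e-9 s ≈ 7.2 A, consistent with the roughly 8 A the generator is specified to deliver.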

  2. The grid

    OpenAIRE

    Morrad, Annie; McArthur, Ian

    2018-01-01

    Project Anywhere Project title: The Grid   Artists: Annie Morrad: Artist/Senior Lecturer, University of Lincoln, School of Film and Media, Lincoln, UK   Dr Ian McArthur: Hybrid Practitioner/Senior Lecturer, UNSW Art & Design, UNSW Australia, Sydney, Australia   Annie Morrad is a London-based artist and musician and senior lecturer at the University of Lincoln, UK. Dr Ian McArthur is a Sydney-based hybrid practitione...

  3. Grid interoperability: joining grid information systems

    International Nuclear Information System (INIS)

    Flechl, M; Field, L

    2008-01-01

    A grid is defined as being 'coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations'. Over recent years a number of grid projects, many of which have a strong regional presence, have emerged to help coordinate institutions and enable grids. Today, we face a situation where a number of grid projects exist, most of which are using slightly different middleware. Grid interoperation is trying to bridge these differences and enable Virtual Organizations to access resources at the institutions independent of their grid project affiliation. Grid interoperation is usually a bilateral activity between two grid infrastructures. Recently within the Open Grid Forum, the Grid Interoperability Now (GIN) Community Group is trying to build upon these bilateral activities. The GIN group is a focal point where all the infrastructures can come together to share ideas and experiences on grid interoperation. It is hoped that each bilateral activity will bring us one step closer to the overall goal of a uniform grid landscape. A fundamental aspect of a grid is the information system, which is used to find available grid services. As different grids use different information systems, interoperation between these systems is crucial for grid interoperability. This paper describes the work carried out to overcome these differences between a number of grid projects and the experiences gained. It focuses on the different techniques used and highlights the important areas for future standardization

  4. Safe Grid

    Science.gov (United States)

    Chow, Edward T.; Stewart, Helen; Korsmeyer, David (Technical Monitor)

    2003-01-01

    The biggest users of GRID technologies come from the science and technology communities. These consist of government, industry and academia (national and international). The NASA GRID is moving into a higher technology readiness level (TRL) today; and as a joint effort among these leaders within government, academia, and industry, the NASA GRID plans to extend availability to enable scientists and engineers across these geographical boundaries to collaborate in solving important problems facing the world in the 21st century. In order to enable NASA programs and missions to use IPG resources for program and mission design, the IPG capabilities need to be accessible from inside the NASA center networks. However, because different NASA centers maintain different security domains, GRID penetration across different firewalls is a concern for center security staff. This is the reason why some IPG resources have been separated from the NASA center network. Also, because of center network security and ITAR concerns, the NASA IPG resource owner may not have full control over who can access remotely from outside the NASA center. In order to obtain organizational approval for secured remote access, the IPG infrastructure needs to be adapted to work with the NASA business process. Improvements need to be made before the IPG can be used for NASA program and mission development. The Secured Advanced Federated Environment (SAFE) technology is designed to provide federated security across NASA center and NASA partner security domains. Instead of one giant center firewall, which can be difficult to modify for different GRID applications, the SAFE "micro security domain" provides a large number of professionally managed "micro firewalls" that allow NASA centers to accept remote IPG access without the worry of damaging other center resources. The SAFE policy-driven, capability-based federated security mechanism can enable joint organizational and resource owner approved remote

  5. The CrossGrid project

    International Nuclear Information System (INIS)

    Kunze, M.

    2003-01-01

    There are many large-scale problems that require new approaches to computing, such as earth observation, environmental management, biomedicine, industrial and scientific modeling. The CrossGrid project addresses realistic problems in medicine, environmental protection, flood prediction, and physics analysis and is oriented towards specific end-users: Medical doctors, who could obtain new tools to help them to obtain correct diagnoses and to guide them during operations; industries, that could be advised on the best timing for some critical operations involving risk of pollution; flood crisis teams, that could predict the risk of a flood on the basis of historical records and actual hydrological and meteorological data; physicists, who could optimize the analysis of massive volumes of data distributed across countries and continents. Corresponding applications will be based on Grid technology and could be complex and difficult to use: the CrossGrid project aims at developing several tools that will make the Grid more friendly for average users. Portals for specific applications will be designed, that should allow for easy connection to the Grid, create a customized work environment, and provide users with all necessary information to get their job done

  6. FINAL REPORT - CENTER FOR GRID MODERNIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Markiewicz, Daniel R

    2008-06-30

    The objective of the CGM was to develop high-priority grid modernization technologies in advanced sensors, communications, controls and smart systems to enable use of real-time or near real-time information for monitoring, analyzing and managing distribution and transmission grid conditions. The key strategic approach to carry out individual CGM research and development (R&D) projects was through partnerships, primarily with the GridApp™ Consortium utility members.

  7. Pre-employment job orientation seminar - after two years

    International Nuclear Information System (INIS)

    Petullo, C.F.

    1988-01-01

    Formerly, because of upgraded education requirements, applicants for the Radiation Monitor job were primarily individuals with 4-yr degrees in the sciences or engineering. Reynolds found, however, that they lost a very high percentage of such hires within 4 months of training and going into the field - since the hires had felt they were being taken on for a white-collar job. Therefore, they instituted the Pre-Employment Job Training Seminar for all applicants - a 40-minute slide presentation showing the Nevada Test Site terrain; Radiation Monitor job locations such as tunnels, drill rigs, and a decontamination facility; and Radiation Monitors active in their job functions. In other words, the earthy side of the job is depicted. Applicants still interested in the job after initial orientation stay for a 3-hour examination. The author expects future hiring at the technician level, with job requirements of a high school diploma and the science and math skills necessary for good job performance

  8. Tomorrow's Jobs

    Science.gov (United States)

    Today's Education, 1972

    1972-01-01

    Synopsis of Department of Labor projections for coming decade shows continuing growth in professional, service, clerical, sales employment, slower growth rate for craftsmen, mechanics, managers and proprietors with relatively same demand for semi-skilled, laborers and farmers. By 1980 labor force and job seekers will increase approximately 17…

  9. Job autonomy and job satisfaction: new evidence

    OpenAIRE

    J Taylor; S Bradley; A N Nguyen

    2003-01-01

    This paper investigates the impact of perceived job autonomy on job satisfaction. We use the fifth sweep of the National Educational Longitudinal Study (1988-2000), which contains personally reported job satisfaction data for a sample of individuals eight years after the end of compulsory education. After controlling for a wide range of personal and job-related variables, perceived job autonomy is found to be a highly significant determinant of five separate domains of job satisfaction (pay, ...

  10. Integrated Job Scheduling and Network Routing

    DEFF Research Database (Denmark)

    Gamst, Mette; Pisinger, David

    2013-01-01

    We consider an integrated job scheduling and network routing problem which appears in Grid Computing and production planning. The problem is to schedule a number of jobs at a finite set of machines, such that the overall profit of the executed jobs is maximized. Each job demands a number of resources which must be sent to the executing machine through a network with limited capacity. A job cannot start before all of its resources have arrived at the machine. The scheduling problem is formulated as a Mixed Integer Program (MIP) and proved to be NP-hard. An exact solution approach using Dantzig-Wolfe decomposition is presented. The pricing problem is the linear multicommodity flow problem defined on a time-space network. Branching strategies are presented for the branch-and-price algorithm, and three heuristics and an exact solution method are implemented for finding a feasible start solution. Finally...
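
    To make the formulation concrete, the following is a minimal sketch of the job-to-machine assignment core (profit maximization under machine capacity) using the PuLP modelling library; it omits the network-routing and time-space aspects of the full model, and all data are invented.

        # Minimal sketch of the assignment core of the scheduling MIP; the network
        # routing side of the paper's model is omitted and all data are invented.
        from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum

        jobs = {"j1": {"profit": 10, "cpu": 4}, "j2": {"profit": 7, "cpu": 2}, "j3": {"profit": 5, "cpu": 3}}
        machines = {"m1": {"cpu": 5}, "m2": {"cpu": 4}}

        prob = LpProblem("job_scheduling", LpMaximize)
        # x[j, m] = 1 if job j is executed on machine m
        x = {(j, m): LpVariable(f"x_{j}_{m}", cat=LpBinary) for j in jobs for m in machines}

        # objective: total profit of the executed jobs
        prob += lpSum(jobs[j]["profit"] * x[j, m] for j in jobs for m in machines)
        # each job runs on at most one machine
        for j in jobs:
            prob += lpSum(x[j, m] for m in machines) <= 1
        # machine capacity constraints
        for m in machines:
            prob += lpSum(jobs[j]["cpu"] * x[j, m] for j in jobs) <= machines[m]["cpu"]

        prob.solve()
        print([(j, m) for (j, m), var in x.items() if var.value() and var.value() > 0.5])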

  11. Synchronized phasor measurements for smart grids

    CERN Document Server

    Mohanta, D K

    2017-01-01

    The use of advanced technologies such as Phasor Measurement Units (PMUs) have made it possible to transform the power grid to an intelligent Smart Grid with realtime control and monitoring of the system. This book explores the application of PMUs in power systems.

  12. Grid-based Visualization Framework

    Science.gov (United States)

    Thiebaux, M.; Tangmunarunkit, H.; Kesselman, C.

    2003-12-01

    Advances in science and engineering have put high demands on tools for high-performance large-scale visual data exploration and analysis. For example, earthquake scientists can now study earthquake phenomena from first principle physics-based simulations. These simulations can generate large amounts of data, possibly high spatial resolution, and long time series. Single-system visualization software running on commodity machines cannot scale up to the large amounts of data generated by these simulations. To address this problem, we propose a flexible and extensible Grid-based visualization framework for time-critical, interactively controlled visual browsing of spatially and temporally large datasets in a Grid environment. Our framework leverages Grid resources for scalable computation and data storage to maintain performance and interactivity with large visualization jobs. Our framework utilizes Globus Toolkit 2.4 components for security (i.e., GSI), resource allocation and management (i.e., DUROC, GRAM) and communication (i.e., Globus-IO) to couple commodity desktops with remote, scalable storage and computational resources in a Grid for interactive data exploration. There are two major components in this framework---Grid Data Transport (GDT) and the Grid Visualization Utility (GVU). GDT provides libraries for performing parallel data filtering and parallel data exchange among Grid resources. GDT allows arbitrary data filtering to be integrated into the system. It also facilitates multi-tiered pipeline topology construction of compute resources and displays. In addition to scientific visualization applications, GDT can be used to support other applications that require parallel processing and parallel transfer of partial ordered independent files, such as file-set transfer. On top of GDT, we have developed the Grid Visualization Utility (GVU), which is designed to assist visualization dataset management, including file formatting, data transport and automatic

  13. The event notification and alarm system for the Open Science Grid operations center

    Science.gov (United States)

    Hayashi, S.; Teige, S.; Quick, R.

    2012-12-01

    The Open Science Grid Operations (OSG) Team operates a distributed set of services and tools that enable the utilization of the OSG by several HEP projects. Without these services users of the OSG would not be able to run jobs, locate resources, obtain information about the status of systems or generally use the OSG. For this reason these services must be highly available. This paper describes the automated monitoring and notification systems used to diagnose and report problems. Described here are the means used by OSG Operations to monitor systems such as physical facilities, network operations, server health, service availability and software error events. Once detected, an error condition generates a message sent to, for example, Email, SMS, Twitter, an Instant Message Server, etc. The mechanism being developed to integrate these monitoring systems into a prioritized and configurable alarming system is emphasized.
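
    The abstract gives no implementation details; the sketch below is only a toy illustration of the "prioritized and configurable" routing idea it describes, with invented channel names, severities and thresholds.

        # Toy illustration of a prioritized, configurable alarm dispatcher.
        # Channel names, severities and the routing table are invented for this sketch.
        from dataclasses import dataclass

        @dataclass
        class Event:
            source: str      # e.g. "network", "service-availability"
            severity: int    # 1 = informational ... 5 = critical
            message: str

        # Configurable routing: a channel receives every event at or above its threshold.
        ROUTING = {"email": 2, "instant-message": 3, "sms": 4}

        def notify(channel: str, event: Event) -> None:
            # Stand-in for the real delivery mechanisms (Email, SMS, IM, ...).
            print(f"[{channel}] {event.source}: {event.message}")

        def dispatch(event: Event) -> None:
            for channel, threshold in ROUTING.items():
                if event.severity >= threshold:
                    notify(channel, event)

        dispatch(Event("service-availability", 4, "CE not responding"))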

  14. GreenView and GreenLand Applications Development on SEE-GRID Infrastructure

    Science.gov (United States)

    Mihon, Danut; Bacu, Victor; Gorgan, Dorian; Mészáros, Róbert; Gelybó, Györgyi; Stefanut, Teodor

    2010-05-01

    The GreenView and GreenLand applications [1] have been developed through the SEE-GRID-SCI (SEE-GRID eInfrastructure for regional eScience) FP7 project co-funded by the European Commission [2]. The development of environmental applications is a challenge for Grid technologies and software development methodologies. This presentation exemplifies the development of the GreenView and GreenLand applications over the SEE-GRID infrastructure using the Grid Application Development Methodology [3]. Today's environmental applications are used in various domains of Earth Science such as meteorology, ground and atmospheric pollution, ground metal detection or weather prediction. These applications run on satellite images (e.g. Landsat, MERIS, MODIS, etc.) and the accuracy of the output results depends mostly on the quality of these images. The main drawback of such environmental applications concerns the need for computing power and storage capacity (some images are almost 1 GB in size) in order to process such a large data volume. In practice, most applications requiring high computational resources have migrated onto the Grid infrastructure. This infrastructure offers the computing power by running the atomic application components on different Grid nodes in sequential or parallel mode. The middleware used between the Grid infrastructure and client applications is ESIP (Environment Oriented Satellite Image Processing Platform), which is based on the gProcess platform [4]. In its current form, gProcess is used for launching new processes on the Grid nodes, but also for monitoring the execution status of these processes. This presentation highlights two case studies of Grid-based environmental applications, GreenView and GreenLand [5]. GreenView is used in correlation with MODIS (Moderate Resolution Imaging Spectroradiometer) satellite images and meteorological datasets, in order to produce pseudo-colored temperature and vegetation maps for different geographical CEE (Central

  15. ATLAS Data-Challenge 1 on NorduGrid

    CERN Document Server

    Eerola, Paule Anna Mari; Smirnova, O.; Ekelof, T.; Ellert, M.; Hansen, J.R.; Nielsen, J.L.; Waananen, A.; Hellman, S.; Konstantinov, A.; Myklebust, T.; Ould-Saada, F.

    2003-01-01

    The first LHC application ever to be executed in a computational Grid environment is the so-called ATLAS Data-Challenge 1, more specifically, the part assigned to the Scandinavian members of the ATLAS Collaboration. Taking advantage of the NorduGrid testbed and tools, physicists from Denmark, Norway and Sweden were able to participate in the overall exercise starting in July 2002 and continuing through the rest of 2002 and the first part of 2003 using solely the NorduGrid environment. This made it possible to distribute input data over a wide area and to rely on the NorduGrid resource discovery mechanism to find an optimal cluster for job submission. During the whole Data-Challenge 1, more than 2 TB of input data was processed and more than 2.5 TB of output data was produced by more than 4750 Grid jobs.

  16. The MammoGrid Project Grids Architecture

    CERN Document Server

    McClatchey, Richard; Hauer, Tamas; Estrella, Florida; Saiz, Pablo; Rogulin, Dmitri; Buncic, Predrag; Clatchey, Richard Mc; Buncic, Predrag; Manset, David; Hauer, Tamas; Estrella, Florida; Saiz, Pablo; Rogulin, Dmitri

    2003-01-01

    The aim of the recently EU-funded MammoGrid project is, in the light of emerging Grid technology, to develop a European-wide database of mammograms that will be used to develop a set of important healthcare applications and investigate the potential of this Grid to support effective co-working between healthcare professionals throughout the EU. The MammoGrid consortium intends to use a Grid model to enable distributed computing that spans national borders. This Grid infrastructure will be used for deploying novel algorithms as software directly developed or enhanced within the project. Using the MammoGrid clinicians will be able to harness the use of massive amounts of medical image data to perform epidemiological studies, advanced image processing, radiographic education and ultimately, tele-diagnosis over communities of medical "virtual organisations". This is achieved through the use of Grid-compliant services [1] for managing (versions of) massively distributed files of mammograms, for handling the distri...

  17. Smart Grid Enabled EVSE

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2015-01-12

    The combined team of GE Global Research, Federal Express, National Renewable Energy Laboratory, and Consolidated Edison has successfully achieved the established goals contained within the Department of Energy’s Smart Grid Capable Electric Vehicle Supply Equipment funding opportunity. The final program product, shown charging two vehicles in Figure 1, reduces by nearly 50% the total installed system cost of the electric vehicle supply equipment (EVSE) as well as enabling a host of new Smart Grid enabled features. These include bi-directional communications, load control, utility message exchange and transaction management information. Using the new charging system, Utilities or energy service providers will now be able to monitor transportation related electrical loads on their distribution networks, send load control commands or preferences to individual systems, and then see measured responses. Installation owners will be able to authorize usage of the stations, monitor operations, and optimally control their electricity consumption. These features and cost reductions have been developed through a total system design solution.

  18. Multi-purpose grid-tied inverter with smart grid capabilities

    Science.gov (United States)

    Liyanagedera, Chamika Mihiranga

    Distributed energy storages play an important role in increasing the reliability and efficiency of the grid through means of peak load shaving, grid voltage support, and grid frequency support. It is important to have distributed energy storages that can utilize the functionalities of the modern smart grid to operate more effectively. The grid-tied inverter is one of the major components in a distributed energy storage that controls the power transfer between the grid and an energy storage device. In this research, a grid-tied inverter that can be used in distributed energy storage applications was designed, developed, and tested. This grid-tied inverter was designed with the capability to control both reactive and active power flow in either direction. The grid-tied inverter is equipped with communication capabilities so it can be remotely controlled by commands sent through a smart grid network. For demonstrative purposes, a user interface was developed to control and monitor the operation of the grid-tied inverter. Finally the operation of the grid-tied inverter was evaluated in accordance to IEEE 1547, the Standard for Interconnecting Distributed Resources with Electric Power Systems.

  19. Monitoring of computing resource utilization of the ATLAS experiment

    CERN Document Server

    Rousseau, D; The ATLAS collaboration; Vukotic, I; Aidel, O; Schaffer, RD; Albrand, S

    2012-01-01

    Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a fairly fine level of granularity: the performance of each object written and each algorithm run, as well as a dozen per-job variables, are gathered for the different processing steps of Monte Carlo generation and simulation and the reconstruction of both data and Monte Carlo. We present a system to collect and visualize the data from both the online Tier-0 system and distributed grid production jobs. Around 40 GB of performance data are expected from up to 200k jobs per day, thus making performance optimization of the underlying Oracle database of utmost importance.

  20. Online Grid Measurement and ENS Detection for PV Inverter Running on Highly Inductive Grid

    DEFF Research Database (Denmark)

    Timbus, Adrian Vasile; Teodorescu, Remus; Blaabjerg, Frede

    2004-01-01

    Photovoltaic (PV) and other sources of renewable energy are being used increasingly in grid-connected systems, for which stronger power quality requirements are being issued. Continuous grid monitoring should be considered so as to provide safe connections and disconnections from the grid. This letter gives an overview of the methods used for online grid measurement with PV inverters. Emphasis is placed on a method based on the injection of a noncharacteristic harmonic in the grid. Since this injection is regarded as a disturbance for the grid, different issues, i.e., the influence on total harmonic distortion (THD), the accuracy of line impedance measurement and the ENS (German abbreviation of Main Monitoring units with allocated Switching Devices) detection are studied. Laboratory results conducted on an existing PV inverter are presented to demonstrate the behavior of the PV inverter under...

  1. Online Grid Measurement and ENS Detection for PV Inverter Running on Highly Inductive Grid

    DEFF Research Database (Denmark)

    Timbus, Adrian Vasile; Teodorescu, Remus; Blaabjerg, Frede

    2004-01-01

    Photovoltaic (PV) and other sources of renewable energy are being used increasingly in grid-connected systems, for which stronger power quality requirements are being issued. Continuous grid monitoring should be considered so as to provide safe connections and disconnections from the grid. This letter gives an overview of the methods used for online grid measurement with PV inverters. Emphasis is placed on a method based on the injection of a noncharacteristic harmonic in the grid. Since this injection is regarded as a disturbance for the grid, different issues, i.e., the influence on total harmonic distortion (THD), the accuracy of line impedance measurement and the ENS (German abbreviation of Main Monitoring units with allocated Switching Devices) detection are studied. Laboratory results conducted on an existing PV inverter are presented to demonstrate the behavior of the PV inverter under...

  2. Grid for Earth Science Applications

    Science.gov (United States)

    Petitdidier, Monique; Schwichtenberg, Horst

    2013-04-01

    Civil society at large has addressed to the Earth Science community many strong requirements, related in particular to natural and industrial risks, climate change and new energies. The main critical point is that, on the one hand, civil society and the public ask for certainties, i.e. precise values with a small error range, as concerns predictions at short, medium and long term in all domains; on the other hand, Science can mainly answer only in terms of probability of occurrence. To improve the answers and/or decrease the uncertainties, (1) new observational networks have been deployed in order to have a better geographical coverage, and more accurate measurements have been carried out in key locations and aboard satellites. Following the OECD recommendations on the openness of research and public sector data, more and more data are available to academic organisations and SMEs; (2) new algorithms and methodologies have been developed to face the huge data processing and assimilation into simulations, using new technologies and compute resources. Finally, our total knowledge about the complex Earth system is contained in models and measurements; how we put them together has to be managed cleverly. The technical challenge is to put together databases and computing resources to answer the ES challenges. However, all these applications are computationally intensive. Different compute solutions are available and depend on the characteristics of the applications. One of them is the Grid, which is especially efficient for independent or embarrassingly parallel jobs related to statistical and parametric studies. Numerous applications in atmospheric chemistry, meteorology, seismology, hydrology, pollution, climate and biodiversity have been deployed successfully on the Grid. In order to fulfill the requirements of risk management, several prototype applications have been deployed using OGC (Open Geospatial Consortium) components with Grid middleware. The Grid has permitted via a huge number of runs to

  3. Reduction of Topological Connectivity Information in Electric Power Grids

    DEFF Research Database (Denmark)

    Prostejovsky, Alexander; Gehrke, Oliver; Marinelli, Mattia

    2016-01-01

    Electric power distribution grids increasingly use higher levels of monitoring and automation, both dependent on grid topology. However, the total amount of information to adequately describe power grids is vast and needs to be reduced when used locally. This work presents an approach for reducing...

  4. Environmental report 2001 - Verbund Austria Power Grid

    International Nuclear Information System (INIS)

    2002-01-01

    A balance of the environmental activities performed by Verbund Austria Power Grid during 2001 is presented. It describes the measures taken to reach their environmental objectives: certification of an environmental management system according to ISO 14001 and EMAS, environmental protection, policies, water and thermoelectric power generation status (CO2, SO2, NOx emission monitoring, energy efficiency, replacement of old equipment), reduction of greenhouse gas emissions and nature conservation. The report is divided into 8 sections: power grid, environmental policy, environmental management, power grid layout, environmental status of the system, introduction of new technologies for environmental monitoring, environmental objectives 2001 - 2002, and data and facts 2001. (nevyjel)

  5. Physicians' Job Satisfaction.

    African Journals Online (AJOL)

    AmL

    job satisfaction. The job demand resource model was used to characterize working conditions by two categories: job demands and job resources. Job demands refer to organizational, physical, psychological or social characteristics of work environment, demanding one's time and cognitive or physical efforts.(5) Examples of ...

  6. Advances in Grid Computing for the FabrIc for Frontier Experiments Project at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Herner, K. [Fermilab; Alba Hernandex, A. F. [Fermilab; Bhat, S. [Fermilab; Box, D. [Fermilab; Boyd, J. [Fermilab; Di Benedetto, V. [Fermilab; Ding, P. [Fermilab; Dykstra, D. [Fermilab; Fattoruso, M. [Fermilab; Garzoglio, G. [Fermilab; Kirby, M. [Fermilab; Kreymer, A. [Fermilab; Levshina, T. [Fermilab; Mazzacane, A. [Fermilab; Mengel, M. [Fermilab; Mhashilkar, P. [Fermilab; Podstavkov, V. [Fermilab; Retzke, K. [Fermilab; Sharma, N. [Fermilab; Teheran, J. [Fermilab

    2016-01-01

    The FabrIc for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE project has worked to develop common tools for job submission, certificate management, software and reference data distribution through CVMFS repositories, robust data transfer, job monitoring, and databases for project tracking. Since the project's inception the experiments under the FIFE umbrella have significantly matured, and present an increasingly complex list of requirements to service providers. To meet these requirements, the FIFE project has been involved in transitioning the Fermilab General Purpose Grid cluster to support a partitionable slot model, expanding the resources available to experiments via the Open Science Grid, assisting with commissioning dedicated high-throughput computing resources for individual experiments, supporting the efforts of the HEP Cloud projects to provision a variety of back end resources, including public clouds and high performance computers, and developing rapid onboarding procedures for new experiments and collaborations. The larger demands also require enhanced job monitoring tools, which the project has developed using such tools as ElasticSearch and Grafana, in helping experiments manage their large-scale production workflows. This group in turn requires a structured service to facilitate smooth management of experiment requests, which FIFE provides in the form of the Production Operations Management Service (POMS). POMS is designed to track and manage requests from the FIFE experiments to run particular workflows, and support troubleshooting and triage in case of problems. Recently a new certificate management infrastructure called Distributed

  7. Advances in Grid Computing for the Fabric for Frontier Experiments Project at Fermilab

    Science.gov (United States)

    Herner, K.; Alba Hernandez, A. F.; Bhat, S.; Box, D.; Boyd, J.; Di Benedetto, V.; Ding, P.; Dykstra, D.; Fattoruso, M.; Garzoglio, G.; Kirby, M.; Kreymer, A.; Levshina, T.; Mazzacane, A.; Mengel, M.; Mhashilkar, P.; Podstavkov, V.; Retzke, K.; Sharma, N.; Teheran, J.

    2017-10-01

    The Fabric for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE project has worked to develop common tools for job submission, certificate management, software and reference data distribution through CVMFS repositories, robust data transfer, job monitoring, and databases for project tracking. Since the project's inception the experiments under the FIFE umbrella have significantly matured, and present an increasingly complex list of requirements to service providers. To meet these requirements, the FIFE project has been involved in transitioning the Fermilab General Purpose Grid cluster to support a partitionable slot model, expanding the resources available to experiments via the Open Science Grid, assisting with commissioning dedicated high-throughput computing resources for individual experiments, supporting the efforts of the HEP Cloud projects to provision a variety of back end resources, including public clouds and high performance computers, and developing rapid onboarding procedures for new experiments and collaborations. The larger demands also require enhanced job monitoring tools, which the project has developed using such tools as ElasticSearch and Grafana, in helping experiments manage their large-scale production workflows. This group in turn requires a structured service to facilitate smooth management of experiment requests, which FIFE provides in the form of the Production Operations Management Service (POMS). POMS is designed to track and manage requests from the FIFE experiments to run particular workflows, and support troubleshooting and triage in case of problems. Recently a new certificate management infrastructure called
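
    The abstract mentions job monitoring built on ElasticSearch and Grafana; purely as an illustration (index name, document fields and endpoint are invented; the call follows the elasticsearch-py client documentation), a per-job record could be indexed like this:

        # Illustration only: push one per-job monitoring record into Elasticsearch.
        # The index name, fields and endpoint are invented for this sketch.
        from datetime import datetime, timezone
        from elasticsearch import Elasticsearch

        es = Elasticsearch("http://localhost:9200")      # assumed local test instance

        doc = {
            "experiment": "demo",
            "job_id": 123456,
            "site": "example-cluster",
            "state": "running",
            "wall_time_s": 5821,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        es.index(index="job-metrics", document=doc)

    A Grafana dashboard (or Kibana) would then be pointed at the resulting index; that wiring is configuration rather than code.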

  8. Cognitive Radio for Smart Grid: Theory, Algorithms, and Security

    OpenAIRE

    Ranganathan, Raghuram; Qiu, Robert; Hu, Zhen; Hou, Shujie; Pazos-Revilla, Marbin; Zheng, Gang; Chen, Zhe; Guo, Nan

    2011-01-01

    Recently, cognitive radio and smart grid are two areas which have received considerable research impetus. Cognitive radios are intelligent software defined radios (SDRs) that efficiently utilize the unused regions of the spectrum, to achieve higher data rates. The smart grid is an automated electric power system that monitors and controls grid activities. In this paper, the novel concept of incorporating a cognitive radio network as the communications infrastructure for the smart grid is pres...

  9. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which...... represents the spatial coordinates of the grid nodes. Knowledge of how grid nodes are depicted in the observed image is described through the observation model. The prior consists of a node prior and an arc (edge) prior, both modeled as Gaussian MRFs. The node prior models variations in the positions of grid...... nodes and the arc prior models variations in row and column spacing across the grid. Grid matching is done by placing an initial rough grid over the image and applying an ensemble annealing scheme to maximize the posterior distribution of the grid. The method can be applied to noisy images with missing...
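
    As a rough, self-invented illustration of the template-plus-annealing idea (not the authors' implementation), the sketch below perturbs grid nodes one at a time and accepts moves with a Metropolis rule on an energy combining a toy image term with a spacing prior; the image, grid size and weights are all made up.

        # Toy sketch of fitting a deformable grid by simulated annealing.
        # The image, grid size and energy weights below are invented for illustration.
        import numpy as np

        rng = np.random.default_rng(0)
        image = rng.random((100, 100))          # stand-in for an observed image
        rows, cols, spacing = 5, 5, 20.0

        # initial regular grid of node coordinates, shape (rows, cols, 2)
        nodes = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"),
                         axis=-1).astype(float) * spacing + 10.0

        def energy(g):
            # observation term: prefer bright pixels under each node (toy model)
            ij = np.clip(g.round().astype(int), 0, 99)
            data = -image[ij[..., 0], ij[..., 1]].sum()
            # arc prior: penalize deviation of row/column spacing from the nominal value
            dr, dc = np.diff(g, axis=0), np.diff(g, axis=1)
            prior = ((np.linalg.norm(dr, axis=-1) - spacing) ** 2).sum() \
                  + ((np.linalg.norm(dc, axis=-1) - spacing) ** 2).sum()
            return data + 0.05 * prior

        temp = 1.0
        for step in range(20000):
            proposal = nodes.copy()
            r, c = rng.integers(rows), rng.integers(cols)
            proposal[r, c] += rng.normal(scale=1.0, size=2)   # perturb one node
            delta = energy(proposal) - energy(nodes)
            if delta < 0 or rng.random() < np.exp(-delta / temp):
                nodes = proposal
            temp *= 0.9997                                    # annealing schedule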

  10. GANGA powerful job submission and management tool

    CERN Document Server

    Maier, Andrew; Mendez Lorenzo, Patricia; Moscicki, Jakub; Lamanna, Massimo; Muraru, Adrian

    2008-01-01

    The computational and storage capabilities of the Grid are attracting several research communities, also beyond HEP. Ganga is a lightweight Grid job management tool developed at CERN. It is a key component in the distributed Data Analysis for ATLAS and LHCb. Ganga's open and general framework allows applications to be plugged in, which has attracted users from other domains outside HEP. In addition, Ganga interfaces to a variety of Grid and non-Grid backends using the same, simple end-user interface. Ganga has already gained widespread use; an incomplete list of applications using Ganga includes: image processing and classification (developed by Cambridge Ontology Ltd.), theoretical physics (Lattice QCD, Feynman-loop evaluation), bio-informatics (Avian Flu Data Challenge), Geant4 (Monte Carlo package), and HEP data analysis (ATLAS, LHCb). All these communities have different goals and requirements and the main challenge is the creation of a standard and general software infrastructure for the immersion of these communit...
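
    For orientation only, a typical Ganga session (run inside the Ganga interpreter, where the GPI names Job, Executable and Local are available without imports; the executable, arguments and backend here are illustrative, not a prescription) plugs an application and a backend into a generic Job object and submits it:

        # Illustrative Ganga usage, to be run inside a Ganga session (GPI).
        # The executable, arguments and backend are examples only.
        j = Job(name="hello-grid")
        j.application = Executable(exe="/bin/echo", args=["Hello", "Grid"])
        j.backend = Local()          # swap for a Grid backend (e.g. LCG()) when available
        j.submit()
        print(j.status)              # e.g. 'submitted', 'running', 'completed'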

  11. Second Job Entrepreneurs.

    Science.gov (United States)

    Gruenert, Jeffrey C.

    1999-01-01

    Data from the Current Population Survey reveal characteristics of second-job entrepreneurs, occupations in which these workers hold their second jobs, and the occupational and earnings relationships between their second and primary jobs. (Author)

  12. Job Satisfaction and Stress.

    Science.gov (United States)

    Davis, F. William

    1981-01-01

    Sources of job satisfaction and job related stress among public school physical educators are examined. Recommended techniques are offered for physical education administrators to reduce their employees' job-related stress and to improve the quality of worklife. (JN)

  13. Overcoming job stress

    Science.gov (United States)

    ... medlineplus.gov/ency/patientinstructions/000884.htm Overcoming job stress ... stay healthy and feel better. Causes of Job Stress: Although the cause of job stress is different ...

  14. Job burnout.

    Science.gov (United States)

    Maslach, C; Schaufeli, W B; Leiter, M P

    2001-01-01

    Burnout is a prolonged response to chronic emotional and interpersonal stressors on the job, and is defined by the three dimensions of exhaustion, cynicism, and inefficacy. The past 25 years of research has established the complexity of the construct, and places the individual stress experience within a larger organizational context of people's relation to their work. Recently, the work on burnout has expanded internationally and has led to new conceptual models. The focus on engagement, the positive antithesis of burnout, promises to yield new perspectives on interventions to alleviate burnout. The social focus of burnout, the solid research basis concerning the syndrome, and its specific ties to the work domain make a distinct and valuable contribution to people's health and well-being.

  15. Parallel Monte Carlo simulations on an ARC-enabled computing grid

    International Nuclear Information System (INIS)

    Nilsen, Jon K; Samset, Bjørn H

    2011-01-01

    Grid computing opens new possibilities for running heavy Monte Carlo simulations of physical systems in parallel. The presentation gives an overview of GaMPI, a system for running an MPI-based random walker simulation on grid resources. Integrating the ARC middleware and the new storage system Chelonia with the Ganga grid job submission and control system, we show that MPI jobs can be run on a world-wide computing grid with good performance and promising scaling properties. Results for relatively communication-heavy Monte Carlo simulations run on multiple heterogeneous, ARC-enabled computing clusters in several countries are presented.
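
    As a minimal sketch of an MPI-parallel random-walker Monte Carlo of the kind described (not the GaMPI code itself), each rank can simulate its own walkers and the partial results can be combined with a reduction; the walker counts and the observable below are invented.

        # Minimal MPI random-walk Monte Carlo sketch using mpi4py; parameters are invented.
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        walkers_per_rank, steps = 10_000, 1_000
        rng = np.random.default_rng(seed=rank)          # independent stream per rank

        # each walker takes `steps` unit steps of random sign; record mean squared displacement
        positions = rng.choice([-1, 1], size=(walkers_per_rank, steps)).sum(axis=1)
        local_msd = float(np.mean(positions.astype(float) ** 2))

        # combine partial results on rank 0
        global_msd = comm.reduce(local_msd, op=MPI.SUM, root=0)
        if rank == 0:
            print("mean squared displacement:", global_msd / size)

    Locally this would be launched with something like "mpirun -np 4 python walk.py"; on an ARC-enabled grid the same script would be submitted as an MPI job.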

  16. Mapping of grid faults and grid codes

    DEFF Research Database (Denmark)

    Iov, F.; Hansen, Anca Daniela; Sørensen, Poul Ejnar

    The present report is a part of the research project ''Grid fault and design basis for wind turbine'' supported by Energinet.dk through the grant PSO F&U 6319. The objective of this project is to investigate the consequences of the new grid connection requirements for the fatigue and extreme...... for such investigations. The grid connection requirements for wind turbines have increased significantly during the last 5-10 years. Especially the requirements for wind turbines to stay connected to the grid during and after voltage sags imply potential challenges in the design of wind turbines. These requirements pose challenges for the design of both the electrical system and the mechanical structure of wind turbines. An overview over the frequency of grid faults and the grid connection requirements in different relevant countries is given in this report. The most relevant study cases for the quantification of the loads...

  17. Evolution of grid-wide access to database resident information in ATLAS using Frontier

    CERN Document Server

    Barberis, D; The ATLAS collaboration; de Stefano, J; Dewhurst, A L; Dykstra, D; Front, D

    2012-01-01

    The ATLAS experiment deployed Frontier technology world-wide during the initial year of LHC collision data taking to enable user analysis jobs running on the World-wide LHC Computing Grid to access database resident data. Since that time, the deployment model has evolved to optimize resources, improve performance, and streamline maintenance of Frontier and related infrastructure. In this presentation we focus on the specific changes in the deployment and improvements undertaken, such as the optimization of cache and launchpad location, the use of RPMs for more uniform deployment of underlying Frontier related components, improvements in monitoring, optimization of fail-over, and an increasing use of a centrally managed database containing site specific information (for configuration of services and monitoring). In addition, analysis of Frontier logs has given us a deeper understanding of problematic queries and of use cases. Use of the system has grown beyond just user analysis and subsyste...

  18. LHCb: Monitoring the DIRAC Distribution System

    CERN Multimedia

    Nandakumar, R; Santinelli, R

    2009-01-01

    DIRAC is the LHCb gateway to any computing grid infrastructure (currently supporting WLCG) and is intended to reliably run large data mining activities. The DIRAC system consists of various services (which wait to be contacted to perform actions) and agents (which carry out periodic activities) to direct jobs as required. An important part of ensuring the reliability of the infrastructure is the monitoring and logging of these DIRAC distributed systems. The monitoring is done by collecting information from two sources - one is from pinging the services or by keeping track of the regular heartbeats of the agents, and the other from the analysis of the error messages generated by both agents and services and collected by the logging system. This allows us to ensure that the components are running properly and to collect useful information regarding their operations. The process status monitoring is displayed using the SLS sensor mechanism which also automatically allows one to plot various quantities and also keep ...

  19. Porting of Scientific Applications to Grid Computing on GridWay

    Directory of Open Access Journals (Sweden)

    J. Herrera

    2005-01-01

    Full Text Available The expansion and adoption of Grid technologies is prevented by the lack of a standard programming paradigm to port existing applications among different environments. The Distributed Resource Management Application API has been proposed to aid the rapid development and distribution of these applications across different Distributed Resource Management Systems. In this paper we describe an implementation of the DRMAA standard on a Globus-based testbed, and show its suitability to express typical scientific applications, like High-Throughput and Master-Worker applications. The DRMAA routines are supported by the functionality offered by the GridWay2 framework, which provides the runtime mechanisms needed for transparently executing jobs on a dynamic Grid environment based on Globus. As cases of study, we consider the implementation with DRMAA of a bioinformatics application, a genetic algorithm and the NAS Grid Benchmarks.
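
    The paper targets the DRMAA routines on top of GridWay; for flavour only, the equivalent basic steps (create a job template, submit, wait) with the separate drmaa Python bindings look roughly as follows; the command and arguments are placeholders.

        # Illustrative DRMAA usage via the drmaa Python bindings (not the paper's code).
        import drmaa

        with drmaa.Session() as session:
            jt = session.createJobTemplate()
            jt.remoteCommand = "/bin/sleep"      # any executable reachable on the resource
            jt.args = ["30"]
            job_id = session.runJob(jt)
            info = session.wait(job_id, drmaa.Session.TIMEOUT_WAIT_FOREVER)
            print("job", job_id, "finished with exit status", info.exitStatus)
            session.deleteJobTemplate(jt)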

  20. Processing of the WLCG monitoring data using NoSQL

    Science.gov (United States)

    Andreeva, J.; Beche, A.; Belov, S.; Dzhunov, I.; Kadochnikov, I.; Karavakis, E.; Saiz, P.; Schovancova, J.; Tuckett, D.

    2014-06-01

    The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.

  1. Processing of the WLCG monitoring data using NoSQL

    International Nuclear Information System (INIS)

    Andreeva, J; Beche, A; Karavakis, E; Saiz, P; Tuckett, D; Belov, S; Kadochnikov, I; Schovancova, J; Dzhunov, I

    2014-01-01

    The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.
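
    The abstract does not name a particular NoSQL store; purely as an illustration of storing and aggregating job-monitoring records in a document database (MongoDB via pymongo, with invented field names and values):

        # Illustration only: store per-job monitoring records in MongoDB and aggregate
        # them per site. Field names and values are invented; calls follow pymongo.
        from datetime import datetime, timezone
        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")    # assumed test instance
        jobs = client["wlcg_monitoring"]["jobs"]

        jobs.insert_one({
            "site": "example-site",
            "state": "finished",
            "cpu_time_s": 4210,
            "timestamp": datetime.now(timezone.utc),
        })

        # number of finished jobs and total CPU time per site
        pipeline = [
            {"$match": {"state": "finished"}},
            {"$group": {"_id": "$site", "jobs": {"$sum": 1}, "cpu_s": {"$sum": "$cpu_time_s"}}},
        ]
        for row in jobs.aggregate(pipeline):
            print(row)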

  2. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
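
    A toy rendering of the two-phase scheme described above - first divide the objects among processors to find which grid portions bound them, then divide the grid portions among processors to populate them - with an invented 1-D slab decomposition and invented object data:

        # Toy two-phase parallel grid population; the data and the 1-D slab
        # decomposition are invented for illustration.
        from multiprocessing import Pool

        N_PORTIONS = 4            # one grid portion (slab along x) per processor
        GRID_XMAX = 100.0
        objects = [{"id": i, "xmin": x, "xmax": x + 5.0} for i, x in enumerate(range(0, 95, 3))]

        def portions_for(obj):
            """Phase 1: which slabs does this object's extent overlap?"""
            width = GRID_XMAX / N_PORTIONS
            first = int(obj["xmin"] // width)
            last = min(int(obj["xmax"] // width), N_PORTIONS - 1)
            return [(p, obj["id"]) for p in range(first, last + 1)]

        def populate(args):
            """Phase 2: one processor fills its own grid portion with its objects."""
            portion, ids = args
            return portion, sorted(ids)

        if __name__ == "__main__":
            with Pool(N_PORTIONS) as pool:
                # Phase 1: workers process distinct subsets of the objects.
                pairs = [p for chunk in pool.map(portions_for, objects) for p in chunk]
                # Group object ids by the portion that at least partially bounds them.
                by_portion = {p: [] for p in range(N_PORTIONS)}
                for portion, obj_id in pairs:
                    by_portion[portion].append(obj_id)
                # Phase 2: each worker populates a distinct grid portion.
                populated = dict(pool.map(populate, by_portion.items()))
            print({p: len(ids) for p, ids in populated.items()})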

  3. Analysis gets on the starting Grid

    CERN Multimedia

    Roger Jones

    It is vital for ATLAS to have a functioning distributed analysis system to analyse its data. There are three major Grid deployments in ATLAS (Enabling Grids for E-sciencE, EGEE; the US Open Science Grid, OSG; and the Nordic DataGrid Facility, NDGF), and our data and jobs need to work across all of them, as well as on local machines and batch systems. Users must also be able to locate the data they want and register new small datasets so they can be used later. ATLAS has a suite of products to meet these needs, and a series of Distributed Analysis tutorials are training an increasing number of brave early adopters to use the system. Real users are vital to make sure that the tools are fit for their purpose and to refine our computing model. One such tutorial happened on the 1st and 2nd February at the National eScience Centre in Edinburgh, UK, sponsored by the GridPP Collaboration. The first day introduced an international set of tutees to the basic tools for Grid-based distributed analysis. The architecture...

  4. Grid-system element of LCG-2 in DLNP

    International Nuclear Information System (INIS)

    Dolbilov, A.G.; Ivanov, Yu.P.

    2008-01-01

    We present here a description of the computer cluster at the Dzhelepov Laboratory of Nuclear Problems of JINR, where the second Grid node at JINR was realized. The configuration of the system, which allows effective joint usage of cluster resources both by local users and by others within the framework of the ATLAS collaboration, is examined. Examples are given of the basic stages of preparing and running both ordinary cluster jobs and Grid jobs, starting from obtaining CA certificates, through submitting jobs, to retrieving the results. Perspectives for the cluster upgrade are discussed

  5. Smart grid security

    CERN Document Server

    Goel, Sanjay; Papakonstantinou, Vagelis; Kloza, Dariusz

    2015-01-01

    This book on smart grid security is meant for a broad audience from managers to technical experts. It highlights security challenges that are faced in the smart grid as we widely deploy it across the landscape. It starts with a brief overview of the smart grid and then discusses some of the reported attacks on the grid. It covers network threats, cyber physical threats, smart metering threats, as well as privacy issues in the smart grid. Along with the threats the book discusses the means to improve smart grid security and the standards that are emerging in the field. The second part of the b

  6. CMS on the GRID: Toward a fully distributed computing architecture

    International Nuclear Information System (INIS)

    Innocente, Vincenzo

    2003-01-01

    The computing systems required to collect, analyse and store the physics data at LHC would need to be distributed and global in scope. CMS is actively involved in several grid-related projects to develop and deploy a fully distributed computing architecture. We present here recent developments of tools for automating job submission and for serving data to remote analysis stations. Plans for further test and deployment of a production grid are also described

  7. What could make 2017 the banner year for smart grids?

    International Nuclear Information System (INIS)

    Ortega, Florian

    2015-01-01

    With billings slated to reach 6 billion euros per year by 2020, intelligent networks, known as smart grids, are an attractive proposition for many companies and will generate up to 25,000 direct jobs in France. While it seems, in light of all the commitments that have been made, that 2017 can be considered 'the year of the smart grids', a number of uncertainties remain. (author)

  8. National Smart Water Grid

    Energy Technology Data Exchange (ETDEWEB)

    Beaulieu, R A

    2009-07-13

    The United States repeatedly experiences floods along the Midwest's large rivers and droughts in the arid Western States that cause traumatic environmental conditions with huge economic impact. With an integrated approach and solution these problems can be alleviated. Tapping into the Mississippi River and its tributaries, the world's third largest fresh water river system, during flood events will mitigate the damage of flooding and provide a new source of fresh water to the Western States. The trend of increased flooding on the Midwest's large rivers is supported by a growing body of scientific literature. The Colorado River Basin and the western states are experiencing a protracted multi-year drought. Fresh water can be pumped via pipelines from areas of overabundance/flood to areas of drought or high demand. Calculations document that 10 to 60 million acre-feet (maf) of fresh water per flood event can be captured from the Midwest's rivers and pumped via pipelines to the Colorado River and introduced upstream of Lake Powell, Utah, to destinations near Denver, Colorado, and used in areas along the pipelines. Water users of the Colorado River include the cities in southern Nevada, southern California, northern Arizona, Colorado, Utah, Indian Tribes, and Mexico. The proposed start and end points, and routes of the pipelines are documented, including information on rights-of-way necessary for state and federal permits. A National Smart Water Grid (TM) (NSWG) Project will create thousands of new jobs for construction, operation, and maintenance and save billions in drought and flood damage reparations tax dollars. The socio-economic benefits of the NSWG include decreased flooding in the Midwest; increased agriculture, and recreation and tourism; improved national security, transportation, and fishery and wildlife habitats; mitigated regional climate change and global warming such as increased carbon capture; decreased salinity in Colorado River water

  9. LTE delay assessment for real-time management of future smart grids

    NARCIS (Netherlands)

    Jorguseski, L.; Zhang, H.; Chrysalos, M.; Golinski, M.; Toh, Y.

    2017-01-01

    This study investigates the feasibility of using Long Term Evolution (LTE), for the real-time state estimation of the smart grids. This enables monitoring and control of future smart grids. The smart grid state estimation requires measurement reports from different nodes in the smart grid and

  10. On-the-job-training, job search and job mobility

    OpenAIRE

    Josef Zweimüller; Rudolf Winter-Ebmer

    2003-01-01

    This paper analyzes the impact of formal training on worker mobility. Using data from the Swiss Labor Force Survey, we find that both general and specific training significantly affects on-the-job search activities. The effect of training on actual job mobility differs between searchers and non-searchers. In line with human capital theory, we find that specific (general) training has a negative (positive) impact on job mobility for previous non-searchers. For individuals who have been looking...

  11. Mapping of grid faults and grid codes

    DEFF Research Database (Denmark)

    Iov, Florin; Hansen, A.D.; Sørensen, P.

    The present report is a part of the research project "Grid fault and design basis for wind turbine" supported by Energinet.dk through the grant PSO F&U 6319. The objective of this project is to investigate the consequences of the new grid connection requirements for the fatigue and extreme...... challenges for the design of both the electrical system and the mechanical structure of wind turbines. An overview over the frequency of grid faults and the grid connection requirements in different relevant countries is given in this report. The most relevant study cases for the quantification of the loads' impact on the wind turbines' lifetime are defined. The goal of this report is to present a mapping of different grid fault types and their frequency in different countries. The report also provides a detailed overview of the Low Voltage Ride-Through Capabilities for wind turbines in different relevant...

  12. Job resources in Dutch dental practice.

    Science.gov (United States)

    Gorter, R C; te Brake, J H M; Eijkman, M A J; Hoogstraten, Joh

    2006-02-01

    To develop an instrument measuring job resources among dentists, and to assess the relative importance of these resources and relate them to job satisfaction. 848 Dutch general dental practitioners (GDPs) received a questionnaire to monitor work experiences, including the Dentists' Experienced Job Resources Scale (DEJRS, 46 items, score range: 1 (not satisfying) to 5 (very satisfying), and the Dentist Job Satisfaction Scale (DJSS, 5 items, Cronbach's alpha = 0.85). A total of 497 (58.6%) dentists responded. Factor analysis (PCA) on the DEJRS resulted in 8 factors (Cronbach's alpha: 0.75 > alpha Entrepreneurship (M = 3.55, sd = 0.9); Material Benefits (M = 3.05, sd = 0.7); and Professional Contacts (M = 3.03, sd = 0.7). MANOVA indicated gender differences on: (Long-term) Patient Results (F(1,548) = 10.428, p = .001), and Patient Care (F(1,548) = 11.036, p r < 0.88. All subscales show a positive correlation with the DJSS. The DEJRS is a valuable and psychometrically sound instrument to monitor job resources as experienced by GDPs. Dentists report immediate results and aesthetics, and long-term results of working with patients to be the most rewarding aspects. All job resources showed a positive correlation with job satisfaction. The discussion includes conjecture that stimulating a greater awareness of job resources serves a major role in burnout prevention.

  13. HIRENASD coarse structured grid

    Data.gov (United States)

    National Aeronautics and Space Administration — blockstructured hexahedral grid, 6.7 mio elements, 24 degree minimum grid angle, CGNS format version 2.4, double precision Binary, Plot3D file Please contact...

  14. Greedy and metaheuristics for the offline scheduling problem in grid computing

    DEFF Research Database (Denmark)

    Gamst, Mette

    In grid computing a number of geographically distributed resources connected through a wide area network are utilized as one computational unit. The NP-hard offline scheduling problem in grid computing consists of assigning jobs to resources in advance. In this paper, five greedy heuristics and two....... All heuristics solve instances with up to 2000 jobs and 1000 resources, thus the results are useful both with respect to running times and to solution values....
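
    The abstract does not spell out the heuristics; as one simple, self-invented example of a greedy rule for this kind of offline problem (take jobs in order of decreasing profit per unit time and place each on the resource that would finish it earliest, subject to a deadline), with made-up data:

        # Illustrative greedy heuristic for offline job-to-resource scheduling.
        # Jobs, resources and the greedy rule are examples, not the paper's algorithms.
        jobs = [                      # (name, runtime, profit)
            ("j1", 4.0, 10.0), ("j2", 2.0, 7.0), ("j3", 6.0, 9.0), ("j4", 1.0, 2.0),
        ]
        resources = {"r1": 0.0, "r2": 0.0}     # resource -> time at which it becomes free
        deadline = 8.0                          # jobs must finish by this time to count

        schedule = []
        # consider the most "profitable per unit time" jobs first
        for name, runtime, profit in sorted(jobs, key=lambda j: j[2] / j[1], reverse=True):
            # pick the resource that would finish this job earliest
            res = min(resources, key=lambda r: resources[r])
            finish = resources[res] + runtime
            if finish <= deadline:              # only schedule jobs that fit
                resources[res] = finish
                schedule.append((name, res, finish))

        print(schedule)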

  15. Smart grid in China

    DEFF Research Database (Denmark)

    Sommer, Simon; Ma, Zheng; Jørgensen, Bo Nørregaard

    2015-01-01

    China is planning to transform its traditional power grid in favour of a smart grid, since it allows a more economically efficient and a more environmentally friendly transmission and distribution of electricity. Thus, a nationwide smart grid is likely to save tremendous amounts of resources...

  16. Design and implementation of a scalable monitor system (IF-monitor) for Linux clusters

    International Nuclear Information System (INIS)

    Zhang Weiyi; Yu Chuansong; Sun Gongxing; Gu Ming

    2003-01-01

    PC clusters have become a cost-effective solution for high performance computing, but usually provide only resource management and job scheduling, and unfortunately lack powerful monitoring for the PC Farms built on them. A farm is therefore like a 'black box' for administrators, who do not know how it is running and where the bottlenecks are. At present there are several running PC Farms at IHEP, CAS, such as BES-Farm, LHC-Farm and YBJ-Farm. As the scale of the PC Farms grows and the IHEP campus grid computing environment is implemented, it becomes more difficult to predict how these PC Farms perform. As a result, an SNMP-based tool called IF-Monitor that allows effective monitoring of large clusters has been designed and developed at IHEP. (authors)
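
    As an illustration of the SNMP-based approach (not the IF-Monitor code itself), a single poll of a worker node's 1-minute load average with the pysnmp high-level API might look as follows; the host name and community string are placeholders.

        # Illustrative SNMP poll of a worker node's 1-minute load average
        # (UCD-SNMP laLoad.1). Host and community string are placeholders.
        from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                                  ContextData, ObjectType, ObjectIdentity, getCmd)

        error_indication, error_status, error_index, var_binds = next(
            getCmd(SnmpEngine(),
                   CommunityData("public", mpModel=1),          # SNMP v2c
                   UdpTransportTarget(("node01.example.org", 161)),
                   ContextData(),
                   ObjectType(ObjectIdentity("1.3.6.1.4.1.2021.10.1.3.1"))))

        if error_indication or error_status:
            print("poll failed:", error_indication or error_status.prettyPrint())
        else:
            for oid, value in var_binds:
                print(f"{oid.prettyPrint()} = {value.prettyPrint()}")

    A monitoring server would run such polls periodically over all worker nodes and store the results for plotting and alarming.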

  17. Smart Grid Communications System Blueprint

    Science.gov (United States)

    Clark, Adrian; Pavlovski, Chris

    2010-10-01

    Telecommunications operators are well versed in deploying 2G and 3G wireless networks. These networks presently support the mobile business user and/or retail consumer wishing to place conventional voice calls and data connections. The electrical power industry has recently commenced transformation of its distribution networks by deploying smart monitoring and control devices throughout their networks. This evolution of the network into a `smart grid' has also motivated the need to deploy wireless technologies that bridge the communication gap between the smart devices and information technology systems. The requirements of these networks differ from traditional wireless networks that communications operators have deployed, which have thus far forced energy companies to consider deploying their own wireless networks. We present our experience in deploying wireless networks to support the smart grid and highlight the key properties of these networks. These characteristics include application awareness, support for large numbers of simultaneous cell connections, high service coverage and prioritized routing of data. We also outline our target blueprint architecture that may be useful to the industry in building wireless and fixed networks to support the smart grid. By observing our experiences, telecommunications operators and equipment manufacturers will be able to augment their current networks and products in a way that accommodates the needs of the emerging industry of smart grids and intelligent electrical networks.

  18. Relationship Of Core Job Characteristics To Job Satisfaction And ...

    African Journals Online (AJOL)

    In order to clarify the conceptual and empirical distinction between the job satisfaction and job involvement constructs, this study investigates the relationship between construction workers' core job characteristics, job satisfaction and job involvement. It also investigates the mediating role of job satisfaction between core job ...

  19. The design and development of massive BES job submit and management system

    International Nuclear Information System (INIS)

    Shi Jingyan; Liang Dong; Sun Gongxing; Chen Gang

    2010-01-01

    The system was designed to provide an easy and efficient way for physicists to run their physics jobs. It sends jobs to different computing backends at the user's request; in addition, it can monitor job status and resubmit failed jobs automatically. A BES job is a typical data-intensive calculation. To run it in parallel, the large job is split into many sub-jobs that execute on many worker nodes at the same time, as sketched below. A Web Service interface gives users flexible access. (authors)
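
    A minimal sketch of the splitting and automatic-resubmission idea described above; the submit() and status() callables are hypothetical stand-ins for a real batch or grid backend, not the BES system's actual interface.

    ```python
    # Minimal sketch of job splitting plus automatic resubmission of failed
    # sub-jobs. The submit/status callables are hypothetical backend hooks.
    import time

    def split_job(n_events, events_per_subjob):
        """Split a large job into (first_event, n_events) sub-job ranges."""
        return [(first, min(events_per_subjob, n_events - first))
                for first in range(0, n_events, events_per_subjob)]

    def run_with_resubmission(subjobs, submit, status, max_retries=3):
        """Submit sub-jobs, poll their status, and resubmit failures."""
        active = {submit(sj): (sj, 0) for sj in subjobs}   # job_id -> (spec, retries)
        done = []
        while active:
            for job_id in list(active):
                spec, retries = active[job_id]
                state = status(job_id)                     # e.g. 'running', 'done', 'failed'
                if state == "done":
                    done.append(job_id)
                    del active[job_id]
                elif state == "failed":
                    del active[job_id]
                    if retries < max_retries:              # recovery policy: resubmit
                        active[submit(spec)] = (spec, retries + 1)
            time.sleep(1)                                  # polling interval (illustrative)
        return done
    ```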

  20. Monitoring Satellite Data Ingest and Processing for the Atmosphere Science Investigator-led Processing Systems (SIPS)

    Science.gov (United States)

    Witt, J.; Gumley, L.; Braun, J.; Dutcher, S.; Flynn, B.

    2017-12-01

    The Atmosphere SIPS (Science Investigator-led Processing Systems) team at the Space Science and Engineering Center (SSEC), which is funded through a NASA contract, creates Level 2 cloud and aerosol products from the VIIRS instrument aboard the S-NPP satellite. In order to monitor the ingest and processing of files, we have developed an extensive monitoring system to observe every step in the process. The status grid is used for real time monitoring, and shows the current state of the system, including what files we have and whether or not we are meeting our latency requirements. Our snapshot tool displays the state of the system in the past. It displays which files were available at a given hour and is used for historical and backtracking purposes. In addition to these grid like tools we have created histograms and other statistical graphs for tracking processing and ingest metrics, such as total processing time, job queue time, and latency statistics.
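
    A minimal sketch of the kind of latency bookkeeping such a monitoring system performs; the field names and the three-hour latency target are assumptions for illustration, not the Atmosphere SIPS schema or requirement.

    ```python
    # Minimal sketch of latency bookkeeping for ingested/processed files.
    # Field names and the latency target are illustrative assumptions.
    from datetime import datetime, timedelta
    from statistics import median

    LATENCY_REQUIREMENT = timedelta(hours=3)    # assumed target, for illustration

    def latency_report(files):
        """files: list of dicts with 'observed' and 'produced' datetimes."""
        latencies = [f["produced"] - f["observed"] for f in files]
        late = [l for l in latencies if l > LATENCY_REQUIREMENT]
        return {
            "median_latency_s": median(l.total_seconds() for l in latencies),
            "max_latency_s": max(l.total_seconds() for l in latencies),
            "fraction_late": len(late) / len(latencies),
        }

    if __name__ == "__main__":
        now = datetime(2017, 7, 1, 12, 0)
        demo = [{"observed": now, "produced": now + timedelta(minutes=90)},
                {"observed": now, "produced": now + timedelta(hours=4)}]
        print(latency_report(demo))
    ```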

  1. Running and testing GRID services with Puppet at GRIF- IRFU

    Science.gov (United States)

    Ferry, S.; Schaer, F.; Meyer, JP

    2015-12-01

    GRIF is a distributed Tier 2 centre made of 6 different centres in the Paris region and serving many VOs. The sub-sites are connected by a 10 Gbps private network and share tools for central management. One of the sub-sites, GRIF-IRFU, hosted and maintained at the CEA-Saclay centre, moved a year ago to configuration management using Puppet. Thanks to the versatility of Puppet/Foreman automation, the GRIF-IRFU site maintains the usual grid services, among them a CREAM-CE with TORQUE+Maui (running a batch system with more than 5000 job slots), a DPM storage of more than 2 PB and Nagios monitoring essentially based on check_mk, as well as centralized services for the French NGI such as the accounting and the Argus central suspension system. We report on the current functionality of the Puppet setup and present the latest tests and evolutions, including monitoring with Graphite, an HTCondor multicore batch system accessed through an ARC-CE, and a CEPH storage file system.

  2. Job Satisfaction and Job Performance at the Work Place

    OpenAIRE

    Vanden Berghe, Jae Hyung

    2011-01-01

    The topic of the thesis is job satisfaction and job performance at the work place. The aim is to define the determinants for job satisfaction and to investigate the relationship between job satisfaction and job performance and the influence of job satisfaction on job performance. First we look into the Theory of Reasoned Action and the Theory of Planned Behaviour to account for the relationship between attitudes and behaviour. Job satisfaction is then explained as a function of job feature...

  3. Grid Architecture 2

    Energy Technology Data Exchange (ETDEWEB)

    Taft, Jeffrey D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-01-01

    The report describes work done on Grid Architecture under the auspices of the Department of Energy Office of Electricity Delivery and Energy Reliability in 2015. As described in the first Grid Architecture report, the primary purpose of this work is to provide stakeholder insight about grid issues so as to enable superior decision making on their part. Doing this requires the creation of various work products, including oft-times complex diagrams, analyses, and explanations. This report provides architectural insights into several important grid topics and also describes work done to advance the science of Grid Architecture itself.

  4. Domestic Job Shortage or Job Maldistribution? A Geographic Analysis of the Current Radiation Oncology Job Market.

    Science.gov (United States)

    Chowdhary, Mudit; Chhabra, Arpit M; Switchenko, Jeffrey M; Jhaveri, Jaymin; Sen, Neilayan; Patel, Pretesh R; Curran, Walter J; Abrams, Ross A; Patel, Kirtesh R; Marwaha, Gaurav

    2017-09-01

    as a significant regional imbalance of academic versus nonacademic jobs. Long-term monitoring is required to confirm these results. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Smart grid technologies in local electric grids

    Science.gov (United States)

    Lezhniuk, Petro D.; Pijarski, Paweł; Buslavets, Olga A.

    2017-08-01

    The research is devoted to creating favorable conditions for integrating renewable energy sources into electric grids that were designed to be supplied from centralized generation at large power stations. The growth of distributed generation changes the operating conditions of these grids, and a conflict of interests arises. The work examines the possibility of optimal joint operation of electric grids and renewable energy sources, where the composite optimality criterion combines the balance reliability of electric energy in the local electric system with minimum energy losses in it. A multilevel automated system for controlling power flows in electric grids by adjusting the output of distributed generation is developed. Power flow optimization is performed by local automatic control systems of small hydropower stations and, where possible, solar power plants.

  6. A Development of Lightweight Grid Interface

    International Nuclear Information System (INIS)

    Iwai, G; Kawai, Y; Sasaki, T; Watase, Y

    2011-01-01

    In order to support rapid development of Grid/Cloud aware applications, we have developed an API that abstracts distributed computing infrastructures, based on SAGA (A Simple API for Grid Applications). SAGA, standardized in the OGF (Open Grid Forum), defines API specifications for accessing distributed computing infrastructures such as Grid, Cloud and local computing resources. The Universal Grid API (UGAPI), a set of command line interfaces (CLIs) and APIs, aims to offer a simpler API that combines several SAGA interfaces with richer functionality. The UGAPI CLIs offer the typical functionality end users need for job management and file access on the different distributed computing infrastructures as well as on local computing resources. We have also built a web interface for a particle therapy simulation and demonstrated a large scale calculation using the different infrastructures at the same time. In this paper we present how the web interface based on UGAPI and SAGA achieves more efficient utilization of computing resources over the different infrastructures, with technical details and practical experiences.
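
    A hypothetical sketch of the abstraction-layer idea behind SAGA/UGAPI: a small backend-neutral job interface with interchangeable implementations. The class and method names are illustrative, not the real UGAPI or SAGA signatures.

    ```python
    # Hypothetical sketch of a backend-neutral job interface: the same calls
    # could be served by a Grid, Cloud or local backend. Names are illustrative.
    import subprocess
    from abc import ABC, abstractmethod

    class JobService(ABC):
        """Backend-neutral job submission interface (Grid, Cloud, local...)."""
        @abstractmethod
        def submit(self, executable, arguments):
            ...
        @abstractmethod
        def wait(self, handle):
            ...

    class LocalJobService(JobService):
        """Trivial 'local resource' backend; a Grid backend would wrap middleware."""
        def submit(self, executable, arguments):
            return subprocess.Popen([executable, *arguments])
        def wait(self, handle):
            return handle.wait()

    if __name__ == "__main__":
        svc = LocalJobService()               # could be swapped for a Grid backend
        job = svc.submit("/bin/echo", ["hello grid"])
        print("exit code:", svc.wait(job))
    ```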

  7. DZero data-intensive computing on the Open Science Grid

    Energy Technology Data Exchange (ETDEWEB)

    Abbott, B.; /Oklahoma U.; Baranovski, A.; Diesburg, M.; Garzoglio, G.; /Fermilab; Kurca, T.; /Lyon, IPN; Mhashilkar, P.; /Fermilab

    2007-09-01

    High energy physics experiments periodically reprocess data in order to take advantage of an improved understanding of the detector and of the data processing code. Between February and May 2007, the DZero experiment reprocessed a substantial fraction of its dataset. This consists of half a billion events, corresponding to about 100 TB of data, organized in 300,000 files. The activity utilized resources from sites around the world, including a dozen sites participating in the Open Science Grid consortium (OSG). About 1,500 jobs were run every day across the OSG, consuming and producing hundreds of gigabytes of data. Access to OSG computing and storage resources was coordinated by the SAM-Grid system, which organized job access to a complex topology of data queues and scheduled jobs to clusters through a SAM-Grid to OSG job forwarding infrastructure. For the first time in the lifetime of the experiment, a data intensive production activity was managed on a general purpose grid such as OSG. This paper describes the implications of using OSG, where all resources are granted following an opportunistic model, the challenges of operating a data intensive activity over such a large computing infrastructure, and the lessons learned throughout the project.

  8. Wind Turbine Providing Grid Support

    DEFF Research Database (Denmark)

    2011-01-01

    A variable speed wind turbine is arranged to provide additional electrical power to counteract non-periodic disturbances in an electrical grid. A controller monitors events indicating a need to increase the electrical output power from the wind turbine to the electrical grid. The controller...... is arranged to control the wind turbine as follows: after an indicating event has been detected, the wind turbine enters an overproduction period in which the electrical output power is increased, wherein the additional electrical output power is taken from kinetic energy stored in the rotor and without...... changing the operation of the wind turbine to a more efficient working point.; When the rotational speed of the rotor reaches a minimum value, the wind turbine enters a recovery period to re-accelerate the rotor to the nominal rotational speed while further contributing to the stability of the electrical...

  9. Monte Carlo Production on the Grid by the H1 Collaboration

    Science.gov (United States)

    Bystritskaya, E.; Fomenko, A.; Gogitidze, N.; Lobodzinski, B.

    2012-12-01

    The High Energy Physics Experiment H1 [1] at Hadron-Electron Ring Accelerator (HERA) at DESY [2] is now in the era of high precision analyses based on the final and complete data sample. A natural consequence of this is the huge increase in the requirement for simulated Monte Carlo (MC) events. As a response to this increase, a framework for large scale MC production using the LCG Grid Infrastructure was developed. After 3 years of development the H1 MC Computing Framework has become a platform of high performance, reliability and robustness, operating on the top of the gLite infrastructure. The original framework has been expanded into a tool which can handle 600 million simulated MC events per month and 20,000 simultaneously supported jobs on the LHC Grid, at the same time decreasing operator effort to a minimum. An annual MC event production rate of over 2.5 billion events has been achieved, and the project is integral to the physics analysis performed by H1. Tools have also been developed, which allow modifications both of the H1 detector details, of the different levels of the MC production steps and which permit full monitoring of the jobs on the Grid sites. Based on the experience gained during the successful MC simulation, the H1 MC Framework is described. Also addressed are reasons for failures, deficiencies, bottlenecks and scaling boundaries as observed during this full scale physics analysis endeavor. The found solutions can easily be implemented by other experiments, and not necessarily only those devoted to HEP.

  10. Monte Carlo Production on the Grid by the H1 Collaboration

    International Nuclear Information System (INIS)

    Bystritskaya, E; Fomenko, A; Gogitidze, N; Lobodzinski, B

    2012-01-01

    The High Energy Physics Experiment H1 at Hadron-Electron Ring Accelerator (HERA) at DESY is now in the era of high precision analyses based on the final and complete data sample. A natural consequence of this is the huge increase in the requirement for simulated Monte Carlo (MC) events. As a response to this increase, a framework for large scale MC production using the LCG Grid Infrastructure was developed. After 3 years of development the H1 MC Computing Framework has become a platform of high performance, reliability and robustness, operating on the top of the gLite infrastructure. The original framework has been expanded into a tool which can handle 600 million simulated MC events per month and 20,000 simultaneously supported jobs on the LHC Grid, at the same time decreasing operator effort to a minimum. An annual MC event production rate of over 2.5 billion events has been achieved, and the project is integral to the physics analysis performed by H1. Tools have also been developed, which allow modifications both of the H1 detector details, of the different levels of the MC production steps and which permit full monitoring of the jobs on the Grid sites. Based on the experience gained during the successful MC simulation, the H1 MC Framework is described. Also addressed are reasons for failures, deficiencies, bottlenecks and scaling boundaries as observed during this full scale physics analysis endeavor. The found solutions can easily be implemented by other experiments, and not necessarily only those devoted to HEP.

  11. Assessment of distribution grid voltage control strategies in view of deployment

    DEFF Research Database (Denmark)

    Han, Xue; Kosek, Anna Magdalena; Bondy, Daniel Esteban Morales

    2014-01-01

    Increasing integration of distributed energy resources (DER) and available monitoring devices in the power distribution grid make system services provided by DERs possible and an integral part of distribution grid operation. Numerous publications have proposed various control solutions by utilizi...

  12. Application of optical fiber sensors in Smart Grid

    Science.gov (United States)

    Zhang, Ruirui

    2013-12-01

    Smart Grid is a promising power delivery infrastructure integrated with communication and information technologies. By incorporating monitoring, analysis, control and communications facilities, it is possible to optimize the performance of the power system, allowing electricity to be delivered more efficiently. In the transmission and distribution sector, online monitoring of transmission lines and primary equipments is of vital importance, which can improve the reliability of power systems effectively. Optical fiber sensors can provide an alternative to conventional electrical sensors for such applications, with high accuracy, long term stability, streamlined installation, and premium performance under harsh environmental conditions. These optical fiber sensors offer immunity to EMI and extraordinary resistance to mechanical fatigue and therefore they will have great potential in on-line monitoring applications in Smart Grid. In this paper, we present a summary of the on-line monitoring needs of Smart Grid and explore the use of optical fiber sensors in Smart Grid. First, the on-line monitoring needs of Smart Grid is summarized. Second, a review on optical fiber sensor technology is given. Third, the application of optical fiber sensors in Smart Grid is discussed, including transmission line monitoring, primary equipment monitoring and substation perimeter intrusion detection. Finally, future research directions of optical fiber sensors for power systems are discussed. Compared to other traditional electrical sensors, the application of optical fiber sensors in Smart Grid has unique advantages.

  13. Low Job Satisfaction Among Physicians in Egypt

    Directory of Open Access Journals (Sweden)

    Amira Gamal Abdel-Rahman

    2008-04-01

    Full Text Available AIM/BACKGROUND: Physicians' job satisfaction is a cornerstone for improving the quality of health care and its continuity. The aim was to identify the extent of job satisfaction and explain its main components among physicians, together with finding the main indicators of job satisfaction. METHODS: We randomly selected physicians from the Egyptian Ministry of Health and Population Hospitals. All participants were asked to fill in a self-administered questionnaire which included socio-demographic characteristics and job satisfaction regarding salaries/incentives, monitoring, administration system, management, career satisfaction, relationship with colleagues, social support, opportunities for promotion, and job responsibilities. Satisfied was defined as satisfaction of >60%. RESULTS: Two hundred and thirty eight physicians participated in this study, with a mean age of 37.1 ± 9.4 years; 70.2% were males. Only 42.9% of the physicians reported job satisfaction. Relationship with colleagues was the most important component of satisfaction, with a mean of 81.3 ± 19.6, while salaries/incentives were the least important, with a mean of 16.2 ± 14. The overall current satisfying domains were not significantly associated with marital status or educational level, but were significantly associated with specialty. Neither age nor gender was significantly associated with the degree of job satisfaction. CONCLUSION: Our results call for paying more attention to improving physicians' job satisfaction in Egypt, to meet the needed higher standards in health care. [TAF Prev Med Bull. 2008; 7(2): 91-96]

  14. Reliability Engineering for ATLAS Petascale Data Processing on the Grid

    CERN Document Server

    Golubkov, D V; The ATLAS collaboration; Vaniachine, A V

    2012-01-01

    The ATLAS detector is in its third year of continuous LHC running taking data for physics analysis. A starting point for ATLAS physics analysis is reconstruction of the raw data. First-pass processing takes place shortly after data taking, followed later by reprocessing of the raw data with updated software and calibrations to improve the quality of the reconstructed data for physics analysis. Data reprocessing involves a significant commitment of computing resources and is conducted on the Grid. The reconstruction of one petabyte of ATLAS data with 1B collision events from the LHC takes about three million core-hours. Petascale data processing on the Grid involves millions of data processing jobs. At such scales, the reprocessing must handle a continuous stream of failures. Automatic job resubmission recovers transient failures at the cost of CPU time used by the failed jobs. Orchestrating ATLAS data processing applications to ensure efficient usage of tens of thousands of CPU-cores, reliability engineering ...
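
    As a back-of-the-envelope check of the scale quoted above (one petabyte, one billion events, about three million core-hours), the per-event cost works out to roughly ten core-seconds; the core count used for the wall-clock estimate below is an illustrative assumption.

    ```python
    # Back-of-the-envelope check of the figures quoted above: 1 PB of raw data,
    # one billion collision events, about three million core-hours to reconstruct.
    core_hours = 3.0e6
    events = 1.0e9

    core_seconds_per_event = core_hours * 3600 / events
    print(f"~{core_seconds_per_event:.1f} core-seconds per event")   # ~10.8

    # With e.g. 30,000 cores working in parallel (illustrative assumption),
    # the campaign would take on the order of:
    cores = 30_000
    days = core_hours / cores / 24
    print(f"~{days:.0f} days of wall-clock time on {cores} cores")   # ~4 days
    ```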

  15. Job Sharing in Education.

    Science.gov (United States)

    Davidson, Wilma; Kline, Susan

    1979-01-01

    The author presents the advantages of job sharing for all school personnel, saying that education is particularly adaptable to this new form of employment. Current job sharing programs in Massachusetts, California, and New Jersey schools are briefly discussed. (SJL)

  16. Increased Observability in Electric Distribution Grids

    DEFF Research Database (Denmark)

    Prostejovsky, Alexander Maria

    This thesis addresses supervision and control of horizontally integrated electric power systems, in which Distribution System Operators (DSOs) assume an active role. Focus lies on the technical possibilities emerging from the expanding Information and Communication Technology (ICT) and monitoring...... infrastructure in distribution grids. Strong emphasis is placed on experimental verifications of the investigated concepts wherever applicable. Electric grids are changing, and so are the roles of system operators. The interest in sustainable energy and the rapidly increasing number of Distributed Energy...... of development of technical solutions and regulations, near-complete observability of the electric grid will be achieved in the foreseeable future. Harnessing the increased observability already benefits the unbundling of electricity markets, and is imperative to ensure security of the grid....

  17. High density grids

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, Aina E.; Baxter, Elizabeth L.

    2018-01-16

    An X-ray data collection grid device is provided that includes a magnetic base that is compatible with robotic sample mounting systems used at synchrotron beamlines, a grid element fixedly attached to the magnetic base, where the grid element includes at least one sealable sample window disposed through a planar synchrotron-compatible material, where the planar synchrotron-compatible material includes at least one automated X-ray positioning and fluid handling robot fiducial mark.

  18. LHC computing grid

    International Nuclear Information System (INIS)

    Novaes, Sergio

    2011-01-01

    Full text: We give an overview of the grid computing initiatives in the Americas. High-Energy Physics has played a very important role in the development of grid computing in the world and in Latin America it has not been different. Lately, the grid concept has expanded its reach across all branches of e-Science, and we have witnessed the birth of the first nationwide infrastructures and its use in the private sector. (author)

  19. Urban micro-grids

    International Nuclear Information System (INIS)

    Faure, Maeva; Salmon, Martin; El Fadili, Safae; Payen, Luc; Kerlero, Guillaume; Banner, Arnaud; Ehinger, Andreas; Illouz, Sebastien; Picot, Roland; Jolivet, Veronique; Michon Savarit, Jeanne; Strang, Karl Axel

    2017-02-01

    ENEA Consulting published the results of a study on urban micro-grids conducted in partnership with the Group ADP, the Group Caisse des Depots, ENEDIS, Omexom, Total and the Tuck Foundation. This study offers a vision of the definition of an urban micro-grid, the value brought by a micro-grid in different contexts based on real case studies, and the upcoming challenges that micro-grid stakeholders will face (regulation, business models, technology). The electric production and distribution system, as the backbone of an increasingly urbanized and energy dependent society, is urged to shift towards a more resilient, efficient and environment-friendly infrastructure. Decentralisation of electricity production into densely populated areas is a promising opportunity to achieve this transition. A micro-grid enhances local production through clustering electricity producers and consumers within a delimited electricity network; it has the ability to disconnect from the main grid for a limited period of time, offering an energy security service to its customers during grid outages, for example. However, the islanding capability is an inherent feature of the micro-grid concept that leads to a significant premium on electricity cost, especially in a system highly reliant on intermittent electricity production; in this case, a smart grid with local energy production and no islanding capability can be customized to meet the relevant sustainability and cost savings goals at lower cost. For industrials, urban micro-grids can be economically profitable in the presence of a high share of reliable energy production and thermal energy demand, but micro-grids face strong regulatory challenges that must be overcome for further development. Whether or not islanding is implemented in the system, end-user demand for greener, more local, cheaper and more reliable energy, as well as additional services to the grid, are strong drivers for local production and consumption. In some specific cases

  20. Micro grids toward the smart grid

    International Nuclear Information System (INIS)

    Guerrero, J.

    2011-01-01

    Worldwide, electrical grids are expected to become smarter in the near future, and interest in microgrids is likely to grow. A microgrid can be defined as a part of the grid with elements of prime energy movers, power electronics converters, distributed energy storage systems and local loads, that can operate autonomously but also interact with the main grid. Thus, the ability of intelligent microgrids to operate in island mode or connected to the grid will be a key point in coping with new functionalities and the integration of renewable energy resources. The functionalities expected for these small grids are: black start operation, frequency and voltage stability, active and reactive power flow control, active power filter capabilities, and storage energy management. In this presentation, a review of the main concepts related to flexible microgrids is introduced, with examples of real microgrids. AC and DC microgrids to integrate renewable and distributed energy resources are also presented, as well as distributed energy storage systems and standardization issues of these microgrids. Finally, microgrid hierarchical control is analyzed at three different levels: i) a primary control based on the droop method, including a virtual output impedance loop; ii) a secondary control, which restores any deviations produced by the primary control; and iii) a tertiary control that manages the power flow between the microgrid and the external electrical distribution system.
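
    As an illustration of the primary (droop) control level mentioned above, a minimal sketch of conventional P-f / Q-V droop; the nominal values and droop gains are arbitrary illustrative numbers, not parameters from the presentation.

    ```python
    # Minimal sketch of P-f / Q-V droop, the primary-control idea referenced
    # above. Nominal values and droop gains are arbitrary illustrative numbers.

    F_NOM, V_NOM = 50.0, 230.0      # nominal frequency [Hz] and voltage [V]
    P_RATED, Q_RATED = 10e3, 5e3    # converter ratings [W], [var]
    M_P = 0.5 / P_RATED             # frequency droop gain [Hz/W]
    N_Q = 10.0 / Q_RATED            # voltage droop gain   [V/var]

    def droop_setpoints(p_measured, q_measured):
        """Return (frequency, voltage) references from measured P and Q."""
        freq_ref = F_NOM - M_P * p_measured
        volt_ref = V_NOM - N_Q * q_measured
        return freq_ref, volt_ref

    if __name__ == "__main__":
        # At half rated active power the unit lowers its frequency reference,
        # letting parallel inverters share load without communication.
        print(droop_setpoints(0.5 * P_RATED, 0.2 * Q_RATED))  # (~49.75 Hz, ~228 V)
    ```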

  1. Extending the Fermi-LAT Data Processing Pipeline to the Grid

    Science.gov (United States)

    Zimmer, S.; Arrabito, L.; Glanzman, T.; Johnson, T.; Lavalley, C.; Tsaregorodtsev, A.

    2012-12-01

    The Data Handling Pipeline (“Pipeline”) has been developed for the Fermi Gamma-Ray Space Telescope (Fermi) Large Area Telescope (LAT), which launched in June 2008. Since then it has been used to completely automate the production of data quality monitoring quantities, the reconstruction and routine analysis of all data received from the satellite, and the delivery of science products to the collaboration and the Fermi Science Support Center. Aside from the reconstruction of raw data from the satellite (Level 1), data reprocessing and various event-level analyses are also reasonably heavy loads on the pipeline and computing resources. These other loads, unlike Level 1, can run continuously for weeks or months at a time. In addition, the pipeline receives heavy use in performing production Monte Carlo tasks. In daily use it receives a new data download every 3 hours and launches about 2000 jobs to process each download, typically completing the processing of the data before the next download arrives. The need for manual intervention has been reduced to less than 0.01% of submitted jobs. The Pipeline software is written almost entirely in Java and comprises several modules. It includes web services that allow online monitoring and provide charts summarizing work flow aspects and performance information. The server supports communication with several batch systems such as LSF and BQS and recently also Sun Grid Engine and Condor. This is accomplished through dedicated job control services that for Fermi are running at SLAC and at the other computing site involved in this large scale framework, the Lyon computing center of IN2P3. While different in task logic, we are also evaluating a separate interface to the DIRAC system in order to communicate with EGI sites and utilize Grid resources, using dedicated Grid-optimized systems rather than developing our own. More recently the Pipeline and its associated data catalog have been generalized for use by other experiments, and are

  2. Automated agents for management and control of the ALICE Computing Grid

    CERN Document Server

    Grigoras, C; Carminati, F; Legrand, I; Voicu, R

    2010-01-01

    A complex software environment such as the ALICE Computing Grid infrastructure requires permanent control and management for the large set of services involved. Automating control procedures reduces the human interaction with the various components of the system and yields better availability of the overall system. In this paper we will present how we used the MonALISA framework to gather, store and display the relevant metrics in the entire system from central and remote site services. We will also show the automatic local and global procedures that are triggered by the monitored values. Decision-taking agents are used to restart remote services, alert the operators in case of problems that cannot be automatically solved, submit production jobs, replicate and analyze raw data, resource load-balance and other control mechanisms that optimize the overall work flow and simplify day-to-day operations. Synthetic graphical views for all operational parameters, correlations, state of services and applications as we...

  3. Distributed Geant4 simulation in medical and space science applications using DIANE framework and the GRID

    CERN Document Server

    Moscicki, J T; Mantero, A; Pia, M G

    2003-01-01

    Distributed computing is one of the most important trends in IT, which has recently gained significance for large-scale scientific applications. The Distributed Analysis Environment (DIANE) is an R&D study, conducted at CERN, focusing on semi-interactive parallel and remote data analysis and simulation. DIANE provides the necessary software infrastructure for parallel scientific applications in the master-worker model. Advanced error recovery policies, automatic book-keeping of distributed jobs, and on-line monitoring and control tools are provided. DIANE makes transparent use of a number of different middleware implementations, such as load balancing services (LSF, PBS, GRID Resource Broker, Condor) and security services (GSI, Kerberos, OpenSSH). A number of distributed Geant4 simulations have been deployed and tested, ranging from interactive radiotherapy treatment planning using dedicated clusters in hospitals, to globally-distributed simulations of astrophysics experiments using the European data g...
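
    A minimal, self-contained sketch of the master-worker pattern with simple error recovery described above; it uses a local thread pool for illustration, whereas DIANE itself dispatches tasks to remote workers through Grid and batch middleware.

    ```python
    # Minimal master-worker sketch with simple error recovery, illustrating the
    # pattern described above. Uses a local thread pool for self-containment;
    # DIANE itself dispatches tasks to remote Grid workers.
    import random
    from concurrent.futures import ThreadPoolExecutor, as_completed

    def unreliable_task(x):
        """Stand-in for a real worker computation that sometimes fails."""
        if random.random() < 0.2:
            raise RuntimeError("transient worker failure")
        return x * x

    def run_master(inputs, n_workers=4, max_attempts=3):
        pending = {x: 0 for x in inputs}          # task -> attempts used
        results = {}
        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            while pending:
                futures = {pool.submit(unreliable_task, x): x for x in pending}
                for fut in as_completed(futures):
                    x = futures[fut]
                    try:
                        results[x] = fut.result()
                        del pending[x]
                    except RuntimeError:
                        pending[x] += 1           # recovery policy: retry
                        if pending[x] >= max_attempts:
                            del pending[x]        # give up on this task
            return results

    if __name__ == "__main__":
        print(run_master(range(10)))
    ```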

  4. The play grid

    DEFF Research Database (Denmark)

    Fogh, Rune; Johansen, Asger

    2013-01-01

    In this paper we propose The Play Grid, a model for systemizing different play types. The approach is psychological by nature and the actual Play Grid is based, therefore, on two pairs of fundamental and widely acknowledged distinguishing characteristics of the ego, namely: extraversion vs...... at the Play Grid. Thus, the model has four quadrants, each of them describing one of four play types: the Assembler, the Director, the Explorer, and the Improviser. It is our hope that the Play Grid can be a useful design tool for making entertainment products for children....

  5. Hybrid job shop scheduling

    NARCIS (Netherlands)

    Schutten, Johannes M.J.

    1995-01-01

    We consider the problem of scheduling jobs in a hybrid job shop. We use the term 'hybrid' to indicate that we consider a lot of extensions of the classic job shop, such as transportation times, multiple resources, and setup times. The Shifting Bottleneck procedure can be generalized to deal with

  6. Bangladesh Jobs Diagnostic

    OpenAIRE

    Farole, Thomas; Cho, Yoonyoung

    2017-01-01

    This Jobs Diagnostic presents the characteristics and constraints of the labor market in Bangladesh, identifies the objectives of the jobs agenda, and proposes a policy framework to progress toward them. This multisectoral diagnostic assesses the relationships between supply- and demand-side factors that interact to determine job creation, quality, and inclusion outcomes. Understanding the...

  7. Practical job shop scheduling

    NARCIS (Netherlands)

    Schutten, Johannes M.J.

    1998-01-01

    The Shifting Bottleneck procedure is an intuitive and reasonably good approximation algorithm for the notoriously difficult classical job shop scheduling problem. The principle of decomposing a classical job shop problem into a series of single-machine problems can also easily be applied to job shop

  8. Job Displacement and Crime

    DEFF Research Database (Denmark)

    Bennett, Patrick; Ouazad, Amine

    We use a detailed employer-employee data set matched with detailed crime information (timing of crime, fines, convictions, crime type) to estimate the impact of job loss on an individual's probability to commit crime. We focus on job losses due to displacement, i.e. job losses in firms losing...

  9. A Security Monitoring Framework For Virtualization Based HEP Infrastructures

    Science.gov (United States)

    Gomez Ramirez, A.; Martinez Pedreira, M.; Grigoras, C.; Betev, L.; Lara, C.; Kebschull, U.; ALICE Collaboration

    2017-10-01

    High Energy Physics (HEP) distributed computing infrastructures require automatic tools to monitor, analyze and react to potential security incidents. These tools should collect and inspect data such as resource consumption, logs and sequence of system calls for detecting anomalies that indicate the presence of a malicious agent. They should also be able to perform automated reactions to attacks without administrator intervention. We describe a novel framework that accomplishes these requirements, with a proof of concept implementation for the ALICE experiment at CERN. We show how we achieve a fully virtualized environment that improves the security by isolating services and Jobs without a significant performance impact. We also describe a collected dataset for Machine Learning based Intrusion Prevention and Detection Systems on Grid computing. This dataset is composed of resource consumption measurements (such as CPU, RAM and network traffic), logfiles from operating system services, and system call data collected from production Jobs running in an ALICE Grid test site and a big set of malware samples. This malware set was collected from security research sites. Based on this dataset, we will proceed to develop Machine Learning algorithms able to detect malicious Jobs.
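
    A minimal sketch of the kind of anomaly detection such a dataset enables, here using scikit-learn's IsolationForest on per-job resource-consumption features; the feature layout, synthetic data and contamination rate are illustrative assumptions, not the framework's actual model.

    ```python
    # Minimal sketch of anomaly detection on per-job resource-consumption
    # features, as the dataset described above is meant to enable. Feature
    # layout, synthetic data and contamination rate are illustrative.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Columns: [cpu_share, ram_mb, net_kbps] per monitored job (synthetic data).
    normal_jobs = rng.normal(loc=[0.8, 2000, 50], scale=[0.1, 300, 20], size=(500, 3))
    odd_jobs    = rng.normal(loc=[0.2, 6000, 900], scale=[0.05, 200, 50], size=(5, 3))

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal_jobs)

    # predict() returns +1 for inliers and -1 for anomalies.
    print("flagged among normal:", (model.predict(normal_jobs) == -1).sum())
    print("flagged among odd:   ", (model.predict(odd_jobs) == -1).sum())
    ```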

  10. Improved ATLAS HammerCloud Monitoring for local Site Administration

    CERN Document Server

    Boehler, Michael; The ATLAS collaboration; Hoenig, Friedrich; Legger, Federica; Sciacca, Francesco Giovanni; Mancinelli, Valentina

    2015-01-01

    Every day hundreds of tests are run on the Worldwide LHC Computing Grid for the ATLAS and CMS experiments in order to evaluate the performance and reliability of the different computing sites. All this activity is steered, controlled, and monitored by the HammerCloud testing infrastructure. Sites with failing functionality tests are automatically excluded from the ATLAS computing grid; it is therefore essential to provide a detailed and well organized web interface for the local site administrators so that they can easily spot and promptly solve site issues. Additional functionality has been developed to extract and visualize the most relevant information. The site administrators can now be pointed easily to major site issues which lead to site blacklisting, as well as possible minor issues that are usually not conspicuous enough to warrant the blacklisting of a specific site but can still cause undesired effects such as a non-negligible job failure rate. This paper summarizes the different developments and optimiz...

  11. Grid administration: towards an autonomic approach

    CERN Document Server

    Ubeda Garcia, M; Tsaregorodtsev, A; Charpentier, P; Bernardof, V

    2012-01-01

    Within the DIRAC framework in the LHCb collaboration, we deployed an autonomous policy system acting as a central status information point for grid elements. Experts working as grid administrators have a broad and very deep knowledge about the underlying system which makes them very precious. We have attempted to formalize this knowledge in an autonomous system able to aggregate information, draw conclusions, validate them, and take actions accordingly. The DIRAC Resource Status System (RSS) is a monitoring and generic policy system that enforces managerial and operational actions automatically. As an example, the status of a grid entity can be evaluated using a number of policies, each making assessments relative to specific monitoring information. Individual results of these policies can be combined to evaluate and propose a global status for the resource. This evaluation goes through a validation step driven by a state machine and an external validation system. Once validated, actions can be triggered acco...
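
    A minimal sketch of the policy-combination idea described above: several policies each assess one piece of monitoring information and their verdicts are merged into a proposed global status. The status names and the worst-verdict-wins merge rule are illustrative assumptions, not the actual RSS implementation.

    ```python
    # Minimal sketch of combining per-policy assessments into a proposed global
    # status for a grid element. Status names and the "worst verdict wins"
    # merge rule are illustrative assumptions, not the actual DIRAC RSS logic.
    SEVERITY = {"Active": 0, "Degraded": 1, "Probing": 2, "Banned": 3}

    def downtime_policy(element):
        return "Banned" if element["in_downtime"] else "Active"

    def job_efficiency_policy(element):
        eff = element["job_efficiency"]
        return "Active" if eff > 0.9 else "Degraded" if eff > 0.5 else "Banned"

    def combine(element, policies):
        verdicts = [p(element) for p in policies]
        return max(verdicts, key=SEVERITY.get)      # most severe verdict wins

    if __name__ == "__main__":
        site = {"in_downtime": False, "job_efficiency": 0.72}
        print(combine(site, [downtime_policy, job_efficiency_policy]))  # Degraded
    ```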

  12. User Inspired Management of Scientific Jobs in Grids and Clouds

    Science.gov (United States)

    Withana, Eran Chinthaka

    2011-01-01

    From time-critical, real time computational experimentation to applications which process petabytes of data there is a continuing search for faster, more responsive computing platforms capable of supporting computational experimentation. Weather forecast models, for instance, process gigabytes of data to produce regional (mesoscale) predictions on…

  13. Socioeconomic assessment of smart grids. Summary

    International Nuclear Information System (INIS)

    2015-07-01

    In September of 2013, the President of France identified smart grids as an important part of the country's industrial strategy, given the opportunities and advantages they can offer French industry, and asked the Chairman of the RTE Management Board to prepare a road-map outlining ways to support and accelerate smart grid development. This road-map, prepared in cooperation with stakeholders from the power and smart grids industries, identifies ten actions that can be taken in priority to consolidate the smart grids sector and help French firms play a leading role in the segment. These priorities were presented to the President of France on 7 May 2014. Action items 5 and 6 of the road-map on smart grid development relate, respectively, to the quantification of the value of smart grid functions from an economic, environmental and social (impact on employment) standpoint and to the large-scale deployment of some of the functions. Two tasks were set out in the 'Smart Grids' plan for action item 5: - Create a methodological framework that, for all advanced functions, allows the quantification of benefits and costs from an economic, environmental and social (effect on jobs) standpoint; - Quantify, based on this methodological framework, the potential benefits of a set of smart grid functions considered sufficiently mature to be deployed on a large scale in the near future. Having a methodology that can be applied in the same manner to all solutions, taking into account their impacts on the environment and employment in France, will considerably add to and complement the information drawn from demonstration projects. It will notably enable comparisons of benefits provided by smart grid functions and thus help give rise to a French smart grids industry that is competitive. At first, the smart grids industry was organised around demonstration projects testing different advanced functions within specific geographic areas. These projects covered a wide enough

  14. Developing survey grids to substantiate freedom from exotic pests

    Science.gov (United States)

    John W. Coulston; Frank H. Koch; William D. Smith

    2009-01-01

    Systematic, hierarchical intensification of the Environmental Monitoring and Assessment Program hexagon for North America yields a simple procedure for developing national-scale survey grids. In this article, we describe the steps to create a national-scale survey grid using a risk map as the starting point. We illustrate the steps using an exotic pest example in which...

  15. Adaptively detecting changes in Autonomic Grid Computing

    KAUST Repository

    Zhang, Xiangliang

    2010-10-01

    Detecting changes is a common issue in many application fields because the application data have a non-stationary distribution, e.g., sensor network signals, web logs and grid running logs. Toward Autonomic Grid Computing, adaptively detecting changes in a grid system can help to flag anomalies, clean noise, and report new patterns. In this paper we propose an approach to self-adaptive change detection based on the Page-Hinkley statistical test. It handles non-stationary distributions without assumptions about the data distribution and without empirical parameter settings. We validate the approach on EGEE streaming jobs and report its better performance, achieving higher accuracy compared to other change detection methods. This change detection process could also help to discover device faults that were not reported in the system logs. © 2010 IEEE.
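
    A minimal sketch of the Page-Hinkley test named above, detecting an upward shift in the mean of a data stream; the tolerance delta and threshold lam are fixed illustrative values here, whereas the paper's contribution is precisely to set such parameters adaptively.

    ```python
    # Minimal sketch of the Page-Hinkley test for detecting an upward shift in
    # the mean of a data stream. delta (tolerance) and lam (alarm threshold)
    # are fixed illustrative values, not the paper's self-adaptive settings.
    def page_hinkley(stream, delta=0.05, lam=5.0):
        """Yield the index of each detected change point in `stream`."""
        mean, cum, min_cum, n = 0.0, 0.0, 0.0, 0
        for i, x in enumerate(stream):
            n += 1
            mean += (x - mean) / n                 # running mean
            cum += x - mean - delta                # cumulative deviation
            min_cum = min(min_cum, cum)
            if cum - min_cum > lam:                # alarm on sustained increase
                yield i
                mean, cum, min_cum, n = 0.0, 0.0, 0.0, 0   # restart after alarm

    if __name__ == "__main__":
        data = [1.0] * 50 + [2.0] * 50             # mean jumps at index 50
        print(list(page_hinkley(data)))            # alarm shortly after index 50
    ```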

  16. Gridded Species Distribution, Version 1: Global Amphibians Presence Grids

    Data.gov (United States)

    National Aeronautics and Space Administration — The Global Amphibians Presence Grids of the Gridded Species Distribution, Version 1 is a reclassified version of the original grids of amphibian species distribution...

  17. Gridded Species Distribution, Version 1: Global Amphibians Original Grids

    Data.gov (United States)

    National Aeronautics and Space Administration — The Global Amphibians Original Grids of the Gridded Species Distribution, Version 1 are converted 1-kilometer grid cell data available in the Geographic Coordinate...

  18. Gridded Species Distribution, Version 1: Global Amphibians Family Richness Grids

    Data.gov (United States)

    National Aeronautics and Space Administration — Global Amphibians Family Richness Grids of the Gridded Species Distribution, Version 1 are aggregations of the presence grids data at the family level. They are...

  19. Index Grids - MDC_USNationalGrid1K

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — The U.S. National Grid is based on universally defined coordinate and grid systems and can, therefore, be easily extended for use world-wide as a universal grid...

  20. Security for grids

    Energy Technology Data Exchange (ETDEWEB)

    Humphrey, Marty; Thompson, Mary R.; Jackson, Keith R.

    2005-08-14

    Securing a Grid environment presents a distinctive set of challenges. This paper groups the activities that need to be secured into four categories: naming and authentication; secure communication; trust, policy, and authorization; and enforcement of access control. It examines the current state of the art in securing these processes and introduces new technologies that promise to meet the security requirements of Grids more completely.

  1. Getting grid users together

    CERN Multimedia

    Appleton, Owen

    2007-01-01

    "While Grid conferences are becoming ever more popular, many of them remain primarily IT events, with few if any users attending. Not so the second EGEE User Forum, an event specifically designed to bring together the diverse user community that makes use of the EGEE grid infrastructure." (1 page)

  2. Planning in Smart Grids

    NARCIS (Netherlands)

    Bosman, M.G.C.

    2012-01-01

    The electricity supply chain is changing, due to increasing awareness for sustainability and an improved energy efficiency. The traditional infrastructure where demand is supplied by centralized generation is subject to a transition towards a Smart Grid. In this Smart Grid, sustainable generation

  3. Perspectives on grid computing

    NARCIS (Netherlands)

    Schwiegelshohn, U.; Badia, R.M.; Bubak, M.T.; Danelutto, M.; Dustdar, S.; Gagliardi, F.; Geiger, A.; Hluchy, L.; Kranzlmüller, D.; Laure, E.; Priol, T.; Reinefeld, A.; Resch, M.; Reuter, A.; Rienhoff, O.; Rüter, T.; Sloot, P.; Talia, D.; Ullmann, K.; Yahyapour, R.; von Voigt, G.

    2010-01-01

    Grid computing has been the subject of many large national and international IT projects. However, not all goals of these projects have been achieved. In particular, the number of users lags behind the initial forecasts laid out by proponents of grid technologies. This underachievement may have led

  4. Battery impedance spectroscopy using bidirectional grid connected ...

    Indian Academy of Sciences (India)

    Shimul Kumar Dam

    Keywords: impedance spectroscopy; grid connection; battery converter; state of charge; health monitoring. Batteries play an important role as energy storage devices for renewable energy sources, electric vehicles and many other applications. A battery bank is interfaced to the load through a power converter, which controls ...

  5. Grid Portal for Image and Video Processing

    International Nuclear Information System (INIS)

    Dinitrovski, I.; Kakasevski, G.; Buckovska, A.; Loskovska, S.

    2007-01-01

    Users are typically best served by Grid Portals. Grid Portals are web servers that allow the user to configure or run a class of applications. The server is then given the task of authenticating the user with the Grid and invoking the required grid services to launch the user's application. PHP is a widely used general-purpose scripting language that is especially suited for Web development and can be embedded into HTML. PHP is a powerful, modern server-side scripting language producing HTML or XML output that can easily be accessed by everyone via a web interface (with the browser of your choice) and can execute shell scripts on the server side. The aim of our work is the development of a Grid portal for image and video processing. The shell scripts contain gLite and Globus commands for obtaining a proxy certificate, job submission, data management, etc. Using this technique we can easily create a web interface to the Grid infrastructure. The image and video processing algorithms are implemented in C++ using various image processing libraries. (Author)
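
    A minimal server-side sketch of wrapping grid command-line tools, as the portal above does from PHP via shell scripts (shown in Python here for consistency); the command names and options are typical gLite/Globus invocations and should be checked against the middleware version actually deployed.

    ```python
    # Minimal sketch of wrapping grid CLI tools on the server side. The command
    # names/options are typical gLite invocations and should be verified against
    # the middleware version actually deployed.
    import subprocess

    def run(cmd):
        """Run a grid command and return its captured output (raises on failure)."""
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    def submit_image_job(jdl_path, vo="myvo"):
        # 1. obtain a VOMS proxy certificate for the portal's service credential
        run(["voms-proxy-init", "-voms", vo])
        # 2. submit the job described by the JDL file; output contains the job ID
        return run(["glite-wms-job-submit", "-a", jdl_path])

    if __name__ == "__main__":
        print(submit_image_job("process_video.jdl"))   # hypothetical JDL file
    ```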

  6. Grid generation methods

    CERN Document Server

    Liseikin, Vladimir D

    2017-01-01

    This new edition provides a description of current developments relating to grid methods, grid codes, and their applications to actual problems. Grid generation methods are indispensable for the numerical solution of differential equations. Adaptive grid-mapping techniques, in particular, are the main focus and represent a promising tool to deal with systems with singularities. This 3rd edition includes three new chapters on numerical implementations (10), control of grid properties (11), and applications to mechanical, fluid, and plasma related problems (13). Also the other chapters have been updated including new topics, such as curvatures of discrete surfaces (3). Concise descriptions of hybrid mesh generation, drag and sweeping methods, parallel algorithms for mesh generation have been included too. This new edition addresses a broad range of readers: students, researchers, and practitioners in applied mathematics, mechanics, engineering, physics and other areas of applications.

  7. The GRID seminar

    CERN Multimedia

    CERN. Geneva HR-RFA

    2006-01-01

    The Grid infrastructure is a key part of the computing environment for the simulation, processing and analysis of the data of the LHC experiments. These experiments depend on the availability of a worldwide Grid infrastructure in several aspects of their computing model. The Grid middleware will hide much of the complexity of this environment to the user, organizing all the resources in a coherent virtual computer center. The general description of the elements of the Grid, their interconnections and their use by the experiments will be exposed in this talk. The computational and storage capability of the Grid is attracting other research communities beyond the high energy physics. Examples of these applications will be also exposed during the presentation.

  8. Estimating job runtime for CMS analysis jobs

    CERN Document Server

    Sfiligoi, Igor

    2013-01-01

    The basic premise of pilot systems is to create an overlay scheduling system on top of leased resources. By definition, leases have a limited lifetime, so any job that is scheduled on such resources must finish before the lease is over, or it will be killed and all the computation wasted. In order to effectively schedule jobs to resources, the pilot system thus requires the expected runtime of the users' jobs. Past studies have shown that relying on user-provided estimates is not a valid strategy, so the system should try to make an estimate by itself. This paper provides a study of the historical data obtained from the CMS Analysis Operations submission system. Clear patterns are observed, suggesting that predicting an expected job lifetime range is achievable with a high confidence level in this environment.
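
    A minimal sketch of deriving an expected runtime range for new jobs from the history of earlier jobs of the same task, as described above; the grouping key and the 5th-95th percentile window are assumptions, not the CMS Analysis Operations model.

    ```python
    # Minimal sketch of estimating an expected runtime range for new jobs from
    # the history of earlier jobs of the same task. The grouping key and the
    # 5th-95th percentile window are illustrative assumptions.
    from collections import defaultdict
    from statistics import quantiles

    def runtime_ranges(history, low_pct=5, high_pct=95):
        """history: iterable of (task_name, runtime_seconds) for finished jobs."""
        by_task = defaultdict(list)
        for task, runtime in history:
            by_task[task].append(runtime)
        ranges = {}
        for task, runtimes in by_task.items():
            pct = quantiles(runtimes, n=100)            # 1st..99th percentiles
            ranges[task] = (pct[low_pct - 1], pct[high_pct - 1])
        return ranges

    if __name__ == "__main__":
        demo = [("skim_v1", r) for r in (600, 640, 700, 720, 800, 5000)] + \
               [("fit_v2", r) for r in (60, 75, 90, 95)]
        # A pilot could then request slots long enough for the upper bound.
        print(runtime_ranges(demo))
    ```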

  9. Optimisation of LHCb Applications for Multi- and Manycore Job Submission

    CERN Document Server

    Rauschmayr, Nathalie; Graciani Diaz, Ricardo; Charpentier, Philippe

    The Worldwide LHC Computing Grid (WLCG) is the largest Computing Grid and is used by all Large Hadron Collider experiments in order to process their recorded data. It provides approximately 400k cores and storage. Nowadays, most of the resources consist of multi- and manycore processors. Conditions at the Large Hadron Collider experiments will change, and much larger workloads and jobs consuming more memory are expected in the future. This has led to a paradigm shift which focuses on executing jobs as multiprocessor tasks in order to use multi- and manycore processors more efficiently. All experiments at CERN are currently investigating how such computing resources can be used more efficiently in terms of memory requirements and handling of concurrency. Until now, there are still many unsolved issues regarding software, scheduling, CPU accounting, task queues, which need to be solved by grid sites and experiments. This thesis develops a systematic approach to optimise the software of the LHCb experiment fo...

  10. Deadline aware virtual machine scheduler for scientific grids and cloud computing

    CERN Document Server

    Khalid, Omer; Anthony, Richard; Petridis, Miltos; Parrot, Kevin; Schulz, Markus; 10.1109/WAINA.2010.107

    2010-01-01

    Virtualization technology has enabled applications to be decoupled from the underlying hardware, providing the benefits of portability, better control over the execution environment, and isolation. It has been widely adopted in scientific grids and commercial clouds. However, despite its benefits, virtualization incurs a performance penalty, which can be significant for systems dealing with uncertainty, such as High Performance Computing (HPC) applications where jobs have tight deadlines and depend on other jobs before they can run. The major obstacle lies in bridging the gap between the performance requirements of a job and the performance offered by the virtualization technology if the jobs are to be executed in virtual machines. In this paper, we present a novel approach to meeting job deadlines in virtual machines by developing a deadline-aware algorithm that responds to job execution delays in real time and dynamically adjusts jobs to meet their deadline obligations. Our approaches borrowed co...
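
    A minimal sketch of the deadline-aware reaction loop described above: if a job's observed progress implies it will miss its deadline, the scheduler boosts its virtual machine's CPU share; all quantities and the adjustment rule are illustrative assumptions, not the paper's algorithm.

    ```python
    # Minimal sketch of a deadline-aware reaction to execution delays: estimate
    # the projected finish time from observed progress and boost the VM's CPU
    # share if the deadline would be missed. All quantities and the adjustment
    # rule are illustrative assumptions, not the paper's actual algorithm.
    from dataclasses import dataclass

    @dataclass
    class VmJob:
        name: str
        deadline: float        # seconds from job start
        elapsed: float         # seconds already spent
        progress: float        # fraction of work completed, 0..1
        cpu_share: float       # current CPU allocation, 0..1

    def adjust_for_deadline(job, max_share=1.0):
        """Return the new CPU share needed for the job to finish on time."""
        if job.progress <= 0.0:
            return job.cpu_share                   # nothing observed yet
        if job.elapsed >= job.deadline:
            return max_share                       # already late: give everything
        projected_total = job.elapsed / job.progress
        if projected_total <= job.deadline:
            return job.cpu_share                   # on track, leave unchanged
        # Scale the share by how much faster the remaining work must now run.
        speedup_needed = (projected_total - job.elapsed) / (job.deadline - job.elapsed)
        return min(max_share, job.cpu_share * speedup_needed)

    if __name__ == "__main__":
        j = VmJob("reco_042", deadline=3600, elapsed=1800, progress=0.4, cpu_share=0.5)
        print(adjust_for_deadline(j))   # needs 0.75 of a core instead of 0.5
    ```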

  11. BSCW Unstructured Grids - VGRID software

    Data.gov (United States)

    National Aeronautics and Space Administration — These grids were constructed using VGRID software from NASA Langley. The grids designed for node based (labeled 'nc') and cell-centered solvers are supplied. Grids...

  12. HIRENASD Unstructured Grids - VGRID software

    Data.gov (United States)

    National Aeronautics and Space Administration — These grids were constructed using VGRID software from NASA Langley. The grids designed for node based (labeled 'nc') and cell-centered solvers are supplied. Grids...

  13. Are healthcare middle management jobs extreme jobs?

    Science.gov (United States)

    Buchanan, David A; Parry, Emma; Gascoigne, Charlotte; Moore, Cíara

    2013-01-01

    The purpose of this paper is to explore the incidence of "extreme jobs" among middle managers in acute hospitals, and to identify individual and organizational implications. The paper is based on interviews and focus groups with managers at six hospitals, a "proof of concept" pilot with an operations management team, and a survey administered at five hospitals. Six of the original dimensions of extreme jobs, identified in commercial settings, apply to hospital management: long hours, unpredictable work patterns, tight deadlines with fast pace, broad responsibility, "24/7 availability", mentoring and coaching. Six healthcare-specific dimensions were identified: making life or death decisions, conflicting priorities, being required to do more with fewer resources, responding to regulatory bodies, the need to involve many people before introducing improvements, fighting a negative climate. Around 75 per cent of hospital middle managers have extreme jobs. This extreme healthcare management job model was derived inductively from a qualitative study involving a small number of respondents. While the evidence suggests that extreme jobs are common, further research is required to assess the antecedents, incidence, and implications of these working practices. A varied, intense, fast-paced role with responsibility and long hours can be rewarding, for some. However, multi-tasking across complex roles can lead to fatigue, burnout, and mistakes, patient care may be compromised, and family life may be adversely affected. As far as the authors can ascertain, there are no other studies exploring acute sector management roles through an extreme jobs lens.

  14. Single and double grid long-range alpha detectors

    International Nuclear Information System (INIS)

    MacArthur, D.W.; Allander, K.S.

    1993-01-01

    Alpha particle detectors capable of detecting alpha radiation from distant sources. In one embodiment, a voltage is generated in a single electrically conductive grid while a fan draws air containing air molecules ionized by alpha particles through an air passage and across the conductive grid. The current in the conductive grid can be detected and used for measurement or alarm. Another embodiment builds on this concept and provides an additional grid so that air ions of both polarities can be detected. The detector can be used in many applications, such as for pipe or duct, tank, or soil sample monitoring

  15. The Czech National Grid Infrastructure

    Science.gov (United States)

    Chudoba, J.; Křenková, I.; Mulač, M.; Ruda, M.; Sitera, J.

    2017-10-01

    The Czech National Grid Infrastructure is operated by MetaCentrum, a CESNET department responsible for coordinating and managing activities related to distributed computing. CESNET, as the Czech National Research and Education Network (NREN), provides many e-infrastructure services, which are used by 94% of the scientific and research community in the Czech Republic. Computing and storage resources owned by different organizations are connected by a sufficiently fast network to provide transparent access to all resources. We describe in more detail the computing infrastructure, which is based on several different technologies and covers grid, cloud and map-reduce environments. While the largest part of the CPUs is still accessible via distributed TORQUE servers, providing an environment for long batch jobs, part of the infrastructure is available via standard EGI tools, a subset of NGI resources is provided to the EGI FedCloud environment with a cloud interface, and there is also a Hadoop cluster provided by the same e-infrastructure. A broad spectrum of computing servers is offered; users can choose from standard 2 CPU servers to large SMP machines with up to 6 TB of RAM or servers with GPU cards. Different groups have different priorities on various resources, and resource owners can even have exclusive access. The software is distributed via AFS. Storage servers offering up to tens of terabytes of disk space to individual users are connected via NFS4 on top of GPFS, and access to long-term HSM storage with petabyte capacity is also provided. An overview of available resources and recent usage statistics will be given.

  16. Wedded to the job: moderating effects of job involvement on the consequences of job insecurity.

    Science.gov (United States)

    Probst, T M

    2000-01-01

    Two hundred eighty-three public-sector employees experiencing a workplace reorganization completed surveys assessing the relationships between job involvement and job insecurity on self-report measures of psychological, behavioral, and physical outcomes. Using C. L. Hulin's (1991) job adaptation theory, differential predictions were made regarding the specific outcomes of job insecurity for high job involvement versus low job involvement employees. Results indicate that employees who were highly invested in their jobs were most adversely affected by job insecurity. Specifically, they reported more negative job attitudes, more health problems, and a higher level of psychological distress than their less involved counterparts when they perceived their jobs to be threatened.

  17. The open science grid

    International Nuclear Information System (INIS)

    Pordes, R.

    2004-01-01

    The U.S. LHC Tier-1 and Tier-2 laboratories and universities are developing production Grids to support LHC applications running across a worldwide Grid computing system. Together with partners in computer science, physics grid projects and active experiments, we will build a common national production grid infrastructure which is open in its architecture, implementation and use. The Open Science Grid (OSG) model builds upon the successful approach of last year's joint Grid2003 project. The Grid3 shared infrastructure has for over eight months provided significant computational resources and throughput to a range of applications, including ATLAS and CMS data challenges, SDSS, LIGO, and biology analyses, and computer science demonstrators and experiments. To move towards LHC-scale data management, access and analysis capabilities, we must increase the scale, services, and sustainability of the current infrastructure by an order of magnitude or more. Thus, we must achieve a significant upgrade in its functionalities and technologies. The initial OSG partners will build upon a fully usable, sustainable and robust grid. Initial partners include the US LHC collaborations, DOE and NSF Laboratories and Universities and Trillium Grid projects. The approach is to federate with other application communities in the U.S. to build a shared infrastructure open to other sciences and capable of being modified and improved to respond to needs of other applications, including CDF, D0, BaBar, and RHIC experiments. We describe the application-driven, engineered services of the OSG, short term plans and status, and the roadmap for a consortium, its partnerships and national focus

  18. Do Job Security Guarantees Work?

    OpenAIRE

    Alex Bryson; Lorenzo Cappellari; Claudio Lucifora

    2004-01-01

    We investigate the effect of employer job security guarantees on employee perceptions of job security. Using linked employer-employee data from the 1998 British Workplace Employee Relations Survey, we find job security guarantees reduce employee perceptions of job insecurity. This finding is robust to endogenous selection of job security guarantees by employers engaging in organisational change and workforce reductions. Furthermore, there is no evidence that increased job security through job...

  19. Trends in life science grid: from computing grid to knowledge grid

    Directory of Open Access Journals (Sweden)

    Konagaya Akihiko

    2006-12-01

    Full Text Available Abstract Background Grid computing has great potential to become a standard cyberinfrastructure for life sciences, which often require high-performance computing and large data handling that exceed the computing capacity of a single institution. Results This survey reviews the latest grid technologies from the viewpoints of computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput real-world life science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also around community formulation for sharing tacit knowledge within a community. Conclusion Extending the concept of the grid from computing grid to knowledge grid, it is possible to make use of a grid not only as sharable computing resources, but also as a time and place in which people work together, create knowledge, and share knowledge and experiences in a community.

  20. Trends in life science grid: from computing grid to knowledge grid.

    Science.gov (United States)

    Konagaya, Akihiko

    2006-12-18

    Grid computing has great potential to become a standard cyberinfrastructure for life sciences, which often require high-performance computing and large data handling that exceed the computing capacity of a single institution. This survey reviews the latest grid technologies from the viewpoints of computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput real-world life science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also around community formulation for sharing tacit knowledge within a community. Extending the concept of the grid from computing grid to knowledge grid, it is possible to make use of a grid not only as sharable computing resources, but also as a time and place in which people work together, create knowledge, and share knowledge and experiences in a community.

  1. Recognizing Job Safety Hazards. Module SH-09. Safety and Health.

    Science.gov (United States)

    Center for Occupational Research and Development, Inc., Waco, TX.

    This student module on recognizing job safety hazards is one of 50 modules concerned with job safety and health. This module details employee and employer responsibilities in correcting and monitoring safety hazards. Following the introduction, 10 objectives (each keyed to a page in the text) the student is expected to accomplish are listed (e.g.,…

  2. Desktop grid computing

    CERN Document Server

    Cerin, Christophe

    2012-01-01

    Desktop Grid Computing presents common techniques used in numerous models, algorithms, and tools developed during the last decade to implement desktop grid computing. These techniques enable the solution of many important sub-problems for middleware design, including scheduling, data management, security, load balancing, result certification, and fault tolerance. The book's first part covers the initial ideas and basic concepts of desktop grid computing. The second part explores challenging current and future problems. Each chapter presents the sub-problems, discusses theoretical and practical

  3. Transmission grid security

    CERN Document Server

    Haarla, Liisa; Hirvonen, Ritva; Labeau, Pierre-Etienne

    2011-01-01

    In response to the growing importance of power system security and reliability, ""Transmission Grid Security"" proposes a systematic and probabilistic approach for transmission grid security analysis. The analysis presented uses probabilistic safety assessment (PSA) and takes into account the power system dynamics after severe faults. In the method shown in this book the power system states (stable, not stable, system breakdown, etc.) are connected with the substation reliability model. In this way it is possible to: estimate the system-wide consequences of grid faults; identify a chain of eve

  4. Optimizing the calculation grid for atmospheric dispersion modelling.

    Science.gov (United States)

    Van Thielen, S; Turcanu, C; Camps, J; Keppens, R

    2015-04-01

    This paper presents three approaches to finding optimized grids for atmospheric dispersion measurements and calculations in emergency planning. This can be useful for deriving optimal positions for mobile monitoring stations, or can help to reduce discretization errors and improve recommendations. Indeed, threshold-based recommendations or conclusions may differ strongly depending on the shape and size of the grid on which atmospheric dispersion measurements or calculations of pollutants are based. Therefore, relatively sparse grids that retain as much information as possible are required. The grid optimization procedure proposed here is first demonstrated with a simple Gaussian plume model as adopted in atmospheric dispersion calculations, which provides fast calculations. The optimized grids are compared to the Noodplan grid, currently used for emergency planning in Belgium, and to the exact solution. We then demonstrate how the procedure can be used in more realistic dispersion models. Copyright © 2015 Elsevier Ltd. All rights reserved.
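
    The Gaussian plume model referenced above lends itself to a compact illustration. The sketch below is a minimal, hypothetical implementation of a ground-level Gaussian plume concentration evaluated on a candidate calculation grid; the dispersion-coefficient power laws, source parameters and grid sizes are assumptions made for the example, not the values used in the paper.

```python
import numpy as np

def plume_concentration(x, y, q=1.0, u=5.0, h=50.0):
    """Ground-level concentration (z = 0) of a Gaussian plume.

    x, y : downwind / crosswind distances in metres (arrays or scalars)
    q    : emission rate (arbitrary units/s), u : wind speed (m/s),
    h    : effective release height (m). Dispersion coefficients follow
    simple power laws chosen purely for illustration.
    """
    x = np.maximum(x, 1.0)               # avoid the singularity at the source
    sigma_y = 0.08 * x ** 0.9            # assumed crosswind spread
    sigma_z = 0.06 * x ** 0.85           # assumed vertical spread
    return (q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * np.exp(-h**2 / (2 * sigma_z**2)))

# Evaluate the plume on two candidate grids and compare the information kept:
# a dense reference grid and a sparse grid such as one might optimize.
xx, yy = np.meshgrid(np.linspace(100, 5000, 200), np.linspace(-1000, 1000, 100))
dense = plume_concentration(xx, yy)
sparse = dense[::10, ::10]               # naive subsampling of the same field
print("max concentration kept on sparse grid: %.1f%%"
      % (100 * sparse.max() / dense.max()))
```

    A real optimization would move the sparse grid points so that such loss of information is minimized, rather than subsampling naively as done here.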

  5. Job security and job satisfaction among Greek fitness instructors.

    Science.gov (United States)

    Koustelios, Athanasios; Kouli, Olga; Theodorakis, Nicholas

    2003-08-01

    In analyzing the relation between job satisfaction and job security, a sample of 97 Greek fitness instructors, 18 to 42 years of age, showed statistically significant positive correlations between job security and job satisfaction. Job security was correlated with pay (.54), promotion (.43), the job itself (.41), and the organization as a whole (.43).

  6. Job stress and job involvement of professionals and ...

    African Journals Online (AJOL)

    Job stress and job involvement of professionals and paraprofessionals in academic libraries: A case study of University of Ibadan, Nigeria and Obafemi Awolowo ... between job stress and job involvement of library professionals and paraprofessionals, no significant difference in job involvement of professionals and para ...

  7. A Computational Differential Geometry Approach to Grid Generation

    CERN Document Server

    Liseikin, Vladimir D

    2007-01-01

    The process of breaking up a physical domain into smaller sub-domains, known as meshing, facilitates the numerical solution of partial differential equations used to simulate physical systems. This monograph gives a detailed treatment of applications of geometric methods to advanced grid technology. It focuses on and describes a comprehensive approach based on the numerical solution of inverted Beltramian and diffusion equations with respect to monitor metrics for generating both structured and unstructured grids in domains and on surfaces. In this second edition the author takes a more detailed and practice-oriented approach towards explaining how to implement the method by: Employing geometric and numerical analyses of monitor metrics as the basis for developing efficient tools for controlling grid properties. Describing new grid generation codes based on finite differences for generating both structured and unstructured surface and domain grids. Providing examples of applications of the codes to the genera...
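
    The role of a monitor function in controlling grid properties can be illustrated with a one-dimensional equidistribution sketch. This is not the inverted Beltrami/diffusion formulation of the monograph, only a hypothetical toy showing how grid nodes concentrate where the monitor metric is large.

```python
import numpy as np

def equidistribute(n_nodes, monitor, domain=(0.0, 1.0), resolution=2000):
    """Place n_nodes so that the integral of the monitor function is equal
    between consecutive nodes (the 1-D equidistribution principle)."""
    a, b = domain
    x = np.linspace(a, b, resolution)
    w = monitor(x)
    # Cumulative integral of the monitor via the trapezoidal rule.
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    targets = np.linspace(0.0, cum[-1], n_nodes)
    return np.interp(targets, cum, x)     # invert the cumulative monitor

# Monitor metric chosen to be large near a steep front at x = 0.3 (an assumed example).
monitor = lambda x: 1.0 + 50.0 * np.exp(-((x - 0.3) / 0.02) ** 2)
grid = equidistribute(21, monitor)
print(np.round(grid, 3))                  # nodes cluster around x = 0.3
```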

  8. Monitoring of services with non-relational databases and map-reduce framework

    International Nuclear Information System (INIS)

    Babik, M; Souto, F

    2012-01-01

    Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their existing Oracle relational database. We investigated the usability and performance of non-relational storage together with its distributed data processing capabilities. For this, several popular systems have been compared. In this contribution we describe our investigation of the existing non-relational databases suited for monitoring systems covering Cassandra, HBase and MongoDB. Further, we present our experiences in data modeling and prototyping map-reduce algorithms focusing on the extension of the already existing availability and reliability computations. Finally, possible future directions in this area are discussed, analyzing the current deficiencies of the existing Grid monitoring systems and proposing solutions to leverage the benefits of the non-relational databases to get more scalable and flexible frameworks.
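
    As a rough illustration of the kind of map-reduce aggregation mentioned above, the sketch below computes per-site availability from raw probe results. The record layout and the availability definition are assumptions made for the example; they are not SAM's actual schema or algorithm.

```python
from collections import defaultdict

# Hypothetical raw SWAT-style records: one dict per probe executed on a worker node.
records = [
    {"site": "SITE-A", "status": "OK"},
    {"site": "SITE-A", "status": "CRITICAL"},
    {"site": "SITE-B", "status": "OK"},
    {"site": "SITE-B", "status": "OK"},
]

def map_phase(record):
    """Emit (key, value) pairs: one 'ok' count and one 'total' count per site."""
    ok = 1 if record["status"] == "OK" else 0
    yield record["site"], (ok, 1)

def reduce_phase(key, values):
    """Sum the partial counts for one site and derive its availability."""
    ok = sum(v[0] for v in values)
    total = sum(v[1] for v in values)
    return key, ok / total

# A tiny in-memory shuffle standing in for the distributed framework.
shuffled = defaultdict(list)
for record in records:
    for key, value in map_phase(record):
        shuffled[key].append(value)

for site, values in shuffled.items():
    print(reduce_phase(site, values))     # e.g. ('SITE-A', 0.5)
```

    In a non-relational store such as HBase or MongoDB the map and reduce functions would run close to the data, so the raw measurements can be kept and re-processed rather than only storing pre-aggregated values.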

  9. Does Technology Create Jobs?

    OpenAIRE

    Henderson, David R.; Krugman, Paul

    1997-01-01

    Two leading economists, MIT's Paul Krugman and the Hoover Institution's David R. Henderson, debate whether jobs lost to technology are met by a net increase in jobs elsewhere in a more productive economy. Krugman, a noted liberal, says maybe in the long run, but for now ordinary workers see their wages falling. Henderson, a conservative, says that the problem is not the elimination of jobs through technology but a workforce with inadequate skills.

  10. Declining job security

    OpenAIRE

    Robert G. Valletta

    1998-01-01

    Although common belief and recent evidence point to a decline in "job security," the academic literature to date has been noticeably silent regarding the behavioral underpinnings of declining job security. In this paper, I define job security in the context of implicit contracts designed to overcome incentive problems in the employment relationship. Contracts of this nature imply the possibility of inefficient separations in response to adverse shocks, and they generate predictions concerning...

  11. Quality Assurance Framework for Mini-Grids

    Energy Technology Data Exchange (ETDEWEB)

    Baring-Gould, Ian [National Renewable Energy Lab. (NREL), Golden, CO (United States); Burman, Kari [National Renewable Energy Lab. (NREL), Golden, CO (United States); Singh, Mohit [National Renewable Energy Lab. (NREL), Golden, CO (United States); Esterly, Sean [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mutiso, Rose [US Department of Energy, Washington, DC (United States); McGregor, Caroline [US Department of Energy, Washington, DC (United States)

    2016-11-01

    Providing clean and affordable energy services to the more than 1 billion people globally who lack access to electricity is a critical driver for poverty reduction, economic development, improved health, and social outcomes. More than 84% of populations without electricity are located in rural areas where traditional grid extension may not be cost-effective; therefore, distributed energy solutions such as mini-grids are critical. To address some of the root challenges of providing safe, quality, and financially viable mini-grid power systems to remote customers, the U.S. Department of Energy (DOE) teamed with the National Renewable Energy Laboratory (NREL) to develop a Quality Assurance Framework (QAF) for isolated mini-grids. The QAF for mini-grids aims to address some root challenges of providing safe, quality, and affordable power to remote customers via financially viable mini-grids through two key components: (1) Levels of service: Defines a standard set of tiers of end-user service and links them to technical parameters of power quality, power availability, and power reliability. These levels of service span the entire energy ladder, from basic energy service to high-quality, high-reliability, and high-availability service (often considered 'grid parity'); (2) Accountability and performance reporting framework: Provides a clear process of validating power delivery by providing trusted information to customers, funders, and/or regulators. The performance reporting protocol can also serve as a robust monitoring and evaluation tool for mini-grid operators and funding organizations. The QAF will provide a flexible alternative to rigid top-down standards for mini-grids in energy access contexts, outlining tiers of end-user service and linking them to relevant technical parameters. In addition, data generated through implementation of the QAF will provide the foundation for comparisons across projects, assessment of impacts, and greater confidence that
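
    To make the "levels of service" idea concrete, here is a small sketch that maps measured power availability and reliability onto a service tier. The thresholds and tier names are invented for illustration and are not the QAF's actual parameters.

```python
# Hypothetical tier definitions: (tier name, minimum hours of service per day,
# maximum outages per week). Real QAF parameters differ.
TIERS = [
    ("Tier 1 - basic", 4, 14),
    ("Tier 2 - standard", 8, 7),
    ("Tier 3 - extended", 16, 3),
    ("Tier 4 - near grid parity", 23, 1),
]

def classify_service(hours_per_day: float, outages_per_week: int) -> str:
    """Return the highest tier whose availability and reliability
    requirements are both met by the measured values."""
    best = "Below Tier 1"
    for name, min_hours, max_outages in TIERS:
        if hours_per_day >= min_hours and outages_per_week <= max_outages:
            best = name
    return best

print(classify_service(hours_per_day=18.5, outages_per_week=2))  # Tier 3 - extended
```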

  12. Solid State Grid Modulator

    National Research Council Canada - National Science Library

    Jones, Franklin

    2001-01-01

    This program was for the design, construction and test of two Solid State Grid Modulators to provide enhanced performance and improved reliability in existing S-band radar transmitters at the Rome Research Site...

  13. Controlling smart grid adaptivity

    NARCIS (Netherlands)

    Toersche, Hermen; Nykamp, Stefan; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2012-01-01

    Methods are discussed for planning oriented smart grid control to cope with scenarios with limited predictability, supporting an increasing penetration of stochastic renewable resources. The performance of these methods is evaluated with simulations using measured wind generation and consumption

  14. Meet the Grid

    CERN Multimedia

    Yurkewicz, Katie

    2005-01-01

    Today's cutting-edge scientific projects are larger, more complex, and more expensive than ever. Grid computing provides the resources that allow researchers to share knowledge, data, and computer processing power across boundaries

  15. Spacer grid corner gusset

    International Nuclear Information System (INIS)

    Larson, J.G.

    1984-01-01

    There is provided a spacer grid for a bundle of longitudinally extending rods in spaced generally parallel relationship comprising spacing means for holding the rods in spaced generally parallel relationship; the spacing means includes at least one exterior grid strip circumscribing the bundle of rods along the periphery thereof; with at least one exterior grid strip having a first edge defining the boundary of the strip in one longitudinal direction and a second edge defining the boundary of the strip in the other longitudinal direction; with at least one exterior grid strip having at least one band formed therein parallel to the longitudinal direction; a plurality of corner gussets truncating each of a plurality of corners formed by at least one band and the first edge and the second edge

  16. World Wide Grid

    CERN Multimedia

    Grätzel von Grätz, Philipp

    2007-01-01

    Whether for genetic risk analysis or 3D reconstruction of the cerebral vessels, modern medicine requires ever more computing power. With a grid infrastructure, this computing power can be called upon from the network when needed. (4 pages)

  17. Lincoln Laboratory Grid

    Data.gov (United States)

    Federal Laboratory Consortium — The Lincoln Laboratory Grid (LLGrid) is an interactive, on-demand parallel computing system that uses a large computing cluster to enable Laboratory researchers to...

  18. US National Grid

    Data.gov (United States)

    Kansas Data Access and Support Center — This is a polygon feature data layer of United States National Grid (1000m x 1000m polygons ) constructed by the Center for Interdisciplinary Geospatial Information...

  19. Smart grids - French Expertise

    International Nuclear Information System (INIS)

    2015-11-01

    The adaptation of electrical systems is the focus of major work worldwide. Bringing electricity to new territories, modernizing existing electricity grids, implementing energy efficiency policies and deploying renewable energies, developing new uses for electricity, introducing electric vehicles - these are the challenges facing a multitude of regions and countries. Smart Grids are the result of the convergence of electrical systems technologies with information and communications technologies. They play a key role in addressing the above challenges. Smart Grid development is a major priority for both public and private-sector actors in France. The experience of French companies has grown with the current French electricity system, a system that already shows extensive levels of 'intelligence', efficiency and competitiveness. French expertise also leverages substantial competence in terms of 'systems engineering', and can provide a tailored response to meet all sorts of needs. French products and services span all the technical and commercial building blocks that make up the Smart Grid value chain. They address the following issues: Improving the use and valuation of renewable energies and decentralized means of production, by optimizing the balance between generation and consumption. Strengthening the intelligence of the transmission and distribution grids: developing 'Supergrid', digitizing substations in transmission networks, and automating the distribution grids are the focus of a great many projects designed to reinforce the 'self-healing' capacity of the grid. Improving the valuation of decentralized flexibilities: this involves, among others, deploying smart meters, reinforcing active energy efficiency measures, and boosting consumers' contribution to grid balancing, via practices such as demand response which implies the aggregation of flexibility among residential, business, and/or industrial sites. Addressing

  20. Thermal Anemometry Grid Sensor

    OpenAIRE

    Arlit, Martin; Schleicher, Eckhard; Hampel, Uwe

    2017-01-01

    A novel thermal anemometry grid sensor was developed for the simultaneous measurement of cross-sectional temperature and axial velocity distribution in a fluid flow. The sensor consists of a set of platinum resistors arranged in a regular grid. Each platinum resistor allows the simultaneous measurement of fluid temperature via electrical resistance and flow velocity via constant voltage thermal anemometry. Cross-sectional measurement was enabled by applying a special multiplexing-excitation s...
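
    The two measurements made at each grid point can be sketched as follows: temperature from the platinum resistance via a linearized temperature coefficient, and velocity from the anemometry heat loss via a King's-law style calibration. All coefficients below are assumed placeholders, not the calibration of the actual sensor.

```python
def pt_temperature(r_measured, r0=100.0, alpha=3.85e-3):
    """Temperature in deg C from a Pt resistor, linear approximation
    R = R0 * (1 + alpha * T). r0 and alpha are nominal Pt100 values."""
    return (r_measured / r0 - 1.0) / alpha

def velocity_from_heat_loss(power_w, t_wire, t_fluid, a=2e-4, b=1e-4, n=0.5):
    """Invert a King's-law style calibration P / (Tw - Tf) = a + b * v**n.
    a, b and n are assumed calibration constants."""
    q = power_w / (t_wire - t_fluid)
    return max((q - a) / b, 0.0) ** (1.0 / n)

t_fluid = pt_temperature(107.7)                   # roughly 20 deg C for a Pt100
v = velocity_from_heat_loss(0.02, t_wire=60.0, t_fluid=t_fluid)
print(f"fluid temperature {t_fluid:.1f} C, axial velocity {v:.2f} m/s")
```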

  1. A framework supporting the development of a Grid portal for analysis based on ROI.

    Science.gov (United States)

    Ichikawa, K; Date, S; Kaishima, T; Shimojo, S

    2005-01-01

    In our research on brain function analysis, users require two different simultaneous types of processing: interactive processing of a specific part of the data and high-performance batch processing of an entire dataset. The difference between these two types of processing lies in whether or not the analysis concerns data in the region of interest (ROI). In this study, we propose a Grid portal that has a mechanism to freely assign computing resources to users in a Grid environment according to these two types of processing requirements. We constructed a Grid portal which integrates interactive processing and batch processing through the following two mechanisms. First, a job steering mechanism controls job execution based on user-tagged priority among organizations with heterogeneous computing resources; interactive jobs are processed in preference to batch jobs by this mechanism. Second, a priority-based result delivery mechanism administrates a ranking of data significance. The portal ensures the turn-around time of interactive processing through the priority-based job control mechanism, and provides users with quality of service (QoS) for interactive processing. Users can access the analysis results of interactive jobs in preference to the analysis results of batch jobs. The Grid portal has also achieved high-performance computation of MEG analysis with batch processing on the Grid environment. The priority-based job control mechanism makes it possible to freely assign computing resources according to the users' requirements. Furthermore, the achievement of high-performance computation contributes greatly to the overall progress of brain science. The portal has thus made it possible for users to flexibly bring large computational power to bear on what they want to analyze.
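
    The steering policy described above, in which interactive (ROI) jobs are dispatched before batch jobs, can be sketched with a simple priority queue. This is a minimal illustration of the scheduling idea, not the portal's actual implementation.

```python
import heapq
import itertools

class JobSteering:
    """Dispatch jobs so that interactive (ROI) jobs always run before batch jobs."""
    PRIORITY = {"interactive": 0, "batch": 1}   # lower value = served first

    def __init__(self):
        self._queue = []
        self._order = itertools.count()          # tie-breaker keeps FIFO order

    def submit(self, name, kind):
        heapq.heappush(self._queue, (self.PRIORITY[kind], next(self._order), name))

    def next_job(self):
        return heapq.heappop(self._queue)[2] if self._queue else None

steering = JobSteering()
steering.submit("whole-dataset MEG analysis", "batch")
steering.submit("ROI analysis for user A", "interactive")
steering.submit("ROI analysis for user B", "interactive")
while (job := steering.next_job()) is not None:
    print("dispatching:", job)   # both ROI jobs are dispatched before the batch job
```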

  2. Near-Body Grid Adaption for Overset Grids

    Science.gov (United States)

    Buning, Pieter G.; Pulliam, Thomas H.

    2016-01-01

    A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.

  3. The Relationship between Job Involvement, Job Satisfaction and Organizational Factors.

    Science.gov (United States)

    Porat, A. Ben

    1979-01-01

    The relationship between job involvement and satisfaction in white collar employees of an industrial organization in Israel was studied. Job involvement was related significantly to job satisfaction; however, the relationship was mediated by organizational factors. (Author/BEF)

  4. Life and Job Satisfaction: Is the Job Central?

    Science.gov (United States)

    Schmitt, Neal; Mellon, Phyllis M.

    1980-01-01

    The nature of the causal relationship between life and job satisfaction in males and females working in a variety of jobs was investigated. Results suggest that the life satisfaction causes job satisfaction hypothesis is more tenable than the reverse. (Author)

  5. The Benefits of Grid Networks

    Science.gov (United States)

    Tennant, Roy

    2005-01-01

    In the article, the author talks about the benefits of grid networks. In speaking of grid networks the author is referring to both networks of computers and networks of humans connected together in a grid topology. Examples are provided of how grid networks are beneficial today and the ways in which they have been used.

  6. Job characteristics as determinants of job satisfaction and labour mobility

    OpenAIRE

    Cornelißen, Thomas

    2006-01-01

    This paper investigates the effects of detailed job characteristics on job satisfaction, job search and quits using data from the German Socio-Economic Panel (GSOEP) in a fixed effects framework. Using a factor analysis, seventeen job characteristics are reduced to seven factors that describe different aspects of a job, which are qualified as status, physical strain, autonomy, advancement opportunities, social relations at the work place, work time and job security. The effects of these facto...

  7. Job characteristics: their relationship to job satisfaction, stress and depression

    OpenAIRE

    Steyn, Renier; Vawda, Naseema

    2014-01-01

    This study investigated the influences of job characteristics on job satisfaction, stress and depression among South African white collar workers. Participants were managers in full-time employment with large organisations. They completed the Job Diagnostic Survey, the Perceived Stress Scale and the Beck Depression Inventory. A regression approach was used to predict job satisfaction, stress and depression from job characteristics. Job characteristics (skill variety, task identity, task signi...

  8. Smart Grid Integration Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Troxell, Wade [Colorado State Univ., Fort Collins, CO (United States)

    2011-12-22

    The initial federal funding for the Colorado State University Smart Grid Integration Laboratory is through a Congressionally Directed Project (CDP), DE-OE0000070 Smart Grid Integration Laboratory. The original program requested funding in three one-year increments for staff acquisition, curriculum development, and instrumentation, all of which will benefit the Laboratory. This report focuses on the initial phase of staff acquisition, which was directed and administered by DOE NETL/West Virginia under Project Officer Tom George. Using this CDP funding, we have developed the leadership and intellectual capacity for the SGIC. This was accomplished by investing in (hiring) a core team of Smart Grid systems engineering faculty focused on education, research, and innovation for a secure and smart grid infrastructure. The Smart Grid Integration Laboratory will be housed with the separately funded Integrid Laboratory as part of CSU's overall Smart Grid Integration Center (SGIC). The period of performance of this grant was 10/1/2009 to 9/30/2011, which included one no-cost extension due to time delays in faculty hiring. The Smart Grid Integration Laboratory's focus is to build foundations that help graduate and undergraduate students acquire systems engineering knowledge, conduct innovative research, and team externally with smart grid organizations. The results of the separately funded Smart Grid Workforce Education Workshop (May 2009), sponsored by the City of Fort Collins, Northern Colorado Clean Energy Cluster, Colorado State University Continuing Education, Spirae, and Siemens, have been used to guide the hiring of faculty, the program curriculum and the education plan. This project develops faculty leaders with the intellectual capacity to inspire their students to become leaders who substantially contribute to the development and maintenance of Smart Grid infrastructure through topics such as: (1) Distributed energy systems modeling and control; (2) Energy and power conversion; (3

  9. Monitoring ARC services with GangliARC

    International Nuclear Information System (INIS)

    Cameron, D; Karpenko, D

    2012-01-01

    Monitoring of Grid services is essential to provide a smooth experience for users and to provide fast, easy-to-understand diagnostics for administrators running the services. GangliARC makes use of the widely used Ganglia monitoring tool to present web-based graphical metrics of the ARC computing element. These include statistics of running and finished jobs and data transfer metrics, as well as showing the availability of the computing element and hardware information such as the free disk space left in the ARC cache. Ganglia presents metrics as graphs of the value of the metric over time and shows an easily digestible summary of how the system is performing, and enables quick and easy diagnosis of common problems. This paper describes how GangliARC works and shows numerous examples of how the generated data can quickly be used by an administrator to investigate problems. It also presents possibilities of combining GangliARC with other commonly used monitoring tools such as Nagios to easily integrate ARC monitoring into the regular monitoring infrastructure of any site or computing centre.
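
    As an illustration of how a custom ARC metric could be pushed into Ganglia, the sketch below shells out to the standard gmetric command-line client with a free-cache-space value. The metric name, cache path and collection method are assumptions for the example; GangliARC's own collection code may differ.

```python
import shutil
import subprocess

def publish_cache_free_gb(cache_path="/var/spool/arc/cache"):
    """Publish the free space of the ARC cache directory as a Ganglia metric.

    Assumes the Ganglia `gmetric` client is installed on the node; the metric
    name `arc_cache_free_gb` is just an example.
    """
    free_gb = shutil.disk_usage(cache_path).free / 1e9
    subprocess.run(
        ["gmetric",
         "--name", "arc_cache_free_gb",
         "--value", f"{free_gb:.1f}",
         "--type", "float",
         "--units", "GB"],
        check=True,
    )

if __name__ == "__main__":
    publish_cache_free_gb()
```

    Run periodically (for example from cron), such a script would make the value appear as a time-series graph alongside the metrics Ganglia already collects.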

  10. WNoDeS, a tool for integrated Grid and Cloud access and computing farm virtualization

    International Nuclear Information System (INIS)

    Salomoni, Davide; Italiano, Alessandro; Ronchieri, Elisabetta

    2011-01-01

    INFN CNAF is the National Computing Center, located in Bologna, Italy, of the Italian National Institute for Nuclear Physics (INFN). INFN CNAF, also called the INFN Tier-1, provides computing and storage facilities to the International High-Energy Physics community and to several multi-disciplinary experiments. Currently, the INFN Tier-1 supports more than twenty different collaborations; in this context, optimization of the usage of computing resources is essential. This is one of the main drivers behind the development of a software called WNoDeS (Worker Nodes on Demand Service). WNoDeS, developed at INFN CNAF and deployed on the INFN Tier-1 production infrastructure, is a solution to virtualize computing resources and to make them available through local, Grid or Cloud interfaces. It is designed to be fully integrated with a Local Resource Management System; it is therefore inherently scalable and permits full integration with existing scheduling, policing, monitoring, accounting and security workflows. WNoDeS dynamically instantiates Virtual Machines (VMs) on-demand, i.e. only when the need arises; these VMs can be tailored and used for purposes like batch job execution, interactive analysis or service instantiation. WNoDeS supports interaction with user requests through traditional batch or Grid jobs and also via the Open Cloud Computing Interface standard, making it possible to allocate compute, storage and network resources on a pay-as-you-go basis. User authentication is supported via several authentication methods, while authorization policies are handled via gLite Argus. WNoDeS is an ambitious solution aimed at virtualizing cluster resources in medium or large scale computing centers, with up to several thousands of Virtual Machines up and running at any given time. In this paper, we describe the WNoDeS architecture.

  11. WNoDeS, a tool for integrated Grid and Cloud access and computing farm virtualization

    Science.gov (United States)

    Salomoni, Davide; Italiano, Alessandro; Ronchieri, Elisabetta

    2011-12-01

    INFN CNAF is the National Computing Center, located in Bologna, Italy, of the Italian National Institute for Nuclear Physics (INFN). INFN CNAF, also called the INFN Tier-1, provides computing and storage facilities to the International High-Energy Physics community and to several multi-disciplinary experiments. Currently, the INFN Tier-1 supports more than twenty different collaborations; in this context, optimization of the usage of computing resources is essential. This is one of the main drivers behind the development of a software called WNoDeS (Worker Nodes on Demand Service). WNoDeS, developed at INFN CNAF and deployed on the INFN Tier-1 production infrastructure, is a solution to virtualize computing resources and to make them available through local, Grid or Cloud interfaces. It is designed to be fully integrated with a Local Resource Management System; it is therefore inherently scalable and permits full integration with existing scheduling, policing, monitoring, accounting and security workflows. WNoDeS dynamically instantiates Virtual Machines (VMs) on-demand, i.e. only when the need arises; these VMs can be tailored and used for purposes like batch job execution, interactive analysis or service instantiation. WNoDeS supports interaction with user requests through traditional batch or Grid jobs and also via the Open Cloud Computing Interface standard, making it possible to allocate compute, storage and network resources on a pay-as-you-go basis. User authentication is supported via several authentication methods, while authorization policies are handled via gLite Argus. WNoDeS is an ambitious solution aimed at virtualizing cluster resources in medium or large scale computing centers, with up to several thousands of Virtual Machines up and running at any given time. In this paper, we describe the WNoDeS architecture.

  12. Job Instruction Training.

    Science.gov (United States)

    Pfau, Richard H.

    Job Instruction Training (JIT) is a step-by-step, relatively simple technique used to train employees on the job. It is especially suitable for teaching manual skills or procedures; the trainer is usually an employee's supervisor but can be a co-worker. The JIT technique consists of a series of steps that a supervisor or other instructor follows…

  13. Stress Management: Job Stress

    Science.gov (United States)

    Job stress can be all-consuming — but it doesn't have to be. Address your triggers, keep perspective and know when ... effects of stress at work. Effectively coping with job stress can benefit both your professional and personal ...

  14. Branding McJobs

    DEFF Research Database (Denmark)

    Noppeney, Claus; Endrissat, Nada; Kärreman, Dan

    Traditionally, employer branding has been considered relevant for knowledge intensive firms that compete in a ‘war for talent’. However, the continuous rise in service sector jobs and the negative image of these so-called McJobs has motivated a trend in rebranding service work. Building on critical...

  15. GridOrbit public display

    DEFF Research Database (Denmark)

    Ramos, Juan David Hincapie; Tabard, Aurélien; Bardram, Jakob

    2010-01-01

    We introduce GridOrbit, a public awareness display that visualizes the activity of a community grid used in a biology laboratory. This community grid executes bioinformatics algorithms and relies on users to donate CPU cycles to the grid. The goal of GridOrbit is to create a shared awareness about...... people comment on projects. Our work explores the usage of interactive technologies as enablers for the appropriation of an otherwise invisible infrastructure....

  16. Grid Enabled Geospatial Catalogue Web Service

    Science.gov (United States)

    Chen, Ai-Jun; Di, Li-Ping; Wei, Ya-Xing; Liu, Yang; Bui, Yu-Qi; Hu, Chau-Min; Mehrotra, Piyush

    2004-01-01

    Geospatial Catalogue Web Service is a vital service for sharing and interoperating volumes of distributed heterogeneous geospatial resources, such as data, services, applications, and their replicas over the web. Based on Grid technology and the Open Geospatial Consortium (OGC) Catalogue Service - Web Information Model, this paper proposes a new information model for the Geospatial Catalogue Web Service, named GCWS, which can securely provide Grid-based publishing, managing and querying of geospatial data and services, and transparent access to replica data and related services under the Grid environment. This information model integrates the information model of the Grid Replica Location Service (RLS)/Monitoring & Discovery Service (MDS) with the information model of the OGC Catalogue Service (CSW), and refers to the geospatial data metadata standards from ISO 19115, FGDC and NASA EOS Core System and the service metadata standards from ISO 19119 to extend itself for expressing geospatial resources. Using GCWS, any valid geospatial user who belongs to an authorized Virtual Organization (VO) can securely publish and manage geospatial resources, and especially query on-demand data in the virtual community and retrieve it through the data-related services, which provide functions such as subsetting, reformatting and reprojection. This work facilitates geospatial resource sharing and interoperation under the Grid environment, making geospatial resources Grid-enabled and Grid technologies geospatial-enabled. It also lets researchers focus on science, and not on issues with computing capacity, data location, processing and management. GCWS is also a key component for workflow-based virtual geospatial data production.

  17. Low Job Satisfaction Among Physicians in Egypt

    Directory of Open Access Journals (Sweden)

    Amira Gamal Abdel-Rahman

    2008-04-01

    Full Text Available AIM/BACKGROUND: Physicians' job satisfaction is a cornerstone for improving the quality of health care and its continuity. The aim was to identify the extent of job satisfaction and explain its main components among physicians, together with finding the main indicators of job satisfaction. METHODS: We randomly selected physicians from Egyptian Ministry of Health and Population hospitals. All participants were asked to fill in a self-administered questionnaire which included data pertaining to socio-demographic characteristics and job satisfaction regarding salaries/incentives, monitoring, administration system, management, career satisfaction, relationship with colleagues, social support, opportunities for promotion, and job responsibilities. Satisfied was defined as satisfaction of >60%. RESULTS: Two hundred and thirty-eight physicians participated in this study, with a mean age of 37.1 ± 9.4 years; 70.2% were male. Only 42.9% of the physicians reported job satisfaction. Relationship with colleagues was the most important component of satisfaction, with a mean of 81.3 ± 19.6, while salaries/incentives were the least important, with a mean of 16.2 ± 14. The overall current satisfying domains were not significantly associated with marital status or educational level, but were significantly associated with specialty. Neither age nor gender was significantly associated with the degree of job satisfaction. CONCLUSION: Our results call for paying more attention to improving physicians' job satisfaction in Egypt, to meet the higher standards needed in health care. [TAF Prev Med Bull 2008; 7(2): 91-96]

  18. Monitoring of services with non-relational databases and map-reduce framework

    CERN Document Server

    Babik, M; CERN. Geneva. IT Department

    2012-01-01

    Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their exi...

  19. Job demands-resources model

    OpenAIRE

    Bakker, Arnold; Demerouti, Eva

    2013-01-01

    The question of what causes job stress and what motivates people has received a lot of research attention during the past five decades. In this paper, we discuss Job Demands-Resources (JD-R) theory, which represents an extension of the Job Demands-Resources model (Bakker & Demerouti, 2007; Demerouti, Bakker, Nachreiner, & Schaufeli, 2001) and is inspired by job design and job stress theories. JD-R theory explains how job demands and resources have unique and multiplicative e...

  20. Job Security as an Endogenous Job Characteristic

    DEFF Research Database (Denmark)

    Jahn, Elke; Wagner, Thomas

    This paper develops a hedonic model of job security (JS). Workers with heterogeneous JS-preferences pay the hedonic price for JS to employers, who incur labor-hoarding costs from supplying JS. In contrast to the Wage-Bill Argument, equilibrium unemployment is strictly positive, as workers with w...

  1. Job Security as an Endogenous Job Characteristic

    DEFF Research Database (Denmark)

    Jahn, Elke; Wagner, Thomas

    This paper develops a hedonic model of job security (JS). Workers with heterogeneous JS-preferences pay the hedonic price for JS to employers, who incur labor-hoarding costs from supplying JS. In contrast to the Wage-Bill Argument, equilibrium unemployment is strictly positive, as workers with we...

  2. Job Security as an Endogenous Job Characteristic

    DEFF Research Database (Denmark)

    Jahn, Elke; Wagner, Thomas

    2008-01-01

    This paper develops a hedonic model of job security (JS). Workers with heterogeneous JSpreferences pay the hedonic price for JS to employers, who incur labor-hoarding costs from supplying JS. In contrast to the Wage-Bill Argument, equilibrium unemployment is strictly positive, as workers with wea...

  3. Grid Oriented Implementation of the Tephra Model

    Science.gov (United States)

    Coltelli, M.; D'Agostino, M.; Drago, A.; Pistagna, F.; Prestifilippo, M.; Reitano, D.; Scollo, S.; Spata, G.

    2009-04-01

    TEPHRA is a two-dimensional advection-diffusion model implemented by Bonadonna et al. [2005] that describes the sedimentation process of particles from volcanic plumes. The model is used by INGV - Istituto Nazionale di Geofisica e Vulcanologia, Sezione di Catania, to forecast tephra dispersion during Etna volcanic events. Every day, weather forecasts provided by the Italian Air Force Meteorological Office in Rome and by the hydrometeorological service of ARPA in Emilia Romagna are processed by the TEPHRA model, together with other volcanological parameters, to simulate two different eruptive scenarios of Mt. Etna (corresponding to the 1998 and 2002-03 Etna eruptions). The model outputs are plotted on maps and transferred to Civil Protection, which issues public warnings and plans mitigation measures. The TEPHRA model is implemented in ANSI C using MPI to maximize parallel computation. Currently the model runs on an INGV Beowulf cluster. In order to provide better performance we worked on porting it to the PI2S2 Sicilian grid infrastructure within the "PI2S2 Project" (2006-2008). We configured the application to run on the grid using the gLite middleware, analyzed the obtained performance and compared it with that obtained on the local cluster. As TEPHRA needs to run in a short time in order to quickly transfer the dispersion maps to Civil Protection, we also worked to minimize and stabilize grid job-scheduling time by using a customized high-priority queue called the Emergency Queue.
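
    To give a flavour of the computation that such a code parallelizes, here is a hypothetical, single-process sketch of one explicit time step of a 2-D advection-diffusion update for a particle-concentration field. The numerics (first-order upwind advection, centred diffusion) and all parameters are illustrative only and do not reproduce the TEPHRA code.

```python
import numpy as np

def advect_diffuse_step(c, u, v, d, dx, dt):
    """One explicit step of dc/dt + u dc/dx + v dc/dy = d * laplacian(c).

    c    : 2-D concentration field
    u, v : constant wind components (m/s), assumed non-negative here
    d    : diffusion coefficient (m^2/s), dx : grid spacing, dt : time step
    """
    # First-order upwind differences for advection (valid for u, v >= 0).
    dcdx = (c - np.roll(c, 1, axis=1)) / dx
    dcdy = (c - np.roll(c, 1, axis=0)) / dx
    # Centred second differences for diffusion.
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
    return c + dt * (-u * dcdx - v * dcdy + d * lap)

# Small demo: a point release advected downwind while it spreads.
c = np.zeros((100, 100))
c[50, 10] = 1.0
for _ in range(200):
    c = advect_diffuse_step(c, u=5.0, v=0.0, d=10.0, dx=100.0, dt=5.0)
print("plume centre has moved to column", int(np.argmax(c.max(axis=0))))
```

    In the real application many such steps, for many particle classes, are distributed over MPI ranks, which is why job-scheduling latency on the grid matters so much for timely delivery of the maps.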

  4. FIFE-Jobsub: a grid submission system for intensity frontier experiments at Fermilab

    Science.gov (United States)

    Box, Dennis

    2014-06-01

    The Fermilab Intensity Frontier Experiments use an integrated submission system known as FIFE-jobsub, part of the FIFE (Fabric for Frontier Experiments) initiative, to submit batch jobs to the Open Science Grid. FIFE-jobsub eases the burden on experimenters by integrating data transfer and site selection details in an easy to use and well-documented format. FIFE-jobsub automates tedious details of maintaining grid proxies for the lifetime of the grid job. Data transfer is handled using the Intensity Frontier Data Handling Client (IFDHC) [1] tool suite, which facilitates selecting the appropriate data transfer method from many possibilities while protecting shared resources from overload. Chaining of job dependencies into Directed Acyclic Graphs (Condor DAGS) is well supported and made easier through the use of input flags and parameters.

  5. FIFE-Jobsub: a grid submission system for intensity frontier experiments at Fermilab

    International Nuclear Information System (INIS)

    Box, Dennis

    2014-01-01

    The Fermilab Intensity Frontier Experiments use an integrated submission system known as FIFE-jobsub, part of the FIFE (Fabric for Frontier Experiments) initiative, to submit batch jobs to the Open Science Grid. FIFE-jobsub eases the burden on experimenters by integrating data transfer and site selection details in an easy to use and well-documented format. FIFE-jobsub automates tedious details of maintaining grid proxies for the lifetime of the grid job. Data transfer is handled using the Intensity Frontier Data Handling Client (IFDHC) [1] tool suite, which facilitates selecting the appropriate data transfer method from many possibilities while protecting shared resources from overload. Chaining of job dependencies into Directed Acyclic Graphs (Condor DAGS) is well supported and made easier through the use of input flags and parameters.

  6. For smart electric grids

    International Nuclear Information System (INIS)

    Tran Thiet, Jean-Paul; Leger, Sebastien; Bressand, Florian; Perez, Yannick; Bacha, Seddik; Laurent, Daniel; Perrin, Marion

    2012-01-01

    The authors identify and discuss the main challenges faced by the French electric grid: the management of electricity demand and the needed improvement of energy efficiency, the evolution of consumers' attitudes, and the integration of new production capacities. They notably point out that France has until recently lived with an abundance of electricity, but now faces the highest consumption peaks in Europe and is therefore exposed to a higher risk of power cuts. They also note that the French energy mix is slowly evolving, and outline the problems raised by the fact that the renewable energies to be developed are decentralised and intermittent. They give an overview of current smart grid developments, outline their innovative characteristics and the challenges raised by their development, and compare international examples. They show that smart grids enable a better-adapted supply and decentralisation. A set of proposals is formulated on how to finance and organise the reconfiguration of electric grids, how to increase consumers' responsibility for peak management and demand management, how to create the conditions for the emergence of a European market for smart grids, and how to support self-consumption and the build-up of an energy storage sector.

  7. Grid and Entrepreneurship Workshop

    CERN Multimedia

    2006-01-01

    The CERN openlab is organising a special workshop about Grid opportunities for entrepreneurship. This one-day event will provide an overview of what is involved in spin-off technology, with a special reference to the context of computing and data Grids. Lectures by experienced entrepreneurs will introduce the key concepts of entrepreneurship and review, in particular, the industrial potential of EGEE (the EU co-funded Enabling Grids for E-sciencE project, led by CERN). Case studies will be given by CEOs of European start-ups already active in the Grid and computing cluster area, and regional experts will provide an overview of efforts in several European regions to stimulate entrepreneurship. This workshop is designed to encourage students and researchers involved or interested in Grid technology to consider the entrepreneurial opportunities that this technology may create in the coming years. This workshop is organized as part of the CERN openlab student programme, which is co-sponsored by CERN, HP, ...

  8. A History-based Estimation for LHCb job requirements

    CERN Document Server

    Rauschmayr, Nathalie

    2015-01-01

    The main goal of a Workload Management System (WMS) is to find and allocate resources for the given tasks. The more and better job information the WMS receives, the easier it will be for it to accomplish this task, which directly translates into higher utilization of resources. Traditionally, the information associated with each job, like expected runtime, is defined beforehand by the Production Manager in the best case, and set to fixed arbitrary values by default. In the case of LHCb's Workload Management System no mechanisms are provided which automate the estimation of job requirements. As a result, much more CPU time is normally requested than actually needed. In particular, in the context of multicore jobs this presents a major problem, since single- and multicore jobs shall share the same resources. Consequently, grid sites need to rely on estimations given by the VOs in order not to decrease the utilization of their worker nodes when making multicore job slots available. The main reason for going to multicore jobs is the red...

  9. Security Challenges in Smart-Grid Metering and Control Systems

    Directory of Open Access Journals (Sweden)

    Xinxin Fan

    2013-07-01

    Full Text Available The smart grid is a next-generation power system that is increasingly attracting the attention of government, industry, and academia. It is an upgraded electricity network that depends on two-way digital communications between supplier and consumer that in turn give support to intelligent metering and monitoring systems. Considering that energy utilities play an increasingly important role in our daily life, smart-grid technology introduces new security challenges that must be addressed. Deploying a smart grid without adequate security might result in serious consequences such as grid instability, utility fraud, and loss of user information and energy-consumption data. Due to the heterogeneous communication architecture of smart grids, it is quite a challenge to design sophisticated and robust security mechanisms that can be easily deployed to protect communications among different layers of the smart grid-infrastructure. In this article, we focus on the communication-security aspect of a smart-grid metering and control system from the perspective of cryptographic techniques, and we discuss different mechanisms to enhance cybersecurity of the emerging smart grid. We aim to provide a comprehensive vulnerability analysis as well as novel insights on the cybersecurity of a smart grid.
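
    One of the cryptographic building blocks relevant to meter-to-utility links is authenticated encryption, which protects both the confidentiality and the integrity of readings. The sketch below uses AES-GCM from the Python cryptography package; the message format and key handling are simplified assumptions for illustration, not a standardised smart-grid protocol.

```python
import json
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# In practice the key would come from a provisioning / key-management system.
key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

def protect_reading(meter_id: str, kwh: float) -> dict:
    """Encrypt and authenticate a meter reading; the meter id stays visible
    but is covered by the authentication tag as associated data."""
    nonce = os.urandom(12)                      # must be unique per message
    plaintext = json.dumps({"kwh": kwh}).encode()
    ciphertext = aesgcm.encrypt(nonce, plaintext, meter_id.encode())
    return {"meter_id": meter_id, "nonce": nonce, "ciphertext": ciphertext}

def verify_reading(msg: dict) -> dict:
    """Raises cryptography.exceptions.InvalidTag if the message was tampered with."""
    plaintext = aesgcm.decrypt(msg["nonce"], msg["ciphertext"], msg["meter_id"].encode())
    return json.loads(plaintext)

msg = protect_reading("meter-0042", 13.7)
print(verify_reading(msg))                      # {'kwh': 13.7}
```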

  10. Optimizing the calculation grid for atmospheric dispersion modelling

    International Nuclear Information System (INIS)

    Van Thielen, S.; Turcanu, C.; Camps, J.; Keppens, R.

    2015-01-01

    This paper presents three approaches to finding optimized grids for atmospheric dispersion measurements and calculations in emergency planning. This can be useful for deriving optimal positions for mobile monitoring stations, or can help to reduce discretization errors and improve recommendations. Indeed, threshold-based recommendations or conclusions may differ strongly depending on the shape and size of the grid on which atmospheric dispersion measurements or calculations of pollutants are based. Therefore, relatively sparse grids that retain as much information as possible are required. The grid optimization procedure proposed here is first demonstrated with a simple Gaussian plume model as adopted in atmospheric dispersion calculations, which provides fast calculations. The optimized grids are compared to the Noodplan grid, currently used for emergency planning in Belgium, and to the exact solution. We then demonstrate how the procedure can be used in more realistic dispersion models. - Highlights: • Grid points for atmospheric dispersion calculations are optimized. • Using heuristics the optimization problem results in different grid shapes. • Comparison between optimized models and the Noodplan grid is performed

  11. A Framework for Counterfeit Smart Grid Device Detection

    Energy Technology Data Exchange (ETDEWEB)

    Babun, Leonardo [Florida Intl Univ., Miami, FL (United States); Aksu, Hidayet [Florida Intl Univ., Miami, FL (United States); Uluagac, A. Selcuk [Florida Intl Univ., Miami, FL (United States)

    2016-10-19

    The core vision of the smart grid concept is the realization of reliable two-way communications between smart devices (e.g., IEDs, PLCs, PMUs). The benefits of the smart grid also come with tremendous security risks and new challenges in protecting smart grid systems from cyber threats. In particular, the use of untrusted counterfeit smart grid devices represents a real problem. The consequences of propagating false or malicious data, as well as of stealing valuable user or smart grid state information from counterfeit devices, are costly. Hence, early detection of counterfeit devices is critical for protecting the smart grid's components and users. To address these concerns, in this poster we introduce our initial design of a configurable framework that utilizes system call tracing, library interposition, and statistical techniques for monitoring and detection of counterfeit smart grid devices. In our framework, we consider six different counterfeit device scenarios with different smart grid devices and adversarial settings. Our initial results on a realistic testbed utilizing actual smart grid GOOSE messages with the IEC 61850 communication protocol are very promising. Our framework shows excellent rates of detecting counterfeit smart grid devices from impostors.
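
    The statistical side of such a framework, flagging a device whose system-call profile deviates from a trusted baseline, can be sketched as a simple z-score test on per-call counts. The call names, baseline figures and threshold below are invented for the example; the abstract does not disclose the framework's actual features or statistics.

```python
# Hypothetical baseline: mean and standard deviation of system-call counts
# per GOOSE message, collected from known-genuine devices.
BASELINE = {
    "read":   (120.0, 10.0),
    "write":  (80.0, 8.0),
    "sendto": (40.0, 5.0),
}

def suspicious(observed: dict, z_threshold: float = 4.0) -> bool:
    """Flag a device if any monitored call count deviates strongly
    from the genuine-device baseline."""
    for call, (mean, std) in BASELINE.items():
        z = abs(observed.get(call, 0) - mean) / std
        if z > z_threshold:
            return True
    return False

print(suspicious({"read": 118, "write": 83, "sendto": 41}))   # False: looks genuine
print(suspicious({"read": 300, "write": 82, "sendto": 39}))   # True: counterfeit-like
```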

  12. Future electrical distribution grids: Smart Grids

    International Nuclear Information System (INIS)

    Hadjsaid, N.; Sabonnadiere, J.C.; Angelier, J.P.

    2010-01-01

    The new energy paradigm faced by distribution network represents a real scientific challenge. Thus, national and EU objectives in terms of environment and energy efficiency with resulted regulatory incentives for renewable energies, the deployment of smart meters and the need to respond to changing needs including new uses related to electric and plug-in hybrid vehicles introduce more complexity and favour the evolution towards a smarter grid. The economic interest group in Grenoble IDEA in connection with the power laboratory G2ELab at Grenoble Institute of technology, EDF and Schneider Electric are conducting research on the electrical distribution of the future in presence of distributed generation for ten years.Thus, several innovations emerged in terms of flexibility and intelligence of the distribution network. One can notice the intelligence solutions for voltage control, the tools of network optimization, the self-healing techniques, the innovative strategies for connecting distributed and intermittent generation or load control possibilities for the distributor. All these innovations are firmly in the context of intelligent networks of tomorrow 'Smart Grids'. (authors)

  13. A History-based Estimation for LHCb job requirements

    Science.gov (United States)

    Rauschmayr, Nathalie

    2015-12-01

    The main goal of a Workload Management System (WMS) is to find and allocate resources for the given tasks. The more and better job information the WMS receives, the easier it is to accomplish this task, which directly translates into higher utilization of resources. Traditionally, the information associated with each job, such as the expected runtime, is defined beforehand by the Production Manager in the best case, and set to fixed, arbitrary values by default. LHCb's Workload Management System provides no mechanism that automates the estimation of job requirements. As a result, much more CPU time is normally requested than actually needed. Particularly in the context of multicore jobs this presents a major problem, since single- and multicore jobs shall share the same resources. Consequently, grid sites need to rely on estimations given by the VOs in order not to decrease the utilization of their worker nodes when making multicore job slots available. The main reason for moving to multicore jobs is the reduction of the overall memory footprint; therefore, it also needs to be studied how the memory consumption of jobs can be estimated. A detailed workload analysis of past LHCb jobs is presented. It includes a study of job features and their correlation with runtime and memory consumption. Based on these features, a supervised learning algorithm is developed that makes history-based predictions. The aim is to learn over time how jobs' runtime and memory consumption evolve under changes in experiment conditions and software versions. It will be shown that the estimation can be notably improved if experiment conditions are taken into account.
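
    A minimal sketch of such history-based prediction is given below. The grouping features (application version, run conditions) and the numbers are assumptions chosen for illustration, not the exact feature set or data of the paper: past jobs are grouped by their configuration, and runtime and memory for a new job are predicted from the statistics of its group.

      from collections import defaultdict
      from statistics import median

      # Hypothetical job history: (application_version, run_conditions, events,
      #                            cpu_seconds, peak_memory_mb)
      history = [
          ("v42r1", "2015-cond", 1000, 3600.0, 1800.0),
          ("v42r1", "2015-cond", 2000, 7300.0, 1850.0),
          ("v42r1", "2012-cond", 1000, 2900.0, 1500.0),
      ]

      per_event_cpu = defaultdict(list)
      peak_mem = defaultdict(list)
      for version, cond, events, cpu, mem in history:
          per_event_cpu[(version, cond)].append(cpu / events)
          peak_mem[(version, cond)].append(mem)

      def estimate(version, cond, events):
          """Predict (cpu_seconds, memory_mb) for a new job from its group's history."""
          key = (version, cond)
          cpu = median(per_event_cpu[key]) * events if per_event_cpu[key] else None
          mem = max(peak_mem[key]) if peak_mem[key] else None
          return cpu, mem

      print(estimate("v42r1", "2015-cond", 1500))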

  14. The God of Job

    Directory of Open Access Journals (Sweden)

    Leonard Mare

    2012-02-01

    Full Text Available God is often portrayed extremely negatively in the Old Testament. For example, in the Book of Nahum God is pictured as being responsible for the most horrifying violence imaginable. This negative portrayal of God is also found in the Book of Job. God is responsible for the suffering that his righteous servant, Job, has to endure. He is even manipulated by the satan into giving him free rein to attack Job. God even acknowledges that the misery and pain inflicted on Job was for no reason. Job's children are killed in order for God to prove a point, and in his response to Job's suffering, he doesn't even address the issue of Job's suffering. This is a picture of a very cruel, vicious God. This article investigates the negative, disturbing images of God in the Book of Job. Are these images of God who God really is, or is the God of Job a literary construct of the author? The focus of this study is on the prologue and epilogue to the book, as well as the speeches of God in Job 38–41.

  15. Use of DAGMan in CRAB3 to Improve the Splitting of CMS User Jobs

    Energy Technology Data Exchange (ETDEWEB)

    Wolf, M. [Notre Dame U.; Mascheroni, M. [Fermilab; Woodard, A. [Notre Dame U.; Belforte, S. [INFN, Trieste; Bockelman, B. [Nebraska U.; Hernandez, J. M. [Madrid, CIEMAT; Vaandering, E. [Fermilab

    2017-11-22

    CRAB3 is a workload management tool used by CMS physicists to analyze data acquired by the Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider (LHC). Research in high energy physics often requires the analysis of large collections of files, referred to as datasets. The task is divided into jobs that are distributed among a large collection of worker nodes throughout the Worldwide LHC Computing Grid (WLCG). Splitting a large analysis task into optimally sized jobs is critical to efficient use of distributed computing resources. Jobs that are too big will have excessive runtimes and will not distribute the work across all of the available nodes. However, splitting the project into a large number of very small jobs is also inefficient, as each job creates additional overhead which increases load on infrastructure resources. Currently this splitting is done manually, using parameters provided by the user. However the resources needed for each job are difficult to predict because of frequent variations in the performance of the user code and the content of the input dataset. As a result, dividing a task into jobs by hand is difficult and often suboptimal. In this work we present a new feature called “automatic splitting” which removes the need for users to manually specify job splitting parameters. We discuss how HTCondor DAGMan can be used to build dynamic Directed Acyclic Graphs (DAGs) to optimize the performance of large CMS analysis jobs on the Grid. We use DAGMan to dynamically generate interconnected DAGs that estimate the processing time the user code will require to analyze each event. This is used to calculate an estimate of the total processing time per job, and a set of analysis jobs are run using this estimate as a specified time limit. Some jobs may not finish within the allotted time; they are terminated at the time limit, and the unfinished data is regrouped into smaller jobs and resubmitted.
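
    The splitting logic can be pictured with a short sketch. This is a simplification under assumed numbers; the real CRAB3 implementation drives these steps through DAGMan sub-DAGs. A per-event processing time is estimated from probe jobs, jobs are sized against a target runtime, and whatever a terminated job failed to finish is regrouped into smaller rescue jobs.

      def split_into_jobs(n_events, sec_per_event, target_job_seconds=8 * 3600):
          """Chunk a dataset of n_events into jobs sized for a target wall-clock time."""
          events_per_job = max(1, int(target_job_seconds / sec_per_event))
          chunks, start = [], 0
          while start < n_events:
              chunks.append((start, min(start + events_per_job, n_events)))
              start += events_per_job
          return chunks

      # Probe jobs suggest ~0.9 s/event (an assumed number); build the first round of jobs.
      jobs = split_into_jobs(n_events=1_000_000, sec_per_event=0.9)

      # A job terminated at the time limit reports how far it got; the unfinished
      # tail is regrouped into smaller jobs and resubmitted.
      unfinished = [(750_000, 800_000)]            # hypothetical leftover event range
      rescue = []
      for lo, hi in unfinished:
          for a, b in split_into_jobs(hi - lo, 0.9, target_job_seconds=2 * 3600):
              rescue.append((lo + a, lo + b))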

  16. Use of DAGMan in CRAB3 to improve the splitting of CMS user jobs

    Science.gov (United States)

    Wolf, M.; Mascheroni, M.; Woodard, A.; Belforte, S.; Bockelman, B.; Hernandez, J. M.; Vaandering, E.

    2017-10-01

    CRAB3 is a workload management tool used by CMS physicists to analyze data acquired by the Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider (LHC). Research in high energy physics often requires the analysis of large collections of files, referred to as datasets. The task is divided into jobs that are distributed among a large collection of worker nodes throughout the Worldwide LHC Computing Grid (WLCG). Splitting a large analysis task into optimally sized jobs is critical to efficient use of distributed computing resources. Jobs that are too big will have excessive runtimes and will not distribute the work across all of the available nodes. However, splitting the project into a large number of very small jobs is also inefficient, as each job creates additional overhead which increases load on infrastructure resources. Currently this splitting is done manually, using parameters provided by the user. However the resources needed for each job are difficult to predict because of frequent variations in the performance of the user code and the content of the input dataset. As a result, dividing a task into jobs by hand is difficult and often suboptimal. In this work we present a new feature called “automatic splitting” which removes the need for users to manually specify job splitting parameters. We discuss how HTCondor DAGMan can be used to build dynamic Directed Acyclic Graphs (DAGs) to optimize the performance of large CMS analysis jobs on the Grid. We use DAGMan to dynamically generate interconnected DAGs that estimate the processing time the user code will require to analyze each event. This is used to calculate an estimate of the total processing time per job, and a set of analysis jobs are run using this estimate as a specified time limit. Some jobs may not finish within the allotted time; they are terminated at the time limit, and the unfinished data is regrouped into smaller jobs and resubmitted.

  17. Performance engineering in data Grids

    CERN Document Server

    Laure, Erwin; Stockinger, Kurt

    2005-01-01

    The vision of Grid computing is to facilitate worldwide resource sharing among distributed collaborations. With the help of numerous national and international Grid projects, this vision is becoming reality and Grid systems are attracting an ever increasing user base. However, Grids are still quite complex software systems whose efficient use is a difficult and error-prone task. In this paper we present performance engineering techniques that aim to facilitate an efficient use of Grid systems, in particular systems that deal with the management of large-scale data sets in the tera- and petabyte range (also referred to as data Grids). These techniques are applicable at different layers of a Grid architecture and we discuss the tools required at each of these layers to implement them. Having discussed important performance engineering techniques, we investigate how major Grid projects deal with performance issues particularly related to data Grids and how they implement the techniques presented.

  18. Grid sleeve bulge tool

    International Nuclear Information System (INIS)

    Phillips, W.D.; Vaill, R.E.

    1980-01-01

    An improved grid sleeve bulge tool is designed for securing control rod guide tubes to sleeves brazed in a fuel assembly grid. The tool includes a cylinder having an outer diameter less than the internal diameter of the control rod guide tubes. The walls of the cylinder are cut in an axial direction along its length to provide several flexible tines or ligaments. These tines are similar to a fork except they are spaced in a circumferential direction. The end of each alternate tine is equipped with a semispherical projection which extends radially outwardly from the tine surface. A ram or plunger of generally cylindrical configuration and about the same length as the cylinder is designed to fit in and move axially of the cylinder and thereby force the tined projections outwardly when the ram is pulled into the cylinder. The ram surface includes axially extending grooves and plane surfaces which are complementary to the inner surfaces formed on the tines on the cylinder. As the cylinder is inserted into a control rod guide tube, and the projections on the cylinder placed in a position just below or above a grid strap, the ram is pulled into the cylinder, thus moving the tines and the projections thereon outwardly into contact with the sleeve, to plastically deform both the sleeve and the control rod guide tube, and thereby form four bulges which extend outwardly from the sleeve surface and beyond the outer periphery of the grid peripheral strap. This process is then repeated at the points above the grid to also provide for outwardly projecting surfaces, the result being that the grid is accurately positioned on and mechanically secured to the control rod guide tubes which extend the length of a fuel assembly

  19. BESIII and SuperB: distributed job management with Ganga

    International Nuclear Information System (INIS)

    Antoniev, I; Kenyon, M; Moscicki, J; Deng, Z; Han, Y; Zhang, X; Ebke, J; Egede, U; Richards, A; Fella, A; Galvani, A; Lin, L; Luppi, E; Manzali, M; Tomassetti, L; Nicholson, C; Slater, M; Spinoso, V

    2012-01-01

    A job submission and management tool is one of the necessary components in any distributed computing system. Such a tool should provide a user-friendly interface for physics production groups and ordinary analysis users to access heterogeneous computing resources, without requiring knowledge of the underlying grid middleware. Ganga, with its common framework and customizable plug-in structure, is such a tool. This paper describes how experiment-specific job management tools for BESIII and SuperB were developed as Ganga plug-ins to meet their own unique requirements, and discusses and contrasts the challenges met and lessons learned.
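
    For readers unfamiliar with Ganga, the flavour of its job interface is roughly as follows. This is a hedged sketch typed inside an interactive Ganga session: the application, backend and splitter shown are generic built-in examples, not the BESIII- or SuperB-specific plug-ins discussed in the paper.

      # Inside a Ganga session (Ganga exposes a Python API; Job, Executable, LCG
      # and ArgSplitter are predefined there, so no imports are needed).
      j = Job(name="demo-analysis")
      j.application = Executable(exe="/bin/echo", args=["hello grid"])
      j.backend = LCG()            # a grid backend; an experiment plug-in would
                                   # substitute its own backend class here
      j.splitter = ArgSplitter(args=[[str(i)] for i in range(10)])  # 10 subjobs
      j.submit()

      jobs                          # list all jobs and their statuses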

  20. Analysis and improvement of security of energy smart grids

    International Nuclear Information System (INIS)

    Halimi, Halim

    2014-01-01

    The Smart grid is the next-generation power grid, a new self-healing, self-activating form of electricity network that integrates power-flow control, improved quality of electricity, and energy reliability, efficiency and security using information and communication technologies. Communication networks play a critical role in the smart grid, as the intelligence of the smart grid is built on information exchange across the power grid. Its two-way communication and electricity flow make it possible to monitor, predict and manage energy usage. Upgrading an existing power grid into a smart grid requires an intelligent and secure communication infrastructure. The main goal of this dissertation is therefore to propose a new architecture and an implementation of algorithms for analysing and improving security and reliability in the smart grid. In the power transmission segments of the smart grid, wired communications are usually adopted to ensure robustness of the backbone power network. In contrast, for a power distribution grid, wireless communications provide many benefits such as low-cost high-speed links, easy setup of connections among different devices/appliances, and so on. Wireless communications are usually more vulnerable to security attacks than wired ones. Developing an appropriate wireless communication architecture and its security measures is extremely important for a smart grid system. This research addresses physical layer security in a wireless smart grid; hence, a defense Quorum-based algorithm is proposed to ensure physical security in wireless communication. A new security architecture for the smart grid that supports privacy preservation, data aggregation and access control is defined. This architecture consists of two parts. In the first part we propose to use an efficient and privacy-preserving aggregation scheme (EPPA), which aggregates real-time consumer data at a Local Gateway. During aggregation the privacy of consumers is

  1. Thermal Anemometry Grid Sensor.

    Science.gov (United States)

    Arlit, Martin; Schleicher, Eckhard; Hampel, Uwe

    2017-07-19

    A novel thermal anemometry grid sensor was developed for the simultaneous measurement of cross-sectional temperature and axial velocity distribution in a fluid flow. The sensor consists of a set of platinum resistors arranged in a regular grid. Each platinum resistor allows the simultaneous measurement of fluid temperature via electrical resistance and flow velocity via constant voltage thermal anemometry. Cross-sectional measurement was enabled by applying a special multiplexing-excitation scheme. In this paper, we present the design and characterization of a prototypical sensor for measurements in a range of very low velocities.

  2. Grids, Clouds and Virtualization

    CERN Document Server

    Cafaro, Massimo

    2011-01-01

    Research into grid computing has been driven by the need to solve large-scale, increasingly complex problems for scientific applications. Yet the applications of grid computing for business and casual users did not begin to emerge until the development of the concept of cloud computing, fueled by advances in virtualization techniques, coupled with the increased availability of ever-greater Internet bandwidth. The appeal of this new paradigm is mainly based on its simplicity, and the affordable price for seamless access to both computational and storage resources. This timely text/reference int

  3. Instant jqGrid

    CERN Document Server

    Manricks, Gabriel

    2013-01-01

    Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. A step-by-step, practical Starter book, Instant jqGrid embraces you while you take your first steps, and introduces you to the content in an easy-to-follow order.This book is aimed at people who have some knowledge of HTML and JavaScript. Knowledge of PHP and SQL would also prove to be beneficial. No prior knowledge of jqGrid is expected.

  4. Distributed photovoltaic grid transformers

    CERN Document Server

    Shertukde, Hemchandra Madhusudan

    2014-01-01

    The demand for alternative energy sources fuels the need for electric power and controls engineers to possess a practical understanding of transformers suitable for solar energy. Meeting that need, Distributed Photovoltaic Grid Transformers begins by explaining the basic theory behind transformers in the solar power arena, and then progresses to describe the development, manufacture, and sale of distributed photovoltaic (PV) grid transformers, which help boost the electric DC voltage (generally at 30 volts) harnessed by a PV panel to a higher level (generally at 115 volts or higher) once it is

  5. Situation Management Over the Smart Grid

    Directory of Open Access Journals (Sweden)

    Aleksey V. Kychkin

    2015-03-01

    Full Text Available The article presents a systematization of energy monitoring and management methods for increasing the efficiency of the Smart Grid on the basis of situation management. Situation management is defined as a process that binds strategic and tactical management of energy consumption. Situation management may also be considered an integration of two energy-saving technologies: one based on energy management systems and the other on the Smart Grid. The work was performed as part of the Russian Federation grant MK-5279.2014.8 "Synthesis of efficient technologies for remote monitoring and managing of intellectual power system with active-adaptive network."

  6. The Job Training and Job Satisfaction Survey Technical Manual

    Science.gov (United States)

    Schmidt, Steven W.

    2004-01-01

    Job training has become an important aspect of an employee's overall job experience. However, it is not often called out specifically on instruments measuring job satisfaction. This technical manual details the processes used in the development and validation of a survey instrument to measure job training satisfaction and overall job…

  7. Job anxiety, organizational commitment and job satisfaction: An ...

    African Journals Online (AJOL)

    Job anxiety, organizational commitment and job satisfaction: An empirical assessment of supervisors in the state of Eritrea. ... The findings of the present research revealed that (i) recognition and self-esteem facets of job anxiety were found to be significantly related to job satisfaction, (ii) facets of organizational commitment ...

  8. Reciprocal relationships between job demands, job resources, and recovery opportunities

    NARCIS (Netherlands)

    A. Rodríguez-Muñoz (Alfredo); A.I. Sanz-Vergel (Ana Isabel); E. Demerouti (Eva); A.B. Bakker (Arnold)

    2012-01-01

    textabstractThe aim of this study was to explore longitudinal relationships between job demands, job resources, and recovery opportunities. On the basis of the Job Demands-Resources model and Conservation of Resources theory we hypothesized that we would find reciprocal relations between job

  9. The Relationship of Job Involvement, Motivation and Job ...

    African Journals Online (AJOL)

    North–West geo–political zone of Nigeria should implement work motivational strategies such as good salary package, regular staff training and development, job security and provision of adequate working materials to enhance job involvement of the respondents. KEYWORDS: Job Involvement; Job Satisfaction; Motivation; ...

  10. Assessment of job satisfaction, job stress and psychological health ...

    African Journals Online (AJOL)

    Background: The relationship that exists between job stress and job satisfaction has been investigated across several professional groups. Aim: The study assessed the job satisfaction, perception of job stress and psychological morbidity among journalists in a state in the Southern part of Nigeria. Methods: The ...

  11. GridRun: A lightweight packaging and execution environment forcompact, multi-architecture binaries

    Energy Technology Data Exchange (ETDEWEB)

    Shalf, John; Goodale, Tom

    2004-02-01

    GridRun offers a very simple set of tools for creating and executing multi-platform binary executables. These ''fat binaries'' archive native machine code into compact packages that are typically a fraction of the size of the original binary images they store, enabling efficient staging of executables for heterogeneous parallel jobs. GridRun interoperates with existing distributed job launchers/managers such as Condor and the Globus GRAM to greatly simplify the logic required to launch native binary applications in distributed heterogeneous environments.
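
    The core idea of launching the right native image from a multi-architecture package can be illustrated with a small dispatcher. This is a sketch only; the directory layout and names are assumptions for illustration, not GridRun's actual packaging format or tool names.

      import os
      import platform
      import sys

      def run_fat_binary(package_dir, args):
          """Pick the native executable matching this host from an unpacked
          multi-architecture package laid out as <package_dir>/<machine>-<os>/app."""
          arch_dir = f"{platform.machine()}-{sys.platform}"   # e.g. x86_64-linux
          exe = os.path.join(package_dir, arch_dir, "app")
          if not os.path.exists(exe):
              raise RuntimeError(f"no binary for this architecture: {arch_dir}")
          os.execv(exe, [exe] + list(args))   # replace this process with the application

      # Hypothetical usage on a worker node:
      # run_fat_binary("/tmp/myapp.fat", ["--input", "data.bin"])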

  12. Virtual Execution Environments and the Negotiation of Service Level Agreements in Grid Systems

    Science.gov (United States)

    Battré, Dominic; Hovestadt, Matthias; Keller, Axel; Kao, Odej; Voss, Kerstin

    Service Level Agreements (SLAs) are of focal importance if commercial customers are to be attracted to the Grid. An SLA-aware resource management system has already been realized that is able to fulfill the SLA of jobs even in the case of resource failures. To achieve this, it is able to migrate checkpointed jobs over the Grid. Here, virtual execution environments significantly increase the number of potential migration targets. In this paper we outline the concept of such virtual execution environments and focus on the SLA negotiation aspects.

  13. CDF GlideinWMS usage in Grid computing of high energy physics

    International Nuclear Information System (INIS)

    Zvada, Marian; Sfiligoi, Igor; Benjamin, Doug

    2010-01-01

    Many members of large science collaborations already have specialized grids available to advance their research, but need more computing resources for data analysis. This has forced the Collider Detector at Fermilab (CDF) collaboration to move beyond the usage of dedicated resources and start exploiting Grid resources. Nowadays, the CDF experiment is increasingly relying on glidein-based computing pools for data reconstruction, Monte Carlo production and user data analysis, serving over 400 users through the central analysis farm middleware (CAF) on top of the Condor batch system and the CDF Grid infrastructure. Condor is designed as a distributed architecture and its glidein mechanism of pilot jobs is ideal for abstracting the Grid computing by creating a virtual private computing pool. We present the first production use of the generic pilot-based Workload Management System (glideinWMS), which is an implementation of the pilot mechanism based on the Condor distributed infrastructure. CDF Grid computing uses glideinWMS for its data reconstruction on the FNAL campus Grid, user analysis and Monte Carlo production across the Open Science Grid (OSG). We review this computing model and the setup used, including the CDF-specific configuration within the glideinWMS system, which provides powerful scalability and makes Grid computing work like a local batch environment with the ability to handle more than 10000 running jobs at a time.

  14. Measurement and simulation of the performance of high energy physics data grids

    Science.gov (United States)

    Crosby, Paul Andrew

    This thesis describes a study of resource brokering in a computational Grid for high energy physics. Such systems are being devised in order to manage the unprecedented workload of the next generation particle physics experiments such as those at the Large Hadron Collider. A simulation of the European Data Grid has been constructed, and calibrated using logging data from a real Grid testbed. This model is then used to explore the Grid's middleware configuration, and suggest improvements to its scheduling policy. The expansion of the simulation to include data analysis of the type conducted by particle physicists is then described. A variety of job and data management policies are explored, in order to determine how well they meet the needs of physicists, as well as how efficiently they make use of CPU and network resources. Appropriate performance indicators are introduced in order to measure how well jobs and resources are managed from different perspectives. The effects of inefficiencies in Grid middleware are explored, as are methods of compensating for them. It is demonstrated that a scheduling algorithm should alter its weighting on load balancing and data distribution, depending on whether data transfer or CPU requirements dominate, and also on the level of job loading. It is also shown that an economic model for data management and replication can improve the efficiency of network use and job processing.
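
    The scheduling conclusion above can be made concrete with a toy rank function. The numbers and weighting scheme are illustrative assumptions, not the thesis's calibrated model: each candidate site is scored by a weighted sum of expected queueing delay and expected data-transfer time, and the weight is shifted according to which term dominates the workload.

      def site_cost(queue_wait_s, transfer_time_s, w_load):
          """Lower is better; w_load in [0, 1] trades load balancing vs data locality."""
          return w_load * queue_wait_s + (1.0 - w_load) * transfer_time_s

      def pick_site(sites, w_load):
          return min(sites, key=lambda s: site_cost(s["queue_wait"], s["transfer"], w_load))

      sites = [
          {"name": "siteA", "queue_wait": 1200.0, "transfer": 30.0},
          {"name": "siteB", "queue_wait": 100.0,  "transfer": 900.0},
      ]

      # CPU-bound workload on a heavily loaded grid: emphasise load balancing.
      print(pick_site(sites, w_load=0.8)["name"])
      # Data-intensive workload: emphasise placing jobs near their data.
      print(pick_site(sites, w_load=0.2)["name"])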

  15. The Role of Grid Computing Technologies in Cloud Computing

    Science.gov (United States)

    Villegas, David; Rodero, Ivan; Fong, Liana; Bobroff, Norman; Liu, Yanbin; Parashar, Manish; Sadjadi, S. Masoud

    The fields of Grid, Utility and Cloud Computing have a set of common objectives in harnessing shared resources to optimally meet a great variety of demands cost-effectively and in a timely manner. Since Grid Computing started its technological journey about a decade earlier than Cloud Computing, the Cloud can benefit from the technologies and experience of the Grid in building an infrastructure for distributed computing. Our comparison of Grid and Cloud starts with their basic characteristics and interaction models with clients, resource consumers and providers. Then the similarities and differences in architectural layers and key usage patterns are examined. This is followed by an in-depth look at the technologies and best practices that have applicability from Grid to Cloud computing, including scheduling, service orientation, security, data management, monitoring, interoperability, simulation and autonomic support. Finally, we offer insights on how these techniques will help solve the current challenges faced by Cloud computing.

  16. A smart grid simulation testbed using Matlab/Simulink

    Science.gov (United States)

    Mallapuram, Sriharsha; Moulema, Paul; Yu, Wei

    2014-06-01

    The smart grid is the integration of computing and communication technologies into a power grid with a goal of enabling real time control, and a reliable, secure, and efficient energy system [1]. With the increased interest of the research community and stakeholders towards the smart grid, a number of solutions and algorithms have been developed and proposed to address issues related to smart grid operations and functions. Those technologies and solutions need to be tested and validated before implementation using software simulators. In this paper, we developed a general smart grid simulation model in the MATLAB/Simulink environment, which integrates renewable energy resources, energy storage technology, load monitoring and control capability. To demonstrate and validate the effectiveness of our simulation model, we created simulation scenarios and performed simulations using a real-world data set provided by the Pecan Street Research Institute.

  17. Optimal file-bundle caching algorithms for data-grids

    Energy Technology Data Exchange (ETDEWEB)

    Otoo, Ekow; Rotem, Doron; Romosan, Alexandru

    2004-04-24

    The file-bundle caching problem arises frequently in scientific applications where jobs need to process several files simultaneously. Consider a host system in a data-grid that maintains a staging disk or disk cache for servicing jobs of file requests. In this environment, a job can only be serviced if all its file requests are present in the disk cache. Files must be admitted into the cache or replaced in sets of file-bundles, i.e. the set of files that must all be processed simultaneously. In this paper we show that traditional caching algorithms based on file popularity measures do not perform well in such caching environments since they are not sensitive to the inter-file dependencies and may hold in the cache non-relevant combinations of files. We present and analyze a new caching algorithm for maximizing the throughput of jobs and minimizing data replacement costs to such data-grid hosts. We tested the new algorithm using a disk cache simulation model under a wide range of conditions such as file request distributions, relative cache size, file size distribution, etc. In all these cases, the results show significant improvement as compared with traditional caching algorithms.
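
    The difference from per-file caching can be seen in a small sketch. This is an illustrative bundle-aware admission/eviction policy, not the specific algorithm of the paper: admission and eviction decisions are made on whole bundles, and a job is serviceable only when every file of its bundle is cached.

      from collections import OrderedDict

      class BundleCache:
          """Toy cache that admits and evicts whole file bundles (sets of files)."""

          def __init__(self, capacity):
              self.capacity = capacity                 # total cache size available
              self.bundles = OrderedDict()             # bundle_id -> (files, size), LRU order

          def can_service(self, bundle_id):
              return bundle_id in self.bundles         # all files present, or none

          def admit(self, bundle_id, files, size):
              if size > self.capacity:
                  return False
              while sum(s for _, s in self.bundles.values()) + size > self.capacity:
                  self.bundles.popitem(last=False)     # evict least-recently-used bundle
              self.bundles[bundle_id] = (files, size)
              return True

          def touch(self, bundle_id):
              self.bundles.move_to_end(bundle_id)      # record a hit for LRU ordering

      cache = BundleCache(capacity=100)
      cache.admit("job-type-A", {"f1", "f2", "f3"}, size=60)
      cache.admit("job-type-B", {"f2", "f4"}, size=50)   # evicts bundle A to make room
      print(cache.can_service("job-type-A"))             # False -> job A must wait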

  18. Grid computing the European Data Grid Project

    CERN Document Server

    Segal, B; Gagliardi, F; Carminati, F

    2000-01-01

    The goal of this project is the development of a novel environment to support globally distributed scientific exploration involving multi- PetaByte datasets. The project will devise and develop middleware solutions and testbeds capable of scaling to handle many PetaBytes of distributed data, tens of thousands of resources (processors, disks, etc.), and thousands of simultaneous users. The scale of the problem and the distribution of the resources and user community preclude straightforward replication of the data at different sites, while the aim of providing a general purpose application environment precludes distributing the data using static policies. We will construct this environment by combining and extending newly emerging "Grid" technologies to manage large distributed datasets in addition to computational elements. A consequence of this project will be the emergence of fundamental new modes of scientific exploration, as access to fundamental scientific data is no longer constrained to the producer of...

  19. Assessment of grid optimisation measures for the German transmission grid using open source grid data

    Science.gov (United States)

    Böing, F.; Murmann, A.; Pellinger, C.; Bruckmeier, A.; Kern, T.; Mongin, T.

    2018-02-01

    The expansion of capacities in the German transmission grid is a necessity for further integration of renewable energy sources into the electricity sector. In this paper, the grid optimisation measures ‘Overhead Line Monitoring’, ‘Power-to-Heat’ and ‘Demand Response in the Industry’ are evaluated and compared against conventional grid expansion for the year 2030. Initially, the methodical approach of the simulation model is presented and detailed descriptions of the grid model and the grid data used, which partly originate from open-source platforms, are provided. Further, this paper explains how ‘Curtailment’ and ‘Redispatch’ can be reduced by implementing grid optimisation measures and how the depreciation of economic costs can be determined considering construction costs. The developed simulations show that conventional grid expansion is more efficient and provides greater grid relief than the evaluated grid optimisation measures.

  20. Grid-Brick Event Processing Framework in GEPS

    CERN Document Server

    Amorim, A; Fei, H; Almeida, N; Trezentos, P; Villate, J E; Amorim, Antonio; Pedro, Luis; Fei, Han; Almeida, Nuno; Trezentos, Paulo; Villate, Jaime E.

    2003-01-01

    Experiments like ATLAS at LHC involve a scale of computing and data management that greatly exceeds the capability of existing systems, making it necessary to resort to Grid-based Parallel Event Processing Systems (GEPS). Traditional Grid systems concentrate the data in central data servers which have to be accessed by many nodes each time an analysis or processing job starts. These systems require very powerful central data servers and make little use of the distributed disk space that is available in commodity computers. The Grid-Brick system, which is described in this paper, follows a different approach. The data storage is split among all grid nodes, each one holding a piece of the whole information. Users submit queries and the system distributes the tasks across all the nodes and retrieves the results, merging them together in the Job Submit Server. The main advantage of using this system is the huge scalability it provides, while its biggest disadvantage appears in the case of failure of one of the n...

  1. Decommissioning and jobs

    International Nuclear Information System (INIS)

    John, B.S.

    1990-01-01

    One aspect of the decommissioning web is its effect on socioeconomics, particularly jobs. What will reactor retirement mean to jobs, especially in rural communities where power plant operations may be the most reliable and dominant source of direct and indirect employment in the area? The problems which any plant closure produces for job security are generally understood, but the decommissioning of nuclear power plants is different because of the residual radioactivity and because of the greater isolation of the power plant sites. For example, what will be the specific employment effects of several possible decommissioning scenarios such as immediate dismantlement and delayed dismantlement? The varying effects of decommissioning on jobs is discussed. It is concluded that the decommissioning of nuclear power plants in some areas such as Wales could bring benefits to the surrounding communities. (author)

  2. Learning about Job Search

    DEFF Research Database (Denmark)

    Altmann, Steffen; Falk, Armin; Jäger, Simon

    We conduct a large-scale field experiment in the German labor market to investigate how information provision affects job seekers’ employment prospects and labor market outcomes. Individuals assigned to the treatment group of our experiment received a brochure that informed them about job search strategies and the consequences of unemployment, and motivated them to actively look for new employment. We study the causal impact of the brochure by comparing labor market outcomes of treated and untreated job seekers in administrative data containing comprehensive information on individuals’ employment status and earnings. While our treatment yields overall positive effects, these tend to be concentrated among job seekers who are at risk of being unemployed for an extended period of time. Specifically, the treatment effects in our overall sample are moderately positive but mostly insignificant...

  3. Management job ads

    DEFF Research Database (Denmark)

    Holmgreen, Lise-Lotte

    2014-01-01

    The article asks whether it is not the responsibility of corporations to address the issue of women being underrepresented in Danish management jobs. In other words, it is argued that corporations should be encouraged to engage more actively in the recruitment of both men and women for management jobs by discursively constructing job ads that appeal to both sexes. This argument is part of the broader field of corporate social responsibility, corporate citizenship, and stakeholder management, which involves discussions of the obligations of corporations to acknowledge and mitigate the increasingly widespread impact that their activities have on communities and social structures. The article emphasises the need for more active engagement on the part of corporations by analysing the discursive construction of preferred candidates in a small sample of Danish management job ads. By means...

  4. Changing from computing grid to knowledge grid in life-science grid.

    Science.gov (United States)

    Talukdar, Veera; Konar, Amit; Datta, Ayan; Choudhury, Anamika Roy

    2009-09-01

    Grid computing has great potential to become a standard cyber infrastructure for life sciences, which often require high-performance computing and large data handling that exceed the computing capacity of a single institution. Grid computing applies the resources of many computers in a network to a single problem at the same time. It is useful for scientific problems that require a great number of computer processing cycles or access to large amounts of data. As biologists, we are constantly discovering millions of genes and genome features, which are assembled in a library and distributed on computers around the world. This means that new, innovative methods must be developed that exploit the resources available for extensive calculations - for example, grid computing. This survey reviews the latest grid technologies from the viewpoints of computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput real-world life science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for community formation to share tacit knowledge within a community. By extending the concept of grid from computing grid to knowledge grid, it is possible to use a grid not only as sharable computing resources, but also as the time and place in which people work together, create knowledge, and share knowledge and experiences in a community.

  5. Rural nurse job satisfaction.

    Science.gov (United States)

    Molinari, D L; Monserud, M A

    2008-01-01

    The lack of rural nursing studies makes it impossible to know whether rural and urban nurses perceive personal and organizational factors of job satisfaction similarly. Few reports of rural nurse job satisfaction are available. Since the unprecedented shortage of qualified rural nurses requires a greater understanding of what factors are important to retention, studies are needed. An analysis of the literature indicates job satisfaction is studied as both an independent and dependent variable. In this study, the concept is used to examine the intention to remain employed by measuring individual and organizational characteristics; thus, job satisfaction is used as a dependent variable. One hundred and three rural hospital nurses, from hospitals throughout the Northwest region of the United States were recruited for the study. Only nurses employed for more than one year were accepted. The sample completed surveys online. The McCloskey/Mueller Satisfaction Scale, the Gerber Control Over Practice Scale, and two open-ended job satisfaction questions were completed. The qualitative analysis of the open-ended questions identified themes which were then used to support the quantitative findings. Overall alphas were 0.89 for the McCloskey/Mueller Scale and 0.96 for the Gerber Control Over Practice Scale. Rural nurses indicate a preference for rural lifestyles and the incorporation of rural values in organizational practices. Nurses preferred the generalist role with its job variability, and patient variety. Most participants intended to remain employed. The majority of nurses planning to leave employment were unmarried, without children at home, and stated no preference for a rural lifestyle. The least overall satisfied nurses in the sample were employed from 1 to 3 years. Several new findings inform the literature while others support previous workforce studies. Data suggest some job satisfaction elements can be altered by addressing organizational characteristics and by

  6. Grid attacks avian flu

    CERN Multimedia

    2006-01-01

    During April, a collaboration of Asian and European laboratories analysed 300,000 possible drug components against the avian flu virus H5N1 using the EGEE Grid infrastructure. (Figures: schematic presentation of the avian flu virus; the distribution of the EGEE sites in the world on which the avian flu scan was performed.) The goal was to find potential compounds that can inhibit the activities of an enzyme on the surface of the influenza virus, the so-called neuraminidase, subtype N1. Using the Grid to identify the most promising leads for biological tests could speed up the development process for drugs against the influenza virus. Co-ordinated by CERN and funded by the European Commission, the EGEE project (Enabling Grids for E-sciencE) aims to set up a worldwide grid infrastructure for science. The challenge of the in silico drug discovery application is to identify those molecules which can dock on the active sites of the virus in order to inhibit its action. To study the impact of small scale mutations on drug r...

  7. Bolivian Bouguer Anomaly Grid

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A 1 kilometer Bouguer anomaly grid for the country of Bolivia. Number of columns is 550 and number of rows is 900. The order of the data is from the lower left to the...

  8. Nevada Bouguer Gravity Grid

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A 2 kilometer Bouguer anomaly grid for the state of Nevada. Number of columns is 282 and number of rows is 397. The order of the data is from the lower left to the...

  9. Molecular Grid Membranes

    National Research Council Canada - National Science Library

    Michl, Josef; Magnera, Thomas

    2008-01-01

    ...) porphyrin triply linked in the meso-meso, and both beta-beta positions four times by carbon-carbon bonds to each of its neighbors to form porphite sheets, a grid-type material that would be an analog of graphene...

  10. NSTAR Smart Grid Pilot

    Energy Technology Data Exchange (ETDEWEB)

    Rabari, Anil [NSTAR Electric, Manchester, NH (United States); Fadipe, Oloruntomi [NSTAR Electric, Manchester, NH (United States)

    2014-03-31

    NSTAR Electric & Gas Corporation (“the Company”, or “NSTAR”) developed and implemented a Smart Grid pilot program beginning in 2010 to demonstrate the viability of leveraging existing automated meter reading (“AMR”) deployments to provide much of the Smart Grid functionality of advanced metering infrastructure (“AMI”), but without the large capital investment that AMI rollouts typically entail. In particular, a central objective of the Smart Energy Pilot was to enable residential dynamic pricing (time-of-use “TOU” and critical peak rates and rebates) and two-way direct load control (“DLC”) by continually capturing AMR meter data transmissions and communicating through customer-sited broadband connections in conjunction with a standards-based home area network (“HAN”). The pilot was supported by the U.S. Department of Energy’s (“DOE”) Smart Grid Demonstration program. NSTAR was very pleased to receive not only the funding support from DOE, but also its guidance and support throughout the pilot. NSTAR is also pleased to report to the DOE that it was able to execute and deliver a successful pilot on time and on budget. NSTAR looks for future opportunities to work with the DOE and others in future smart grid projects.

  11. Multi-Grid Lanczos

    Science.gov (United States)

    Clark, M. A.; Jung, Chulwoo; Lehner, Christoph

    2018-03-01

    We present a Lanczos algorithm utilizing multiple grids that reduces the memory requirements both on disk and in working memory by one order of magnitude for RBC/UKQCD's 48I and 64I ensembles at the physical pion mass. The precision of the resulting eigenvectors is on par with exact deflation.
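
    For context, the single-grid Lanczos iteration that such multi-grid variants build on looks roughly as follows. This is a textbook sketch on a random symmetric test matrix, not the RBC/UKQCD implementation, and it omits the multi-grid compression that is the point of the paper.

      import numpy as np

      def lanczos(A, k, rng=np.random.default_rng(0)):
          """Build a k-step Lanczos tridiagonalisation of a symmetric matrix A."""
          n = A.shape[0]
          Q = np.zeros((n, k))
          alpha, beta = np.zeros(k), np.zeros(k - 1)
          q = rng.standard_normal(n)
          q /= np.linalg.norm(q)
          for j in range(k):
              Q[:, j] = q
              w = A @ q
              alpha[j] = q @ w
              w -= alpha[j] * q
              if j > 0:
                  w -= beta[j - 1] * Q[:, j - 1]
              if j < k - 1:
                  beta[j] = np.linalg.norm(w)
                  q = w / beta[j]
          # Eigenvalues of the tridiagonal matrix approximate extremal eigenvalues of A.
          T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
          return np.linalg.eigvalsh(T)

      A = np.random.default_rng(1).standard_normal((200, 200))
      A = (A + A.T) / 2
      print(lanczos(A, k=40)[-3:])   # largest Ritz values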

  12. Multi-Grid Lanczos

    Directory of Open Access Journals (Sweden)

    Clark M. A.

    2018-01-01

    Full Text Available We present a Lanczos algorithm utilizing multiple grids that reduces the memory requirements both on disk and in working memory by one order of magnitude for RBC/UKQCD’s 48I and 64I ensembles at the physical pion mass. The precision of the resulting eigenvectors is on par with exact deflation.

  13. Modelling Chinese Smart Grid

    DEFF Research Database (Denmark)

    Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming

    In this document, we consider a specific Chinese Smart Grid implementation and try to address the verification problem for certain quantitative properties, including performance and battery consumption. We employ a stochastic model checking approach and present our modelling and analysis study using...

  14. Power grids; Reseaux electriques

    Energy Technology Data Exchange (ETDEWEB)

    Viterbo, J.

    2012-03-15

    The implementation of renewable energies represents new challenges for electrical systems. The objective: making power grids smarter so they can handle intermittent production. The advent of smart grids will allow flexible operations, like distributing energy in a multidirectional manner instead of just one way, and will make electrical systems capable of integrating actions by different users, consumers and producers in order to maintain efficient, sustainable, economical and secure power supplies. Practically speaking, they associate sensors, instrumentation and controls with information processing and communication systems in order to create massively automated networks. Smart grids require huge investments: for example, more than 7 billion dollars were invested in China and in the USA in 2010, and France is ranked 9th worldwide with 265 million dollars invested. It is expected that smart grids will promote the development of new business models and a change in the value chain for energy. Decentralized production, combined with the probable introduction of more or less flexible rates for sales or purchases and of new supplier-customer relationships, will open the way to the creation of new businesses. (A.C.)

  15. Kids Enjoy Grids

    CERN Multimedia

    2007-01-01

    I want to come back and work here when I'm older,' was the spontaneous reaction of one of the children invited to CERN by the Enabling Grids for E-sciencE project for a 'Grids for Kids' day at the end of January. The EGEE project is led by CERN, and the EGEE gender action team organized the day to introduce children to grid technology at an early age. The school group included both boys and girls, aged 9 to 11. All of the presenters were women. 'In general, before this visit, the children thought that scientists always wore white coats and were usually male, with wild Einstein-like hair,' said Jackie Beaver, the class's teacher at the Institut International de Lancy, a school near Geneva. 'They were surprised and pleased to see that women became scientists, and that scientists were quite 'normal'.' The half-day event included presentations about why Grids are needed, a visit of the computer centre, some online games, and plenty of time for questions. In the end, everyone agreed that it was a big success a...

  16. Jobs and welfare in Mozambique

    DEFF Research Database (Denmark)

    Jones, Sam; Tarp, Finn

    , this study focuses on labour market trends. We ask: (a) what has happened to jobs in Mozambique over the past 15 years; (b) what has been the link between jobs and development outcomes; and (c) where should policymakers focus to create more good jobs? We conclude that jobs policy must seek to raise...

  17. Job satisfaction of older workers

    NARCIS (Netherlands)

    Maassen van den Brink, H.; Groot, W.J.N.

    1999-01-01

    Using data for The Netherlands, this paper analyzes the relation between allocation, wages and job satisfaction. Five conclusions emerge from the empirical analysis: satisfaction with the job content is the main factor explaining overall job satisfaction; the effects of individual and job

  18. A Taxonomy on Accountability and Privacy Issues in Smart Grids

    Science.gov (United States)

    Naik, Ameya; Shahnasser, Hamid

    2017-07-01

    Cyber-Physical Systems (CPS) are combinations of computation, networking, and physical processes. Embedded computers and networks monitor and control the physical processes, which affect computations and vice versa. Two applications of cyber-physical systems are health care and the smart grid. In this paper, we consider privacy aspects of cyber-physical systems applicable to the smart grid. The smart grid, in collaboration with different stakeholders, can help improve power generation, communication, distribution and consumption. Proper management and monitoring of energy usage by customers and utilities depends on proper transmission and electricity flow; however, cyber vulnerability increases with greater integration and linkage. This paper discusses various frameworks and architectures proposed for achieving accountability in smart grids by addressing privacy issues in the Advanced Metering Infrastructure (AMI). This paper also highlights additional work needed for accountability in more precise terms such as uncertainty, ambiguity, unmanageability, and undetectability.

  19. Job crafting: Towards a new model of individual job redesign

    Directory of Open Access Journals (Sweden)

    Maria Tims

    2010-12-01

    Research purpose: The purpose of the study was to fit job crafting in job design theory. Motivation for the study: The study was an attempt to shed more light on the types of proactive behaviours of individual employees at work. Moreover, we explored the concept of job crafting and its antecedents and consequences. Research design, approach and method: A literature study was conducted in which the focus was first on proactive behaviour of the employee and then on job crafting. Main findings: Job crafting can be seen as a specific form of proactive behaviour in which the employee initiates changes in the level of job demands and job resources. Job crafting may be facilitated by job and individual characteristics and may enable employees to fit their jobs to their personal knowledge, skills and abilities on the one hand and to their preferences and needs on the other hand. Practical/managerial implications: Job crafting may be a good way for employees to improve their work motivation and other positive work outcomes. Employees could be encouraged to exert more influence on their job characteristics. Contribution/value-add: This article describes a relatively new perspective on active job redesign by the individual, called job crafting, which has important implications for job design theories.

  20. Allegheny County Map Index Grid

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Map Index Sheets from Block and Lot Grid of Property Assessment and based on aerial photography, showing 1983 datum with solid line and NAD 27 with 5 second grid...

  1. HIRENASD Unstructured Grids - Centaur software

    Data.gov (United States)

    National Aeronautics and Space Administration — These grids were constructed using Centaur software at DLR in Germany. The grids designed for node based (labeled 'cv') and cell-centered solvers (labeled 'cc') are...

  2. Guest Editorial Special Issue on Power Quality in Smart Grids

    DEFF Research Database (Denmark)

    Guerrero, Josep M.

    2017-01-01

    Smart grid (SG) is usually described as a power system utilizing information and communication technology (ICT) and advanced monitoring systems to improve the grid performance and offer a wide range of additional services for the consumers. Some of the main features of such a grid are self-healing from power disturbances, efficient energy management, automation based on ICT and advanced metering infrastructures (smart metering), integration of distributed power generation, renewable energy resources and storage units as well as high power quality and reliability. In this regard, the concept...

  3. JOB SATISFACTION OF UNIVERSITY GRADUATES

    OpenAIRE

    ALEKSANDER KUCEL; MONTSERRAT VILALTA-BUFÍ

    2013-01-01

    This paper investigates the determinants of job satisfaction of university graduates in Spain. We base our analysis on Locke’s discrepancy theory [Locke (1969)] and decompose subjective evaluation of job characteristics into surplus and deficit levels. We also study the importance of overeducation and over-skilling on job satisfaction. We use REFLEX data, a survey of university graduates. We conclude that job satisfaction is mostly determined by the subjective evaluation of intrinsic job char...

  4. Job Satisfaction of Nursing Managers

    OpenAIRE

    Petrosova, Liana; Pokhilenko, Irina

    2015-01-01

    The aim of the study was to research levels of job satisfaction, factors affecting job satisfaction/dissatisfaction, and ways to improve job satisfaction among nursing managers. The purposes of the study were to extend knowledge in the field of healthcare management, to raise awareness about factors that affect job satisfaction in nursing management career, and to provide suggestions regarding how to increase job satisfaction among nursing managers. The method of this study is literature r...

  5. Communication technologies in smart grid

    Directory of Open Access Journals (Sweden)

    Miladinović Nikola

    2013-01-01

    Full Text Available The role of communication technologies in the Smart Grid lies in the integration of a large number of devices into one telecommunication system. This paper provides an overview of the technologies currently in use in the electric power grid, which are not necessarily in compliance with the Smart Grid concept. Considering that the Smart Grid is open to the flow of information in all directions, it is necessary to provide reliability, protection and security of information.

  6. The impact of job crafting on job demands, job resources, and well-being

    NARCIS (Netherlands)

    Tims, M.; Bakker, A.B.; Derks, D.

    2013-01-01

    This longitudinal study examined whether employees can impact their own well-being by crafting their job demands and resources. Based on the Job Demands-Resources model, we hypothesized that employee job crafting would have an impact on work engagement, job satisfaction, and burnout through changes

  7. Pyramid solar micro-grid

    Science.gov (United States)

    Huang, Bin-Juine; Hsu, Po-Chien; Wang, Yi-Hung; Tang, Tzu-Chiao; Wang, Jia-Wei; Dong, Xin-Hong; Hsu, Hsin-Yi; Li, Kang; Lee, Kung-Yen

    2018-03-01

    A novel pyramid solar micro-grid is proposed in the present study. All the members within the micro-grid can mutually share excess solar PV power with each other through a binary-connection hierarchy. The test results of a 2+2 pyramid solar micro-grid consisting of 4 individual solar PV systems for self-consumption are reported.

  8. Does low job satisfaction lead to job mobility?

    DEFF Research Database (Denmark)

    Kristensen, Nicolai; Westergård-Nielsen, Niels Chr.

    This paper seeks to analyse the role of job satisfaction and actual job change behaviour. The analysis is based on the European Community Household Panel (ECHP) data for Danish families 1994-2000. The results show that inclusion of job satisfaction, which is a subjective measure, does improve the ability to predict actual quit behaviour: Low overall job satisfaction significantly increases the probability of quit. Various job satisfaction domains are ranked according to their ability to predict quits. Satisfaction with Type of Work is found to be the most important job characteristic while satisfaction with Job Security is found to be insignificant. These results hold across age, gender and education sub-groups and are opposed to results for UK, where job security is found to be the most important job domain. This discrepancy between UK and Denmark might be due to differences in unemployment...

  9. Grid regulation services for energy storage devices based on grid frequency

    Energy Technology Data Exchange (ETDEWEB)

    Pratt, Richard M.; Hammerstrom, Donald J.; Kintner-Meyer, Michael C. W.; Tuffner, Francis K.

    2017-09-05

    Disclosed herein are representative embodiments of methods, apparatus, and systems for charging and discharging an energy storage device connected to an electrical power distribution system. In one exemplary embodiment, a controller monitors electrical characteristics of an electrical power distribution system and provides an output to a bi-directional charger causing the charger to charge or discharge an energy storage device (e.g., a battery in a plug-in hybrid electric vehicle (PHEV)). The controller can help stabilize the electrical power distribution system by increasing the charging rate when there is excess power in the electrical power distribution system (e.g., when the frequency of an AC power grid exceeds an average value), or by discharging power from the energy storage device to stabilize the grid when there is a shortage of power in the electrical power distribution system (e.g., when the frequency of an AC power grid is below an average value).
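
    The control rule described can be sketched with a simple proportional (droop-style) controller. This is a minimal illustration under assumed numbers (nominal frequency, charger limit and gain are placeholders); the actual patented controller is more elaborate.

      NOMINAL_HZ = 60.0        # or 50.0, depending on the grid
      MAX_CHARGE_KW = 6.6      # assumed charger power limit
      GAIN_KW_PER_HZ = 60.0    # assumed droop gain

      def charge_setpoint(measured_hz, baseline_kw=3.3):
          """Positive = charge the battery, negative = discharge to the grid."""
          error = measured_hz - NOMINAL_HZ          # >0: power surplus, <0: shortage
          setpoint = baseline_kw + GAIN_KW_PER_HZ * error
          return max(-MAX_CHARGE_KW, min(MAX_CHARGE_KW, setpoint))

      print(charge_setpoint(60.02))   # grid over-frequency -> charge faster
      print(charge_setpoint(59.90))   # grid under-frequency -> discharge to help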

  10. Grid regulation services for energy storage devices based on grid frequency

    Science.gov (United States)

    Pratt, Richard M; Hammerstrom, Donald J; Kintner-Meyer, Michael C.W.; Tuffner, Francis K

    2013-07-02

    Disclosed herein are representative embodiments of methods, apparatus, and systems for charging and discharging an energy storage device connected to an electrical power distribution system. In one exemplary embodiment, a controller monitors electrical characteristics of an electrical power distribution system and provides an output to a bi-directional charger causing the charger to charge or discharge an energy storage device (e.g., a battery in a plug-in hybrid electric vehicle (PHEV)). The controller can help stabilize the electrical power distribution system by increasing the charging rate when there is excess power in the electrical power distribution system (e.g., when the frequency of an AC power grid exceeds an average value), or by discharging power from the energy storage device to stabilize the grid when there is a shortage of power in the electrical power distribution system (e.g., when the frequency of an AC power grid is below an average value).

  11. MICROARRAY IMAGE GRIDDING USING GRID LINE REFINEMENT TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V.G. Biju

    2015-05-01

    Full Text Available An important stage in microarray image analysis is gridding. Microarray image gridding is done to locate sub arrays in a microarray image and find co-ordinates of spots within each sub array. For accurate identification of spots, most of the proposed gridding methods require human intervention. In this paper, a fully automatic gridding method is used, in which spot intensity is enhanced in the preprocessing step using a histogram-based threshold method. The gridding step finds the co-ordinates of spots from the horizontal and vertical profiles of the image. To correct errors due to the grid line placement, a grid line refinement technique is proposed. The algorithm is applied to different image databases and the results are compared based on spot detection accuracy and time. An average spot detection accuracy of 95.06% demonstrates the proposed method’s flexibility and accuracy in finding the spot co-ordinates for different database images.
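
    The following is a hedged sketch, not the published algorithm: it merely illustrates how grid-line candidates can be read off the horizontal and vertical projection profiles of a thresholded image; the threshold value and the toy image are assumptions.

    # Illustrative sketch (not the published method): place candidate grid lines
    # at valleys of the projection profile of a thresholded microarray image.
    import numpy as np

    def profile_grid_lines(image: np.ndarray, axis: int, threshold: float) -> list[int]:
        """Return indices of candidate grid lines along `axis` (0 = horizontal lines)."""
        binary = (image > threshold).astype(float)
        profile = binary.sum(axis=1 - axis)          # project onto the chosen axis
        lines = []
        for i in range(1, len(profile) - 1):
            # a grid line falls in a valley between spot rows/columns
            if profile[i] <= profile[i - 1] and profile[i] < profile[i + 1]:
                lines.append(i)
        return lines

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        img = rng.random((64, 64)) * 0.2
        img[8:16, 8:16] += 1.0                        # one synthetic "spot"
        print(profile_grid_lines(img, axis=0, threshold=0.5))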

  12. Reliable Grid Condition Detection and Control of Single-Phase Distributed Power Generation Systems

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai

    to the utility grid but also to sustain it. This thesis was divided into two main parts, namely "Grid Condition Detection" and "Control of Single-Phase DPGS". In the first part, the main focus was on reliable Phase Locked Loop (PLL) techniques for monitoring the grid voltage and on grid impedance estimation...... of the entire system. Regarding the advance control of DPGS, an active damping technique for grid-connected systems using inductor-capacitorinductor (LCL) filters was proposed in the thesis. The method is based on a notch filter, whose stopband can be automatically adjusted in relation with an estimated value...

  13. Offset rejection for PLL based synchronization in grid-connected converters

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Teodorescu, Remus; Agelidis, Vassilios

    2008-01-01

    in the measured grid voltage. This voltage offset is typically introduced by the measurements and data conversion processes and causes errors for the estimated parameters of the grid voltage. Accordingly, this paper presents an offset rejection method for grid-connected converters based on a phase-locked loop (PLL......Grid-connected converters rely on fast and accurate detection of the phase angle, amplitude and frequency of the utility voltage to guarantee the correct generation of the reference signals. An important issue associated with accurate grid voltage monitoring is the presence of an offset...
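
    For illustration only (this is not the offset rejection method proposed in the paper): one simple way to strip a measurement offset before it reaches a PLL is to subtract the mean of the sampled voltage over one fundamental period. The sampling rate, nominal frequency and amplitudes below are assumed values.

    # Hedged sketch: remove a DC measurement offset with a one-cycle mean.
    import math

    F_GRID = 50.0          # assumed nominal frequency [Hz]
    FS = 5000.0            # assumed sampling rate [Hz]
    SAMPLES_PER_CYCLE = int(FS / F_GRID)

    def remove_dc_offset(window: list[float]) -> float:
        """Return the newest sample with the one-cycle mean removed."""
        offset = sum(window) / len(window)
        return window[-1] - offset

    if __name__ == "__main__":
        # 230 V RMS sine with a 5 V measurement offset (both values assumed)
        samples = [5.0 + 230.0 * math.sqrt(2) * math.sin(2 * math.pi * F_GRID * n / FS)
                   for n in range(SAMPLES_PER_CYCLE)]
        print(f"offset-corrected sample: {remove_dc_offset(samples):.2f} V")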

  14. Job Hunting, Introduction

    Science.gov (United States)

    Goldin, Ed; Stringer, Susan

    1998-05-01

    The AAS is again sponsoring a career workshop for Astronomers seeking employment. The workshop will cover a wide range of tools needed by a job seeker with a background in astronomy. There are increasingly fewer job opportunities in the academic areas. Today, astronomers need placement skills and career information to compete strongly in a more diversified jobs arena. The workshop will offer practical training on preparing to enter the job market. Topics covered include resume and letter writing as well as how to prepare for an interview. Advice is given on resources for jobs in astronomy, statistics of employment and education, and networking strategies. Workshop training also deals with a diverse range of career paths for astronomers. The workshop will consist of two approximately three-hour sessions. The first (1-4pm) will be on the placement tools and job-search skills described above. The second session will be for those who would like to stay and receive personalized information on individual resumes, job search problems, and interview questions and practice. The individual appointments with Ed Goldin and Susan Stringer that will take place during the second session (6-9pm) will be arranged on-site during the first session. A career development and job preparation manual "Preparing Physicists for Work" will be on sale at the workshop for $9.00. TOPICS FOR DISCUSSION: How to prepare an effective resume How to research prospective employers Interviewing skills Networking to uncover employment Job prospects present and future Traditional and non-traditional positions for astronomers This workshop will be presented by Ed Goldin and Susan Stringer of the American Institute of Physics. The cost of the workshop is $15.00, which includes a packet of resource materials supporting the workshop presentation. Please send your request for attendance by 8 May 1998 to the Executive Office along with a check, payable to the AAS, for the fee. Credit cards will not be

  15. Smart Grid Demonstration Project

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Craig [National Rural Electric Cooperative Association, Arlington, VA (United States); Carroll, Paul [National Rural Electric Cooperative Association, Arlington, VA (United States); Bell, Abigail [National Rural Electric Cooperative Association, Arlington, VA (United States)

    2015-03-11

    The National Rural Electric Cooperative Association (NRECA) organized the NRECA-U.S. Department of Energy (DOE) Smart Grid Demonstration Project (DE-OE0000222) to install and study a broad range of advanced smart grid technologies in a demonstration that spanned 23 electric cooperatives in 12 states. More than 205,444 pieces of electronic equipment and more than 100,000 minor items (brackets, labels, mounting hardware, fiber optic cable, etc.) were installed to upgrade and enhance the efficiency, reliability, and resiliency of the power networks at the participating co-ops. The objective of this project was to build a path for other electric utilities, and particularly electrical cooperatives, to adopt emerging smart grid technology when it can improve utility operations, thus advancing the co-ops’ familiarity and comfort with such technology. Specifically, the project executed multiple subprojects employing a range of emerging smart grid technologies to test their cost-effectiveness and, where the technology demonstrated value, provided case studies that will enable other electric utilities—particularly electric cooperatives—to use these technologies. NRECA structured the project according to the following three areas: Demonstration of smart grid technology; Advancement of standards to enable the interoperability of components; and Improvement of grid cyber security. We termed these three areas Technology Deployment Study, Interoperability, and Cyber Security. Although the deployment of technology and studying the demonstration projects at co-ops accounted for the largest portion of the project budget by far, we see our accomplishments in each of the areas as critical to advancing the smart grid. All project deliverables have been published. Technology Deployment Study: The deliverable was a set of 11 single-topic technical reports in areas related to the listed technologies. Each of these reports has already been submitted to DOE, distributed to co-ops, and

  16. PV-hybrid and mini-grid

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2010-07-01

    ) Optimization of a wind/diesel hybrid configuration in a remote grid with battery implementation: Case study of Melinka Island; (23) Provisional acceptance of installations and online data submission of PV and hybrid kits in remote areas of Latin-America under the EC's EURO-SOLAR programme; (24) Experience of the Canary Islands in the development of insular 100 % RES systems and micro-grids; (25) Assessment of photovoltaic hybrid power systems in the United States; (26) Solar hybrid school project in East Malaysia; (27) Eigg Island - Electrification of a British Island by a unique PV wind hydro diesel hybrid system; (28) A pragmatic performance reporting approach for describing PV hybrid systems within mini-grids: Work in progress from IEA's PVPS Task 11 Act. 31; (29) Hybrid renewable energy systems for the supply of services in rural settlements of Mediterranean partner countries. The HYRESS project - The case study of the hybrid system - Micro grid in Egypt. Beside these lectures, the following poster contributions were presented: (1) Performance of conventional MPPT techniques in the presence of partial shielding; (2) Photovoltaic and thermal collector (PV/T) hybrid system's performance analysis under the mild climate conditions of Izmir City; (3) Influential parameters on a building integrated hybrid PVT concentrator; (4) The solution to combine and manage renewable energies in hybrid applications and mini-grids; (5) Stabilization of distribution networks with PV and vanadium redox-battery backup systems - Simulation and first experiences; (6) Control, monitoring and data acquisition architecture design for clean production of hydrogen from mini-wind energy; (7) Remote Telecom System including photovoltaic energy and H{sub 2} production by electrolysis; (8) Effective combination of solar and wind energy systems; (9) Standardisation of distributed grid support - An analogous approach for the smart grid; (10) Optimizing energy management of decentralized

  17. Job satisfaction of nurses and identifying factors of job satisfaction in Slovenian Hospitals

    Science.gov (United States)

    Lorber, Mateja; Skela Savič, Brigita

    2012-01-01

    Aim To determine the level of job satisfaction of nursing professionals in Slovenian hospitals and factors influencing job satisfaction in nursing. Methods The study included 4 hospitals selected from the hospital list comprising 26 hospitals in Slovenia. The employees of these hospitals represent 29.8% and 509 employees included in the study represent 6% of all employees in nursing in Slovenian hospitals. One structured survey questionnaire was administered to the leaders and the other to employees, both consisting of 154 items evaluated on a 5 point Likert-type scale. We examined the correlation between independent variables (age, number of years of employment, behavior of leaders, personal characteristics of leaders, and managerial competencies of leaders) and the dependent variable (job satisfaction – satisfaction with the work, coworkers, management, pay, etc) by applying correlation analysis and multivariate regression analysis. In addition, factor analysis was used to establish characteristic components of the variables measured. Results We found a medium level of job satisfaction in both leaders (3.49 ± 0.5) and employees (3.19 ± 0.6), however, there was a significant difference between their estimates (t = 3.237; P < 0.001). Job satisfaction was explained by age (P < 0.05; β = 0.091), years of employment (P < 0.05; β = 0.193), personal characteristics of leaders (P < 0.001; β = 0.158), and managerial competencies of leaders (P < 0.001; β = 0.634) in 46% of cases. The factor analysis yielded four factors explaining 64% of the total job satisfaction variance. Conclusion Satisfied employees play a crucial role in an organization’s success, so health care organizations must be aware of the importance of employees’ job satisfaction. It is recommended to monitor employees’ job satisfaction levels on an annual basis. PMID:22661140

  18. Job satisfaction of nurses and identifying factors of job satisfaction in Slovenian Hospitals.

    Science.gov (United States)

    Lorber, Mateja; Skela Savič, Brigita

    2012-06-01

    To determine the level of job satisfaction of nursing professionals in Slovenian hospitals and factors influencing job satisfaction in nursing. The study included 4 hospitals selected from the hospital list comprising 26 hospitals in Slovenia. The employees of these hospitals represent 29.8% and 509 employees included in the study represent 6% of all employees in nursing in Slovenian hospitals. One structured survey questionnaire was administered to the leaders and the other to employees, both consisting of 154 items evaluated on a 5 point Likert-type scale. We examined the correlation between independent variables (age, number of years of employment, behavior of leaders, personal characteristics of leaders, and managerial competencies of leaders) and the dependent variable (job satisfaction - satisfaction with the work, coworkers, management, pay, etc) by applying correlation analysis and multivariate regression analysis. In addition, factor analysis was used to establish characteristic components of the variables measured. We found a medium level of job satisfaction in both leaders (3.49±0.5) and employees (3.19±0.6), however, there was a significant difference between their estimates (t=3.237; P<0.001). Job satisfaction was explained by age (P<0.05; β=0.091), years of employment (P<0.05; β=0.193), personal characteristics of leaders (P<0.001; β=0.158), and managerial competencies of leaders (P<0.001; β=0.634) in 46% of cases. The factor analysis yielded four factors explaining 64% of the total job satisfaction variance. Satisfied employees play a crucial role in an organization's success, so health care organizations must be aware of the importance of employees' job satisfaction. It is recommended to monitor employees' job satisfaction levels on an annual basis.
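
    A small sketch of the kind of multivariate regression analysis reported above, using purely hypothetical data (the study's real variables, coefficients and sample are not reproduced here):

    # Hypothetical-data sketch of a multivariate regression with an R^2 summary.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    age = rng.uniform(25, 60, n)
    years_employed = rng.uniform(1, 30, n)
    leader_competency = rng.uniform(1, 5, n)
    satisfaction = (1.0 + 0.01 * age + 0.02 * years_employed
                    + 0.5 * leader_competency + rng.normal(0, 0.3, n))

    X = np.column_stack([np.ones(n), age, years_employed, leader_competency])
    beta, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
    pred = X @ beta
    r2 = 1 - np.sum((satisfaction - pred) ** 2) / np.sum((satisfaction - satisfaction.mean()) ** 2)
    print("coefficients:", np.round(beta, 3), "R^2:", round(r2, 2))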

  19. Job insecurity and health.

    Science.gov (United States)

    McDonough, P

    2000-01-01

    As employers respond to new competitive pressures of global capitalism through layoffs and the casualization of labor, job insecurity affects a growing number of workers. It appears to harm mental health, but less is known about its effects on physical health and health behaviors and the mechanisms through which it may act. The prevailing individual-centered conceptualization of job insecurity as the perception of a threat to job continuity precludes systematic investigation of the social patterning of its health effects. Analysis of data from a 1994 Canadian national probability sample of adults determined that high levels of job insecurity lowered self-rated health and increased distress and the use of medications, but had no impact on heavy drinking. The findings support one possible mechanism of action whereby job insecurity reduces feelings of control over one's environment and opportunities for positive self-evaluation; these psychological experiences, in turn, have deleterious health consequences. There is little evidence of social patterning of this relationship by gender, education, household income, age, marital status, and social support at work.

  20. Instant Kendo UI grid

    CERN Document Server

    Lamar, James R

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. This is a Packt Instant How-to guide, which provides concise and clear recipes for working with tabular data with Kendo Grids. This book is for anyone with some basic HTML, CSS, and JavaScript experience. Intermediate and advanced users will find several helpful examples as well. Whether you are predominantly a designer or a developer, this book will work for you.

  1. Gridded Ionization Chamber

    International Nuclear Information System (INIS)

    Manero Amoros, F.

    1962-01-01

    In the present paper the working principles of a gridded ionization chamber are given, and all the different factors that determine its resolution power are analyzed in detail. One of these devices, built in the Physics Division of the JEN and designed specially for use in measurements of alpha spectroscopy, is described. Finally, the main applications in which the chamber can be used are shown. (Author) 17 refs

  2. The Grid PC farm

    CERN Document Server

    Maximilien Brice

    2006-01-01

    Housed in the CERN Computer Centre, these banks of computers process and store data produced on the CERN systems. When the LHC starts operation in 2008, it will produce enough data every year to fill a stack of CDs 20 km tall. To handle this huge amount of data, CERN has also developed the Grid, allowing processing power to be shared between computer centres around the world.

  3. Grid computing for electromagnetics

    CERN Document Server

    Tarricone, Luciano

    2004-01-01

    Today, more and more practitioners, researchers, and students are utilizing the power and efficiency of grid computing for their increasingly complex electromagnetics applications. This cutting-edge book offers you the practical and comprehensive guidance you need to use this new approach to supercomputing for your challenging projects. Supported with over 110 illustrations, the book clearly describes a high-performance, low-cost method to solving huge numerical electromagnetics problems.

  4. Automated agents for management and control of the ALICE Computing Grid

    International Nuclear Information System (INIS)

    Grigoras, C; Betev, L; Carminati, F; Legrand, I; Voicu, R

    2010-01-01

    A complex software environment such as the ALICE Computing Grid infrastructure requires permanent control and management for the large set of services involved. Automating control procedures reduces the human interaction with the various components of the system and yields better availability of the overall system. In this paper we will present how we used the MonALISA framework to gather, store and display the relevant metrics in the entire system from central and remote site services. We will also show the automatic local and global procedures that are triggered by the monitored values. Decision-taking agents are used to restart remote services, alert the operators in case of problems that cannot be automatically solved, submit production jobs, replicate and analyze raw data, resource load-balance and other control mechanisms that optimize the overall work flow and simplify day-to-day operations. Synthetic graphical views for all operational parameters, correlations, state of services and applications as well as the full history of all monitoring metrics are available for the entire system that now encompasses 85 sites all over the world, more than 14000 CPU cores and 10PB of storage.
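
    A hedged sketch of the decision-agent pattern described above, not MonALISA code: the host, port and restart policy are assumptions, and the restart hook is a placeholder.

    # Hedged sketch (not MonALISA): poll a service, restart it after repeated
    # failures, and alert an operator when automatic recovery does not work.
    import socket
    import time

    SERVICE_HOST = "localhost"     # hypothetical service endpoint
    SERVICE_PORT = 8443            # hypothetical port
    RESTART_AFTER_FAILURES = 3     # assumed policy
    POLL_SECONDS = 60

    def service_alive(host: str, port: int, timeout: float = 2.0) -> bool:
        """Health probe: can we open a TCP connection to the service?"""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def restart_service() -> None:
        # Placeholder for a real restart hook (e.g. an SSH or service-manager call).
        print("restarting service ...")

    def alert_operator(message: str) -> None:
        print(f"ALERT: {message}")

    def agent_loop() -> None:
        failures = 0
        while True:
            if service_alive(SERVICE_HOST, SERVICE_PORT):
                failures = 0
            else:
                failures += 1
                if failures >= RESTART_AFTER_FAILURES:
                    restart_service()
                    if not service_alive(SERVICE_HOST, SERVICE_PORT):
                        alert_operator("service did not recover after restart")
                    failures = 0
            time.sleep(POLL_SECONDS)

    if __name__ == "__main__":
        agent_loop()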

  5. Grid Voltage Modulated Control of Grid-Connected Voltage Source Inverters under Unbalanced Grid Conditions

    DEFF Research Database (Denmark)

    Li, Mingshen; Gui, Yonghao; Quintero, Juan Carlos Vasquez

    2017-01-01

    In this paper, an improved grid voltage modulated control (GVM) with power compensation is proposed for grid-connected voltage inverters when the grid voltage is unbalanced. The objective of the proposed control is to remove the power ripple and to improve current quality. Three power compensation...

  6. Resilient Grid Operational Strategies

    Energy Technology Data Exchange (ETDEWEB)

    Pasqualini, Donatella [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-01

    Extreme weather-related disturbances, such as hurricanes, are a leading cause of grid outages historically. Although physical asset hardening is perhaps the most common way to mitigate the impacts of severe weather, operational strategies may be deployed to limit the extent of societal and economic losses associated with weather-related physical damage [1]. The purpose of this study is to examine bulk power-system operational strategies that can be deployed to mitigate the impact of severe weather disruptions caused by hurricanes, thereby increasing grid resilience to maintain continuity of critical infrastructure during extreme weather. To estimate the impacts of resilient grid operational strategies, Los Alamos National Laboratory (LANL) developed a framework for hurricane probabilistic risk analysis (PRA). The probabilistic nature of this framework allows us to estimate the probability distribution of likely impacts, as opposed to the worst-case impacts. The project scope does not include strategies that are not operations related, such as transmission system hardening (e.g., undergrounding, transmission tower reinforcement and substation flood protection) and solutions in the distribution network.
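
    A toy Monte Carlo sketch of the probabilistic-risk-analysis idea, not the LANL framework itself; the wind-speed distribution, damage curve and customer counts are invented solely to show how an impact distribution, rather than a single worst case, is obtained:

    # Toy hurricane-impact Monte Carlo; every constant below is hypothetical.
    import random

    def sample_outage_customer_hours() -> float:
        wind_speed = random.gauss(120, 25)            # hypothetical hurricane wind, km/h
        damage_frac = min(1.0, max(0.0, (wind_speed - 90) / 100))
        customers_out = damage_frac * 500_000         # hypothetical service territory
        restore_hours = 24 + 96 * damage_frac         # hypothetical restoration curve
        return customers_out * restore_hours

    random.seed(7)
    impacts = sorted(sample_outage_customer_hours() for _ in range(10_000))
    print("median impact (customer-hours):", round(impacts[5_000]))
    print("95th percentile:", round(impacts[9_500]))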

  7. Grids, Clouds, and Virtualization

    Science.gov (United States)

    Cafaro, Massimo; Aloisio, Giovanni

    This chapter introduces and puts in context Grids, Clouds, and Virtualization. Grids promised to deliver computing power on demand. However, despite a decade of active research, no viable commercial grid computing provider has emerged. On the other hand, it is widely believed - especially in the Business World - that HPC will eventually become a commodity. Just as some commercial consumers of electricity have mission requirements that necessitate they generate their own power, some consumers of computational resources will continue to need to provision their own supercomputers. Clouds are a recent business-oriented development with the potential to render this eventually as rare as organizations that generate their own electricity today, even among institutions who currently consider themselves the unassailable elite of the HPC business. Finally, Virtualization is one of the key technologies enabling many different Clouds. We begin with a brief history in order to put them in context, and recall the basic principles and concepts underlying and clearly differentiating them. A thorough overview and survey of existing technologies provides the basis to delve into details as the reader progresses through the book.

  8. Progress in Grid Generation: From Chimera to DRAGON Grids

    Science.gov (United States)

    Liou, Meng-Sing; Kao, Kai-Hsiung

    1994-01-01

    Hybrid grids, composed of structured and unstructured grids, combine the best features of both. The chimera method is a major stepping stone toward a hybrid grid from which the present approach is evolved. The chimera grid comprises a set of overlapped structured grids which are independently generated and body-fitted, yielding a high quality grid readily accessible for efficient solution schemes. The chimera method has been shown to be efficient to generate a grid about complex geometries and has been demonstrated to deliver accurate aerodynamic prediction of complex flows. While its geometrical flexibility is attractive, interpolation of data in the overlapped regions - which in today's practice in 3D is done in a nonconservative fashion - is not. In the present paper we propose a hybrid grid scheme that maximizes the advantages of the chimera scheme and adapts the strengths of the unstructured grid while at the same time keeps its weaknesses minimal. Like the chimera method, we first divide up the physical domain by a set of structured body-fitted grids which are separately generated and overlaid throughout a complex configuration. To eliminate any pure data manipulation which does not necessarily follow governing equations, we use non-structured grids only to directly replace the region of the arbitrarily overlapped grids. This new adaptation to the chimera thinking is coined the DRAGON grid. The nonstructured grid region sandwiched between the structured grids is limited in size, resulting in only a small increase in memory and computational effort. The DRAGON method has three important advantages: (1) preserving strengths of the chimera grid; (2) eliminating difficulties sometimes encountered in the chimera scheme, such as the orphan points and bad quality of interpolation stencils; and (3) making grid communication in a fully conservative and consistent manner insofar as the governing equations are concerned. To demonstrate its use, the governing equations are

  9. Talking about the job

    DEFF Research Database (Denmark)

    Holmgreen, Lise-Lotte; Strunck, Jeanne

    2016-01-01

    Talking about the job: The influence of management on leadership discourses Over the past decades, much research has been carried out to detail and analyse the uneven distribution of men and women in management positions (Acker 1990; Billing and Alvesson 2000; Österlind and Haake 2010). In Denmark......, this has been visible in banks and building societies where men would occupy the vast majority of senior positions, and women would be predominant in lower-ranking jobs, making it extremely difficult to climb the career ladder (Ellehave and Søndergaard 2006; Holmgreen 2009; Strunck 2013). One...... of the organisation, embedded social structures of inequality may pose a significant threat to the realisation of this goal. References Acker, J. 1990. Hierarchies, jobs, bodies: A theory of gendered organizations. Gender and Society 4(2): 139-58. Benschop, Y. and Dooreward, H. 1998. Covered by equality: The gender...

  10. GrEMBOSS: EMBOSS over the EELA GRID

    International Nuclear Information System (INIS)

    Bonavides-Martinez, C.; Murrieta-Leon, E.; Verleyen, J.; Zayas-Lagunas, R.; Hernandez-Alvarez, A.; Rodriguez-Bahena, R.; Valverde, J. R.; Branger, P. A.; Sarachu, M.

    2007-01-01

    With the growth of genome databases and the implied complexity for processing such information within bioinformatics research, there is a need for computing power and massive storage facilities which can be provided by Grid infrastructures. EMBOSS is a free Open Source sequence analysis package specially developed for the needs of the bioinformatics and molecular biology user community. This work describes the deployment of EMBOSS over the EELA and EGEE Grids, both gLite middleware-based infrastructures. This work is focused on rewriting the I/O EMBOSS libraries (AJAX) to use the GFAL from the LCG/EGEE middleware. This library allows the use of files registered on the catalog service which are contained in the storage elements of a Grid. Submitting a job into a Grid is not an intuitive task. This work also describes an ad hoc mechanism to allow bioinformaticians to concentrate on the EMBOSS command, instead of acquiring advanced knowledge about Grid usage. The results obtained so far demonstrate the functionality of GrEMBOSS, and represent an efficient and viable alternative for gridifying other bioinformatics applications. (Author)

  11. ARC Cache: A solution for lightweight Grid sites in ATLAS

    CERN Document Server

    Garonne, Vincent; The ATLAS collaboration

    2016-01-01

    Many Grid sites have the need to reduce operational manpower, and running a storage element consumes a large amount of effort. In addition, setting up a new Grid site including a storage element involves a steep learning curve and large investment of time. For these reasons so-called storage-less sites are becoming more popular as a way to provide Grid computing resources with less operational overhead. ARC CE is a widely-used and mature Grid middleware which was designed from the start to be used on sites with no persistent storage element. Instead, it maintains a local self-managing cache of data which retains popular data for future jobs. As the cache is simply an area on a local posix shared filesystem with no external-facing service, it requires no extra maintenance. The cache can be scaled up as required by increasing the size of the filesystem or adding new filesystems. This paper describes how ARC CE and its cache are an ideal solution for lightweight Grid sites in the ATLAS experiment, and the integr...
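
    A simplified sketch of the caching idea described above, not ARC CE code: the cache directory is a hypothetical path on a shared POSIX filesystem, and the remote transfer is stubbed out.

    # Hedged sketch of a lookup-or-fetch file cache on a shared filesystem.
    import hashlib
    import shutil
    from pathlib import Path

    CACHE_DIR = Path("/shared/arc-style-cache")        # hypothetical shared-filesystem area

    def cached_path(url: str) -> Path:
        return CACHE_DIR / hashlib.sha1(url.encode()).hexdigest()

    def fetch_remote(url: str, dest: Path) -> None:
        # Placeholder for the real transfer (e.g. an HTTP or xrootd copy).
        dest.write_bytes(b"...remote file contents...")

    def get_input_file(url: str) -> Path:
        """Return a local path for `url`, downloading it only on a cache miss."""
        CACHE_DIR.mkdir(parents=True, exist_ok=True)
        target = cached_path(url)
        if not target.exists():                        # cache miss: fetch once
            fetch_remote(url, target)
        return target                                  # cache hit: popular data is reused

    def stage_for_job(url: str, job_dir: Path) -> Path:
        """Copy (or link) the cached file into a job's working directory."""
        job_dir.mkdir(parents=True, exist_ok=True)
        local = job_dir / Path(url).name
        shutil.copy(get_input_file(url), local)
        return local

    # Usage (hypothetical URL): stage_for_job("https://example.org/data/evgen.root", Path("job01"))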

  12. Streamline integration as a method for two-dimensional elliptic grid generation

    Energy Technology Data Exchange (ETDEWEB)

    Wiesenberger, M., E-mail: Matthias.Wiesenberger@uibk.ac.at [Institute for Ion Physics and Applied Physics, Universität Innsbruck, A-6020 Innsbruck (Austria); Held, M. [Institute for Ion Physics and Applied Physics, Universität Innsbruck, A-6020 Innsbruck (Austria); Einkemmer, L. [Numerical Analysis group, Universität Innsbruck, A-6020 Innsbruck (Austria)

    2017-07-01

    We propose a new numerical algorithm to construct a structured numerical elliptic grid of a doubly connected domain. Our method is applicable to domains with boundaries defined by two contour lines of a two-dimensional function. Furthermore, we can adapt any analytically given boundary aligned structured grid, which specifically includes polar and Cartesian grids. The resulting coordinate lines are orthogonal to the boundary. Grid points as well as the elements of the Jacobian matrix can be computed efficiently and up to machine precision. In the simplest case we construct conformal grids, yet with the help of weight functions and monitor metrics we can control the distribution of cells across the domain. Our algorithm is parallelizable and easy to implement with elementary numerical methods. We assess the quality of grids by considering both the distribution of cell sizes and the accuracy of the solution to elliptic problems. Among the tested grids these key properties are best fulfilled by the grid constructed with the monitor metric approach. Highlights: • Construct structured, elliptic numerical grids with elementary numerical methods. • Align coordinate lines with or make them orthogonal to the domain boundary. • Compute grid points and metric elements up to machine precision. • Control cell distribution by adaption functions or monitor metrics.
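
    For contrast with the streamline-integration method of the paper, the sketch below shows the most elementary elliptic grid generator: Laplace smoothing of node coordinates by Jacobi iteration, which pulls each interior node toward the average of its four neighbours while boundary nodes stay fixed. The starting grid is an arbitrary assumed example, not one of the paper's test cases.

    # Basic Laplace (elliptic) grid smoothing; not the streamline-integration method.
    import numpy as np

    def laplace_smooth(x: np.ndarray, y: np.ndarray, iterations: int = 500):
        """x, y: (n, m) arrays of node coordinates; boundary rows/columns stay fixed."""
        for _ in range(iterations):
            # Jacobi sweep: each interior node moves to the mean of its four neighbours
            x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1] + x[1:-1, 2:] + x[1:-1, :-2])
            y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1] + y[1:-1, 2:] + y[1:-1, :-2])
        return x, y

    if __name__ == "__main__":
        n = m = 21
        # start from a deliberately skewed algebraic grid on the unit square (assumed example)
        u, v = np.meshgrid(np.linspace(0, 1, m), np.linspace(0, 1, n))
        x, y = u + 0.2 * u * (1 - u) * v, v.copy()
        laplace_smooth(x, y)
        print("centre node after smoothing:", float(x[n // 2, m // 2]), float(y[n // 2, m // 2]))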

  13. Smart Grid Risk Management

    Science.gov (United States)

    Abad Lopez, Carlos Adrian

    Current electricity infrastructure is being stressed from several directions -- high demand, unreliable supply, extreme weather conditions, accidents, among others. Infrastructure planners have, traditionally, focused on only the cost of the system; today, resilience and sustainability are increasingly becoming more important. In this dissertation, we develop computational tools for efficiently managing electricity resources to help create a more reliable and sustainable electrical grid. The tools we present in this work will help electric utilities coordinate demand to allow the smooth and large scale integration of renewable sources of energy into traditional grids, as well as provide infrastructure planners and operators in developing countries a framework for making informed planning and control decisions in the presence of uncertainty. Demand-side management is considered as the most viable solution for maintaining grid stability as generation from intermittent renewable sources increases. Demand-side management, particularly demand response (DR) programs that attempt to alter the energy consumption of customers either by using price-based incentives or up-front power interruption contracts, is more cost-effective and sustainable in addressing short-term supply-demand imbalances when compared with the alternative that involves increasing fossil fuel-based fast spinning reserves. An essential step in compensating participating customers and benchmarking the effectiveness of DR programs is to be able to independently detect the load reduction from observed meter data. Electric utilities implementing automated DR programs through direct load control switches are also interested in detecting the reduction in demand to efficiently pinpoint non-functioning devices to reduce maintenance costs. We develop sparse optimization methods for detecting a small change in the demand for electricity of a customer in response to a price change or signal from the utility
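
    A hedged illustration, not the dissertation's sparse-optimization method: the simplest baseline approach to estimating demand-response load reduction compares metered consumption during the event window with a same-hours average over recent non-event days. All data below are synthetic.

    # Naive baseline-vs-event estimate of demand-response load reduction.
    import numpy as np

    def estimated_reduction_kw(event_day: np.ndarray,
                               baseline_days: np.ndarray,
                               event_hours: slice) -> float:
        """event_day: (24,) hourly kW; baseline_days: (n_days, 24) hourly kW."""
        baseline = baseline_days[:, event_hours].mean(axis=0)
        return float((baseline - event_day[event_hours]).mean())

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        baseline_days = 5.0 + rng.normal(0, 0.2, size=(10, 24))   # synthetic meter history
        event_day = baseline_days.mean(axis=0).copy()
        event_day[17:20] -= 1.5          # customer sheds 1.5 kW during the event window
        print(round(estimated_reduction_kw(event_day, baseline_days, slice(17, 20)), 2))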

  14. Proverbs, Ecclesiastes, Job

    DEFF Research Database (Denmark)

    Nielsen, Kirsten

    2007-01-01

    The article consists of a literary reading of three Old Testament wisdom books, Proverbs, Ecclesiastes and the Book of Job. The reading strategies employed are analysis of imagery and intertextual reading. The article concludes with a presentation of images of God in wisdom literature.

  15. Job Displacement and Crime

    DEFF Research Database (Denmark)

    Bennett, Patrick; Ouazad, Amine

    This paper matches a comprehensive Danish employer-employee data set with individual crime information (timing of offenses, charges, convictions, and prison terms by crime type) to estimate the impact of job displacement on an individual’s propensity to commit crime. We focus on displaced...... no significantly increasing trend prior to displacement; and the crime rate of workers who will be displaced is not significantly higher than the crime rate of workers who will not be displaced. In contrast, displaced workers’ probability of committing any crime increases by 0.52 percentage points in the year of job...

  16. Branding McJobs

    DEFF Research Database (Denmark)

    Noppeney, Claus; Endrissat, Nada; Kärreman, Dan

    Traditionally, employer branding has been considered relevant for knowledge intensive firms that compete in a ‘war for talent’. However, the continuous rise in service sector jobs and the negative image of these so-called McJobs has motivated a trend in rebranding service work. Building on critical...... oriented branding literature, our contribution to this stream of research is twofold: We provide an empirical account of employer branding of a grocery chain, which has repeatedly been voted among the ‘100 best companies to work for’. Second, we outline the role of symbolic compensation that employees...... of employer branding....

  17. Job satisfaction and intention to quit the job

    DEFF Research Database (Denmark)

    Suadicani, P; Bonde, J P; Olesen, K

    2013-01-01

    Negative psychosocial work conditions may influence the motivation of employees to adhere to their job....

  18. Squid – a simple bioinformatics grid

    Directory of Open Access Journals (Sweden)

    de Miranda Antonio B

    2005-08-01

    Full Text Available Abstract Background BLAST is a widely used genetic research tool for analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation for other computing intensive repetitive tasks can be easily accomplished in the open source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need for high-end computers. Results Most distributed computing / grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large scale applications. Squid also has an efficient fault tolerance and crash recovery system against data loss, being able to re-route jobs upon node failure and recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than if working with only one computer. Conclusion Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-n-play" installation containing a pre-configured example.
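
    A simplified sketch of the master/worker pattern the abstract describes, not Squid's actual implementation: query sequences are split into chunks, each chunk is handed to a worker, and a chunk is re-queued if its worker fails. The BLAST call itself is stubbed out.

    # Hedged sketch of chunked dispatch with retry on worker failure.
    from concurrent.futures import ProcessPoolExecutor, as_completed

    def run_blast_chunk(chunk_id: int, sequences: list[str]) -> str:
        # Placeholder for the real call, e.g. invoking the blastn binary on the
        # chunk's FASTA file and returning the path of the result file.
        return f"results_chunk_{chunk_id}.out ({len(sequences)} queries)"

    def dispatch(sequences: list[str], n_workers: int = 4, chunk_size: int = 50,
                 max_retries: int = 2) -> list[str]:
        chunks = [sequences[i:i + chunk_size] for i in range(0, len(sequences), chunk_size)]
        results, retries = [], {i: 0 for i in range(len(chunks))}
        pending = list(range(len(chunks)))
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            while pending:
                futures = {pool.submit(run_blast_chunk, i, chunks[i]): i for i in pending}
                pending = []
                for fut in as_completed(futures):
                    i = futures[fut]
                    try:
                        results.append(fut.result())
                    except Exception:               # worker/node failure: re-route the chunk
                        retries[i] += 1
                        if retries[i] <= max_retries:
                            pending.append(i)
        return results

    if __name__ == "__main__":
        fake_queries = [f">seq{i}\nACGT" for i in range(230)]
        print(len(dispatch(fake_queries)), "chunks completed")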

  19. The research of the malfunction diagnosis and predictions system in the smart electric grid

    Science.gov (United States)

    Wang, Yaqing; Zhang, Guoxing; Xu, Hongbing

    2017-03-01

    Construction of the Chinese smart electric grid has been expanding with technological development. However, the monitoring equipment and background systems that should play important roles have not worked as intended, which limits the efficacy of the smart grid. This paper presents an intelligent malfunction diagnosis and prediction system that works with the existing monitoring equipment to provide whole-system energy monitoring, diagnosis of common malfunctions, proactive fault judgment, and automatic fault elimination.

  20. Physician job satisfaction related to actual and preferred job size

    OpenAIRE

    Schmit Jongbloed, Lodewijk J.; Cohen-Schotanus, Janke; Borleffs, Jan C. C.; Stewart, Roy E.; Schonrock-Adema, Johanna

    2017-01-01

    Background: Job satisfaction is essential for physicians' well-being and patient care. The work ethic of long days and hard work that has been advocated for decades is acknowledged as a threat for physicians' job satisfaction, well-being, and patient safety. Our aim was to determine the actual and preferred job size of physicians and to investigate how these and the differences between them influence physicians' job satisfaction. Method: Data were retrieved from a larger, longitudinal study a...